A Hybrid Particle Swarm Optimization-Tuning Algorithm for the Prediction of Nanoparticle Morphology from Microscopic Images

Aggregated nanoparticle structures are ubiquitous in aerosol and colloidal science, particularly in nanoparticle synthesis systems such as combustion processes, where coagulation results in the formation of fractal-like structures. In addition to their size, the morphology of the particles plays a key role in defining various physicochemical properties. Electron microscopy is the most commonly used tool for visualizing these aggregates, and predicting their 3-dimensional structures from the 2-dimensional microscopic images is complex. Typically, 2-dimensional features from the images are either compared to available structures in a database, or regression equations are used to predict 3-dimensional morphological parameters, including the fractal dimension and pre-exponential factor. In this study, we propose a combination of an evolutionary algorithm and a forward tuning model to predict the best-fit 3-dimensional structures of aggregates from their projection images. 2-dimensional features from a projection image are compared to candidate projections generated using the FracVAL code and optimized using Particle Swarm Optimization to obtain the 3-dimensional structure of the aggregate. Various 3-dimensional properties, including the hydrodynamic diameter and mobility diameter of the retrieved structures, are then compared with the properties of the aggregate used to form the candidate projection image, to test the suitability of the algorithm. Results show that the hybrid algorithm can closely predict the 3-dimensional structures from the projection images, with less than 10% difference in the predicted 3-dimensional properties, including the mobility diameter and radius of gyration.

INTRODUCTION

Deteriorating air quality is a great concern in several places across the world. Air pollution adversely affects human health, causing respiratory illness, pulmonary and cardiovascular diseases, and premature mortality (Shiraiwa et al., 2017; West et al., 2016; O'Dell et al., 2019; David et al., 2019). Of particular concern is particulate matter (PM), especially that generated through anthropogenic sources, including various combustion processes and the burning of fossil fuels (Wang et al., 2013; Bell et al., 2007; Dang et al., 2022; Martin et al., 2010; Mayer et al., 2020). Fine and ultrafine particles remain dispersed in the air for long durations, and the physicochemical properties of these suspended particles are defined by their size and morphological features. Nanoparticles formed in engineering and combustion processes are often found as clusters or aggregates of individual spherical particles rather than as isolated spheres, due to constant collisions and subsequent coagulation. These aggregates are usually ensembles of point-contacting or overlapping near-spherical particles and are referred to as fractal aggregates.
Despite their complex shape, the morphology of these aggregated particles is commonly represented as mass fractals (Forrest and Witten, 1979; Jullien, 1987; Filippov et al., 2000; Cai et al., 1995), with the following relationship between the number of primary particles (also referred to as monomers) and the radius of gyration of the aggregate:

N = k_f (R_g / a)^{D_f}    (1)

where N is the number of monomers in the aggregate, R_g is the radius of gyration, a is the primary particle radius, k_f is the fractal prefactor (pre-exponential factor), and D_f is the fractal dimension (Meakin, 1999; Jullien, 1987; Brasil et al., 2001). D_f and k_f are referred to as the morphological parameters, and their values typically range between 1 and 3 and between 0.6 and 4, respectively. Multiple studies have examined the physicochemical behaviour of quasi-fractal aggregates, owing to the inherent properties of fractal structures such as a large available surface area per unit volume, porous geometry, and the ability to form extended networks (Altenhoff et al., 2020a; Filippov et al., 2000; Köylü et al., 1995). It has been well documented that the morphological features affect the formation and growth of aggregates and play a significant role in altering various physicochemical and transport properties (Morán et al., 2018; Oh and Sorensen, 1997). Experimental determination of the morphology of the particles is quite complicated, and Transmission Electron Microscopy (TEM) is considered the gold standard for visualizing submicron-sized aggregated particles (Manuputty et al., 2019; Palotás et al., 1996; Tian et al., 2006; Wang et al., 2016). Advancements in image analysis and computational tools have led to significant improvements in predicting the morphological descriptors of aggregates from their microscopic images (Brasil et al., 2001; Cabarcos et al., 2022; Wang et al., 2016). Most studies have attempted to identify relevant 2-dimensional parameters in order to predict the 3-dimensional structures, most often the fractal dimensions (Einar Kruis et al., 1994; Heinson et al., 2012; Verma et al., 2019). Einar Kruis et al. (1994) used automated particle recognition techniques with TEM images for particle size distribution analysis, and the fractal dimensions of the aggregates were calculated. Pair correlation functions were derived from the microscopic images to describe the structure of the fractal aggregates using packing fraction density and fractal dimension (Heinson et al., 2012). Monte Carlo methods and Langevin equations were combined to understand the aggregation processes in different flow regimes with computationally simulated aggregates (Morán et al., 2020). Thajudeen et al. (2015) developed a technique to predict the 3-dimensional structures of aggregates from TEM images by comparing them with test images in a database. Using a forward model, they created a large database of candidate 3-dimensional structures across a range of morphological descriptors and numbers of primary particles. A few identified 2-dimensional features of the microscopic images were compared with the features of the candidate aggregates from the database to identify the best-fitting 3-dimensional structure. Comparison of the mobility diameters of the aggregate collected for the image analysis and the aggregate identified from the database proved this to be a promising method, and it was later extended to multiple applications (Jeon et al., 2016; Qiao et al., 2020).
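As a quick numerical illustration of Eq. (1), the short Python sketch below evaluates the scaling law in both directions; the parameter values are illustrative only and are not taken from this study.

```python
# Worked example of the mass-fractal scaling law, Eq. (1): N = kf * (Rg / a) ** Df.
# All parameter values below are illustrative.
kf = 1.3        # fractal prefactor
Df = 1.8        # fractal dimension
a = 15e-9       # primary particle (monomer) radius, m
Rg = 120e-9     # radius of gyration of the aggregate, m

N = kf * (Rg / a) ** Df
print(f"Estimated number of monomers: N = {N:.0f}")

# Inverted form: given N, recover the radius of gyration.
Rg_back = a * (N / kf) ** (1.0 / Df)
print(f"Recovered Rg = {Rg_back * 1e9:.1f} nm")
```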
The efficacy of this method depends on the database, and covering the entire spectrum of morphological properties and monomer numbers makes it cumbersome. Characterization of images and conversion into 3-dimensional structures were also used in growth and absorption rate measurements of iron oxide nanoparticles, and in understanding the effects of aggregate generation by electrosurgical pencils under operating conditions (Jeon et al., 2016; Qiao et al., 2020). That D_f, k_f and N do not completely fix the morphology of an aggregate further complicates this problem. Regression equations relating 2-dimensional and 3-dimensional features also depend on the candidate aggregates used for the study. In comparison, an optimization algorithm can focus on a narrower range of the fractal parameters, providing better chances of replicating the 3-dimensional structure of an aggregate from a microscopic image. In this study, we propose a hybrid Particle Swarm Optimization-Forward Tuning algorithm that can predict the 3-dimensional structure of aggregates from their microscopic images. Relevant 2-dimensional parameters are identified from the microscopic image, and these are compared with the parameters calculated for synthetic projections of aggregates generated with tuning algorithms. FracVAL and FracMAP are two open-source codes for generating tailor-made aggregates for a specific input of fractal parameters (Morán et al., 2019; Chakrabarty et al., 2009). Owing to its richer feature set, including the option to generate aggregates with polydisperse monomers, FracVAL was used extensively in this study. The details of the algorithms and the relevant 2-dimensional and 3-dimensional features used in the study are explained in the next section. The proposed algorithm was tested with synthetic images created using FracVAL. The results obtained for various test cases are explained in Section 3, followed by a brief conclusion and the future possibilities of the proposed algorithm.

FracVAL - A Tunable Algorithm for Cluster-Cluster Aggregation

Tuning algorithms are significant as they provide ways of systematically studying the effect of different morphological parameters on various physicochemical properties. Several tuning algorithms have been developed in the past that provide structures of fractal aggregates for pre-defined morphological parameters (Chakrabarty et al., 2009; Morán et al., 2019). These algorithms also allow the possibility of numerically generating a large number of fractal aggregates at low computational cost. In this study, we use the capability of FracVAL as a tuning algorithm. In addition to generating large aggregates with thousands of monomers, it can also generate aggregates consisting of polydisperse primary particles. The source code is freely available, providing the opportunity to incorporate modifications, including the effect of overlap between the primary particles. This algorithm generates aggregates with a predefined fractal dimension D_f and fractal prefactor k_f and preserves them throughout the aggregate generation process. In the first step, a particle-cluster aggregation algorithm is used to obtain a limited number of sub-clusters containing an approximately equal number of monomers (N_sub). The generated sub-clusters are then paired and aggregated to form progressively larger clusters. The process is repeated until a single large cluster remains.
This method ensures the preservation of both D_f and k_f by ensuring that Eq. (1) remains valid at all times. For a given input of D_f, k_f, N and primary particle size, the algorithm provides the coordinates of the primary particles forming the aggregate. There is a degree of randomness associated with the process, due to which the likelihood of obtaining the exact same aggregate for the same input parameters is quite low.

Analysis of the Aggregate Images

The typical prediction of a 3-dimensional aggregate structure from the microscopic image of an aerosol aggregate starts with the extraction of relevant 2-dimensional geometrical properties. Projected area, perimeter, average primary particle radius, longest length across the aggregate, number of primary particles, 2-dimensional fractal dimension, etc., are the more commonly used image features (Brasil et al., 1998; Köylü et al., 1995; Chakrabarty et al., 2009; Thajudeen et al., 2015). The morphological depiction of fractal aggregates is usually reduced to only two primary parameters, the mass fractal dimension D_f and the fractal prefactor k_f. Contrary to what has been prevalently assumed, recent studies have reported that the fractal dimension and prefactor alone are insufficient to fix the shape and structure of aggregates (Heinson et al., 2012; Rottereau et al., 2004; Yon et al., 2021). Six 2-dimensional morphological properties are used in this study to describe the morphology of the projection, which are then used to compare the input image to the projection of a simulated aggregate, assisting in the retrieval of the best 3-dimensional aggregate candidate. All the properties are calculated with the box-counting method, where the number of squares required to cover the entire aggregate image is used to determine the relevant 2-dimensional parameters (Forrest and Witten, 1979; Mc Donald and Biswas, 2004; Altenhoff et al., 2020b). As the size of the boxes decreases, the number of boxes required to cover the image increases, thereby improving the accuracy of the method. Different versions of the box-counting method are extensively used for measuring the relevant 2-dimensional properties of aggregates from their microscopic images, as represented in Fig. 1.

Projected Area (A_proj)

The projected areas of the aggregates were calculated using the number of boxes (N_in) and the area of each box, given by:

A_proj = N_in A_sq    (2)

Perimeter (P)

The perimeter was calculated using Eq. (3) after identifying the number of boxes on the edges of the image or projection (N_edge):

P = N_edge l_sq    (3)

where l_sq is the side length of a box.

Maximum Length (L_max)

The longest length across the aggregate (L_max) was determined by selecting the pair of boxes with the farthest locations out of all the box locations on the image or projection.

Maximum Width (W_max)

This is the maximum width perpendicular to the estimated maximum length, calculated by examining the orthogonality between the locations of any selected pair of boxes and the boxes forming the maximum length. The distance between a selected pair of boxes is calculated using Eq. (4):

d_{12} = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}    (4)

2-dimensional Radius of Gyration (R_g,2D)

The 2-dimensional radius of gyration of the image is calculated from the area of a unit square box (A_sq) and the coordinates of each of the squares (x_i, y_i), given as

R_g^2 = \sum_i A_sq (R_i - R_com)^2 / \sum_i A_sq    (5)

R_com = (1/N_in) \sum_i R_i    (6)

where R_i is the position of the i-th box with respect to the origin of the system and R_com is the center of mass (the center of area for this particular case), with n = 2 coordinates for the 2-dimensional image.
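The box-counting features described above can be sketched compactly in Python. This is a minimal illustration of the general approach, not the authors' code: it assumes a binary projection image and a single fixed box size, and W_max is omitted for brevity.

```python
import numpy as np

def box_features(img, box):
    """Box-counting estimates of 2-D features from a binary projection image.

    img : 2-D boolean array (True = aggregate pixel); box : box size in pixels.
    """
    h, w = img.shape
    occupied = []                          # centres of boxes covering the aggregate
    for i in range(0, h, box):
        for j in range(0, w, box):
            if img[i:i + box, j:j + box].any():
                occupied.append((i + box / 2.0, j + box / 2.0))
    occupied = np.array(occupied)
    A_sq = float(box * box)                # area of one box
    A_proj = len(occupied) * A_sq          # Eq. (2): projected area

    # Edge boxes: occupied boxes with at least one empty 4-neighbour box, Eq. (3).
    occ = {tuple(c) for c in occupied}
    N_edge = sum(
        any((c[0] + di * box, c[1] + dj * box) not in occ
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))
        for c in occ)
    P = N_edge * box                       # perimeter estimate

    # Maximum length: farthest pair of occupied boxes (O(n^2), fine for small n).
    dists = np.linalg.norm(occupied[:, None, :] - occupied[None, :, :], axis=-1)
    L_max = dists.max()

    # 2-D radius of gyration about the centre of area, Eqs. (5)-(6).
    com = occupied.mean(axis=0)
    Rg_2D = np.sqrt(((occupied - com) ** 2).sum(axis=1).mean())
    return A_proj, P, L_max, Rg_2D
```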
2-dimensional Fractal Dimension (D_f,2D)

D_f,2D is another important 2-dimensional property for comparing the similarity between the input image and the computationally generated fractal aggregate image. D_f,2D is calculated using the box-counting method as

N_box(ε) ∝ ε^{-D_f,2D}    (7)

where N_box(ε) is the number of boxes of side length ε required to cover the projection; the number of boxes increases as the length (scaling) of the box decreases, and the 2-dimensional fractal dimension is determined by calculating the slope of ln N_box versus ln(1/ε), as depicted in Fig. 1.

Particle Swarm Optimization (PSO)

PSO is one of the most commonly used bio-inspired optimization algorithms. First introduced by Kennedy and Eberhart in 1995 (Kennedy and Eberhart, 1995), it mimics the collective motion of a flock of birds, where each bird gains from the collective experience of the group. PSO has been widely used in several applications across various engineering disciplines (Mangat and Vig, 2014; Yuan et al., 2010). In the constant search for the best solution, the method moves the particles with a certain velocity calculated in every iteration. The motion of each particle is influenced by its own best-known position (local best) and the best-known position in the search space (global best). The expected final result is that the particle swarm converges to the best solution, which is problem-dependent. The updated velocities and positions of each of the particles in the swarm are given by:

V_{FP,i}^{t+1} = w V_{FP,i}^{t} + c_1 r_1 (Pbest_{FP,i}^{t} - P_{FP,i}^{t}) + c_2 r_2 (Gbest_{FP}^{t} - P_{FP,i}^{t})    (8)

P_{FP,i}^{t+1} = P_{FP,i}^{t} + V_{FP,i}^{t+1}    (9)

where 'FP' is replaced by each of the three fractal parameters k_f, D_f, and N, resulting in three equations for velocity and three equations for location. These six equations are used to update the swarm locations in terms of k_f, D_f, and N. Swarm particles move based on P_{FP,i}^{t+1}, the position vector that updates the particle location, and V_{FP,i}^{t+1}, the velocity vector that updates the particle direction, where 'i' corresponds to the swarm particle number and 't' to the iteration; V is the velocity and P the position of the swarm particle. The control parameters c_1 and c_2 are the local and global acceleration coefficients, w is the inertia coefficient, and r_1 and r_2 are uniformly distributed random numbers between 0 and 1. PSO uses the control parameters to update the population and choose the direction of the flock. The inertia coefficient was varied between 0.2 and 0.9, while the local and global acceleration coefficients were varied between 1.5 and 2.0. The velocity vector V_{FP,i}^{t+1} is determined by Pbest (local best) and Gbest (global best). Pbest is the best location of a particular swarm particle, in this case the personal best parameters k_f, D_f, and N as coordinates of the specific particle. This best location is verified by computing the fitness function value 'f' after comparing the input image to the generated aggregate projection. This aggregate is created using FracVAL with the location of the swarm particle (fractal parameters k_f, D_f, and N). Gbest is the best location discovered among all swarm particles, and it aids in changing the direction of the entire swarm. Gbest represents the best fitness function value corresponding to a particular morphological set of k_f, D_f, and N. Both Pbest and Gbest can change after an iteration or time step 't' if the current iteration yields better results than the previous one. Because it is desirable to search both local and global space without bias, the values of c_1 and c_2 are kept the same in all the simulations.
If c_1 is greater than c_2, particles will be drawn to their own best, resulting in prolonged wandering in the search space, while if c_2 is greater than c_1, the global best will dominate the search, causing particles to move prematurely toward the optima (Liu et al., 2012; Sun et al., 2010).

Hybrid PSO-FracVAL Algorithm

The details of the combined algorithm/code are provided in Fig. 2, with the specifics of the information flow. The 2-dimensional features of the input image are calculated using the box-counting method explained before. Although microscopic images are the intended input, to test the efficacy of the proposed method, "synthetic" images generated from test aggregates created using FracVAL are used. The initial population in the PSO algorithm is created using random values of D_f, k_f, and N within the set limits. In this study, P corresponds to the coordinate positions in terms of D_f, k_f, and N. With each iteration, the corresponding D_f, k_f, and N values of the swarm particles were updated using Eq. (8) and Eq. (9). For each swarm particle, the corresponding morphological parameters are passed on to the FracVAL code to generate candidate aggregate structures. The process is repeated until FracVAL is able to generate the fractal structures. For each of the structures, multiple projections are made on random planes and the relevant 2-dimensional properties are calculated. The objective of the algorithm is to minimize the fitness function, given by:

f = \sum_j ((X_j - X_{j,T}) / X_{j,T})^2

where X_j runs over the 2-dimensional features used for the comparison, the properties with subscript 'T' are the true/input values, and the ones without 'T' are the 2-dimensional parameters calculated from the projections of the aggregates generated through FracVAL. The fitness function value 'f' quantifies the difference between the 2-dimensional features of the input image and those of the best-fit projection. The best-fit structure corresponds to the case with the least fitness function value. In most of the simulations, the fitness function attained values below 10^-4 after 200 iterations, and 200 iterations were used as the stopping criterion for all the simulations. The final fitness function values for all the simulations are shown in Tables S3 and S6 in the supplementary information. Appropriate limits for k_f, D_f, and N were provided as constraints in the simulations. k_f values were constrained between 0.6 and 2.6, while D_f values were between 1.1 and 2.6. The range of the number of monomers was modified depending on the number of primary particles visible in the image: the lower limit was kept close to this count, while the upper limit was set with a sufficient margin. The fitness function is calculated for each of the swarm particles, and the velocities and positions are updated based on the local best and global best solutions. At each iteration, the updated set of swarm positions (D_f, k_f, and N) is communicated to the FracVAL code for the generation of candidate fractal structures. The aggregate details corresponding to the best solution for the fitness function are stored separately, since the same set of input morphological parameters does not completely fix the aggregate structure in the case of fractal aggregates. All simulations were run until the number of iterations reached a specific number or the value of the fitness function was reduced below a specified limit. Twenty swarm particles were used in all the test cases in this study. Fig. 2. Flow chart of the proposed PSO-FracVAL algorithm for the prediction of the 3-dimensional aggregate structure from microscopic images.
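The core of the hybrid loop can be sketched as follows. This is a minimal, self-contained Python illustration rather than the authors' implementation: generate_projection_features is a hypothetical stand-in (here a dummy map) for the FracVAL-plus-projection step, the fitness form is one plausible reading of the description above, and the bounds are examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_projection_features(Df, kf, N):
    # Hypothetical stand-in for "run FracVAL with (Df, kf, N), project the
    # aggregate on a random plane, and compute the six 2-D features".
    # Replaced by a smooth dummy map so the sketch runs on its own.
    return np.array([Df, kf, N, Df * kf, N / Df, kf + Df])

target_features = generate_projection_features(1.8, 1.3, 25.0)  # the "input image"

w, c1, c2 = 0.7, 2.0, 2.0            # inertia and acceleration coefficients
lo = np.array([1.1, 0.6, 20.0])      # lower bounds on (Df, kf, N)
hi = np.array([2.6, 2.6, 60.0])      # upper bounds on (Df, kf, N)
n_swarm = 20                         # swarm size used in the paper

pos = rng.uniform(lo, hi, size=(n_swarm, 3))   # N treated as continuous here;
vel = np.zeros_like(pos)                       # it would be rounded for FracVAL
pbest, pbest_f = pos.copy(), np.full(n_swarm, np.inf)
gbest, gbest_f = pos[0].copy(), np.inf

def fitness(features, target):
    # Normalized squared difference over the 2-D features (one plausible form).
    return float(np.sum(((features - target) / target) ** 2))

for t in range(200):                 # 200 iterations as the stopping criterion
    for i in range(n_swarm):
        f = fitness(generate_projection_features(*pos[i]), target_features)
        if f < pbest_f[i]:
            pbest_f[i], pbest[i] = f, pos[i].copy()
        if f < gbest_f:
            gbest_f, gbest = f, pos[i].copy()
    r1 = rng.random((n_swarm, 3))
    r2 = rng.random((n_swarm, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)  # Eq. (8)
    pos = np.clip(pos + vel, lo, hi)                                   # Eq. (9)

print("Best (Df, kf, N):", gbest, "fitness:", gbest_f)
```

In a real run, the dummy stand-in would be replaced by a call that passes the swarm particle's (D_f, k_f, N) to FracVAL, projects the resulting aggregate onto random planes, and applies the box-counting routines described earlier.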
Comparison of the 3-dimensional Properties of the Predicted Aggregate Structures

The suitability of the method as a retrieval algorithm was evaluated based on various features of the predicted structure. For each of the retrieved structures, 3-dimensional properties including the 3-dimensional radius of gyration (R_g), anisotropy factor (A_ij), hydrodynamic radius (R_h), orientationally averaged projected area (PA), and mobility diameter (d_m) were compared against the corresponding values for the test aggregates. The details of the 3-dimensional parameters used are provided in this section. The 3-dimensional radius of gyration was calculated using Eq. (5) and Eq. (6) with n = 3 (for the 3-dimensional structure). Heinson et al. (2010) showed that for Diffusion Limited Cluster Aggregates (DLCA), even with increased anisotropy, the fractal dimension remained more or less the same. They also discussed the relevance of the anisotropy factor in defining the anisotropy of fractal structures. The anisotropy index can be calculated from the inertia tensor and is defined by the ratio of the squares of the principal radii of gyration. The inertia (gyration) tensor T was calculated from the coordinates of the monomers of each aggregate as:

T_{kl} = (1/N) \sum_{i=1}^{N} (r_{i,k} - r_{com,k})(r_{i,l} - r_{com,l})

where r_i is the position of the i-th monomer and r_com is the center of mass.

Anisotropy factor

The radius of gyration and cluster anisotropy were calculated by diagonalization of the above-calculated T as:

R_g^2 = R_1^2 + R_2^2 + R_3^2,    A_ij = R_i^2 / R_j^2

where A_ij is the anisotropy factor, and R_i and R_j are the principal radii of gyration in the i-th and j-th directions, respectively.

Mobility diameter

The mobility diameter is one of the most commonly used equivalent diameters for the characterization of aerosol nanoparticles. Mechanical mobility is the proportionality constant between the velocity of the particle (v) and the resistance force (F) acting on it (Thajudeen et al., 2015). The mobility (B) can be defined as:

B = v / F    (15)

Since typical combustion-generated nanoparticles have sizes in the range of the background mean free path (λ), the effect of the slip correction factor should also be considered. The calculation of the mobility of non-spherical particles is fairly well established (Thajudeen et al., 2015), and the expression for the mobility is:

B = C_c(Kn) / (6 π µ R_h),    Kn = π λ R_h / PA    (16)

where µ is the dynamic viscosity, R_h is the hydrodynamic radius, PA is the orientationally averaged projected area, and C_c is the slip correction factor evaluated at the aggregate Knudsen number Kn. The value of the mobility requires knowledge of the hydrodynamic radii and projected areas of the measured aggregates, which are difficult to determine by direct means. The mobility of a non-spherical particle can also be given in terms of the mobility diameter (d_m) as:

B = C_c(2λ/d_m) / (3 π µ d_m)    (17)

The details of the calculation of R_h and PA are reported in prior studies (Thajudeen et al., 2015; Zhang et al., 2012). With R_h and PA calculated, Eq. (16) and Eq. (17) were combined to obtain the mobility diameter of the aggregate structure.

RESULTS AND DISCUSSION

Throughout this study, computational projection images made from 3-dimensional structures were used as the test cases, thereby enabling direct comparisons between the predicted structure and the structure used to make the projections. Aggregate structures were generated using FracVAL based on the input morphological parameter set, and projections on random planes were taken as the cases for testing the algorithm. The proposed method was tested on multiple datasets with {N, D_f, k_f} ranging between N = 25-500, D_f = 1.3-2.4, and k_f = 0.9-1.6. All the test aggregates used in the study are composed of monodisperse monomers with a radius of 15 nm, with only point contact between the monomers.
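As a sketch of the gyration-tensor route just described, assuming equal-mass monomers whose centre coordinates are available:

```python
import numpy as np

def gyration_anisotropy(coords):
    """Radius of gyration and anisotropy factor from monomer centre coordinates.

    coords : (N, 3) array of monomer positions; equal-mass monomers assumed.
    """
    r = coords - coords.mean(axis=0)       # centre-of-mass frame
    T = r.T @ r / len(r)                   # inertia (gyration) tensor
    eig = np.sort(np.linalg.eigvalsh(T))   # principal radii squared: R3^2 <= R2^2 <= R1^2
    Rg = np.sqrt(eig.sum())                # Rg^2 = R1^2 + R2^2 + R3^2
    A13 = eig[2] / eig[0]                  # anisotropy factor A13 = R1^2 / R3^2
    return Rg, A13
```

The mobility diameter then follows by evaluating Eq. (16) with the computed R_h and PA and solving Eq. (17) for d_m, typically with a scalar root finder since the slip correction itself depends on d_m.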
The size of the monomer has no direct bearing on the results. A simple visualization of the input aggregate structure, test projection case, output projection and the best-fit aggregate is provided in Fig. S1. The retrieval of the morphological parameters and the 3-dimensional structure is expected to be more accurate for aggregates with low D_f, since for D_f > 2 the overlap between particles is more pronounced in the projections. The convergence of the coordinates of the swarm particles for one of the simulations is shown in Fig. 3. At the start of the simulations, the swarm coordinates {N, D_f, k_f} span the possible range, and as the simulation proceeds, they converge to a narrower range. Since the same coordinates {N, D_f, k_f} do not guarantee that the same aggregate structure is generated, it is difficult for the entire swarm population to converge to the same final result.

Retrieval Using Five 2-dimensional Parameters

The suitability of the proposed hybrid algorithm was first tested for aggregates based on the comparison of five 2-dimensional parameters: area (A_proj), perimeter (P), maximum end-to-end length of the projection (L_max), maximum width (W_max) and 2-dimensional radius of gyration (R_g,2D). These were used initially since they were the 2-dimensional features chosen in a prior study where the images were compared to synthetic images in a database (Thajudeen et al., 2015). For the initial test cases, the number of primary particles in the 3-dimensional structures was fixed at 25. The retrieved D_f, k_f, and N values, shown in Table 1, demonstrate that the predicted morphological parameters are quite similar to the actual values of the aggregates used to create the projections. As expected, the accuracy decreases for D_f ≥ 2.0. A comparison of the 3-dimensional properties of the aggregates used for testing and the corresponding predicted values is shown in Fig. 4. Fig. 3. Convergence of the initial swarm population to the best location or solution, depicting the best fractal parameters. Fig. 4. Comparison of the 3-dimensional properties of the best-fit structure predicted with the properties of the aggregates (Table 1) used to generate the computational aggregate image. The 3-dimensional properties are scaled for easier comparison, with the input values plotted on the x-axis and the best-fit values plotted on the y-axis. The 1:1 line and lines with 10% margins are provided for reference. The orientationally averaged projected areas are represented as equivalent radii. The results clearly show that the best-fit aggregate closely matches the 3-dimensional properties of the aggregate used to generate the test images. The results suggest that the algorithm is quite promising, especially for cases with D_f < 2. The error increases for cases with higher D_f, most notably in the anisotropy values.

Retrieval Using Six 2-dimensional Parameters

An important aspect is the number of 2-dimensional parameters required to uniquely represent a projection image. Increasing the number of independent 2-dimensional parameters used for comparison is expected to improve the accuracy of the proposed method for the morphological parameters. In addition to the five 2-dimensional parameters used in the first simulations, D_f,2D was used as an additional 2-dimensional parameter to compare the projection images. D_f,2D was also expected to provide additional information on the range of D_f of the 3-dimensional aggregate.
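As a sketch of how D_f,2D can be estimated from the box-counting slope of Eq. (7) (a minimal illustration assuming a binary projection image; the box sizes are examples):

```python
import numpy as np

def df_2d(img, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the 2-D fractal dimension as the box-counting slope, Eq. (7)."""
    counts = []
    for b in box_sizes:
        n = sum(img[i:i + b, j:j + b].any()
                for i in range(0, img.shape[0], b)
                for j in range(0, img.shape[1], b))
        counts.append(n)
    # Slope of ln N_box versus ln(1 / box size) gives D_f,2D.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope
```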
Multiple studies have looked at the relationship between the 2-dimensional and 3-dimensional fractal dimensions of aggregate structures, and this is often used in the prediction of morphological structures. A recent study explored this in detail, where a relationship was derived for multiple projections of computationally generated aggregates (Wang et al., 2022). In our work, the relationship between D_f,3D of computationally generated aggregates and D_f,2D of their projections was explored. The results for some sample test cases with various D_f,3D are shown in Fig. 5, where D_f,2D values are calculated for different projections of the same aggregate. It is clear from the results that the D_f,2D values of all projections of a fractal aggregate lie in a narrow range, which can be used to narrow down the search space of D_f, thereby improving the simulation time. The computational images used for the simulations summarized in Table 1 were also used as input cases to test the algorithm after considering D_f,2D as an additional input; the results for the predicted morphological parameters are shown in Table 2, and the 3-dimensional properties are plotted in Fig. 6. The codes were not parallelized in this study, and the simulation time mainly depended on the morphological features communicated to FracVAL. For 200 iterations and an input image corresponding to an aggregate with D_f, k_f and N values of 1.8, 1.3 and 25, respectively, the simulation time was roughly 1 hour (serial code) on a workstation with an Intel Xeon Gold 6240R processor. After the successful test of the algorithm with images of smaller aggregates with N of 25, simulations were carried out for various 3-dimensional aggregates with fractal parameters {N, D_f, k_f} ranging between N = 25-500, D_f = 1.3-2.4, and k_f = 0.9-1.6. The details of the morphological parameters used are provided in Tables S1 and S4 in the Supplementary information. The constraints on the morphological parameters in the PSO code were modified based on the input 2-dimensional features. Each case was simulated with constraints based on the available information from the input image: D_f,2D was used to narrow down the D_f,3D search space, and the number of monomers visible in the image was used to restrict the monomer number range in the PSO code. For each of the cases, simulations were run for 200 iterations with 20 swarm particles. The PSO control coefficients were tested with different values, but mainly with the inertia coefficient varying between 0.2 and 0.9 while the acceleration coefficients were kept constant at 2.0. The coordinates of the aggregate with the least fitness function value were used to calculate various 3-dimensional properties of the aggregate, which were then compared with the 3-dimensional properties of the aggregate used to generate the sample image. The comparison of the 3-dimensional properties of the input and the best-fit aggregates for all the cases is shown in Fig. 7. To represent the orientationally averaged projected area, its equivalent radius is calculated and shown in the plot. The comparison of the 2-dimensional properties corresponding to the input image and the predicted structure is shown in Fig. 8. In Figs. 7 and 8, the 1:1 line and lines with 10% margins are shown for comparison. Since the method is based on comparing 2-dimensional morphological properties, the algorithm compares these parameters to those of images generated by random orientation of the tailor-made 3-dimensional aggregates generated using FracVAL.
For multiple combinations of morphological features, the 2-dimensional features are likely to be close to the features of the input images. Fig. 7. Comparison of the 3-dimensional properties of the best-fit structure predicted with the properties of the aggregate used to generate the computational aggregate image. Morphological parameters are varied in the ranges N = 25-500, D_f = 1.3-2.4, and k_f = 0.9-1.6. Fig. 8. Comparison of the 2-dimensional features input to the simulations and the corresponding best-fit features as predicted by the hybrid algorithm. Morphological features for the test aggregates are varied in the ranges N = 25-500, D_f = 1.3-2.4, and k_f = 0.9-1.6. More 2-dimensional features should be added to reduce the impact of this problem. Even when the majority of the 2-dimensional parameters are well recovered, dispersion might still exist. One way to improve this is to assign weights to different 2-dimensional parameters or to add more 2-dimensional parameters, which will be explored in the future. In all the simulations, the fitness value reached less than 0.001, and converged to 0.0001 in some cases. The proposed method was also compared with a few prior publications on predicting morphological features from microscopic images (Thajudeen et al., 2015; Lee and Kramer, 2004; Chakrabarty et al., 2011a, 2011b). Most of the reported studies focus on predicting D_f,3D and/or the number of monomers in the aggregate structure, and very few studies have proposed methods for predicting the best-fit 3-dimensional aggregate structures. A detailed comparison with the database method proposed in Thajudeen et al. (2015) was carried out for a number of test cases, and the results are summarized in Table S7 in the supplementary information. It is to be noted that the same database as reported in the 2015 study was used for comparison, and the performance of the database method could be improved with a more populated database. For example, the error from the database method is high when the number of monomers in the aggregate is over 100. This reinforces the main advantage of the hybrid algorithm, where a localized search is possible depending on the relevant morphological search space. The proposed method was also compared with other reported techniques (Wang et al., 2022; Lee and Kramer, 2004), where D_f,3D of the aggregate structures was predicted from the input images. The results are summarized in Fig. 9, and they show that the optimization algorithm can predict the 3-dimensional morphological feature quite accurately for the test cases. In Fig. 10, the number of monomers in the aggregate structures predicted from the input images is compared. The results are quite similar to those of the regression equations from Chakrabarty et al. (2011a, 2011b), but the regression equations assume that the D_f,3D values are known a priori. This can, however, be used to accurately define the search space for the number of monomers. The comparison shows that the proposed method is able to predict the fractal parameters with an error margin of less than 10%. It is quite clear from the results that the hybrid algorithm is a promising technique for predicting 3-dimensional structures from microscopic images. Although the proposed method is able to closely reproduce the relevant 3-dimensional properties of the aggregate, there is variation in the deduced anisotropy factor, as shown in Tables S2 and S5 in the Supplementary Information.
This may be improved by using more swarm particles or by increasing the number of projections of each aggregate used for comparison. The impact of this property on various 3-dimensional properties of the retrieved aggregate structure needs to be investigated further. The algorithm should also be tested for aggregates with over 1000 monomers, which could be implemented after parallelizing the PSO code.

CONCLUSION

In this study, a hybrid retrieval algorithm is proposed to predict the 3-dimensional structures of aggregated nanoparticles from their microscopic images. Particle Swarm Optimization was combined with FracVAL, a tunable aggregate generation algorithm, to predict the 3-dimensional structures based on six 2-dimensional projection features. 2-dimensional features from the test image are input to the PSO algorithm, from which the initial swarm population is generated based on random sets of D_f, k_f, and N values. The corresponding fractal structures are generated using FracVAL, and the 2-dimensional features of these aggregates are compared with the input values. The algorithm then proceeds to find the optimal set of D_f, k_f, and N values to obtain the best-fit projection image compared to the input image. The algorithm is validated by comparing various 3-dimensional properties of the aggregate used to create the input image and the best-fit aggregate structure, including the anisotropy factor, hydrodynamic radius, orientationally averaged projected area, mobility diameter, and the 3-dimensional radius of gyration. All 3-dimensional properties, with the exception of the anisotropy factor, are predicted within an error margin of 10% by the hybrid algorithm. In all the simulations, artificially generated images from computationally generated aggregates are used as the input. A wide range of D_f, k_f, and N values was used to test the suitability of the proposed hybrid technique. The results clearly show that the proposed algorithm is a promising method for retrieving morphological features from the projection images of fractal structures. The method does not require an extensive database for comparing the projections, and a more detailed search can be done by narrowing the search space. As expected, the accuracy is lower for particles with D_f > 2.0, and there is scope for improvement in this aspect. In the current study, only aggregates with point contact between monodisperse monomers are considered. This could be extended to test aggregates with polydispersity in the monomer sizes as well as varying degrees of overlap between the monomers. The hybrid algorithm can further be parallelized for faster execution times, and other optimization techniques may be used to improve its efficiency.
Tool Based on the Network Method for the Verification against Failure by Piping on Retaining Structures

In the design of retaining structures, different geotechnical phenomena must be studied so the structures can be classified as safe. One of them is piping, a physical process related to seepage under the structure; it leads to unstable situations that might finally end in a failure of the structure. As a way to quantify this risk, an accepted calculation is to compare the critical and the estimated hydraulic gradients. This comparison depends on the geometrical scenario, the geotechnical parameters, and the flow conditions. However, the majority of the available solutions, such as formulations and graphics, have been developed considering only isotropic soils, which means that realistic results cannot be obtained, since real media are commonly anisotropic. The aim of this paper is to provide a methodology with which an estimation of the average exit gradient can be obtained, employing a computational model based on the network method. It consists of the analogy between electrical quantities (voltage and current) and geotechnical variables, namely water head and groundwater flow. The safety factor is calculated in the same way whether the considered soil is isotropic or anisotropic, and, in this way, the structure can be classified as safe from a geotechnical point of view.

Introduction

In geotechnics and ground engineering, one of the most common aims is to control water flow, both provisionally and permanently. In order to achieve this objective, retaining structures are built in the course of a river or a stream, or in excavations affected by the water table. These structures can be concrete and earth dams, which are commonly intended to remain for a long time, or cofferdams, which are employed on sites to allow working in the dry. However, whether building a permanent or a provisional structure, it must be designed according to different geotechnical phenomena (for example sliding, as happened in Aznalcóllar [1], or failure due to poor foundation soil, as in Saint Francis [2]), so it can be classified as safe. Among these phenomena is piping or heaving [3], a process that involves groundwater flow under the structure and leads to a situation that might not be steady and might eventually end in failure. As a means to quantify the risk of piping, two different values are compared: on the one hand, the estimated gradient, which depends on the geometry of the designed retaining structure and the piezometry under it (this can be studied with the flow net graphic); on the other hand, the critical gradient, a reference value involving the specific weights of the soil and the water. When calculating the estimated gradient, theoretical universal solution graphics can be used [4]. However, these have been developed only for isotropic soils. Therefore, these solutions do not reflect reality, since the majority of soils have an anisotropic behaviour. Standards [5] also present methodologies to study whether the structure is safe or not, which depend on the estimated gradient and on the total and pore pressures in the zones most susceptible to this phenomenon. To employ this formulation, a deep knowledge of the process is needed, and this includes understanding the behaviour of the flow when running through anisotropic media.
In order to obtain all the necessary data to use these formulations and compare the results with theoretical ones, a methodology based on the network method [6] to simulate the flow through porous media under retaining structures has been developed. The network method is a simulation technique applicable to different physical phenomena, such as flow through porous media, soil consolidation [7], solute transport [8] and heat transfer [9]. This model employs the electrical analogy of the variables of the problem, that is, the equivalence of the groundwater head, h, with the electrical voltage, V, as well as the equivalence of the groundwater flux, Q, with the current, I. This is feasible since the governing (constitutive) equations in both cases are similar. In this way, each cell in which the problem is discretized is transformed into a circuit with four resistors whose resistance values depend on the permeability of the medium, as well as the size of that specific cell. Once all the circuits are solved by using Ngspice [10], a specific free software for solving electrical circuits, the solutions are the voltage and current for all the cells. From this, all the graphical and numerical results are obtained. In this work, the risk of piping and heaving is studied for a sheet pile structure in both isotropic and anisotropic media, according to the estimated exit gradient obtained by Harr [4], the formulation presented in the Eurocode [5], and our simulations. Therefore, the effect of anisotropy on the existing methodology can be observed.

Studied Problem

Throughout this paper, the following example is employed to obtain and compare results according to the methodologies presented in Harr and in the Eurocode. Figure 1 presents the geometrical variables of the modelled problem: a sheet pile of negligible thickness with a buried length in an almost infinite medium. Upstream and downstream of the structure there is a water head difference that induces the groundwater flow. According to Figure 1, the dimensions are:

a: upstream length, 50 m.
b: downstream length, 50 m.
h1: upstream water head, 5 m.
h2: downstream water head, 0 m.
h: water head difference (h1 - h2), 5 m.
H: stratum thickness, 50 m.
d: sheet pile buried length, 6 m.

The hydrogeological properties of the medium differ depending on whether the soil is isotropic or anisotropic. For the isotropic medium, both permeabilities, kx and ky, take the same value, 0.1 mm/s, i.e., 10^-4 m/s (which corresponds to a medium sand). The anisotropic soil presents the same kx but a lower vertical permeability, ky = 10^-6 m/s. Finally, for both cases, the soil unit weight below the phreatic level (γsat) is the same, 20 kN/m³, and with a water unit weight (γw) of 10 kN/m³, the effective unit weight (γ' = γsat - γw) is 10 kN/m³.

Harr Solutions

In 'Groundwater and Seepage', Harr presents different universal graphics and equations to obtain the value of the exit gradient, I_E, just downstream of the retaining structure. In the example studied throughout this work, this gradient is calculated as shown in Figure 2. According to Harr, for this kind of structure, a universal solution exists for I_E for an isotropic soil, presented in Eq. 1:

I_E = h / (π s)    (1)

where I_E is the exit gradient, s is the sheet pile buried length (previously named d), and h is the water head difference. Therefore, the value of I_E for the problem presented here is ≈ 0.318 × 5/6 = 0.265.
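The exit-gradient estimate can be checked directly; the following short Python snippet reproduces the value quoted above.

```python
import math

# Harr's exit gradient for the studied sheet pile, Eq. 1: I_E = h / (pi * s),
# with h = 5 m and s = d = 6 m.
h, s = 5.0, 6.0
I_E = h / (math.pi * s)
print(f"I_E = {I_E:.3f}")   # ~0.265, the value used in the text
```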
As the only solutions presented in this book are for isotropic soils, this value of I_E is later compared with the results obtained with the new methodology for both kinds of media. The critical gradient, I_c, is defined as a comparison between the gravity forces on a submerged mass of soil (that is, the weight of the particles minus the weight of the volume of water displaced by the soil particles) and the seepage forces in that same mass of soil due to the water flow. As the volume of the studied area is the same, this comparison reduces to Eq. 2:

I_c = γ' / γ_w    (2)

where I_c is the critical gradient, γ' is the effective unit weight, and γ_w is the water unit weight. According to the properties of the problem, I_c = γ'/γ_w = 10/10 = 1. The safety factor for this phenomenon is then calculated as in Eq. 3:

SF = I_c / I_E    (3)

The safety factor according to Harr is 3.74. Nevertheless, this exit gradient is not the most harmful one for sheet pile structures, since the maximum gradients are found at the toe of the structure (I_T) on the downstream side (Figure 2, green circle). For this reason, Harr proposed a solution that involves the gradient along the whole buried length of the sheet pile, obtaining an average gradient over the area presented in Figure 3. This method is the one employed for the study of the heaving or piping phenomenon and the calculation of the safety factor of the sheet pile dam with the solution presented in this paper.

Eurocode-7 Solutions: Verification against Failure by Hydraulic Heave

In this document, the stability of the structure is studied with two different equations. In both of them, destabilising actions are compared to stabilising actions in order to study the safety of the sheet pile. As the two kinds of actions are affected by multiplier factors (partial factors presented in the Standard) that increase their value if they are negative actions and decrease it when they are positive ones, no safety factors are calculated. The formulations are those in Eqs. 4 and 5:

u_dst;d ≤ σ_stb;d    (4)

S_dst;d ≤ G'_stb;d    (5)

In Eq. 4, u_dst;d is the pore pressure (u_k) at the bottom of the structure (Figure 2, green circle) affected by a multiplier factor (γ_G,dst) that increases this value, as it is a destabilising pressure. For this example, γ_G,dst is 1.35, and u_k depends on the solution of the flow net and, therefore, takes a different value depending on whether an isotropic or anisotropic medium is studied. Still in Eq. 4, σ_stb;d is the stabilising total vertical stress (σ_stb;k) at the bottom of the structure, again affected by a multiplier factor (γ_G,stb) that, in this case, decreases its value, since this is a stabilising pressure. Here, γ_G,stb is 0.9 and σ_stb;k is a constant value for the problem, because it is not affected by the permeability of the studied medium. Therefore, according to Eq. 6:

σ_stb;d = γ_G,stb × σ_stb;k = 0.9 × 120 kN/m² = 108 kN/m²    (6)

Thus, the maximum admissible pore pressure u_k can be obtained according to Eq. 7:

u_k ≤ σ_stb;d / γ_G,dst = 108 / 1.35 = 80 kN/m²    (7)

In a similar way, Eq. 5 can be studied. Here, S_dst;d is the seepage force (S_k) in a soil column along the buried sheet pile length with an infinitesimal thickness, also affected by γ_G,dst, which increases this value. The seepage force, S_k, is calculated as the product of the average gradient along the pile length, which depends on the hydrogeology of the given problem (that is, whether the soil is isotropic or not), the water unit weight, and the volume of the fictional column of negligible thickness (V). Also in Eq. 5, G'_stb;d is the submerged weight (G'_k) of this fictional column, affected by γ_G,stb, which decreases the stabilising force.
G'_k remains constant in this document, since the weight of the column does not change whether the soil is isotropic or anisotropic. In this way, Eq. 8 is obtained:

G'_k = γ' × V = 10 V    (8)

Since V appears on both sides of Eq. 5, it can be removed, leading to Eq. 9:

i × γ_w × γ_G,dst = i × 10 × 1.35 = 13.5 i ≤ γ' × γ_G,stb = 10 × 0.9 = 9    (9)

From Eq. 9, the maximum value of the average gradient, i, takes the value presented in Eq. 10:

i ≤ 9 / 13.5 ≈ 0.67    (10)

All in all, two comparisons must be carried out when employing Eurocode 7: the one involving the value of the pore pressure (Eq. 7), and the one involving the gradient (Eq. 10). The values u_k and i vary according to the problem, and even if the geometry is the same, the solutions are different for isotropic and anisotropic soils.

Electrical Analogy

The network method is a technique to simulate different physical phenomena, including flow through porous media under retaining structures. This method is based on the electrical analogy, that is, making some or all of the variables involved in the studied problem equivalent to electrical quantities. In this case, the problem variables that are calculated are the water head and the water flux, which are equivalent to the voltage and current, respectively. This is feasible since the governing equations in both cases are equivalent; the only difference is the variables involved. Therefore, the first step in employing this method is to discretize the problem geometry into cells. Each of these cells is transformed into a circuit with four resistors, two in the vertical direction and two in the horizontal one, whose resistance values depend on the horizontal and vertical permeabilities and on the size of the cell resulting from the chosen discretization. Figure 4 shows the nomenclature of the elemental volume, while Figure 5 presents a typical circuit in the model. Moreover, the boundary conditions must also be translated into electrical quantities. For example, impervious borders are simulated as resistors with very high (almost infinite) resistance values, and the constant water heads upstream and downstream of the sheet pile are modelled in each of those cells with a battery that provides a voltage equal to the head value. In Figure 6, the boundary conditions are shown, as well as the devices modelling them. Once all these data are transformed and the circuits are created, they are introduced into Ngspice, a specific software for solving electrical circuits. The raw solutions are the voltage in each cell, as well as its vertical and horizontal currents. In this way, as the equivalence with the problem variables is immediate, the values of the water head and flux are obtained. From the results, two different kinds of solutions can be calculated: 1) Graphical solutions, such as flow nets, which show the behaviour of the water flow through the porous medium by presenting the equipotential and stream function lines. Figure 7 shows the flow net for the isotropic soil, while Figure 8 shows the same for the anisotropic problem. It is visible that, because of the lower vertical permeability, Figure 8 presents equipotential and stream function lines closer to the surface, since it is more difficult for the water to flow in the vertical direction. 2) Numerical solutions, such as the total water flux or characteristic lengths. These are obtained by mathematical manipulation of the raw results provided by Ngspice. Among all these solutions, the exit gradient (I_E), the average exit gradient (i), and the gradient at the toe of the structure (I_T) are the ones of interest for this work.
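To make the analogy concrete, the sketch below assembles and solves the equivalent resistor network for a small rectangular grid directly in Python, with a dense linear solve standing in for Ngspice. The grid size, boundary handling, and the absence of the sheet pile itself are simplifications for illustration; the pile could be modelled by zeroing the horizontal conductances across its line of cells.

```python
import numpy as np

# Electrical analogy on a small rectangular grid of cells: water head h plays
# the role of voltage V, and inter-cell conductances follow from the
# permeabilities and cell dimensions (per unit thickness). Illustrative only.
nx, ny, dx, dy = 40, 20, 1.0, 1.0      # cells in x and y, cell sizes (m)
kx, ky = 1e-4, 1e-6                    # anisotropic permeabilities (m/s)
Gx, Gy = kx * dy / dx, ky * dx / dy    # horizontal / vertical conductances

idx = lambda i, j: i * ny + j
A = np.zeros((nx * ny, nx * ny))
b = np.zeros(nx * ny)

for i in range(nx):
    for j in range(ny):
        n = idx(i, j)
        if j == ny - 1:                # ground surface: prescribed heads
            A[n, n] = 1.0
            b[n] = 5.0 if i < nx // 2 else 0.0   # h1 upstream, h2 downstream
            continue
        # Kirchhoff balance: net flow into the cell is zero (no-flow borders
        # arise naturally from missing neighbours, i.e., infinite resistance).
        for ii, jj, G in ((i - 1, j, Gx), (i + 1, j, Gx),
                          (i, j - 1, Gy), (i, j + 1, Gy)):
            if 0 <= ii < nx and 0 <= jj < ny:
                A[n, n] += G
                A[n, idx(ii, jj)] -= G

heads = np.linalg.solve(A, b).reshape(nx, ny)   # nodal heads ("voltages")
print(heads[nx // 2, -2])    # head just below the surface at the mid-line
```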
Results and Comparisons

Two values are obtained for each of the studied gradients: one for the isotropic soil and another for the anisotropic medium. The first comparison presented in this paper is between the exit gradient according to Harr and the one obtained with the network method presented here. As shown in Eq. 1, the theoretical value of I_E is 0.265. Once the simulation for the isotropic example was carried out, I_E turned out to have a value of 0.258. This is very close to the theoretical one, taking into account that it is highly influenced by the discretization around the sheet pile, since I_E was calculated in the upper cell closest to the sheet pile on the downstream side, as shown in Figure 9 (marked with a red circle). When simulating the anisotropic problem, however, this exit gradient changes and goes up to 0.31. This shows that, when a lower vertical permeability appears, exit gradients tend to increase. If the safety factors are now calculated employing the two I_E values obtained from the simulations, SF for the isotropic problem is 3.88, while for the anisotropic problem this value is 3.23. In either case, according to this criterion, the studied sheet pile appears to be safe. Table 1 shows the theoretical and network method values of I_E, as well as the deviation between the theoretical and simulated values and between the isotropic and anisotropic scenarios. In Table 1, we can see that, as previously commented, the difference between the theoretical and the simulated value for the isotropic soil is almost negligible. However, the deviation is remarkable when comparing any of the values for the isotropic problem with those of the anisotropic soil. Nevertheless, as also commented in this paper, this zone is not where the most harmful gradients appear. This can be demonstrated for the two presented examples. If, instead of studying the cell in Figure 6, a similar calculation is carried out for the two cells at the toe of the structure in the same column, higher gradient values are obtained for both simulations. For the isotropic soil, I_T takes a value of 3.81 (SF = 0.26), while this value is 3.44 in the anisotropic medium (SF = 0.29). If this criterion is followed, neither of the cases seems to show a safe structure. Table 2 shows the toe gradient values for both problems, as well as the deviation between them. Table 2 shows how the anisotropy of soils affects the toe gradient, since the simulations led to a difference of almost 10%. Since the two values, I_E and I_T, are so different, an average value is applied. In the proposed method, this value is calculated by obtaining a gradient for each of the columns of cells comprising the horizontal length d/2, and then calculating the average gradient over all of them. The gradient of each column (I_Cj) is calculated as presented in Eq. 11, and i as in Eq. 12:

I_Cj = h_toe,j / d    (11)

i = (Σ_{j=1}^{n} I_Cj dx_j) / (Σ_{j=1}^{n} dx_j)    (12)

where h_toe,j is the water head at the cell at the depth of the sheet pile in column number j, n is the number of columns spanning d/2, and dx_j is the horizontal length of the cell in column number j. Following the described calculations, the isotropic problem presents an average exit gradient, i, of 0.3 (SF = 3.36), while the anisotropic one has a value of 0.39 (SF = 2.57). Therefore, it is visible that, although in both cases the safety factor is greater than 1, the anisotropic option has a lower safety factor. The importance of considering the anisotropy of the soil is demonstrated by the examples presented here.
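A direct transcription of this column-averaging, under the reading of Eqs. 11 and 12 given above (the numbers below are illustrative, not the simulation output):

```python
import numpy as np

def average_exit_gradient(h_toe, dx, d):
    """Average exit gradient over the n columns spanning d/2, Eqs. 11-12.

    h_toe : heads at sheet-pile depth for each column j; dx : column widths (m);
    d : buried length of the sheet pile (m).
    """
    I_col = np.asarray(h_toe) / d                  # per-column gradient, Eq. 11
    return float(np.average(I_col, weights=dx))    # width-weighted mean, Eq. 12

# Illustrative input values only:
print(average_exit_gradient(h_toe=[2.3, 2.1, 1.9], dx=[1.0, 1.0, 1.0], d=6.0))
```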
The difference between the values of i obtained with the simulations is presented in Table 3. In this case, when considering a larger area for calculating the gradient, the effect of anisotropy is evident, with a deviation of 30%. Returning to Eqs. 6 and 7 of the Eurocode-7 methodology, the values of u_k and i must be calculated. For both variables, the water head in the cell next to the toe (h_toe) downstream of the structure is needed. In the first example, h_toe = 2.1 m, while in the anisotropic one, h_toe = 2.219 m. Although these values are taken at the centre of the cell rather than at its lower border, the discretization is fine enough that the position considered in the calculations is taken as d. In this way, i is calculated as in Eq. 13:

i = h_toe / d    (13)

Therefore, for the isotropic example, i = 0.35, and for the anisotropic one, i = 0.37. According to Eq. 10, in both cases the structure seems to be safe. The deviations, which in this case would show the reliability of the retaining structure, are presented in Table 4. Here, the comparison between the value obtained following Eurocode-7 and those from the simulations shows that the structure is very safe according to this methodology. Moreover, the values for the isotropic and anisotropic problems are very similar. For the next verification, u_k must be calculated following Eq. 14:

u_k ≈ (h_toe + d) × γ_w    (14)

In this case, for the first example, u_k = 81.00 kPa, while in the second one u_k = 82.19 kPa. So, both scenarios are slightly over the maximum value of the pore pressure obtained in Eq. 7, with the anisotropic case presenting the higher value. Table 5 presents the values of u_k obtained following Eurocode-7 and those from the simulations of the isotropic and anisotropic scenarios, as well as the difference between them. According to Table 5, the deviation between the theoretical values and those from the simulations is low (below 3% in either the isotropic or anisotropic medium), and the difference when considering both conductivity ratios is even lower. Results obtained from the indications in Eurocode-7 also show that, at least for the examples presented here, these would work whether an isotropic or anisotropic soil is being considered. The percentages calculated in Tables 4 and 5 corroborate this.

Final comments and conclusions

The use of the network method has led to the development of a numerical model for the simulation of groundwater flow in isotropic and anisotropic soils. This tool gives the user the necessary information to study whether retaining structures are geotechnically safe. If the same structure is studied in an isotropic and an anisotropic medium, we can see that considering anisotropy is important for the determination of its safety, since the safety factors were lower for the anisotropic scenario. Studying the toe gradient instead of the exit gradient leads to lower values of SF, because piping is more likely to occur in this zone; nevertheless, since this would only happen at one point, employing the average gradient gives a more reliable idea of what is happening over a larger area. Moreover, the use of the pore pressure instead of the gradient leads to safer solutions, as it seems more restrictive. To conclude, whether an isotropic or anisotropic soil is considered, Eurocode-7 seems to give accurate and safe results for the piping process in retaining structures.
Dense U-net Based on Patch-Based Learning for Retinal Vessel Segmentation Various retinal vessel segmentation methods based on convolutional neural networks have been proposed recently, and Dense U-net, as a new semantic segmentation network, was successfully applied to scene segmentation. Retinal vessels are tiny, and their features can be learned effectively by a patch-based learning strategy. In this study, we propose a new retinal vessel segmentation framework based on Dense U-net and the patch-based learning strategy. In the process of training, training patches were obtained by a random extraction strategy, Dense U-net was adopted as the training network, and random transformation was used as a data augmentation strategy. In the process of testing, test images were divided into image patches, the test patches were predicted by the trained model, and the segmentation result was reconstructed by an overlapping-patches sequential reconstruction strategy. The proposed method was applied to the public datasets DRIVE and STARE, and retinal vessel segmentation was performed. Sensitivity (Se), specificity (Sp), accuracy (Acc), and the area under the ROC curve (AUC) were adopted as evaluation metrics to verify the effectiveness of the proposed method. Compared with state-of-the-art methods, including unsupervised, supervised, and convolutional neural network (CNN) methods, the results demonstrated that our approach is competitive in these evaluation metrics. This method can obtain a better segmentation result than specialists, and has clinical application value. Introduction Retinal vessel segmentation has great clinical application value for diagnosing hypertension, arteriosclerosis, cardiovascular disease, glaucoma, and diabetic retinopathy [1]. Various retinal vessel segmentation methods have been proposed recently, and these methods can be categorized as unsupervised and supervised approaches according to whether manually labeled ground truth is used or not. For the unsupervised methods, multi-scale vessel-enhancement filtering, multi-threshold vessel detection, matched filtering, morphological transformations, and model-based algorithms are predominant. The entropy of some particular antennas with a pre-fractal shape, the harmonic Sierpinski gasket, and the Weierstrass-Mandelbrot fractal function was studied, and the results indicated that their entropy is linked with the fractal geometrical shape and physical performance [2][3][4]. Multi-scale vessel-enhancement filtering using second-order local structure features was proposed, and vessels and vessel-like patterns were enhanced, by Frangi et al. in 1998 [5]. A three-dimensional (3D) multi-scale line filter was applied to the segmentation of brain vessels, bronchi, and liver vessels by Sato et al. in 1998 [6]. A general vessel segmentation framework based on adaptive local thresholds, with the local optimal threshold determined automatically by a verification-based multi-threshold probing strategy, was proposed, and retinal vessel segmentation was completed, by Jiang et al. in 2003 [7]. A locally adaptive derivative filter was designed, and a filter-based segmentation method was proposed for retinal vessel segmentation, by Zhang et al. in 2016 [8]. A combination of shifted filter responses (COSFIRE) operator was used to detect retinal vessels and vessel-like patterns; the improved COSFIRE was designed and applied to the segmentation of retinal vessels by Azzopardi et al. in 2015 [9].
A new infinite-parameter active contour model with hybrid region information was designed and applied to the segmentation of retinal vessels by Zhao et al. in 2015 [10]. A level set method based on regional energy-fitting information and shape prior probability was proposed to segment blood vessels by Liang et al. in 2018 [11]. The unsupervised methods always design filters that are sensitive to vessels and vessel-like patterns, which can lead to blood vessels not being fully identified and to vessel-like pseudo-patterns being wrongly identified. The unsupervised methods also depend on parameter settings; unsuitable parameter settings will produce low-quality segmentation results. For the supervised methods, firstly, the features of retinal vessels are selected and extracted. Secondly, ground truth is used to train a classifier. Lastly, retinal vessels are identified by use of the classifier. The features of retinal vessels can be extracted by the Gabor transform, the discrete wavelet transform [12,13], vessel filtering, Gaussian filtering, and so on. Traditional machine learning methods such as k-nearest neighbors, AdaBoost, random forests, and support vector machines were used to train the classifier [14]. Orlando et al. proposed a fully connected conditional random field model, using a structured-output support vector machine to learn the model parameters, and performed retinal vessel segmentation [15]. Zhang et al. extracted features by vessel filtering and a wavelet transform strategy, applied a random forest training strategy to learn the classifier's parameters, and performed retinal vessel segmentation [16]. For the traditional machine learning methods, feature selection has a great influence on segmentation accuracy, and selecting independent features with a high vessel recognition rate is the critical step. The features need to be selected manually according to experiments; automatic feature selection remains a hot topic. Convolutional neural networks (CNNs) have drawn more and more attention, since they can automatically learn complex hierarchies of features from input data [17]. CNNs have been widely applied to image classification, recognition, and segmentation [18]. Fully convolutional networks (FCNs) were proposed as semantic segmentation networks by Long et al., including a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer; the semantic segmentation task was completed by FCN [19]. The U-net model was proposed by Ronneberger et al. in 2015 [20]; it designed a contracting path and an expansive path, which combined captured context with precise localization, and this model was successfully applied to biomedical image segmentation. However, the public datasets for retinal vessels are limited. U-net cannot achieve perfect vessel segmentation results using a training and prediction strategy based on the entire image. Brancati et al. divided the retinal vessel images into patches, proposed a U-net based on a patch-based learning strategy, and achieved a good segmentation result [21]. A multi-scale fully convolutional neural network was proposed to cope with the varying width and direction of vessel structures in fundus images, with the stationary wavelet transform used to provide multi-scale analysis; rotation was used as data augmentation, and retinal vessel segmentation was performed by Oliveira et al. [22].
A novel reinforcement sample learning scheme was proposed by Guo et al. in 2018 to train a CNN with fewer epochs and less training time, and retinal vessel segmentation was performed [23]. A retinal vessel segmentation method based on a convolutional neural network (CNN) and fully connected conditional random fields (CRFs) was proposed by Hu et al. in 2018 [24], and an improved cross-entropy loss function was designed to solve the class-imbalance problem. The densely connected convolutional network (DenseNet) [25,26] and Inception-ResNet [27] were proposed in the past two years. The dense block encourages feature reuse and alleviates the vanishing-gradient problem; the layers are directly connected with all of their preceding layers within each dense block. DenseNet utilized dense blocks and improved classification performance. Dense U-net, as a semantic segmentation network, was proposed and applied to scene segmentation by Jégou S. et al. in 2017 [28]. In their study, the fully connected layers of DenseNet were dropped, and the skip architecture was used to combine semantic information from a deep, coarse layer with appearance information from a shallow, fine layer. Inspired by the fact that U-net can improve the segmentation accuracy of retinal vessels by the patch-based training and testing strategy, we proposed a new retinal vessel segmentation framework based on Dense U-net and the patch-based learning strategy. In this segmentation framework, retinal vessel images were divided into image patches as training data by a random extraction strategy. Dense U-net was used as the network model, and the model parameters were learned from the training data. In this model, a loss function based on the dice coefficient was designed and optimized by stochastic gradient descent (SGD). The proposed method was applied to the public datasets DRIVE and STARE, and retinal vessel segmentation was performed. Sensitivity (Se), specificity (Sp), accuracy (Acc), and the area under the ROC curve (AUC) were adopted as evaluation metrics to verify the effectiveness of the proposed method. Compared with state-of-the-art methods, including unsupervised, supervised, and CNN methods, the results demonstrated that the proposed method is competitive in these evaluation metrics. The contributions of our work are elaborated as follows: (1) We proposed the retinal vessel segmentation framework based on Dense U-net and the patch-based learning strategy. (2) Random transformation was used as a data augmentation strategy to improve the network generalization ability. The rest of this paper is organized as follows: Section 2 presents the proposed method; Section 3 analyzes and discusses the experimental results; Section 4 concludes this study. Method In this study, we proposed the retinal vessel segmentation framework based on Dense U-net and the patch-based learning strategy. This framework is shown in Figure 1; it contains two stages: training and testing.
In the training stage, the source retinal vessel image was converted into a grayscale image, and data normalization was used as the image preprocessing strategy. The image patches were obtained as training data by a random extraction strategy. Dense U-net was used as the network model, the loss function based on the dice coefficient was optimized by stochastic gradient descent (SGD), and the model weight parameters were learned from the training data. In the test stage, the test images were processed by the same preprocessing strategy, test patches were obtained by an overlapping extraction strategy, and the segmentation results were obtained by the overlapping-patches sequential reconstruction strategy. Patches Extraction For fundus images, manual segmentation of retinal vessels is both error-prone and time-consuming, and ground truth for retinal vessels is limited. In our approach, the patch-based learning strategy was used in the processes of training and testing. The training and labeled patches were extracted from the training and labeled images by a random extraction strategy, respectively, and these patches were used as training data to learn the model parameters. The testing patches were extracted from the testing images by an overlapping extraction strategy, and the predicted result was reconstructed by the overlapping-patches sequential reconstruction strategy. In the process of training, the patches were extracted randomly from the training images, and the number of patches for each image was the same. The random extraction strategy is described by Algorithm 1. The strategy for judging whether the central coordinates of an image patch lie inside the field of view (FOV) is shown in Figure 2a, and the randomly extracted image patches are shown in Figure 2b.
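As a concrete illustration of the two extraction strategies just described, the following is a minimal Python sketch (our own code, not the authors' implementation; Algorithms 1 and 2 and Equations (1)-(5) are not reproduced in the text, so the sliding-window bookkeeping and the averaging follow the standard formulation):

```python
import numpy as np

def random_patches(image, fov_mask, n_patches, ph=48, pw=48, rng=None):
    """Algorithm 1 sketch: random training patches whose centres lie in the FOV."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    out = []
    while len(out) < n_patches:
        cy = rng.integers(ph // 2, h - ph // 2)   # candidate patch centre
        cx = rng.integers(pw // 2, w - pw // 2)
        if not fov_mask[cy, cx]:                  # reject centres outside the FOV
            continue
        out.append(image[cy - ph // 2:cy + ph // 2,
                         cx - pw // 2:cx + pw // 2])
    return np.stack(out)

def reconstruct(prob_patches, img_h, img_w, ph=48, pw=48, sh=5, sw=5):
    """Algorithm 2 sketch: overlapping-patches sequential reconstruction.

    prob_patches holds the predicted probability patch for every sliding-window
    position in row-major order; full_pro and full_sum accumulate the per-pixel
    probability sum and visit count, and their ratio is the averaged result we
    assume Equation (5) computes.
    """
    full_pro = np.zeros((img_h, img_w))
    full_sum = np.zeros((img_h, img_w))
    k = 0
    for y in range(0, img_h - ph + 1, sh):        # stride 5, as in the experiments
        for x in range(0, img_w - pw + 1, sw):
            full_pro[y:y + ph, x:x + pw] += prob_patches[k]
            full_sum[y:y + ph, x:x + pw] += 1.0
            k += 1
    final_avg = full_pro / np.maximum(full_sum, 1.0)
    return final_avg
```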
In the process of testing, each testing image was divided into several testing patches by the overlapping extraction strategy, and the number of testing patches for each image was calculated with Equation (1), where img_h and img_w are the size of the source image, patch_h and patch_w are the size of the extracted patch, stride_h and stride_w are the stride lengths, and the floor operator rounds down to the nearest integer. The overlapping extracted patches were predicted by the trained model, the retinal vessel segmentation result was reconstructed by the overlapping-patches sequential reconstruction strategy, and the reconstruction strategy is described by Algorithm 2. In Algorithm 2, N_patches_h, N_patches_w, and N_patches_img were calculated by Equations (2)-(4), where img_h and img_w are the size of the image and stride_h and stride_w are the stride lengths. full_pro and full_sum are the summed probability and visit frequency of the pixels belonging to the image patches obtained by the overlapping extraction strategy, and final_avg, the final segmentation result, was calculated with Equation (5). Dense U-net Architecture Convolutional neural networks learn higher-level features from the characteristics of the lower-level layers, and then drop the low-level features. A low re-use rate of features cannot effectively improve the network's learning ability; thus, improving the utilization rate of features is more significant than increasing the depth of the network. In order to improve the utilization rate of features, the dense block was designed, in which the layers are directly connected with all of their preceding layers. DenseNet improved classification performance using dense blocks. DenseNet was extended to a fully convolutional network for semantic segmentation, named Dense U-net, which was applied to scene segmentation.
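To make the dense connectivity just described concrete, here is a minimal PyTorch sketch of a dense block (our own simplification, not the authors' released code); the default growth rate of 16 and 5 layers per block match the experimental settings reported later:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal dense block: each layer sees all preceding feature maps."""
    def __init__(self, in_channels, growth_rate=16, n_layers=5):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            ch = in_channels + i * growth_rate
            # H: BN -> ReLU -> 3x3 convolution producing k new feature maps.
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth_rate, kernel_size=3, padding=1),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # H_l on all prior outputs
            features.append(out)
        return torch.cat(features, dim=1)

# A 48 x 48 patch, as used in this framework (8 input channels are arbitrary).
y = DenseBlock(in_channels=8)(torch.randn(1, 8, 48, 48))
print(y.shape)  # torch.Size([1, 88, 48, 48]): 8 + 5 * 16 channels
```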
However, retinal blood vessels are tiny: the width of a vessel is a few pixels or even a single pixel. The features of retinal blood vessels can be learned effectively by using a patch-based learning strategy, and the segmentation accuracy of U-net with the patch-based learning strategy is higher than that of U-net trained on entire images. Thus, Dense U-net using the patch-based learning strategy was proposed as a retinal vessel segmentation framework. Dense U-net was used as the training network, and it is shown in Figure 3a. The randomly extracted image patches, whose resolution is 48 × 48, were used as training data. The model output is the predicted result, and it represents the vessel segmentation result. Dense U-net consists of a contracting path (left side) and an expansive path (right side); it contains dense blocks, transition layers, and concatenations. Dense Block In a traditional CNN, the output of the l-th layer can be calculated by a non-linear transformation, defined by Equation (6): x_l = H(x_{l-1}), where x_l is the output of the l-th layer, x_{l-1} is the output of the (l-1)-th layer, and H is defined as a convolution followed by a rectified linear unit (ReLU) and dropout. In order to reuse the previous features, ResNets [24] designed the residual block, which adds a skip-connection that bypasses the non-linear transformation, defined by Equation (7): x_l = H(x_{l-1}) + x_{l-1}, where H is defined as the repetition of a residual block consisting of batch normalization (BN), followed by ReLU and a convolution. DenseNet [20] designed the dense block, which can use all of the preceding features in a feed-forward fashion, defined by Equation (8): x_l = H_l([x_0, x_1, ..., x_{l-1}]), where [...] represents the concatenation operation and H_l is defined as a composite function that consists of three consecutive operations: batch normalization (BN), followed by a rectified linear unit (ReLU) and a 3 × 3 convolution (Conv). The dense block is shown in Figure 3b, and it has l layers. The dense block strongly encourages the reuse of features and makes all layers in the architecture receive a direct supervision signal. Each layer produces k feature maps through its composite function; k is defined as the growth rate of the network. Suppose the number of feature-map channels in the input layer is k_0; the number of output feature-map channels will then be k_0 + k × (l − 1). The growth rate regulates the contribution of new information from each layer to the global feature maps. Transition Layer The layers between dense blocks are named transition layers, and they comprise transitions down and transitions up. The transition down layer is defined in Figure 3c, and it consists of the consecutive operations BN, followed by a ReLU, a 1 × 1 Conv, and a 2 × 2 average pooling for down-sampling. The transition up layer was implemented by a 2 × 2 up-sampling. Loss Function The pixels can be categorized into vessel and non-vessel; statistics indicate that only 10% of the pixels in a fundus image are retinal vessels, so the ratio of vessel to non-vessel pixels is highly imbalanced [29]. If this is not considered when designing the loss function, the learning process will be inclined to segment the non-vascular region; it will be trapped in local minima of the loss function, and vessel pixels will often be lost or only partially identified. A loss function based on class-balanced cross-entropy was proposed by Xie et al. [30]; however, its loss value is influenced by the weight coefficient.
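Before turning to the loss function, a corresponding sketch of the transition layers described above (again our own illustrative code, in the same style as the dense block):

```python
import torch.nn as nn

def transition_down(in_channels, out_channels):
    # BN -> ReLU -> 1x1 Conv -> 2x2 average pooling, as described in the text.
    return nn.Sequential(
        nn.BatchNorm2d(in_channels),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_channels, out_channels, kernel_size=1),
        nn.AvgPool2d(kernel_size=2),
    )

def transition_up(in_channels, out_channels):
    # One way to realize the 2x2 up-sampling: a stride-2 transposed convolution.
    return nn.ConvTranspose2d(in_channels, out_channels,
                              kernel_size=2, stride=2)
```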
In our approach, a novel loss function based on the dice coefficient [31] was adopted, ranging from 0 to 1. The dice coefficient can be defined by Equation (9): D = 2 ∑_{i=1}^{N} p_i g_i / (∑_{i=1}^{N} p_i² + ∑_{i=1}^{N} g_i²), where N is the number of label pixels, and p_i and g_i are the predicted result and the ground truth, respectively. This formulation can be differentiated, yielding the gradient in Equation (10). Data Augmentation and Preprocessing In data preprocessing, the training image (RGB) was converted into grayscale. A data normalization strategy was utilized, defined by Equation (11): X* = (X − µ)/σ, where X and X* are the grayscale image and the normalized image, and µ and σ are the mean value and standard deviation of all training images, respectively. In the process of retinal vessel segmentation, convolutional neural network methods can easily fall into overfitting [32]. Data augmentation was used to enlarge the training set and improve the generalization ability of the network model. In our approach, the resolution of the extracted patches was 48 × 48, and the patches were used as the input of Dense U-net. A non-linear transformation as a data augmentation strategy was proposed by Simard [33]; it is created by uniformly generating a random transformation field, defined by U(x, y) = rand(−1, +1). The data augmentation result is shown in Figure 4: Figure 4a shows the source image patches, and Figure 4b the ground truth; the left is the original patch, and the right is the augmented patch. Experiment Data The public datasets DRIVE [34] and STARE [35] were used to demonstrate the effectiveness of the proposed method. The DRIVE database contains training and testing sets. The training set contains source images, mask images, and ground truth; there were 20 source images (RGB) whose resolution was 565 × 584.
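Returning to Equation (9) above, a minimal PyTorch sketch of the dice-based loss follows; since the equation is not reproduced in the text, the exact normalization and the smoothing term are our assumptions (the V-Net style formulation of [31]):

```python
import torch

def dice_loss(pred, target, eps=1e-7):
    """1 - D with D = 2*sum(p*g) / (sum(p^2) + sum(g^2)); eps avoids
    division by zero and is our own smoothing convention."""
    p = pred.reshape(-1)          # predicted vessel probabilities in [0, 1]
    g = target.reshape(-1)        # binary ground truth
    num = 2.0 * (p * g).sum() + eps
    den = (p * p).sum() + (g * g).sum() + eps
    return 1.0 - num / den
```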
In the process of training, the image patches used as the training set were extracted from the source images by a random extraction strategy; the number of extracted patches was 40000, each with a resolution of 48 × 48. A cross-validation strategy was utilized, and 10% of the training data was used as the validation set. In the process of testing, the testing images were divided into test patches by the overlapping extraction strategy, and the extracted patches were used as the testing set. The parameters of the overlapping extraction strategy were set as follows: stride_h = 5 and stride_w = 5. The final segmentation result can be reconstructed by the overlapping-patches sequential reconstruction strategy. Two specialists manually segmented the testing images; the segmentation result of the first specialist was used as ground truth, and that of the second was used as the gold standard against the first manual result. The STARE dataset also contains 20 images, whose resolution is 700 × 605; the images were divided into 10 training and 10 testing images in order to validate the effectiveness of the proposed method. In the processes of training and testing, the patch-based learning strategy and parameter settings were the same. All experiments were conducted on a Linux Mint 18 OS server equipped with an Intel Xeon Gold 6130 CPU, an NVIDIA TITAN X GPU, and 12 GB of RAM. Dense U-net was used as the network model, and the parameters were set as follows: number of epochs = 150, growth rate = 16, number of dense blocks = 2, layers of each dense block = 5. SGD (learning rate = 0.01, momentum = 0.9) was selected as the optimization function of the network model. Training time for the proposed method was 2 h, and memory usage was 880 MB. Evaluation Metrics There are four kinds of pixel-level outcomes, true positive (TP), false negative (FN), true negative (TN), and false positive (FP), based on the fact that each pixel can be segmented correctly or incorrectly. Four indicators were utilized as evaluation metrics: sensitivity (Se), specificity (Sp), accuracy (Acc), and the area under the ROC curve (AUC). The first three indicators reflect the segmentation ability for vessel pixels, non-vessel pixels, and all pixels, respectively. AUC, which represents the area under the ROC curve, was also adopted as an evaluation metric for image segmentation, ranging from 0 to 1. Validation of the Proposed Method The effectiveness of the proposed method was demonstrated on the public datasets DRIVE and STARE; the segmentation results of the proposed method with the dice loss function are shown in Figures 5 and 6. Figure 5 shows the segmentation result for DRIVE; Figure 5a-d show the color fundus image (test image), the ground truth (specialist manual segmentation result), the probability map for retinal vessels produced by the proposed method, and the binarization of the probability map, respectively. Figure 6 shows the segmentation result for STARE. The results demonstrate that retinal vessel segmentation can be performed with the proposed method. Se, Sp, Acc, and AUC were utilized as evaluation metrics for the segmentation result. A random transformation field was adopted as the data augmentation strategy. In the 'base' configuration of the proposed method, 40000 real patches were extracted as the training set; in the 'base and augmented data' configuration, the 40000 real extracted patches and 40000 augmented patches together were used as the training set. The statistical results are shown in Table 1. The segmentation result of the second observer was used as the gold standard.
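For completeness, the first three metrics can be computed from pixel counts as below (a straightforward sketch with our own function names; AUC would be computed on the probability map before thresholding, e.g., with sklearn.metrics.roc_auc_score):

```python
import numpy as np

def segmentation_metrics(pred_bin, gt_bin):
    """Se, Sp, and Acc from pixel-wise TP/FN/TN/FP counts."""
    tp = np.sum((pred_bin == 1) & (gt_bin == 1))
    fn = np.sum((pred_bin == 0) & (gt_bin == 1))
    tn = np.sum((pred_bin == 0) & (gt_bin == 0))
    fp = np.sum((pred_bin == 1) & (gt_bin == 0))
    se = tp / (tp + fn)             # sensitivity: fraction of vessel pixels found
    sp = tn / (tn + fp)             # specificity: fraction of background kept
    acc = (tp + tn) / (tp + fn + tn + fp)
    return se, sp, acc
```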
The results showed that Acc and AUC for segmentation on the DRIVE dataset increased from 0.9483 and 0.9686 to 0.9511 and 0.9740, respectively, and that Acc and AUC for segmentation on the STARE dataset increased from 0.9508 and 0.9684 to 0.9538 and 0.9704, respectively. A random transformation field as a data augmentation strategy can therefore improve the ability to identify retinal vessels. Comparison with U-net The retinal vessel segmentation result of the proposed method was compared with that of U-net based on the patch-based learning strategy. In the contrast experiments, the image patch extraction strategy of the two methods was the same. In the process of training, the random extraction strategy was used to obtain the training set, and in the process of testing, the overlapping extraction and overlapping-patches sequential reconstruction strategies were used to obtain the final segmentation result. In the process of training, the number of extracted patches (40000) was the same for the two methods; 40000 augmented patches were produced by the random transformation field, and 80000 image patches were used as training data. In order to evaluate the two methods fairly, the depth of both training networks was set to 3. SGD (learning rate = 0.01, momentum = 0.9) was selected as the optimization function for U-net, the same as for the proposed method. Because the dense block strongly encourages the reuse of features and makes all layers in the architecture receive a direct supervision signal, it has more parameters than the 'standard' U-net. More convolution layers were used at the same resolution by U-net to make sure that the numbers of trainable parameters of the two methods were approximately equal. In general, a fair comparison was made to evaluate the two methods. Figure 7 displays the local segmentation results by Dense U-net and U-net with the dice loss function, respectively. The blue area is the segmentation result for fine retinal vessels, and the red area is the erroneous segmentation result. The results demonstrate that more fine blood vessels can be segmented by the proposed method, and that regions prone to leakage and erroneous segmentation are segmented more accurately by the proposed method.
The quantitative analysis based on the evaluation metrics for the public datasets DRIVE and STARE is shown in Table 2. For both methods, the values of Se, Acc, and AUC using the dice loss function were higher than those using the cross-entropy loss function; only Sp was lower with the dice loss function. This demonstrates that the segmentation accuracy using the dice loss function was higher than that using the cross-entropy function. The results showed that, for the DRIVE and STARE data, Se increased from 0.7937 and 0.7882 to 0.7986 and 0.7914, respectively. The Sp, Acc, and AUC values were approximately equal for the two methods using the dice loss function. Comparison with the State-of-the-art Methods The proposed method was compared with other state-of-the-art approaches, including unsupervised, supervised, and convolutional neural network methods, on the public DRIVE and STARE datasets. The statistical results are shown in Table 3. For the proposed method, the Acc and AUC values were 0.9511 and 0.9740, respectively, on the DRIVE dataset, and 0.9538 and 0.9704, respectively, on the STARE dataset. For the multi-scale convolutional neural network method proposed by Hu et al. [17], the Sp and AUC values are the highest. In their study, Hu proposed an improved cross-entropy loss function that is influenced by the weight coefficient, and applied CRFs as a post-processing strategy to obtain the final binarized segmentation result. Their segmentation result is influenced by the weight coefficient, and this parameter needs to be set manually. For the convolutional neural network method with a reinforcement sample learning strategy proposed by Guo et al. [18], the Se value was the highest, the Sp and Acc values were the lowest, and the final segmentation result was the worst. For U-net based on the patch-based learning strategy, the Se, Sp, Acc, and AUC values were not the highest; however, its segmentation result was the best in the comprehensive evaluation. For the proposed method, the Se value was higher than that of U-net, and the Sp, Acc, and AUC values were close to those of U-net. This means that the segmentation result of the proposed method is similar to that of U-net, while its recognition rate for blood vessels is higher. The Se, Sp, Acc, and AUC values of the proposed method were higher than the specialist results, which demonstrates that the segmentation result of the proposed method is better than that of specialists, and that this method has clinical application value. Conclusions In this study, a retinal vessel segmentation framework based on the patch-based learning strategy and Dense U-net was proposed.
The random extraction strategy was used to obtain image patches as training data, Dense U-net was adopted as the training network model, and the dice loss function was optimized by stochastic gradient descent (SGD). A random transformation field was used as a data augmentation strategy to enlarge the training data and improve the generalization ability. The proposed method was applied to the public datasets DRIVE and STARE to complete the retinal vessel segmentation. Se, Sp, Acc, and AUC were adopted as evaluation metrics to demonstrate the effectiveness of the proposed method. The results demonstrated that the proposed method is competitive in these evaluation metrics. The segmentation accuracy of the proposed method was higher than that of the specialist, showing that this method has clinical application value. No post-processing strategy was used in this study, and breakage of fine blood vessels occurred in the process of binarization. Therefore, a post-processing strategy may further improve our results in future work.
Optimizing Age Penalty in Time-Varying Networks with Markovian and Error-Prone Channel State In this paper, we consider a scenario where the base station (BS) collects time-sensitive data from multiple sensors through time-varying and error-prone channels. We characterize the data freshness at the terminal end through a class of monotone increasing functions related to the Age of Information (AoI). Our goal is to design an optimal policy to minimize the average age penalty of all sensors over an infinite horizon under bandwidth and power constraints. By formulating the scheduling problem as a constrained Markov decision process (CMDP), we reveal the threshold structure of the optimal policy and approximate the optimal decision by solving a truncated linear programming (LP) problem. Finally, a bandwidth-truncated policy is proposed to satisfy both the power and bandwidth constraints. Through theoretical analysis and numerical simulations, we prove that the proposed policy is asymptotically optimal in the large-sensor regime. Introduction The requirements for data freshness in numerous emerging applications are becoming stricter [1,2]. However, limited resources and bandwidth, together with fading and error-prone channel characteristics, prevent the control terminal from obtaining the newest information. Moreover, traditional optimization goals like low delay and high throughput cannot fully characterize the requirement of data freshness. Therefore, it is necessary to introduce new metrics to capture data freshness in such systems and to design strategies that optimize system performance in the presence of resource and environment restrictions. Recently, a popular metric, the Age of Information (AoI), was proposed in [3] to measure data freshness. Since then, the optimization of age performance in different systems has been a research hotspot. The simple point-to-point system model has been studied in [3][4][5][6][7][8][9][10][11]. When update packets are generated by external sources and are queued in a buffer before transmission, queuing theory can be used to analyze the performance of such systems, see, e.g., [3][4][5][6][7][8]. In [3], it is shown that the optimum packet generation rate of a first-come-first-served (FCFS) system should achieve a trade-off between throughput and delay. In [8], dynamic pricing is used as an incentive to balance the AoI evolution and the monetary payments to the users. Other studies [9][10][11] consider the generate-at-will system without a queue. Energy constraints are studied in [10,11] to find the trade-off between age performance and energy consumption. In [11], both offline and online heuristic policies are proposed to optimize the average AoI, which outperform the greedy approach. Apart from point-to-point systems, scheduling strategies in multi-user networks are studied in [12][13][14][15][16][17][18][19]. Different scheduling policies are studied in [12] to minimize the average AoI through unreliable channels, and Maatouk et al. verify the asymptotic optimality of Whittle-index-based scheduling in this setting. The main contributions of this paper are summarized as follows. • We study the scheduling strategy for age penalty minimization in multi-sensor bandwidth-constrained networks with time-varying and error-prone channel links and power-limited sensors. To study a practical network, we model the channel as a finite-state ergodic Markov chain. The packet loss probability and power consumption depend on the current channel state.
Unlike previous work, we model the effect of data staleness in different scenarios via a class of monotone increasing functions related to AoI. • Through relaxation of the hard bandwidth constraint and Lagrange multipliers, we decouple the multi-sensor optimization problem into several single-sensor constrained Markov decision process (CMDP) problems. To deal with the potentially infinite age penalty, we deduce the threshold structure of the optimal policy and then obtain the approximate optimal single-sensor scheduling decision by solving a truncated linear programming (LP) problem. We prove that the solution to the LP is asymptotically optimal when the truncation threshold is sufficiently large. • The sub-gradient ascent method is applied to find the optimal Lagrange multiplier that satisfies the relaxed bandwidth constraint. Finally, we propose a truncated stationary policy to meet the hard bandwidth constraint. The average performance of the strategy is verified through theoretical analysis and numerical simulations. The remainder of this paper is organized as follows. The network model, the AoI metric, and the age penalty function are introduced in Section 2. In Section 3, we formulate the primal scheduling problem and then decouple it into independent single-sensor problems through bandwidth relaxation and Lagrange multipliers. The approximate optimal policy for the single-sensor problem is obtained in Section 4 by solving an LP. In Section 5, the asymptotically optimal truncated policy is proposed. Finally, Section 6 provides simulation results to verify the performance of the proposed truncated policy, and Section 7 draws the conclusion. Notations: All vectors and matrices are denoted in boldface. The probability of A given B is denoted as Pr(A|B). Let E_π[X] be the expectation of variable X given policy π. The cardinality of a set Ω is written as |Ω|. Network Model In this work, we consider a BS collecting update packets from N time-sensitive sensors through time-varying channels. Time is slotted, and t ∈ {1, 2, ..., T} denotes the current slot index. Define u_n(t) to be the scheduling decision for sensor n in slot t, where u_n(t) = 1 means sensor n chooses to send its newest packet, while u_n(t) = 0 means idling the channel link. Assume all scheduling actions take place at the beginning of each slot and that the packet transmission delay through every channel link is one slot. Due to the limited bandwidth capacity of the BS, the total number of sensors scheduled in one slot cannot be larger than M, i.e., ∑_{n=1}^{N} u_n(t) ≤ M, ∀t (1). Here, we assume M < N so that the problem is nontrivial. To model the time-varying effect, we assume that the channel link connecting the BS and each sensor n is an ergodic Q-state Markov chain. Denote q_n(t) ∈ {1, 2, ..., Q} as the channel state of link n in slot t. Without loss of generality, we assume that the channel state becomes noisier as the state index becomes larger. Denote P_n = {p^n_{ij}} as the Markov state transition matrix of link n, where the entry p^n_{ij} is the probability of changing into state j in the next slot given the current state i, i.e., p^n_{ij} = Pr(q_n(t + 1) = j | q_n(t) = i) (2). Due to the different channel states, the sensors should use different energy levels, both to save energy and to ensure successful decoding of the packet at the receiver. We denote w(q) as the energy consumed by scheduling when the channel state is q. Notice that the energy consumption tends to be larger as the channel state becomes noisier, to combat the channel fading, i.e., w(1) < w(2) < ... < w(Q).
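A minimal Python sketch of the channel model just described follows; the transition matrix here is our own illustrative example (the paper's matrix, Equation (21), is not reproduced at this point), and states are 0-indexed for convenience:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 4-state transition matrix (rows sum to 1).
P = np.array([[0.6, 0.2, 0.1, 0.1],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.2, 0.5, 0.2],
              [0.1, 0.1, 0.2, 0.6]])

def simulate_channel(P, T, q0=0):
    """Sample a trajectory of the ergodic Markov channel state."""
    q = np.empty(T, dtype=int)
    q[0] = q0
    for t in range(1, T):
        q[t] = rng.choice(P.shape[0], p=P[q[t - 1]])
    return q

# The empirical occupancy approaches the steady distribution {eta_q}.
states = simulate_channel(P, 100_000)
print(np.bincount(states, minlength=P.shape[0]) / states.size)
```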
Besides, due to the power limit of each sensor n, the total average power consumption cannot exceed an upper bound E_n, i.e., lim_{T→∞} (1/T) E_π[∑_{t=1}^{T} u_n(t) w(q_n(t))] ≤ E_n, ∀n (3), where π is a scheduling policy. Given channel state q, we assume that there is a packet loss probability ε_{n,q} on link n due to decoding errors or inaccurate estimation. Age of Information and Age Penalty In the network described above, the BS wishes to obtain the freshest information for further processing or accurate prediction. We model the data staleness at the terminal end as a monotone increasing age penalty function f(·) of the Age of Information (AoI). By definition, the AoI is the difference between the current time slot and the slot in which the freshest data received by the BS was generated by the sensor [3]. Let x_n(t) be the AoI of sensor n in slot t. According to the definition, if the sensor is scheduled in slot t and there is no packet loss, then x_n(t + 1) = 1; otherwise, x_n(t + 1) = x_n(t) + 1. In conclusion, the AoI evolves as follows: x_n(t + 1) = 1 if u_n(t) = 1 and the transmission succeeds, and x_n(t + 1) = x_n(t) + 1 otherwise (4). Problem Formulation For the given network, we measure the data freshness at the terminal side by computing the average age penalty under scheduling policy π, denoted by J(π), i.e., J(π) = lim_{T→∞} (1/(NT)) E_π[∑_{t=1}^{T} ∑_{n=1}^{N} f(x_n(t)) | x(0)] (5), where x(0) = [x_1(0), x_2(0), ..., x_N(0)] states the initial AoI of the system. In this work, we assume that the system is synchronized with all the sensors at the beginning, i.e., x_n(0) = 1, ∀n, and we thus omit x(0) in the further analysis. We denote Π_CP as the set of all causal policies whose decisions are based only on current and historic information while satisfying both the bandwidth and power constraints. Then, our goal is to optimize Equation (5) by choosing a scheduling policy π ∈ Π_CP. Therefore, the primal optimization problem (Problem 1) is to minimize J(π) over π ∈ Π_CP (6a), subject to the hard bandwidth constraint ∑_{n=1}^{N} u_n(t) ≤ M, ∀t (6b). Problem Decomposition Notice that Equation (6b) makes Problem 1 an integer program, where the exponential growth rate of the state and action spaces sets obstacles to solving it. Therefore, we formulate a relaxed version of Problem 1, where the primal bandwidth constraint in every slot is replaced by a time-average bandwidth constraint. We will then show that the relaxed problem can be solved by sensor-level decoupling, which greatly reduces the cardinality of the state and action spaces. Problem 2 (Relaxed Primal Scheduling Problem) minimizes J(π) (7a), subject to the relaxed bandwidth constraint lim_{T→∞} (1/T) E_π[∑_{t=1}^{T} ∑_{n=1}^{N} u_n(t)] ≤ M (7b) and the per-sensor power constraint (3) (7c). Denote π*_R as the optimal policy of Problem 2. The following theorem ensures that the optimal policy of the relaxed problem is composed of several local optimal policies π*_n, each of which depends on its own channel state and AoI evolution regardless of the others. Theorem 1. The optimal policy of Problem 2 can be decoupled into local optimal policies, i.e., π*_R = π*_1 ⊗ π*_2 ⊗ · · · ⊗ π*_N, where each local policy π*_n depends only on its own channel state and AoI evolution. The proof of Theorem 1 is provided in Appendix A. In order to find the local optimal policies, we introduce a Lagrange multiplier λ ≥ 0 to eliminate the relaxed bandwidth constraint; the Lagrange function is L(π, λ) = J(π) + λ(lim_{T→∞} (1/T) E_π[∑_{t=1}^{T} ∑_{n=1}^{N} u_n(t)] − M), where the Lagrange multiplier λ can be seen as a scheduling penalty, which increases the function value if more than M sensors are scheduled per slot on average. For fixed λ, we can now further decouple the relaxed scheduling problem into N single-sensor cross-layer designs, each of which has the corresponding power constraint in Equation (7c). As the resolution of each decoupled problem is independent of n, we omit the subscript n in the further analysis.
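A short sketch of the AoI evolution in Equation (4), with a toy schedule and loss probability of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)

def step_aoi(x, scheduled, loss_prob):
    """One-slot AoI update following Equation (4): a scheduled and successfully
    decoded packet resets the age to 1; otherwise the age grows by 1."""
    if scheduled and rng.random() > loss_prob:
        return 1
    return x + 1

# Toy run for a single sensor: schedule every third slot, 10% packet loss.
x, trace = 1, []
for t in range(12):
    x = step_aoi(x, scheduled=(t % 3 == 0), loss_prob=0.1)
    trace.append(x)
print(trace)
print(np.mean(np.log(trace)))  # average age penalty for f(x) = ln(x)
```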
Constrained Markov Decision Process Formulation First, we notice that the decoupled problem is a constrained Markov decision process whose elements (S, A, Pr(·|·), c(·)) and constraint are explained as follows. • State Space: The state of each sensor consists of two parts: the current AoI x(t) and the channel state q(t). Thus, S = {(x, q)} is infinite but countable. • Action Space: There are two possible actions in the action space A for the scheduling policy, denoted by u(t). Action u(t) = 1 means the sensor chooses to schedule, while u(t) = 0 means idling. Notice that here u(t) does not need to satisfy the bandwidth constraint. • Probability Transition Function: According to Equations (2) and (4), the probability transition function can be written as Pr((1, q′) | (x, q), u) = u(1 − ε_q) p_{qq′} and Pr((x + 1, q′) | (x, q), u) = (1 − u(1 − ε_q)) p_{qq′}. • One-Step Cost: The one-step cost consists of two parts, the age penalty growth and the scheduling penalty, which can be computed by c(x(t), q(t), u(t)) = f(x(t)) + λu(t) (9), and the one-step power consumption is c_E(x(t), q(t), u(t)) = u(t)w(q(t)) (10). Our goal is now to optimize the average one-step cost, lim_{T→∞} (1/T) E_π[∑_{t=1}^{T} c(x(t), q(t), u(t))] (8a), under the average power constraint lim_{T→∞} (1/T) E_π[∑_{t=1}^{T} c_E(x(t), q(t), u(t))] ≤ E. Characterization of the Optimal Policy To search for the stationary optimal policy, we can further introduce another Lagrange multiplier ν ≥ 0 to eliminate the power constraint. The multiplier ν can be viewed as a power penalty, which increases the Lagrange function once the average power consumption exceeds the constraint. Minimizing the resulting Lagrange function for fixed ν then becomes an MDP problem without any constraint. The following lemma ensures that the optimal stationary policy for this MDP problem has a threshold structure. Lemma 1. The optimal stationary policy of the unconstrained MDP problem has a threshold structure, i.e., given state (x, q), there exists a threshold τ_q such that if x ≥ τ_q, then it is optimal to schedule the sensor; otherwise, idling is the optimal action. Proof sketch: The complete proof is provided in Appendices B and C, and is similar to Theorem 2 in [23]. Despite the complex proof, the intuition is simple: if it is optimal to schedule the sensor in state (x, q), then scheduling is also the optimal action in state (x′, q) for all x′ > x, because the AoI is even larger. Linear Programming Approximation Now, we focus on finding the optimal stationary policy. Denote ξ_{x,q} as the scheduling probability given state (x, q); our goal is to find {ξ*_{x,q}} that minimizes the objective function. In this part, we approximate {ξ*_{x,q}} by solving a truncated LP. According to Lemma 1, for the optimal stationary policy, we can set a threshold X > max_q τ_q, and then we have ξ*_{x,q} = 1, ∀x ≥ X. Next, we focus on searching for policies that possess this threshold property, as other policies are far from optimality. Let µ_{x,q} be the steady-state distribution of state (x, q). Then, define a new variable y_{x,q} ≜ µ_{x,q} ξ_{x,q}. The following theorem converts the CMDP problem into an infinite LP problem. Theorem 2. The single-sensor decoupled problem is equivalent to the infinite LP problem of Equations (13a)-(13f). Proof. Let us consider the average cost of Equation (8a) by using µ_{x,q} and y_{x,q}. Invoking Equation (9), the one-step cost of state (x, q) is either f(x) + λ when scheduling or f(x) when idling. Therefore, the time-average cost can be computed as ∑_{x,q} f(x) µ_{x,q} + λ ∑_{x,q} y_{x,q}. Similarly, according to Equation (10), the time-average power consumption is ∑_{x,q} w(q) y_{x,q}, which is exactly the LHS of Equation (13e). Considering the properties of the steady-state probability distribution, Equations (13b) and (13f) are verified.
Finally, notice that the evolution of the state (x, q) forms a Markov chain, as depicted in Figure 1 (top) for Q = 2 as an example. We use α^x_{q,q′} = Pr((x + 1, q′) | (x, q)) and β^x_{q,q′} = Pr((1, q′) | (x, q)) to denote the transition probabilities between the states, which can be computed as β^x_{q,q′} = ξ_{x,q}(1 − ε_q) p_{qq′} and α^x_{q,q′} = (1 − ξ_{x,q}(1 − ε_q)) p_{qq′}. According to the properties of the steady-state distribution, µ_{x,q} equals the sum, over all states that can transition into (x, q) in the next slot, of their steady-state probabilities times the corresponding transition probabilities. As depicted in Figure 1, µ_{2,2} = µ_{1,1} α^1_{1,2} + µ_{1,2} α^1_{2,2} (see the dashed lines). Therefore, we can compute µ_{x,q} accordingly, which is equivalent to Equations (13c) and (13d). Figure 1. Illustration of the state transition graph for Q = 2 channel states without (top) and with (bottom) AoI truncation with AoI threshold X = 3. The numbers in circles are the channel state index q, and the numbers in rectangles are the AoI index x. As the steady-state distribution is infinite, it is difficult to solve the problem exactly. Therefore, we approximate the optimization problem in Theorem 2 by a finite LP problem through truncation (Problem 4). After truncation, the optimal value of Equation (16a) is a lower bound on the objective function of the decoupled problem, Equation (8a). The detailed proof is provided in Appendix D. The key concept is to set a threshold X and convert the Markov chain into a finite-state one (see Figure 1). Moreover, the following theorem guarantees that the lower bound obtained by the above LP problem is tight when X is sufficiently large; thus, the approximate optimal solution {µ̃*_{x,q}, ỹ*_{x,q}} performs close to the exact optimal solution {µ*_{x,q}, y*_{x,q}}. Before stating the theorem, first denote π*_X and π* as the scheduling policies according to the approximate optimal solution {µ̃*_{x,q}, ỹ*_{x,q}} obtained by setting threshold X and the exact optimal solution {µ*_{x,q}, y*_{x,q}}, respectively. Define J_∞(π) as the age and scheduling penalty of the primal problem under policy π, and J_X(π) as the approximate penalty when we set the age penalty f(x) = f(X), ∀x ≥ X. Then, according to Equations (13a) and (16a), we have the following property (Theorem 3), where τ_max = max_q τ_q. As this inequality shows, the difference between the optimal solutions of Theorem 2 and Problem 4 converges to 0 as the threshold X becomes sufficiently large. The entire proof is provided in Appendix E. After solving the above LP problem, we can obtain the approximate optimal scheduling probabilities {ξ̃*_{x,q}} by setting a sufficiently large X and computing {µ̃*_{x,q}} and {ỹ*_{x,q}}. Moreover, analogously to the threshold structure described in Lemma 1, {ξ̃*_{x,q}} also has the following property. Lemma 2. For any state (x, q) of each sensor, the optimal scheduling probability ξ̃*_{x,q} is non-decreasing in x, i.e., ξ̃*_{x,q} ≤ ξ̃*_{x+1,q}. The proof technique is similar to that of Lemma 1, so it is omitted here. Multi-Sensor Problem Resolution By now, through relaxation, decoupling, and truncation, we have obtained the approximate solution to the single-sensor decoupled problem for a fixed scheduling penalty λ. In this section, we go back to solving the multi-sensor problem and propose a truncated policy to meet the hard bandwidth constraint in Equation (6b). The Relaxed Problem Resolution First, we should choose the optimal λ such that the relaxed bandwidth constraint can be fully leveraged. Denote g(λ) = min_π L(π, λ) as the Lagrange dual function, where we choose the approximate optimal policy π*(λ) by solving the LP.
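Before turning to the multiplier search, the sketch below makes the steady-state bookkeeping behind Equations (13c) and (13d) concrete by evaluating a given threshold policy on the truncated chain of Figure 1. It is our own illustration, not the paper's LP of Problem 4; the convention of capping the AoI at X with a self-loop is an assumption consistent with the figure.

```python
import numpy as np

def average_age_penalty(P, eps, tau, X, f):
    """Steady state of the truncated single-sensor chain under a threshold
    policy (schedule whenever the AoI x reaches tau[q], per Lemma 1).

    P is the Q x Q channel transition matrix and eps[q] the packet-loss
    probability in state q. Names and the capping convention are ours.
    """
    Q = P.shape[0]
    idx = lambda x, q: (x - 1) * Q + q            # flatten state (x, q)
    T = np.zeros((X * Q, X * Q))
    for x in range(1, X + 1):
        for q in range(Q):
            u = 1.0 if x >= tau[q] else 0.0       # deterministic threshold rule
            succ = u * (1.0 - eps[q])             # probability the age resets
            x_next = min(x + 1, X)                # truncated age growth
            for q2 in range(Q):
                T[idx(x, q), idx(1, q2)] += succ * P[q, q2]             # beta
                T[idx(x, q), idx(x_next, q2)] += (1 - succ) * P[q, q2]  # alpha
    # Stationary distribution mu: left eigenvector of T for eigenvalue 1.
    w, v = np.linalg.eig(T.T)
    mu = np.real(v[:, np.argmax(np.real(w))])
    mu /= mu.sum()
    x_grid = np.repeat(np.arange(1, X + 1), Q)    # AoI value of each flat state
    return float(np.sum(mu * f(x_grid)))

# Example: two channel states, thresholds tau = [2, 4], f(x) = ln(x).
P = np.array([[0.7, 0.3], [0.4, 0.6]])
print(average_age_penalty(P, eps=np.array([0.1, 0.3]), tau=[2, 4], X=50, f=np.log))
```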
Then, the dual function can be computed as g(λ) = ∑_{n=1}^{N} g_n(λ) − λM, where g_n(λ) = min_π L_n(π, λ) is the decoupled dual function for sensor n. According to the LP approximation, g_n(λ) can be further written as g_n(λ) = X_n(λ) + λ U_n(λ), where X_n(λ) is the average age penalty, given by ∑_{x=1}^{X} ∑_{q=1}^{Q} f(x) µ̃^{n,*}_{x,q}, and U_n(λ) is the average scheduling probability, which equals ∑_{x=1}^{X} ∑_{q=1}^{Q} ỹ^{n,*}_{x,q}. According to the work in [24], the optimal Lagrange multiplier λ* satisfies the complementary slackness condition λ*(U(λ*) − M) = 0. If U(λ*) = M, then the optimal policy is just π(λ*). Otherwise, the optimal policy is a mixture of two policies, denoted by π_l and π_u. We apply the sub-gradient ascent method to find the optimal solution, where the sub-gradient can be computed as U(λ) − M, with U(λ) = ∑_{n=1}^{N} U_n(λ) being the total scheduling probability. Choose λ_0 = 0 as the starting point, and compute the average scheduling probability U(λ_0). If U(λ_0) < M, then the bandwidth constraint does not need to be considered, and the optimal solution has already been found. Moreover, this optimal solution can also be viewed as the lower bound of the primal optimization problem. Otherwise, we need to increase the scheduling penalty through iterations. The update operation in iteration k can be written as λ_{k+1} = λ_k + t_{k+1}(U(λ_k) − M), where t_{k+1} is the step size in iteration k. The step size is determined using a constant γ ∈ (0, 1); this choice guarantees that the algorithm converges from both sides. Therefore, after running the whole algorithm, we obtain two different scheduling probabilities, M_l and M_u, with M_l ≤ M ≤ M_u. Their corresponding optimal policies are denoted {µ̃^{n,l}_{x,q}, ỹ^{n,l}_{x,q}} and {µ̃^{n,u}_{x,q}, ỹ^{n,u}_{x,q}}. Then, the optimal stationary policy can be obtained by mixing these two policies: {µ̃^{n,*}_{x,q}, ỹ^{n,*}_{x,q}} = θ{µ̃^{n,u}_{x,q}, ỹ^{n,u}_{x,q}} + (1 − θ){µ̃^{n,l}_{x,q}, ỹ^{n,l}_{x,q}}, where the mixing coefficient is θ = (M − M_l)/(M_u − M_l). We have now obtained the optimal stationary policy of the relaxed scheduling problem; the algorithm flow chart is listed in Algorithm 1. Once we obtain {µ̃^{n,*}_{x,q}, ỹ^{n,*}_{x,q}}, the optimal scheduling probability can be computed as ξ̃^{n,*}_{x,q} = 1 if x > X, µ̃^{n,*}_{x,q} = 0, or ξ̃^{n,*}_{x−1,q} = 1, and ξ̃^{n,*}_{x,q} = ỹ^{n,*}_{x,q}/µ̃^{n,*}_{x,q} otherwise. Truncation for the Hard Bandwidth Constraint Finally, a bandwidth-truncated policy π̂_X is derived from the optimal stationary policy π*_X to satisfy the hard bandwidth constraint in Equation (6b). Before introducing the truncated policy, first denote S(t) as the set of sensors to be scheduled in slot t, and |S(t)| as the number of sensors to be scheduled in slot t. Then, the construction of π̂_X is carried out as follows. • In slot t, compute the scheduling set S(t) according to the optimal stationary policy π*_X. • If |S(t)| ≤ M, then π̂_X schedules all these sensors as π*_X does. • If |S(t)| > M, the hard bandwidth constraint would be violated. Therefore, π̂_X randomly chooses M out of the |S(t)| sensors to be scheduled in the current slot. The following theorem guarantees the asymptotic performance of the truncated policy π̂_X compared with π*_X under certain conditions. Theorem 4. Suppose the age penalty function is concave, and let κ = M/N be a constant. If all the sensors and their channels are identical, i.e., the power constraints and the channel transition matrices are the same, then the truncated policy π̂_X and the optimal randomized policy π*_X satisfy lim_{N→∞} J(π̂_X) − J(π*_X) = 0. The whole proof is provided in Appendix F.
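A minimal sketch of one slot of the bandwidth-truncated policy just constructed, in Python (function and variable names are our own conventions):

```python
import numpy as np

rng = np.random.default_rng(2)

def truncated_schedule(xi, x, q, M):
    """One slot of the bandwidth-truncated policy described above.

    xi[n] is sensor n's stationary scheduling-probability table with rows for
    AoI values 1..X and columns for channel states (as obtained from the LP);
    x[n] and q[n] are its current AoI and channel state. The clipping of the
    AoI at the table edge is our own convention.
    """
    N = len(x)
    S = []
    for n in range(N):
        row = min(x[n] - 1, xi[n].shape[0] - 1)   # AoI starts at 1; rows 0-based
        if rng.random() < xi[n][row, q[n]]:
            S.append(n)                           # tentative scheduling set S(t)
    if len(S) <= M:
        return S
    # Hard bandwidth constraint: keep a uniformly random subset of size M.
    return list(rng.choice(S, size=M, replace=False))
```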
Simulation Results

In this section, we provide simulation results to verify the performance of the proposed policy. First, we study the average age penalty with different types of sensors, different bandwidth constraints, and different AoI truncation thresholds $X$. Next, we study the detailed scheduling decision of each sensor. The average performance is obtained by simulating $10^5$ consecutive slots.

Average Age Penalty Performance

In this part, we demonstrate the average performance of our proposed policy. We consider a 4-state channel system, i.e., $Q = 4$. The age penalty function is chosen as $f(x) = \ln(x)$ unless otherwise specified. The transition matrix $P_n$ is the same for every sensor and is given in Equation (21). Denote $\{\eta_q\}$ to be the stationary distribution of the channel state. We consider that for each channel state $q$, the energy consumption is $w(q) = q$. According to [12], the optimal policy to minimize the average AoI when all the sensors are identical is a greedy policy, which schedules the $M$ sensors with the largest AoI and consumes an average power per sensor of $E_G = \frac{M}{N}\sum_{q=1}^{Q} \eta_q w(q)$. Therefore, define $\rho_n = E_n/E_G$ to be the power constraint factor, which describes the effect of the power consumption constraint $E_n$.

Figure 2 demonstrates the average age penalty of the proposed policy $\hat{\pi}_X$ as a function of the number of sensors $N$, with bandwidth constraints $M = \{5, 15\}$, compared with the lower bound, the relaxed optimal policy $\pi^*_X$, and the greedy policy. We set the threshold $X = \lceil 20N/M \rceil$, where $\lceil\cdot\rceil$ is the ceiling function. We assume that the probability of packet loss is the same for each sensor, denoted by $\varepsilon$. The power constraint factor of sensor $n$ is $\rho_n = 0.2 + \frac{1.4(n-1)}{N-1}$. As seen in Figure 2, the proposed truncated policy performs close to the relaxed optimal policy and the lower bound, and outperforms the greedy policy, especially when $N$ is large. According to Figure 2, with $N = 60$ sensors the age penalty under the proposed policy decreases by 18% and 23% relative to the greedy policy when $M = 5$ and $M = 15$, respectively. Moreover, as the threshold $X$ becomes large, the difference between the average performance of policy $\pi^*_X$ and the lower bound becomes indistinguishable; the asymptotic performance described in Theorem 3 is thus verified. From Figure 3, we can see that the AoI-minimum policy cannot guarantee a good age penalty performance. Thus, it is necessary to consider the different demands for data freshness to achieve better performance.
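For reference, the greedy baseline used in these comparisons (schedule the $M$ sensors with the largest AoI) can be emulated with a short simulation. The transition matrix of Equation (21) is not reproduced in the text, so a uniform matrix and illustrative loss probabilities stand in for it below, and the per-sensor power accounting is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small instance mirroring the described setup (Q = 4, w(q) = q).
N, M, Q, T = 8, 2, 4, 10_000
P = np.full((Q, Q), 1.0 / Q)            # stand-in for Eq. (21)
eps = np.array([0.3, 0.2, 0.1, 0.0])    # packet-loss probability per state

x = np.ones(N)                          # AoI of each sensor
q = rng.integers(0, Q, size=N)          # channel state of each sensor
penalty = 0.0
for _ in range(T):
    sched = np.argsort(-x)[:M]          # greedy: the M largest AoIs
    success = rng.random(M) < (1.0 - eps[q[sched]])
    x += 1.0                            # AoI grows by one slot...
    x[sched[success]] = 1.0             # ...and resets on successful delivery
    # channel states evolve independently according to P
    q = np.array([rng.choice(Q, p=P[qi]) for qi in q])
    penalty += np.log(x).sum()

print("average age penalty per sensor:", penalty / (T * N))
```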
Sensor-Level Analysis and Threshold Structure

Next, we analyze the scheduling decision of each sensor and the corresponding age penalty to provide some insights into optimal scheduling policies. We consider a system with $N = 8$ and $M = 2$. The transition matrix of the channel state is the same as Equation (21), and the power consumption is $w(q) = q$. We set the threshold $X = 80$ to compute the proposed policy. First, we consider the system with $Q = 4$ and age penalty function $f(x) = \ln x$. Figure 6 analyzes how the power constraint influences the age penalty of each sensor, with the power constraint of sensor $n$ set to $\rho_n = 0.2n$. From Figure 6, we can see that the proposed policy outperforms the greedy policy when the power budget is scarce, and performs similarly or slightly worse when the factor $\rho_n > 1$. This implies that our proposed policy chooses a more appropriate power allocation than the greedy policy, based on the current channel state and AoI, by encouraging sensors with scarce power budgets to be scheduled in better channel states.

As packet loss influences the age penalty as well, Figure 7 considers sensors with different packet-loss probabilities, written as the matrix $\{\varepsilon_{n,q}\}$ in Equation (22). We fix the power constraint factor $\rho_n = 0.6$ for all sensors. Figure 7 shows that the average age penalty increases with the probability of packet loss. Moreover, the proposed policy copes with packet loss better than the greedy policy, as the proposed policy accounts for $\varepsilon_{n,q}$ when solving the LP problem, while the greedy policy does not.

Next, we verify the threshold structure of the optimal scheduling policy. Figure 8 demonstrates the effect of bandwidth and packet loss on the scheduling threshold. The power constraint factor is $\rho_n = 0.6, \forall n$, and the packet loss probability is the same as in Equation (22). Subfigures (a-c) show three of these sensors, whose packet-loss probabilities are given in the subfigure titles. For each of the three sensors, subfigures (d-f) consider the corresponding single-sensor system without a bandwidth constraint and display its scheduling probability. Moreover, Figure 8 lists some of the thresholds for given channel states $q$; e.g., in subfigure (a), the threshold of channel state $q = 3$ is $x = 7$, and the corresponding optimal scheduling probability is $\xi_{7,3} = 0.9963$. From Figure 8, first, all six panels verify the nondecreasing property of the scheduling probability with AoI $x(t)$ described in Lemma 2. Second, subfigures (a-c) demonstrate that a sensor with a higher packet-loss probability also has a higher scheduling threshold. This implies that sensors with more reliable channels should be given higher scheduling priority than unreliable ones in order to minimize the average age penalty, since scheduling the more reliable channel at the same AoI is more likely to reduce the current age penalty. Third, comparing subfigures (a) and (d), (b) and (e), and (c) and (f), the scheduling threshold varies more significantly across channel states if there is no bandwidth constraint: the sensors tend to update more often when the channel state is good and idle when the channel state is bad. This is because, without a bandwidth constraint, the sensors can choose to update more frequently in good channel states, which both saves energy and increases the success probability of transmission.

Finally, we study the effect of the age penalty function on the threshold structure. Here, we consider a system with $Q = 2$, $\rho_n = 0.2n$, and three different penalty functions, i.e., $f(x) = \ln x$, $f(x) = x$, and $f(x) = x^2$, in Figure 9, plotting the scheduling decision of the sixth sensor. As depicted in Figure 9, when the system places a stronger premium on data freshness, such as a linear or quadratic penalty, the difference between the thresholds of different states becomes small. In such situations, channel states play a weaker role, because waiting for another slot to schedule tends to incur an unbearable age penalty.
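As a small utility, the per-state thresholds $\tau_q$ displayed in Figure 8 can be read off a scheduling-probability table such as the one produced by the LP sketch earlier. This is a hedged sketch; the tolerance is an arbitrary choice, not a value from the paper.

```python
import numpy as np

def thresholds(xi, tol=1e-3):
    """Extract per-channel-state scheduling thresholds tau_q from a
    scheduling-probability table xi[x, q] (x = 1..X down the rows).
    Relies on the monotonicity of Lemma 2: xi is nondecreasing in x."""
    X, Q = xi.shape
    tau = np.full(Q, X, dtype=int)
    for q in range(Q):
        hit = np.nonzero(xi[:, q] >= 1.0 - tol)[0]
        if hit.size:
            tau[q] = hit[0] + 1          # convert 0-based row to an AoI value
    return tau

# e.g. thresholds(xi) for the xi computed by the earlier LP sketch
```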
Conclusions

In this paper, we consider the multi-sensor scheduling problem over an error-prone Markovian channel. Through relaxing and decoupling, we propose a truncated policy that satisfies both the bandwidth and power constraints while minimizing the average age penalty of all sensors over an infinite horizon. We prove the asymptotic performance of the truncated policy in symmetric networks when the age penalty function is concave, by choosing a sufficiently large threshold $X$. Through theoretical analysis and numerical simulations, we find that the age penalty function, packet loss probability, bandwidth constraint, and power constraint jointly influence the optimal scheduling decisions. Sensors with more reliable channel states and sufficient power budgets tend to have higher scheduling priority.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A. Proof of Theorem 1

First, we notice that Problem 2 is equivalent to the following optimization problem, where we further introduce variables $\nu_n$ to denote the local bandwidth constraint of sensor $n$.

Problem A1 (Equivalent Relaxed Primal Scheduling Problem). For each feasible fixed local bandwidth constraint vector $\{\nu_n\}$, we can transform the above problem into the following one by removing constraints (A1c) and (A1e).

Problem A2 (Relaxed Problem for Fixed $\{\nu_n\}$), with optimal value $\mathrm{Age}^*_R(\nu_1, \cdots, \nu_N)$. Then, according to the work in [25], the optimal policy of Problem 6 given $\{\nu_n\}$ can be decoupled into several local policies. This is somewhat intuitive, as the constraints and objective function of Problem 6 are decoupled across the sensors $n$. Since the optimal scheduling policy can be decoupled for each feasible $\{\nu_n\}$, and recalling that $\mathrm{Age}^*_R = \min_{\nu_1,\cdots,\nu_N} \mathrm{Age}^*_R(\nu_1, \cdots, \nu_N)$, the property also holds when $\{\nu_n\}$ takes the optimal value $\{\nu^*_n\}$.

Appendix B. Proof of Lemma 1

Before we proceed to the proof, we make two definitions. For any two states $s = (x, q)$ and $s' = (x', q) \in S$, define a partial order $\le_s$: we state that $s \le_s s'$ if and only if $x \le x'$. Moreover, we also define a partial order $\le_a$ on the action space $A$: we state that $u \le_a u'$ if and only if $u \le u'$. The monotonicity of the optimal action on the state space holds if the following four conditions are satisfied:

1. If $s \le_s s'$, then $c(s, u) \le c(s', u)$ for any $u \in A$;
2. If $s \le_s s'$ and $u \le_a u'$, then $c(s, u) + c(s', u') \le c(s', u) + c(s, u')$;
3. If $s \le_s s'$, then the transition kernel is stochastically monotone, i.e., the distribution of the next state $s^+$ under $s'$ dominates that under $s$ for any $u \in A$;
4. If $s \le_s s'$ and $u \le_a u'$, then the transition kernel is submodular in $(s, u)$, in the same pairwise sense as condition 2,

where $s, s' \in S$, $u, u' \in A$, $s^+ \in S$ is the next state, and $c(s, u)$ denotes the one-step cost given the state $s$ and action $u$. Next, we consider a discounted-cost MDP over a finite horizon with discount factor $\beta$, whose Bellman equation is $V_{k+1,\beta}(s) = \min_{u \in A}\{c(s, u) + \beta\,\mathbb{E}[V_{k,\beta}(s^+) \mid s, u]\}$. If the above four conditions are satisfied and the Bellman function is monotone increasing, then the one-step cost $c(s, u) = c_x(x, q, u) + \nu c_E(x, q, u)$ is monotone and submodular in $s$ and $u$, which shows that there exists an optimal monotone policy for any finite-horizon MDP. Using the vanishing-discount approach of Theorem 5.5.4 in [26], the monotonicity property is propagated to the time-average MDP. Before verifying the above four conditions, we first introduce the following lemma, whose proof is provided in Appendix C, to ensure that $V_{k,\beta}(x, q)$ is monotone increasing in $x$.

Lemma A1. For a fixed channel state $q$ and discount factor $\beta$, $V_{k,\beta}(x, q)$ is monotone increasing in $x$.

Therefore, we only need to show that the decoupled unconstrained problem satisfies the above four conditions. Notice that the one-step cost is given by Equation (A6). According to the definitions of the partial orders $\le_s$ and $\le_a$ and Equation (A6), conditions 1 and 3 are easily verified. Moreover, according to Equation (A7) and the fact that $V(x, q) < V(x', q)$ for all $x < x'$, it is also straightforward to verify that conditions 2 and 4 hold.
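To make the resulting monotone structure tangible, here is a minimal numerical sketch: value iteration on a single-sensor toy instance, checking that the optimal action is monotone in the AoI for each channel state. All parameters are illustrative stand-ins; the one-step cost follows the form $c = c_x + \nu c_E$, with $c_x = f(x)$ and $c_E = w(q)$ assumed for concreteness.

```python
import numpy as np

X, Q = 40, 2
beta, nu = 0.95, 0.3                    # discount factor, Lagrange weight
P = np.array([[0.8, 0.2], [0.3, 0.7]])  # hypothetical channel transitions
eps = np.array([0.2, 0.05])             # packet loss per channel state
w = np.array([1.0, 2.0])                # energy cost w(q)
f = np.log(np.arange(1, X + 1))         # concave age penalty f(x) = ln x

V = np.zeros((X, Q))
for _ in range(500):                    # finite-horizon value-iteration sweeps
    EV = V @ P.T                        # EV[x, q] = E[V(x, q') | q]
    nxt = np.minimum(np.arange(1, X + 1), X - 1)   # truncated AoI growth
    q_idle = f[:, None] + beta * EV[nxt, :]
    q_sched = (f[:, None] + nu * w[None, :]
               + beta * ((1 - eps)[None, :] * EV[0, :]   # success: AoI -> 1
                         + eps[None, :] * EV[nxt, :]))   # failure: AoI grows
    V = np.minimum(q_idle, q_sched)

policy = (q_sched <= q_idle).astype(int)  # 1 = schedule
print("action monotone in AoI for each q:",
      all(np.all(np.diff(policy[:, q]) >= 0) for q in range(Q)))
```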
Appendix C. Proof of Lemma A1

In this part, we prove that $V_{k,\beta}(x, q)$ in the finite-horizon MDP is an increasing function of $x$; the key method of the proof is induction.

Invoking Equations (A8) and (A9), the above equality is equivalent to Equation (16e). Notice that the optimal action is to schedule when $x \ge X$, i.e., $\xi^*_{x,q} = 1$; thus $y^*_{x,q} = \mu^*_{x,q}$ for all $x \ge X$, from which Equation (16h) follows. By now, through variable substitution, we have verified that the constraints of the finite LP problem are equivalent to those of the original optimization problem. Therefore, the optimal solution to the finite LP problem is also feasible for the original problem. Turning to the objective function, we have verified that the decoupled single-sensor problem is approximated by the LP problem, which has the same constraints and whose optimal value is a lower bound for the original problem.

Moreover, $\sum_{q=1}^{Q} \rho^X_{x,q}$ can be bounded through steps (a)-(c), where $\mathbf{1}$ is the all-one vector: (a) holds because of Equation (A16), (b) holds due to Equation (A17), and (c) holds because $\mathbf{1}$ is an eigenvector of $P$.

Appendix F. Proof of Theorem 4

According to Lemma 2, the optimal policy for every decoupled single-sensor problem has the threshold structure. Let $\tau^n_q$ be the threshold of sensor $n$ given channel state $q$. Denote $\Gamma_n = \max_q \tau^n_q - \min_q \tau^n_q$ the largest difference between the thresholds of sensor $n$, and $\Gamma = \max_n \Gamma_n$. As all the sensors are identical, $\Gamma$ does not change with $N$. Moreover, let $\bar{S}(t) = \mathbb{E}_{\pi^*_X}[|S(t)|]$. Suppose that sensor $n$ is not scheduled under $\hat{\pi}_X$ in a slot where $u_n(t) = 1$. Now consider the probability that it is still not scheduled in the next slot, which can happen for two reasons: it jumps into a state with a higher scheduling threshold, or there are still more than $M$ sensors to be scheduled. Let $p$ be the probability that the channel state jumps into a state having a higher scheduling threshold; then the probability of idling in the next slot, denoted $p_{\rm idle}$, can be computed accordingly. Notice that $p$ can be upper bounded as $p \le \max_{n,q} \sum_{q' \ne q} p^n_{qq'} = \max_{n,q}(1 - p^n_{qq})$. Therefore, $p_{\rm idle}$ can be upper bounded by a constant $z$, and it follows that the probability of remaining unscheduled for $k$ consecutive slots is upper bounded by $z^{(k - \Gamma_n)^+}$, where $(\cdot)^+ = \max(\cdot, 0)$.

Now we bound the performance difference between $\hat{\pi}_X$ and $\pi^*_X$ by introducing another policy $\tilde{\pi}_X$. Under $\tilde{\pi}_X$, when $|S(t)| > M$, all these sensors are scheduled as under $\pi^*_X$, but an extra penalty $a^n_x(t)$ is added for those sensors; in its derivation, step (a) holds because $f(x)$ is concave. For simplicity, denote $A = \Gamma + \frac{1}{1-z}$, which is a constant once the channel transition matrix and power constraint are fixed. As the age penalty function $f(x)$ is concave, the average age penalty under $\tilde{\pi}_X$ does not decrease compared with $\hat{\pi}_X$. The difference between $J(\hat{\pi}_X)$ and $J(\pi^*_X)$ can then be bounded accordingly. Notice that when $x > X$, the optimal action is to schedule; hence the probability that $x_n(t) > X$ is upper bounded by $(\max_{n,q} \varepsilon_{n,q})^{x-X}$. For simplicity, let $\rho = \max_{n,q} \varepsilon_{n,q}$. Then, for every $\epsilon > 0$, there exists $k = \ln\epsilon/\ln\rho$ such that the stationary distribution $\mu^n_{x,q}$ can be bounded for all $x > X + k$ and all $n$. We thereby obtain a bound, Equation (A19), in terms of $\big||S(t)| - \bar{S}(t)\big| + \big|\bar{S}(t) - M\big|$ and $\mathbb{I}_{x_n(t) > X + k}\, f(x_n(t))$. As $f(x)$ is concave, it can be upper bounded by a linear function, i.e., $f(x) \le mx$ for all $x > X + k$.
Therefore, the second term in the above inequality can be further bounded: by choosing $\epsilon = N^{-1}$ and $k = O(\ln N)$, the second term in Equation (A19) converges to 0 as $N$ becomes large. For the first term in Equation (A19), according to the work in [27], the expectation of $\big||S(t)| - \bar{S}\big|$ satisfies a concentration property. In addition, as policy $\pi^*_X$ satisfies the relaxed bandwidth constraint, combining these bounds shows that, as $T \to \infty$, $J(\hat{\pi}_X) - J(\pi^*_X)$ is upper bounded by a quantity that vanishes with $N$. As the threshold $X$ does not increase with $N$, $J(\hat{\pi}_X) - J(\pi^*_X)$ converges to 0 as $N$ becomes infinite. Thus, the asymptotic performance of the truncated policy has been proven.
8,894.2
2021-01-01T00:00:00.000
[ "Computer Science" ]
China White: Clinical Insights of an Evolving Designer Underground Drug

Clandestine drug production, use, and exploitation present a social issue afflicting millions across the globe. "Designer opioids," some known on the street as "China White" for their alleged purity, have been unveiled in the past years and noted to possess properties that can compromise well-being in a remarkably novel way. This class of designer opioids, the 4-phenylpiperidines, is generating significant medical concern due to its deadly clinical manifestations, which may be further complicated by ease of access and potent addictive properties. Potential drug adulteration with other chemical formulations further complicates the medical scenario, posing a serious challenge in the management of presumed overdoses. Furthermore, if the key clinical manifestations are overlooked because of confounding signs and symptoms, patient mortality may rise even further. Thus, the purpose of this communication is to create awareness about these novel agents and their potentially devastating clinical complications. We strongly support empiric treatment with naloxone per the currently established guidelines. In addition, we urge practitioners to carefully document findings relating to recovery time and historical data to aid in early detection and precise decision-making in suspected intoxication cases.

Introduction

Clandestine drug manufacturing is a long-standing issue affecting many countries worldwide. In the sixties, the main drugs for recreational use included cocaine, heroin, amphetamines, and LSD; during the seventies and eighties the issue escalated with the advent of the "designer drugs": a new class of chemical substances produced in underground laboratories that offered an easier and more cost-effective alternative to heroin production [1]. More recently, the fingerprints of lethal opioid overdoses have shown an interesting change over the last twenty-four months, namely in areas of Rhode Island and Pennsylvania [2,3], where approximately sixty-four deaths were recorded in relation to a new class of "designer" opioids that, according to some news reports, may be migrating further into the southeast [2]. The 4-phenylpiperidines, the cohort of synthetic opioids from which these "designer narcotics" are derived, are related to a common anesthetic drug known as fentanyl [2,4]. The latter acts as a robust μ-opioid receptor agonist within the central nervous system (CNS), causing exhilaration, respiratory depression, and analgesia [4]. In the black-market world, a spectrum of synthetic opioid derivatives has been identified as "China White," named after their presumed purity [5]. Several official sources have equated this nomination to anecdotal evidence obtained from various reputable forums related to the use and handling of these synthetic opioids, revealing that "China White" also refers to other members of the 4-phenylpiperidines, including p-fluorofentanyl, α-methylfentanyl, and, at times, heroin that is also thought to be "pure" [2]. It appears that the name "China White" alludes to the purity of the substance more so than to its factual identity, according to information noted in several of these subject forums [2].
However, considering the variability of compounds that may exist in a given sample, the inconsistency of dosages, and their potency (typically ranging from fifteen to three hundred times the potency of morphine [6]), emergency rooms (ER) and intensive care units (ICU) may be facing serious considerations in the diagnosis and management of this new realm of modern-day opioid overdoses.

Background

Clinically, the classic teaching for the identification of isolated opioid overdose proposes a specific triad, consisting of (a) decreased respiratory drive, as determined by a respiratory rate of fewer than twelve breaths per minute, (b) pupillary constriction on eye exam, and (c) altered mental status. Altogether, these clinical manifestations are commonly known as the "heroin overdose syndrome" [7]. Once identified, naloxone, a strong μ-opioid receptor competitive antagonist [8,9], is administered as first-line therapy in progressive empiric dosing to achieve normalization of the respiratory rate and mental status [10], followed by the appropriate disposition based on global clinical assessment and response. This is followed by monitoring for four to six hours after clinical improvement is noted. Subsequent monitoring for secondary complications, such as rhabdomyolysis, prolonged hypoxemia, myoglobinuric kidney failure, and hypothermia, is frequently recommended, particularly in high-risk populations such as the opioid dependent and the elderly [11]. Evidently, modern-day designer opioid formulations pose potential challenges for acute care clinicians, primarily due to the lack of information on the exact constituents of these illicit substances. Furthermore, experimental findings from animal models warrant improved diagnosis, treatment, monitoring, and reporting methods for suspected designer opioid toxicities [12].

Designer Opioid Facts

A health alert issued by the Centers for Disease Control (CDC) in 2013 reported fourteen deaths, between the ages of nineteen and fifty-seven, attributed to the use of acetyl-fentanyl in the state of Rhode Island [2,13]; fifty further cases were reported in Pennsylvania [3]. The production of "designer" opioids commonly involves the use of other substances (i.e., heroin), which often act as surplus adulterants and may further complicate the medical scenario and clinical manifestations [14]. Both formal literature and anecdotal reviews indicate that acetyl-fentanyl chemical mixtures may include heroin constituents. Furthermore, the drug can potentially be packed as pills and sold on the street as oxycodone [14]. However, the most common form of the substance is powder, typically found in containers labeled "not for human use" or "experimental"; it can thereby be easily accessed from a socio-legal standpoint.

Pharmacokinetics

Related to fentanyl, 4-phenylpiperidines such as α-methylfentanyl, p-fluorofentanyl, and the iconic acetyl-fentanyl are all thought to be products of contamination during key steps of fentanyl synthesis [15]. As such, any clandestinely synthesized opioid should be suspected to contain some proportion of each in the context of an acute intoxication, namely if any historical findings suggest the use of "China White" or other related narcotics.
Additional findings that should prompt a similar suspicion are a presentation for acute opioid overdose in which the container is labeled as discussed previously, or presentation in any local community where 4-phenylpiperidine synthesis represents a social concern. Fentanyl's analgesic effects are known to be approximately eighty times the strength of morphine, as determined by ED50 in clinical studies [6]. Compared to morphine, the popular acetyl-fentanyl is approximately five times stronger, whereas α-methylfentanyl is nearly three hundred times the potency of morphine. It should also be noted that, given the variable affinity of these mixed compounds, a half-life is difficult to predict for multiple reasons, including systemic distribution, hepatic function, renal function, GI transit (if taken orally), the presence of adulterants, and receptor saturation, among others [11]. Unlike narcotic formulations that exhibit highly predictable onset of effects, duration, and excretion times, naloxone has a shorter half-life than many drugs of ingestion. Thus, after the effects of naloxone begin to regress, a relapse of symptoms may take place in the setting of a longer-acting narcotic, as may be seen with 4-phenylpiperidines with slower onset of symptoms [11].

Diagnosis and Treatment

The classic triad of pupillary constriction, altered mental status, and respiratory depression may often prompt the clinical suspicion of opioid toxicity. However, given the propensity for many of these substances to be used as part of a greater comorbid paradigm (i.e., multidrug use), in the presence of adulterant substances [14], or in altered physiological states (e.g., opioid-dependent and elderly patients), pupillary constriction and mental status can often be misleading if absent [11]. Nevertheless, few substances other than opioids actually cause significant respiratory depression (to a rate of fewer than twelve breaths per minute), making it a more reliable indication of whether narcotics are a likely enough culprit to treat empirically for acute opioid intoxication [11]. Conventional treatment guidelines for opioid intoxication indicate that an initial dose of naloxone (0.04 mg) should be administered while watching for a notable increase in the patient's respiratory rate, which should be evident 2-3 minutes after the initial dose. Thereafter, if there is no response, escalating doses of 0.5 mg, 2 mg, 4 mg, 10 mg, and 15 mg should be attempted until an increase in the ventilation rate is recognized; this establishes responsiveness to the naloxone treatment protocol. At times, subsequent monitoring and repeated administrations may be suitable, including formal admission to the ICU [11]. If intoxication with any other opioid is suspected, treatment should proceed in accordance with the aforementioned guidelines, including complete and continuous clinical care. On the other hand, if intoxication with "designer" opioids, "China White," or other "synthetic heroin" is suspected, current reports posit that many of these 4-phenylpiperidine molecules test "positive" for fentanyl on ELISA screening but are not identified as fentanyl on gas chromatography/mass spectrometry (GC/MS) [16]. Therefore, CDC recommendations establish that ELISA screening should be obtained and, if positive, GC/MS may be performed for clinical assurance and validation [2,3].
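For illustration only, the stepwise escalation described above can be written as a short sketch. This is not clinical software: the dose ladder and the 2-3 minute reassessment interval are taken from the text, while respiratory_rate, give_dose, and wait are hypothetical hooks standing in for bedside assessment and administration.

```python
# Minimal sketch of the stepwise naloxone titration described above.
DOSE_LADDER_MG = [0.04, 0.5, 2, 4, 10, 15]
TARGET_RR = 12          # breaths/min: threshold for significant depression
REASSESS_MIN = 3        # reassess 2-3 minutes after each dose

def titrate(respiratory_rate, give_dose, wait):
    for dose in DOSE_LADDER_MG:
        give_dose(dose)             # administer the next step of the ladder
        wait(REASSESS_MIN)          # allow the dose time to act
        if respiratory_rate() >= TARGET_RR:
            return dose             # responsiveness to naloxone established
    return None                     # no response: reconsider the diagnosis
```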
Thus, the recommendations per this communication are to treat suspected intoxications according to the established guidelines, paying close attention to the documentation of specific clinical progressions. In cases of "designer" opioids, consider screening with an ELISA assay for fentanyl and, if positive, pursue confirmatory testing with GC/MS to identify culprit substances; these can be correlated with time to improvement after naloxone therapy and time to symptom relapse, in order to optimize observation periods and monitoring practices for clinical decision-making.

Summary

We are living in times of much fortune, and it manifests in different forms: wealth, sovereignty, freedom, and perhaps even limitless access to internet resources, one of the major manifestations of our media extravaganza. Ironically, these web-based means contain all sorts of information, some of which offer a direct path to misfortune through the dissemination of material for the production of illicit synthetic substances or "designer" drugs [1,14]. This is troubling in an era where addictive disorders represent one of the world's major socioeconomic burdens, as they embody disease, criminality, and death [17]. Meanwhile, the deaths associated with these so-called "designer" drug practices are mounting almost inaudibly [2,3,13]. In the medical arena, proactive clinical considerations ought to be taken when suspected opioid intoxication cases are encountered [18]. In most cases, clinical suspicion may suffice to pursue empiric treatment with naloxone, considering that there are no significant side effects to its use in such circumstances; this treatment protocol should proceed as recommended by the established treatment guidelines for opioid intoxication [11]. Substance identification is nevertheless suggested, as it may eventually lead to more directed clinical decision-making with guided monitoring practices when potential intoxication cases are encountered. Initially, GC/MS may not be a practical option in the setting of acute opioid intoxication; however, ELISA immunoassay screening is suggested based on anticipatory clinical suspicion [2,3,16]. In addition, it would be interesting to correlate clinical progressions with both the naloxone regimen and "time to recovery" data, as they may play a significant role in monitoring recommendations and the length of treatment when "designer" opioid use is suspected. In the context of agents added by users, there are other considerations, such as fungemias, commonly acquired from the use of lemon juice as a liquefying agent, and other blood-borne infections related to "cutting" agents, that may be of therapeutic interest in patients with further clinical manifestations. These may include, but are not limited to, sepsis, hepatic failure, active/ongoing HIV/AIDS, or renal failure in the setting of secondary rhabdomyolysis. Altogether, these conditions have been widely documented in the literature on illicit intravenous drug use and should be carefully considered when clinical opioid toxicity crosses paths with other acute clinical syndromes. Drug addiction is a terrible disease of modern society that warrants continued scientific attention, laws, and policy reform. The information presented herein adds to the growing body of evidence on drug addiction epidemics.
Although opioid "designer" drugs are not currently a major addiction concern, they can potentially add to the existing chronicle of substance dependence disorders, particularly if clandestine heroin supplies decline. Proactively, drug addiction scientists, epidemiologists, and physicians ought to continue collaborating on research investigations to deliver new insights for a better understanding of addictive disorders and improved treatments with clinically sustainable options, and, more importantly, to work toward a potential cure.
2,839.2
2015-01-01T00:00:00.000
[ "Biology" ]
The High Energy X-ray Probe (HEX-P): A New Window into Neutron Star Accretion

Accreting neutron stars (NSs) represent a unique laboratory for probing the physics of accretion in the presence of strong magnetic fields ($B \gtrsim 10^8$ G). Additionally, the matter inside the NS itself exists in an ultra-dense, cold state that cannot be reproduced in Earth-based laboratories. Hence, observational studies of these objects are a way to probe the most extreme physical regimes. Here we present an overview of the field and discuss the most important outstanding problems related to NS accretion. We show how these open questions regarding accreting NSs in both low-mass and high-mass X-ray binary systems can be addressed with the High-Energy X-ray Probe (HEX-P) via simulated data. In particular, with the broad X-ray passband and improved sensitivity afforded by a low X-ray background, HEX-P will be able to 1) distinguish between competing continuum emission models; 2) provide tighter upper limits on NS radii via reflection modeling techniques that are independent of and complementary to other existing methods; 3) constrain magnetic field geometry, plasma parameters, and accretion column emission patterns by characterizing fundamental and harmonic cyclotron lines and exploring their behavior with pulse phase; 4) directly measure the surface magnetic field strength of highly magnetized NSs at the lowest accretion luminosities; as well as 5) detect cyclotron line features in extragalactic sources and probe their dependence on luminosity in the super-Eddington regime in order to distinguish between geometrical evolution and accretion-induced decay of the magnetic field. In these ways HEX-P will provide an essential new tool for exploring the physics of NSs, their magnetic fields, and the physics of extreme accretion.

INTRODUCTION

Accreting Neutron Stars

Accretion is a ubiquitous process in the Universe, from the formation of stars and planets to supermassive black holes at the centers of galaxies. X-ray binary systems are composed of a compact object (CO), either a black hole (BH) or neutron star (NS), that accretes from a stellar companion. These systems can be further categorized based on the mass of the stellar companion: low-mass and high-mass (see Longair 2011 for a more detailed review). Figure 1 shows a simple schematic for accretion onto NSs in several types of binary systems. Low-mass X-ray binaries (LMXBs) have a $\lesssim 1\,M_\odot$ companion that transfers matter via Roche-lobe overflow to form an accretion disk around the compact object. High-mass X-ray binaries (HMXBs) have a more massive ($\gtrsim 5\,M_\odot$), early-type stellar companion, and accretion onto the CO can occur in various ways, including periodic Roche-lobe overflow (such as when the CO passes through the decretion disk of a Be-type star) or capture of material ejected through dense stellar winds. We note that there is a rare class of LMXBs that accrete from the slow winds of a late-type stellar companion and exhibit properties akin to HMXBs (see Bozzo et al. 2022 and references therein).
NSs are the densest known objects with a surface in the Universe. The matter inside a NS exists in an ultra-dense, cold state that cannot be replicated in terrestrial laboratories. The only way to discern how matter behaves under these conditions is by determining the equation of state (EoS). The EoS sets the mass and radius of the NS through the Tolman-Oppenheimer-Volkoff equations (Tolman, 1934, 1939; Oppenheimer and Volkoff, 1939), and astronomical measurements of NS masses and radii are therefore crucial for determining which theoretical EoS models are viable (see, e.g., Lattimer 2011). In particular, accretion onto NSs probes the properties and behavior of matter in the presence of a strong magnetic field ($B \sim 10^{8-9}$ G for LMXBs and $B \sim 10^{12}$ G for HMXBs; see Caballero and Wilms 2012).

In order to fully encapsulate the accretion emission from these systems, a broad X-ray passband is necessary (see §2 for LMXBs and §3 for HMXBs). Currently, NuSTAR (Harrison et al., 2013) is the only focusing hard X-ray telescope with a passband from 3-80 keV. Energies below 3 keV have to be supplemented with other X-ray telescopes, such as NICER (Gendreau et al., 2012), Swift (Gehrels et al., 2004), or XMM-Newton (Jansen et al., 2001), to construct a broad energy passband. While observations can be scheduled to occur during the same observing period, often the data are not strictly simultaneous due to the different orbits of the missions, resulting in various degrees of overlap. Since NS binary systems are highly variable, sometimes on time-scales comparable to or shorter than the typical exposure itself, simultaneous observations over a broad X-ray passband are invaluable for studying these systems.

Figure 1. Simple schematic diagrams of several types of X-ray binary systems: (a) a LMXB system where the NS accretes from a stellar companion via Roche-lobe overflow, (b) a NS in a HMXB that accretes from material captured from stellar winds launched from the companion, and (c) a NS accreting from the decretion disk of a Be-type stellar companion.

The High Energy X-ray Probe: HEX-P

The High Energy X-ray Probe (HEX-P; Madsen et al. 2023, in prep.) is a probe-class mission concept that offers sensitive broadband X-ray coverage (0.2-80 keV) with exceptional spectral, timing, and angular capabilities. It features two high-energy telescopes (HETs) that focus hard X-rays and one low-energy telescope (LET) that focuses lower-energy X-rays.

The LET consists of a segmented mirror assembly coated with Ir on monocrystalline silicon that achieves an angular resolution of 3.5′′, and a low-energy DEPFET detector of the same type as the Wide Field Imager (WFI; Meidinger et al. 2020) onboard Athena (Nandra et al., 2013). It has 512 × 512 pixels that cover a field of view of 11.3′ × 11.3′. The LET has an effective passband of 0.2-25 keV and a full-frame readout time of 2 ms, and it can be operated in 128- and 64-channel window modes at higher count rates to mitigate pile-up and enable faster readout. Pile-up effects remain below an acceptable limit of ∼1% for fluxes up to ∼100 mCrab in the smallest window configuration. Excising the core of the PSF, a common practice in X-ray astronomy, will allow for observations of brighter sources, with a typical loss of up to ∼60% of the total photon counts.
The HET consists of two co-aligned telescopes and detector modules. The optics are made of Ni-electroformed full-shell mirror substrates, leveraging the heritage of XMM-Newton, and coated with Pt/C and W/Si multilayers for an effective passband of 2-80 keV. The high-energy detectors are of the same type as flown on NuSTAR: they consist of 16 CZT sensors per focal plane, tiled 4 × 4, for a total of 128 × 128 pixels spanning a field of view of 13.4′ × 13.4′. This paper highlights interesting existing open questions about NSs and accretion in strong magnetic fields, and demonstrates HEX-P's unique ability to address these with the current best estimate (CBE) mission capabilities. All simulations presented here were produced with a set of response files that represents the observatory performance based on CBEs as of Spring 2023 (see Madsen et al. 2023, in prep.). The effective area is derived from raytracing calculations for the mirror design, including obscuration by all known structures. The detector responses are based on simulations performed by the respective hardware groups, with an optical blocking filter for the LET and a Be window and thermal insulation for the HET. The LET background was derived from a GEANT4 simulation (Eraerds et al., 2021) of the WFI instrument, and the HET background was derived from a GEANT4 simulation of the NuSTAR instrument. Both assume HEX-P is in an L1 orbit. The broad X-ray passband and superior sensitivity will provide a unique opportunity to study accretion onto NSs across a wide range of energies, luminosities, and dynamical regimes.

Background

Persistently accreting NS LMXBs are divided into two types based upon characteristic shapes that are traced out in hardness-intensity diagrams and color-color diagrams (Hasinger and van der Klis, 1989): 'Z' sources and 'atoll' sources. Z sources trace out a Z-shaped pattern with three distinct branches: the horizontal, normal, and flaring branches (HB, NB, and FB, respectively), with the FB being the softest spectral state. They can be further divided into two subgroups, Sco-like and Cyg-like, based upon how much time they spend in the different branches. Sco-like sources spend little to no time in the HB and extended periods of time in the FB (> 12 hours), whereas Cyg-like sources have a strong HB and spend little time in the FB (Kuulkers et al., 1997). Atoll sources are observed in either the soft 'banana' state or the hard 'island' state. A difference in mass accretion rate is thought to be the primary driver of the behavioral differences between atoll and Z sources: atoll sources probe a lower range in Eddington luminosity (∼0.01-0.5 $L_{\rm Edd}$) in comparison to the near-Eddington Z sources (∼0.5-1.0 $L_{\rm Edd}$; van der Klis 2005). Further evidence that the average mass accretion rate is responsible for the observed difference between the two classes comes from observing transient NS LMXBs that have transitioned between the two classes during outburst (e.g., XTE J1701−462: Homan et al. 2010).
NS LMXBs occupy a number of spectral states which vary considerably in terms of the models and spectral parameters needed to describe the continuum emission (Barret, 2001). In the very hard state, the spectrum is dominated by Comptonization that can be modeled with an absorbed power-law component with a photon index $\Gamma \sim 1$ (Ludlam et al., 2016; Parikh et al., 2017) or with two thermal Comptonization components assuming two distinct populations of seed photons from different plasma temperatures (Fiocchi et al., 2019). In the hard state, the spectrum is dominated by a hard Comptonization component with $\Gamma = 1.5$-$2.0$ and a soft thermal component arising from a single-temperature blackbody or multi-color disk blackbody with a temperature $\lesssim 1$ keV (Barret et al., 2000; Church and Balucińska-Church, 2001). In the soft state, the spectrum becomes thermally dominated with weakly Comptonized emission. Model choices for the thermal and Comptonization components in the soft state vary in the literature, leading to two classical descriptions. The "Eastern" model, after Mitsuda et al. (1989), uses a multi-color disk blackbody in combination with a Comptonized blackbody component, while the "Western" model, after White et al. (1988), uses a single-temperature blackbody and a Comptonized disk component. Lin et al. (2007) devised a hybrid model for hard- and soft-state spectra based upon RXTE observations of two transient atoll systems that resulted in a coherent picture of the spectral evolution (e.g., the thermal components follow the expected $L_X \propto T^4$ relation). In this model, the soft state assumes two thermal components (i.e., a single-temperature blackbody and a disk blackbody), and weak Comptonization accounts for the power-law component. This hybrid double thermal model has been used in many NS LMXB studies (e.g., Cackett et al. 2008, 2009; Lin et al. 2010), though not exclusively. For instance, a recent study using thermal Comptonization from a blackbody and a multi-color accretion disk blackbody (akin to the "Eastern" model) to model the X-ray spectra of the atoll 4U 1820-30 found good agreement with the observed jet variability in this system (Marino et al., 2023). In the absence of multi-wavelength data to support a choice of continuum model, it is difficult to ascertain which prescription of the spectra is appropriate. Due to the soft spectral shape in these states, the source spectrum typically becomes background dominated above 30 keV even when observed with NuSTAR. A broad X-ray passband with a large effective collecting area and low X-ray background, able to observe a large number of sources at various luminosity levels and spectral states, is needed to further our understanding of these sources.

The accretion disks in these systems can be externally illuminated by hot electrons in the corona (Sunyaev et al., 1991) or by the thermal emission of the NS or boundary layer region (Popham and Sunyaev, 2001). We note that the exact geometry of the corona is unclear (see Degenaar et al. 2018 for some possible geometries), but X-ray polarization measurements with IXPE are beginning to shed light on coronal orientation and the presence of boundary layer regions in some systems (e.g., Jayasurya et al. 2023; Ursini et al. 2023; Cocchi et al. 2023; Farinelli et al.
2023), building up our understanding of the accretion geometry when coupled with X-ray energy spectral studies. Regardless of the hard X-ray source, the disk reprocesses the hard X-ray emission and re-emits discrete emission lines superimposed upon a reprocessed continuum known as the 'reflection' spectrum (Ross and Fabian, 2005; García and Kallman, 2010). The spectrum is then relativistically blurred due to the extreme environment close to the NS, where accreting material reaches relativistic velocities as it falls into the NS's deep gravitational well (Fabian et al., 1989; Dauser et al., 2013). Studying reflection in NS LMXBs allows properties of the NS and of the disk itself to be measured, such as the NS magnetic field strength (Ibragimov and Poutanen, 2009; Cackett et al., 2009; Ludlam et al., 2017b), the extent of the boundary layer (Popham and Sunyaev, 2001; King et al., 2016; Ludlam et al., 2021), and the NS radius (Cackett et al., 2008; Ludlam et al., 2017a, 2022). Of particular interest are both the inner disk radius, $R_{\rm in}$, and the dimensionless spin parameter, $a = cJ/(GM^2)$ (the mass-normalized total angular momentum $J$ of the CO); the latter is typically fixed in current studies due to limitations in data quality. The spin parameter has important consequences for both accreting NSs and BHs (see Connors et al. 2023, in prep., and Piotrowska et al. 2023, in prep., for HEX-P science with BH X-ray binaries and supermassive BHs, respectively). The spin sets the location of the innermost stable circular orbit (ISCO), where a higher spin corresponds to a smaller ISCO (Bardeen et al. 1972). Consequently, the position of the inner disk radius for higher spin values cannot be replicated by lower spin if the data are of sufficient quality to recover $R_{\rm in}$ accurately. The majority of NSs in LMXBs have spin $a \lesssim 0.3$ (Galloway et al., 2008; Miller et al., 2011), and the location of the ISCO decreases from 6 gravitational radii ($R_g = GM/c^2$) at $a = 0$ to 4.98 $R_g$ at $a = 0.3$ (see Fig. 2). Therefore, targeting an accreting NS with a higher spin, as indicated through a high measured spin frequency, can decrease the upper limit on the NS radius obtained from reflection modeling by roughly 2-3 kilometers.

In order to capture the entire reflection spectrum (i.e., the low-energy O VIII and iron (Fe) L emission lines near 1 keV (Madej et al., 2014; Ludlam et al., 2018, 2021), the Fe K emission lines at 6.4-6.97 keV, and the Compton backscattering hump above 15 keV), determine the appropriate continuum model, and measure the absorption column along the line of sight, a broad X-ray passband is necessary. Hence, the broad X-ray sensitivity provided by the LET and HETs on HEX-P is crucial for studying accreting NS LMXBs.
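The ISCO scaling quoted above follows the standard Bardeen, Press & Teukolsky (1972) expression for a prograde orbit; a minimal sketch reproducing the numbers in the text:

```python
import numpy as np

def r_isco(a):
    """ISCO radius in gravitational radii (R_g = GM/c^2) for a prograde
    orbit around a compact object with dimensionless spin a, using the
    Bardeen, Press & Teukolsky (1972) expression."""
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

print(r_isco(0.0))   # 6.00 R_g
print(r_isco(0.3))   # ~4.98 R_g, as quoted in the text
```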
Simulated Science Cases for LMXBs

All simulations in this section were conducted in xspec (Arnaud, 1996) via the 'fakeit' command, which draws photons from a randomized seed distribution, with V07 of the HEX-P response files, selecting an 80% PSF correction, assuming the data would be extracted from a 15 arcsec region for the HET and an 8 arcsec region for the LET, as well as the anticipated background for the telescope at L1. The simulated spectra were grouped via grppha to have a minimum of 25 counts per bin to allow for the use of $\chi^2$ statistics. The flux for each of the following simulations can be found in Table S1 in the Supplemental Materials.

Distinguishing between continuum models

To demonstrate HEX-P's ability to distinguish between different continuum models, we base our simulations on the models used to fit simultaneous NICER and NuSTAR observations of the accreting atoll 4U 1735−44 published in Ludlam et al. (2020). Model 1 is based on the hybrid double thermal continuum model of Lin et al. (2007), while Model 2 is based on the "Eastern" model of Mitsuda et al. (1989). Both models provide an adequate description of the data in the 0.3-30 keV energy band (the NuSTAR observations were background dominated above 30 keV) when neglecting the reflection spectrum, but data at higher energies would distinguish between these two continuum models. We take the continuum model and parameter values for Model 1 and Model 2 from Table 1 of Ludlam et al. (2020) as input for the HEX-P simulation; the exact values can be found in Table S2 of the Supplementary Materials. The overall models are as follows: (a) Model 1: tbabs*(diskbb+bbody+pow) and (b) Model 2: tbabs*(diskbb+nthcomp). Note that we fix the absorption column density to a value between the ones used in Model 1 and Model 2, $N_{\rm H} = 4 \times 10^{21}$ cm$^{-2}$, to focus on the difference in spectral shape due to the model component choices rather than a difference in absorption column between the models. We chose an exposure time of 20 ks, roughly equivalent to the exposure time of the NuSTAR observation in the aforementioned study.

The simulated HEX-P spectra remain above the background in the HET bands up to 80 keV, though they dip into the LET background below 1 keV. This could be remedied by increasing the exposure time, though, notably, the spectrum above 30 keV is where the models diverge. The reduced $\chi^2$ is ≤ 1.03 when fitting each simulation with its own input model. Attempting to fit each simulated spectrum with the opposing model description provides an inadequate fit, with $\Delta\chi^2 > 1000$ for the same number of degrees of freedom (d.o.f.). Figure 3 shows the HEX-P spectrum for the Model 2 input of a thermal disk with a Comptonized blackbody from a boundary layer region. For direct comparison, we show a simulated spectrum using Model 1 (the double thermal continuum model with weak Comptonization) with parameter values from fitting the simulated spectrum of Model 2 below 30 keV, since these models perform equally well within this energy regime. This highlights that though these models are nearly identical below 30 keV, they diverge at higher energies inaccessible to current operating missions. The weakly Comptonized emission above 30 keV cannot be replicated by Model 2, demonstrating the potential for a broadband X-ray mission like HEX-P to discern between competing continuum models that would otherwise equally describe the data below 30 keV.

Figure 3. The models diverge above 30 keV, where currently available missions quickly become background dominated for the same exposure time; to emphasize this point, the same S/N near 30 keV could be achieved with NuSTAR only in an exposure time of 96.6 ks. The broad X-ray passband and improved sensitivity of HEX-P will differentiate the models.
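A minimal PyXspec sketch of this fakeit workflow follows; the response/ARF file names and the parameter values are placeholders (the actual inputs are the V07 HEX-P responses and the Table S2 parameters), so this illustrates the mechanics rather than reproducing the published simulation.

```python
from xspec import AllData, AllModels, FakeitSettings, Fit, Model

m1 = Model("tbabs*(diskbb+bbody+powerlaw)")       # Model 1: hybrid double thermal
m1.setPars(0.4, 1.2, 100.0, 2.0, 0.01, 2.5, 0.1)  # illustrative values only

fs = FakeitSettings(response="hexp_het.rmf", arf="hexp_het.arf",
                    exposure=20000.0, fileName="model1_sim.fak")
AllData.fakeit(1, fs)                             # draw one simulated spectrum

AllData.ignore("**-2.0 80.0-**")                  # restrict to the HET band
AllModels.clear()
m2 = Model("tbabs*(diskbb+nthcomp)")              # fit the competing continuum
Fit.perform()
print("Model 2 fit to Model 1 data: chi2 / dof =", Fit.statistic, "/", Fit.dof)
```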
Neutron star radius constraints from relativistic reflection modeling

To demonstrate that HEX-P would improve the radius constraints obtained by reflection modeling of NSs, we base our simulations on the results of a recent investigation of Cygnus X-2 that utilized simultaneous data from NICER and NuSTAR (Ludlam et al., 2022). This source is of particular interest since it has a dynamical NS mass measurement of $M_{\rm NS} = 1.71 \pm 0.21\,M_\odot$ (Casares et al., 2010). The source was analyzed in the NB, the HB, and the vertex between those branches. Fixing the spin at $a = 0$, Ludlam et al. (2022) found that the inner disk radius remained close to $R_{\rm ISCO}$. In the current literature it is customary to find studies in which the spin of the NS has been fixed while fitting models to the data, given the highly degenerate nature of $R_{\rm in}$ and $a$. We set up our simulations using the Cygnus X-2 data in the NB, modeled with the latest public release of the self-consistent reflection model tailored for thermal emission illuminating the accretion disk, relxillNS (v2.2; García et al. 2022). The input parameters for the simulation are provided in Table 1. We leave $R_{\rm in}$ at 1 $R_{\rm ISCO}$ for all simulations while varying the input spin, since the disk remained consistent with this value while the source was in the various branches. We chose three spin values as test cases: non-rotating ($a = 0$), the highest value expected for a NS LMXB ($a = 0.3$), and $a = 0.17$, which is an approximation based on the spin frequency of the source (Mondal et al., 2018). We use an exposure time of 100 ks and investigate how well $R_{\rm in}$ and spin can be recovered independently.

Table 1. Input model parameters for simulating reflection model data with HEX-P. Values are taken from Table 3 (RNS1) of Ludlam et al. (2022) for Cygnus X-2 in the NB and updated using the public version of relxillNS (v2.2). Additionally, we show one case of the recovered model parameters from a simulated HEX-P spectrum after performing a Markov Chain Monte Carlo (MCMC) analysis with a burn-in of $10^6$ and a chain length of $5 \times 10^5$ to emulate a standard analysis of the data (e.g., the iron abundance: input $A_{\rm Fe} = 5.9$, recovered $A_{\rm Fe} \approx 4.9^{+0.7}$). Errors are reported at the 1σ confidence level, though the errors for $R_{\rm in}$ and $a$ are drawn from the bivariate distribution between the two due to their correlated nature.

Given the degenerate nature of the parameters of interest and the fact that each simulated spectrum is created via randomized photon generation, the results obtained can vary with each iteration. We therefore simulate $10^4$ spectra per spin value to determine the likelihood of constraining $R_{\rm in}$ and $a$. The LET data were modeled in the 0.3-15 keV energy band, while the HET data were modeled in the 2-80 keV band. We impose an upper limit on the spin ($a \le 0.7$) when fitting the simulated spectra so as not to surpass the rotational break-up limit of a NS (Lattimer and Prakash, 2004). We fit each spectrum and then perform error scans to ensure that each iteration reached a minimum goodness of fit of $\chi^2$/d.o.f. ≤ 1.1 prior to obtaining values for the inner disk radius and spin from each simulated spectrum, thus building up the posterior distribution shown in Figure 4.
Errors are drawn from the bivariate posterior distribution due to their correlated nature and reported at the 1σ confidence level. We find that for an input of $a = 0$, we recover $a = -0.03^{+0.15}_{-0.17}$ and $R_{\rm in} = 1.02^{+0.07}_{-0.02}\,R_{\rm ISCO}$. For an input of $a = 0.17$, we recover $a = 0.17^{+0.09}_{-0.16}$ and $R_{\rm in} = 1.02^{+0.05}_{-0.02}\,R_{\rm ISCO}$. Lastly, for an input of $a = 0.30$, we recover $a = 0.27 \pm 0.13$ and $R_{\rm in} = 1.02^{+0.05}_{-0.02}\,R_{\rm ISCO}$. The ability to recover the input parameters improves at higher spin as the relativistic effects become stronger. Recall that the position of the inner disk radius for higher spin values cannot be replicated by lower spin; thus, if the data are of sufficient quality to recover $R_{\rm in}$ accurately, this will become apparent. For example, an input of $R_{\rm in} = 1\,R_{\rm ISCO}$ for a spin of $a = 0$ corresponds to 6 $R_g$. This value can be mimicked by a higher spin and $R_{\rm in}$ beyond 1 $R_{\rm ISCO}$ (e.g., 6 $R_g$ is equal to 1.2 $R_{\rm ISCO}$ for $a = 0.3$). However, an input of $R_{\rm in} = 1\,R_{\rm ISCO}$ for $a = 0.3$ corresponds to 4.98 $R_g$ and cannot be replicated by a lower spin, so long as the upper limit recovered when fitting the data returns a value of $R_{\rm in} < 1.2\,R_{\rm ISCO}$. We note that Cygnus X-2 is a bright LMXB (∼0.7 Crab) and would thus need to be observed in a configuration that reduces pile-up effects in the LET. However, we obtain consistent results at the 1σ level when performing the same exercise with just the HETs, which do not suffer from pile-up, though the 3σ level can then no longer rule out a non-rotating NS, as it can when the LET is included. Hence the combined observing power of the HETs and LET provides an improvement over the current capabilities of existing missions.

Figure 5 shows the improvement provided by HEX-P for reflection modeling of spinning NS LMXBs in comparison to simultaneous NICER and NuSTAR results with fixed spin. Additionally, the current best constraints from gravitational wave events of binary NS mergers and pulsar light curve modeling demonstrate how the various methods can work in concert to narrow down the allowable region for the EoS. Each method has its own underlying systematic uncertainties, so the firmest EoS constraints will require multiple measurement approaches. Utilizing the dynamical NS mass estimate of Cygnus X-2 ($M_{\rm NS} = 1.71 \pm 0.21\,M_\odot$; Casares et al. 2010), Ludlam et al. (2022) reported an upper limit on the radius of $R_{\rm NS} = 19.5$ km for $M_{\rm NS} = 1.92\,M_\odot$ and $R_{\rm NS} = 15.3$ km for $M_{\rm NS} = 1.5\,M_\odot$. The lower limit on spin and the upper limit on $R_{\rm in}$ yield a conservative upper limit on the NS radius. From the HEX-P simulations we calculate an upper limit of $R_{\rm NS} = 16.7$ km for $M_{\rm NS} = 1.92\,M_\odot$ and $R_{\rm NS} = 13.2$ km for $M_{\rm NS} = 1.5\,M_\odot$ for an input of $a = 0.3$. This demonstrates the power of HEX-P to study rotating NSs with reflection studies. For comparison, we conducted the same simulation with NuSTAR response files. For the highest spin value of $a = 0.3$, NuSTAR recovers $a = 0.67^{+0.03}_{-0.54}$ and $R_{\rm in} = 1.02^{+0.16}_{-0.02}\,R_{\rm ISCO}$. The conservative upper limit on the NS radius from the location of the inner disk radius is larger than 6 $R_g$ and therefore does not provide an improvement over current reflection studies that assume the spin is fixed at $a = 0$. Performing this test with the combination of NICER and NuSTAR marginally improves the recovered spin value to $a = 0.68^{+0.02}_{-0.50}$, but with a larger upper limit of $R_{\rm in} = 1.02^{+0.18}_{-0.02}\,R_{\rm ISCO}$, which is still > 1 km larger than the constraints obtained from HEX-P (i.e., $R_{\rm NS} = 18.5$ km for $M_{\rm NS} = 1.92\,M_\odot$ and $R_{\rm NS} = 14.4$ km for $M_{\rm NS} = 1.5\,M_\odot$).
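The conservative upper limits quoted above follow from converting the recovered inner-disk radius into physical units; a minimal sketch using the 1σ bounds given in the text (the upper bound on $R_{\rm in}$ and the lower bound on $a$):

```python
import numpy as np

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30       # SI units

def r_isco(a):                                   # Bardeen et al. (1972), prograde
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def r_ns_upper_km(r_in_isco_upper, a_lower, m_ns_msun):
    """Conservative upper limit on the NS radius: the disk cannot extend
    inside the star, so R_NS <= R_in. Uses the 1-sigma upper bound on R_in
    (in units of R_ISCO) and the 1-sigma lower bound on the spin."""
    r_g_km = G * m_ns_msun * M_SUN / c**2 / 1e3  # gravitational radius in km
    return r_in_isco_upper * r_isco(a_lower) * r_g_km

# a = 0.27 +/- 0.13 and R_in = 1.02 (+0.05) R_ISCO from the text:
print(r_ns_upper_km(1.07, 0.14, 1.92))           # ~16.8 km, consistent with the
                                                 # 16.7 km quoted above
```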
We demonstrate that these results do not depend on drawing from a posterior distribution of simulated spectra by conducting a Markov Chain Monte Carlo (MCMC) analysis on one of the simulated HEX-P spectra with an input spin of $a = 0.3$. The results of this test are shown in Table 1, with the inclination fixed at the median value from optical and X-ray studies (Orosz and Kuulkers, 1999; Ludlam et al., 2022). The lower limit on the spin of $a = 0.1$ and the upper limit of $R_{\rm in} = 1.12\,R_{\rm ISCO}$ still provide an improvement over current NICER and NuSTAR analyses, which fix $a = 0$. The MCMC analysis finds an upper limit of $R_{\rm NS} = 18$ km for $M_{\rm NS} = 1.92\,M_\odot$ and $R_{\rm NS} = 14.1$ km for $M_{\rm NS} = 1.5\,M_\odot$.

For completeness we note that the Kerr metric is used to describe the space-time close to the NS in the relativistic reflection model. As the angular momentum (i.e., spin) increases, the NS can become oblate, thus causing deviations of the space-time from the Kerr metric due to an induced quadrupole moment. The exact induced deviation depends upon the EoS of the NS, but this effect is < 10% (Sibgatullin and Sunyaev, 1998) in the anticipated range of spin parameters for NS LMXBs ($a \lesssim 0.3$). Given that our 1σ errors for $R_{\rm in}$ are larger than this deviation from the Kerr metric, the HEX-P spectra would not be sensitive enough to measure this effect. However, our conservative upper limits on the NS radius are still at larger radii than the deviation in $R_{\rm ISCO}$ and hence are not in conflict even if we are insensitive to an induced quadrupole moment. Therefore, reflection studies of accreting NS LMXBs with HEX-P would provide improved upper limits on NS radii in comparison to current studies.

Background

NSs in HMXBs are typically highly magnetized, with a dipolar magnetic field strength of $\sim 10^{12}$ G. The pressure of the TeraGauss magnetic field disrupts the inflow of the accreting matter, channelling it onto the magnetic poles (Davies and Pringle, 1981). There, the kinetic energy of the matter is released as radiation in a highly anisotropic manner. Since the NS rotates, the sources are visible as X-ray pulsars. Studying this emission is crucial to understanding the physics of plasma accretion and the interaction of radiation with magnetic fields that are orders of magnitude stronger than those achievable in laboratories on Earth. Due to their extreme surface magnetic field strength, quantum effects take place at the site of emission. In particular, the electron motion in the direction perpendicular to the magnetic field lines becomes quantized. This affects the electron scattering cross section, leading to a resonance at an energy that is proportional to the magnetic field strength (Meszaros, 1992). As a result, the spectra of X-ray pulsars (XRPs) can contain absorption line-like features called cyclotron resonant scattering features (CRSFs), or cyclotron lines, at an energy of $E_n \simeq 11.6\,n\,B_{12}/(1 + z_g)$ keV, where $z_g$ is the gravitational redshift (typically 0.3 for a standard NS mass and radius), $B_{12}$ is the NS (polar) magnetic field strength in units of $10^{12}$ G, and $n$ is the number of Landau levels involved. Cyclotron lines represent the most direct way to probe the NS magnetic field on or close to the NS surface. Up to now, cyclotron lines have been confirmed in the spectra of about 40 sources (Staubert et al., 2019). Figure 6 shows a schematic description of the formation of cyclotron harmonics.
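A one-line numerical form of this "12-B-12" rule (a hedged sketch; measured centroids also depend on the emission geometry and the adopted continuum model):

```python
def e_cyc_kev(b12, n=1, z_g=0.3):
    """Centroid energy of the n-th cyclotron line: E_n ~ 11.6 keV * n * B12
    / (1 + z_g), with B12 the polar field in units of 10^12 G and z_g the
    gravitational redshift at the emission site."""
    return 11.6 * n * b12 / (1.0 + z_g)

# A canonical B = 10^12 G pulsar: fundamental near 9 keV, harmonic near 18 keV
print(e_cyc_kev(1.0), e_cyc_kev(1.0, n=2))
```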
When observed at typical outburst luminosities of ≳ 10 36 erg s −1 , the spectra of XRPs have been described with phenomenological models (a power law with a high-energy cutoff) modified by interstellar absorption, an Fe Kα emission line around 6.4 keV, low-temperature blackbody components and, at times, a component called the 10 keV feature (Mushtukov and Tsygankov, 2022, and references therein). However, at lower-luminosity accretion regimes, the spectrum often exhibits a transition to a double-hump shape (Tsygankov et al., 2019a), which has recently been explained in terms of a low-energy thermal component peaking at ∼5 keV and a high-energy Comptonized component peaking at ∼35 keV representing the broadened red wing of the cyclotron line (Sokolova-Lapa et al., 2021; Mushtukov et al., 2021). A dip between the two components was sometimes interpreted as a cyclotron line (see, e.g., Doroshenko et al., 2012, for the case of X Per), but is likely a continuum feature, as was shown for sources with a known fundamental cyclotron line at higher energies after the transition to a low-luminosity state (see, e.g., Tsygankov et al., 2019a). These low-luminosity states of accretion characterized by two-hump spectra are generally associated with braking of the accretion flow in the NS atmosphere (Mushtukov et al. 2021; Sokolova-Lapa et al. 2021; although a model assuming an extended collisionless shock instead was recently proposed by Becker and Wolff 2022). Rapid deceleration of the plasma in the atmosphere proceeds mainly via Coulomb collisions and leads to the formation of an overheated outermost layer with electron temperatures of ∼30 keV. Such high temperatures significantly enhance Comptonization and, together with resonant redistribution, lead to the formation of the high-energy excess in the spectra around the cyclotron line. When the mass-accretion rate onto the poles increases, the pressure of the emitted radiation starts affecting the dynamics of the accretion flow. The radiation becomes capable of decelerating the flow above the surface, reducing direct heating of the atmosphere and giving rise to a radiation-dominated shock (Basko and Sunyaev, 1976). The onset of this regime is typically associated with a critical luminosity (L crit ∼10 37 erg s −1 : Basko and Sunyaev, 1976; Becker et al., 2012; Mushtukov et al., 2015) and the growth of the accretion column as an emitting structure. Modeling emission from accretion columns is a very challenging problem due to its dynamic nature, multi-dimensional geometry, and the necessity of including general relativistic effects. On the other hand, studying the low-luminosity regime with emission from the heated atmosphere of the polar cap allows easier access to fundamental parameters of accreting NSs (see § 3.2.3 for more details).
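The critical luminosity quoted above can be put on a rough quantitative footing. The sketch below implements the approximate fiducial-parameter scaling of Becker et al. (2012), L crit ≈ 1.49 × 10 37 erg s −1 (B/10 12 G) 16/15 ; the coefficient and exponent are assumptions of this illustration drawn from that reference, not values quoted in the present text.

```python
def l_crit(b12):
    """Approximate critical luminosity (erg/s) above which a radiation-dominated
    shock forms, using the fiducial-parameter scaling of Becker et al. (2012);
    this coefficient is an assumption of the sketch, not a value from the text."""
    return 1.49e37 * b12 ** (16.0 / 15.0)

print(f"{l_crit(1.0):.2e}")   # ~1.5e37 erg/s for B = 1e12 G
print(f"{l_crit(5.35):.2e}")  # ~9e37 erg/s for a GX 304-1-like field
```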
Cyclotron lines exhibit variations with accretion luminosity, pulse phase, and over secular time scales. Variation of the cyclotron line centroid energy E cyc with X-ray luminosity L X can be exploited to gain insight into the accretion regime. For example, low-luminosity XRPs often show a positive correlation between E cyc and L X , while high-luminosity XRPs show a negative correlation (see, e.g., Klochkov et al. 2011). The two trends are assumed to be separated by the critical luminosity and are thus related to the transition between the different accretion regimes described above. On the other hand, studying the pulse-phase dependence of E cyc (as well as of the line depth and width) can shed light upon the geometrical configuration of the NS (Nishimura, 2011). Moreover, some sources show evidence of a pulse-phase-transient cyclotron line, i.e., a CRSF that only appears at certain pulse phases (Klochkov et al., 2008; Kong et al., 2022). Finally, secular variation of the cyclotron line has been attributed to a geometric reconfiguration of the polar field or to magnetic field burial due to continued accretion (Staubert et al., 2020; Bala et al., 2020).

One of the biggest challenges for the detection and characterization of cyclotron lines is determining a robust broadband continuum model. The centroid energy of the line, as well as its other parameters, can show significantly different best-fit values when modelled with different continua. Due to this model dependence, the very presence of a cyclotron line has been questioned at times (Di Salvo et al., 1998; Doroshenko et al., 2012, 2020), as has its luminosity dependence (Müller et al., 2013). A similar argument also holds for the continuum best-fit values. The "true" continuum model can only be inferred if the data cover a sufficiently wide energy passband to constrain the absorption affecting the softer X-ray energies and the various spectral components at harder X-ray energies (Sokolova-Lapa et al., 2021; Malacaria et al., 2023a). Moreover, properly constraining the broadband continuum is necessary when multiple cyclotron line harmonics are present, as in the cases of 4U 0115+63 (Heindl et al., 2004) and V 0332+53 (Pottschmidt et al., 2005). In addition, observing the broadband continuum of accreting XRPs can help constrain physical parameters of the NSs thanks to the recent development of physically motivated spectral models (e.g., Farinelli et al. 2016; Becker and Wolff 2022, and references therein). This is especially important at low luminosity, where the accretion flow free-falls onto the NS surface and our understanding of the physical mechanisms is more uncertain concerning, e.g., how the flow transitions from radiation-dominated to gas-dominated, or the detailed production of seed photons from cyclotron, bremsstrahlung, and blackbody mechanisms at the site of impact. HEX-P will also bring crucial contributions in this field, as it will be able to probe broadband spectral emission from intrinsically dim sources for which the required observing exposure with current facilities would be prohibitive.
Last but not least, measuring phase-dependent spectral components with the highest sensitivity is required in order to develop consistent physical models for the emission process that include X-ray polarization properties. Recent IXPE observations of accreting pulsars like Cen X-3, Vela X-1, EXO 2030+375 and others provided new constraints, as they showed low, energy-dependent polarization degrees of <10% (Tsygankov et al., 2022; Forsblom et al., 2023; Malacaria et al., 2023b). Radiative transfer models of highly magnetized plasmas predict considerably higher polarization degrees of 50-80% (Meszaros et al., 1988; Caiazzo and Heyl, 2021; Sokolova-Lapa et al., 2023). Initial candidates for explaining the difference highlight the potential importance of the temperature structure of the plasma in the accretion column, of angle-dependent beaming, and of the propagation of X-rays in the magnetosphere for lowering the polarization degree. Furthermore, it has been demonstrated that the X-ray continuum and cyclotron line profile can be expected to directly show subtle, complex imprints of polarization effects (Sokolova-Lapa et al., 2023). This is an intriguing possible explanation for the occasionally observed and notoriously difficult to constrain "10 keV feature" mentioned above, as well as for the fact that some accreting pulsars do not show cyclotron lines.

Advancing our understanding of HMXBs in general, and of cyclotron line sources in particular, therefore requires broadband spectra with high sensitivity throughout the relevant energy passband, with the limited background provided by focusing X-ray facilities and medium spectral energy resolution (Wolff et al., 2019). Thanks to its leap in observational capabilities, HEX-P will be capable of overcoming all of the above-mentioned challenges and will push our understanding of accretion onto XRPs forward. In the next section, we simulate a few exemplary cases highlighting the gains enabled by HEX-P with respect to currently available X-ray facilities, for which the centroid energy of the investigated cyclotron lines, the accretion regime, or the necessary exposure time prevent a comprehensive study of the physical mechanisms at work.

Simulated Science Cases for HMXBs

The following simulations show HEX-P's potential to tackle different cyclotron line science cases using several example sources. Each source has been chosen to represent a specific science case, either because it allows us to observe a certain accretion regime and luminosity range or due to its specific cyclotron line properties, such as the existence of n > 1 harmonics. We begin with the prototypical persistent cyclotron line sources Cen X-3 and Vela X-1, which show moderate to high fluxes and luminosities and allow for exquisitely detailed and sensitive parameter constraints with HEX-P. Then we focus on transient accreting pulsars for which HEX-P will provide unprecedented access to low fluxes, allowing study of extreme luminosity regimes, e.g., for GX 304-1 in quiescence or for super-Eddington outbursts of extragalactic sources like SMC X-2. Similar to § 2.2, all simulations were performed via the 'fakeit' command in xspec (Arnaud, 1996) or ISIS (Houck and Denicola, 2000), employing version v07 of the HEX-P response files, selecting an 80% PSF correction, assuming a 15 ′′ extraction region for the HETs and 8 ′′ for the LET, as well as taking the expected background at L1 into account. The flux for each of the following simulations can be found in Table S1 in the Supplemental Materials.
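For readers who want to reproduce the flavor of these 'fakeit' simulations, the sketch below shows the generic PyXspec pattern. The response, ARF, and background file names are placeholders standing in for the v07 HEX-P products named above, and a simple absorbed cutoff power law stands in for the more elaborate models (NPEX, FDcut, polcap) used in the text.

```python
# A minimal PyXspec 'fakeit' sketch of the simulation pattern described in the
# text. File names for the HEX-P v07 responses and backgrounds are placeholders.
from xspec import AllData, Model, FakeitSettings

# Continuum model; parameter values here are illustrative stand-ins.
m = Model("tbabs*cutoffpl")
m.TBabs.nH = 2.06           # 1e22 cm^-2, from the Cen X-3 example in the text
m.cutoffpl.PhoIndex = 0.8   # illustrative
m.cutoffpl.HighECut = 6.9   # keV, illustrative

fs = FakeitSettings(
    response="HEXP_HET_v07.rmf",       # placeholder file name
    arf="HEXP_HET_v07.arf",            # placeholder file name
    background="HEXP_HET_bkg_L1.pha",  # placeholder L1 background
    exposure="20000",                   # 20 ks, as in the Cen X-3 case
    fileName="cenx3_het_sim.pha",
)
AllData.fakeit(1, [fs])  # simulate one spectrum with Poisson noise applied
```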
Constraining magnetic field geometry from cyclotron line harmonics

As discussed earlier, cyclotron lines result from transitions between quantized Landau energy levels of electrons in the presence of a magnetic field (Meszaros, 1992). Measuring the energy, width, and strength of cyclotron lines in HMXBs is crucial for understanding the underlying physics of these systems. The energy of cyclotron lines provides insight into the line-forming region in the accretion column. The width of cyclotron lines provides valuable information about the geometry, temperature distribution, and plasma conditions within the accretion column (see, e.g., Becker et al. 2012; Staubert et al. 2014 and references therein). Broadening of cyclotron lines can be influenced by factors such as electron thermal motion, turbulence, and relativistic effects. In the last few decades, comprehensive analyses of HMXBs have directly unveiled many important properties of these systems (see, e.g., Staubert et al. 2019; Pradhan et al. 2021 and references therein). In addition to the fundamental, higher harmonics of the cyclotron line are also sometimes present in the X-ray spectrum. They arise due to higher-order interactions between X-rays and the Landau levels. By detecting and analyzing fundamental cyclotron lines and their higher harmonics, one can determine the magnetic field strength in different regions of the accretion column. Furthermore, the strength or intensity of the higher harmonics relative to the fundamental line provides insight into the scattering efficiency and the fraction of scattered photons, improving our understanding of the emission processes and properties of the compact object (see, e.g., Alexander and Meszaros 1991; Schwarm et al. 2017 and references therein).

Given the limited passband and/or higher background of instruments like RXTE and Suzaku, there have been few studies to date of the higher harmonics of cyclotron lines in HMXBs (see Table 3 of Pradhan et al., 2021). More recent instruments like NuSTAR and Insight/HXMT provide a significant improvement, but looking into the future, these studies will be revolutionized by a mission with a broad passband coupled with a very low background like HEX-P. In order to test this, we simulated a 20 ks HEX-P spectrum of Cen X-3 in xspec. Because Cen X-3 is bright and a strong hard X-ray emitter (see Table S1 in the Supplemental Material), we concentrate on the HETs only, which are devoid of pile-up, although we have verified that the results below are consistent when including the LET. Cen X-3, an eclipsing HMXB, consists of an O star and a pulsar with a rotational period of ∼4.8 s (Chodil et al., 1967; Giacconi et al., 1971; Schreier et al., 1972). The X-ray spectrum of this source exhibits multiple emission lines (Fe, Si, Mg, Ne: Iaria et al. 2005; Tugay and Vasylenko 2009; Naik and Paul 2012; Aftab et al. 2019; Sanjurjo-Ferrín et al. 2021), and X-ray emission has been detected beyond 70 keV. The fundamental cyclotron line occurs near 29 keV (Tomar et al., 2021), with a possible n 2 harmonic at ∼47 keV (Yang et al., 2023), as seen from recent HXMT results. With the passband of HEX-P extending up to 80 keV and significantly less background, we investigated the possibility of detecting an n 3 harmonic in the source.

The baseline model for our simulation is NPEX (Equation 2; Mihara 1995; Makishima et al. 1999) and the values of the model parameters are taken from Table 1 (ObsID P010131101602) of Yang et al. (2023).
The functional form of NPEX is given by

F(E) = (A 1 E −Γ 1 + A 2 E +Γ 2 ) exp(−E/E fold ),

where Γ 1 and Γ 2 are the negative and positive power-law indices, respectively, and A 1 and A 2 are the corresponding normalizations. Γ 2 , which approximates a Wien hump, was fixed at 2.0, Γ 1 = 0.8, the folding energy E fold = 6.9 keV, and the absorption column N H = 2.06 × 10 22 cm −2 .

The CRSFs are modeled with a multiplicative model of a line with a Gaussian optical depth profile with energy E i , strength d i , and width σ i . The fundamental line, i = 1, is at ∼29 keV with d 1 ∼ 1.4 and σ 1 = 7.6 keV. The n 2 harmonic line, i = 2, is at ∼47 keV, with d 2 ∼ 2.3 and σ 2 = 9.7 keV. The 2-75 keV flux is ∼1 × 10 −8 erg cm −2 s −1 . In order to showcase the unparalleled accuracy of HEX-P in measuring magnetic fields corresponding to cyclotron line energies above 50 keV, we incorporated a representative example into our model: we include an n 3 harmonic line at 70 keV (i = 3), with the width (σ 3 ) and strength (d 3 ) set to match those of the n 2 harmonic line. By doing so, we investigate the ability of HEX-P to effectively measure these parameters of the n 3 harmonic line, from which we can constrain the magnetic field strength and details of the magnetic field structure.

A 20 ks HEX-P observation is able to constrain the energy of the n 3 harmonic well, to within 7% (see Fig. 7). We also applied an F-test to calculate the probability of chance improvement with and without the n 3 harmonic line, and found that the probability of chance improvement when adding the n 3 harmonic line is below 0.06%, confirming the robustness of the detection. We reiterate here that the clear improvement of HEX-P over its predecessors in terms of broadband coverage and sensitivity makes it possible to investigate higher harmonic features, which have clearly eluded NuSTAR (see, e.g., Tomar et al., 2021).
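To make the model concrete, here is a small numpy sketch that evaluates the NPEX continuum multiplied by Gaussian-optical-depth cyclotron lines with the parameter values quoted above. The normalizations A1 and A2 are arbitrary choices for illustration, since the text fixes only the indices, the folding energy, and the line parameters.

```python
import numpy as np

def npex(e, a1=1.0, a2=1e-3, g1=0.8, g2=2.0, efold=6.9):
    """NPEX continuum: (A1 E^-G1 + A2 E^+G2) exp(-E/Efold). A1, A2 illustrative."""
    return (a1 * e**-g1 + a2 * e**g2) * np.exp(-e / efold)

def gauabs(e, ec, strength, sigma):
    """Multiplicative line with a Gaussian optical depth profile
    (gabs-style convention: central depth = strength / (sqrt(2*pi)*sigma))."""
    tau = strength / (np.sqrt(2 * np.pi) * sigma) * np.exp(-0.5 * ((e - ec) / sigma) ** 2)
    return np.exp(-tau)

e = np.geomspace(2.0, 80.0, 500)  # keV, roughly the HEX-P passband
model = npex(e)
for ec, d, sig in [(29.0, 1.4, 7.6), (47.0, 2.3, 9.7), (70.0, 2.3, 9.7)]:
    model *= gauabs(e, ec, d, sig)  # fundamental, n2, and the added n3 line
```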
Since cyclotron lines result from the resonant scattering of photons by electrons whose energies are quantized into Landau levels by the strong magnetic field, and the quantized energy levels of the electrons are harmonically spaced, one would naively expect the energies of the n > 1 harmonics to be integer multiples of the fundamental line. It has, however, been observed in various X-ray pulsars that this is generally not the case (see Yang et al. 2023; Orlandini et al. 2012 and discussions within). The discrepancy can be understood by assuming a difference in line formation mechanisms or in line-forming regions for the fundamental and higher harmonics. In the former case, the fundamental would primarily arise from resonant scattering, while the higher harmonics involve additional effects like multiple scattering and photon spawning (see, e.g., Nishimura 2003; Schönherr et al. 2007). This, however, cannot be the only reason, because some systems show an anharmonic spacing larger than predicted by this effect. One way to explain this difference is to take into account that the optical depths of the fundamental and the higher harmonics can be different if they are formed at different heights above the NS. The higher harmonics could be closer to the NS surface and the fundamental line situated at a height with weaker magnetic field strength (see, e.g., Fürst et al. 2018). Another possibility is to consider a displacement of the magnetic dipole, which would also explain the energy difference of the two lines if they originate from different poles of the NS (Rodes-Roca et al., 2009). Therefore, a significant phase dependence of the strength of the fundamental and higher harmonics is expected, which is not possible to probe with current data sets. Here, too, HEX-P can provide revolutionary new capabilities (see section 3.2.2).

Finally, note that while the n 2 harmonic of Cen X-3 at 47 keV technically falls within the energy range covered by NuSTAR, meaningful constraints could not be derived with NuSTAR (see panel (e) of Fig. 7) due to the dominant background in that energy range, even when the exposure was set to 500 ks. However, with HEX-P, the parameters associated with the cyclotron line can be much better constrained, while also alleviating the degeneracies with the continuum. To emphasize the latter point, we present a contour plot in Figure 8 depicting the relationship between the fundamental cyclotron line energy and the cutoff energy for both HEX-P and NuSTAR data, for the same model and an exposure of 20 ks for Cen X-3. The plot clearly demonstrates that HEX-P provides significantly more stringent constraints compared to NuSTAR. Such accurate measurements in short exposure times will enable detailed pulse phase-resolved spectroscopy to explore the accretion physics in the accretion column in unprecedented detail, as discussed in the following section.

Figure 8. Contour plots (68%, 90% and 99% confidence levels) for cutoff energy versus the fundamental cyclotron line energy for Cen X-3, using the same model and exposure for NuSTAR and HEX-P. As evident from the plot, HEX-P will provide better constraints on the continuum and cyclotron line parameters, thereby mitigating degeneracies between the two. Note that the cutoff energies are slightly shifted to aid visualization.

Note that Cen X-3 is highly variable; the X-ray flux varies by up to two orders of magnitude even outside eclipses. Therefore, we also fit actual NuSTAR data from a bright flux state (ObsID 30101055002; exposure 21 ks), which has a flux an order of magnitude higher than our simulations, and find that the harmonics were not detected in the NuSTAR spectrum (see Tomar et al., 2021). In order to compare HEX-P and NuSTAR for this bright state of Cen X-3, we simulated an HET spectrum using the NuSTAR model of the bright state, while keeping the parameters of the harmonics as above. We find that, even for this bright state of Cen X-3, NuSTAR will be able to obtain the same signal-to-noise as HEX-P only if the exposure is increased by 20 times (i.e., 420 ks).
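The 20× factor follows from Poisson statistics: with signal-to-noise growing as the square root of exposure time, matching a sensitivity ratio r at fixed exposure requires r² times longer. A minimal sketch of that arithmetic, with the sensitivity ratio as an assumed input:

```python
def matching_exposure(t_ks, sensitivity_ratio):
    """Exposure needed to match an instrument that reaches `sensitivity_ratio`
    times the S/N in time t_ks, assuming S/N scales as sqrt(exposure)."""
    return t_ks * sensitivity_ratio**2

# A ~4.5x S/N advantage (an assumed ratio, for illustration) over a 21 ks
# observation implies roughly the 20x exposure factor quoted in the text:
print(matching_exposure(21.0, 4.5))  # ~425 ks, close to the quoted 420 ks
```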
Constraining accretion column emission geometry through pulse phase-resolved spectroscopy

Vela X-1 is an archetypical HMXB. It has been well studied with all current and past X-ray missions (see Kretschmar et al., 2021, for a recent review). While not exceptionally luminous (∼10 36 erg s −1 ), its close distance (d = 1.9 kpc) and X-ray eclipses make it an ideal system to study NS magnetic fields and their interaction with the stellar wind of the companion (i.e., the mass donor). The X-ray spectrum of Vela X-1 shows two CRSFs, the fundamental at around 25 keV and the n 2 harmonic around 55 keV (see Fürst et al., 2014, and references therein). These line energies are ideally covered by the HEX-P energy band. However, in Vela X-1, the fundamental line around 25 keV is much weaker and shallower than the n 2 harmonic line at 55 keV. In fact, the 25 keV line is so weak that there has been a long-standing discussion in the literature about its existence, which could only be settled once NuSTAR data were available (Fürst et al., 2014; Kreykenbohm et al., 2002; Maitra and Paul, 2013). The cyclotron lines in Vela X-1 therefore represent a good test case to explore HEX-P's sensitivity to broad and shallow spectral features.

We performed simulations to study how well HEX-P can measure the energy, width, and depth of both CRSFs, in particular as a function of the rotational phase of the NS. Variations of CRSF parameters as a function of phase constrain the magnetic field geometry and emission geometry of the accretion column (see, e.g., Iwakiri et al., 2019; Liu et al., 2020).

Figure 9. Simulated Vela X-1 spectrum for the bin covering phases 0-0.1 (see Fig. 10 for the spectral values in that bin). The n 2 harmonic cyclotron line is clearly visible as a strong dip at the highest energies, as indicated by the arrow.

We base our simulations on the spectral fits of NuSTAR data presented by Diez et al. (2022). In particular, the continuum is modeled by a power law with an exponential cutoff at high energies (using the model "FDcut" in xspec, Equation 3; Tanaka 1986), modified by a neutral absorption column at low energies:

F(E) ∝ E −Γ [1 + exp((E − E cut )/E fold )] −1 ,

where Γ is the power-law index, E cut is the cutoff energy, and E fold is the folding energy. In addition to the continuum model used by Diez et al. (2022), we also include soft X-ray emission lines at 6.4 keV (Fe Kα), 2.4 keV (S Kα), 1.8 keV (Si Kα), 1.4 keV (Mg Lα), and 0.9 keV (Ne IX), representing various atomic features in the spectrum (Diez et al., 2023). A simulated spectrum is shown in Figure 9.

As mentioned in §3.2.1, CRSFs are modeled using a multiplicative line model with a Gaussian optical depth profile described by its energy E i , its strength d i , and its width σ i . Here the subscript i denotes either the fundamental line around 25 keV (i = 1) or the n 2 harmonic line around 55 keV (i = 2). The width σ 1 is set to 0.5 × σ 2 (Diez et al., 2022). Instead of relying on the (rather uncertain; see, e.g., Maitra et al. 2018) current knowledge of the phase-resolved behavior of the CRSF parameters, we let the four most relevant parameters (E 1 , E 2 , d 1 , d 2 ) vary sinusoidally with random phase shifts relative to each other. This approach demonstrates the power of HEX-P to resolve small changes in any of the parameters, even for relatively weak lines. In particular, we assume that the fundamental line energy E 1 varies by about ±3.5 keV as a function of phase, while the n 2 harmonic line energy E 2 varies by ±5 keV. The strengths d 1 and d 2 vary by ±0.3 keV and ±5.0 keV, respectively.
In addition, we also allow the absorption column of the partial absorber to vary, by ±5 × 10 22 cm −2 around the average value of 32.1 × 10 22 cm −2 . While we do not necessarily expect the absorption column to vary significantly as a function of pulse phase, this variability highlights the capability of the HEX-P/LET to measure small changes in absorbing columns on time scales as short as 5 ks. Such variations have been observed in time-resolved spectroscopy and allow us to study physical properties, like the density and clump sizes of the accreted medium (Diez et al., 2023). At the same time, the changes in N H do not influence the high-energy spectrum where the CRSFs are present.

For our simulations, we assume that the LET will be operated in a fast readout mode to avoid pile-up and with negligible deadtime. We simulate spectra with 50 ks exposure time each, and split the data into 10 phase bins, so that each one has an exposure time of 5 ks. We simulate 100 individual spectra for each phase bin and calculate the standard deviation (SD) of that sample as the uncertainty for each parameter. We additionally calculate the 90th and 10th percentiles of the 90% uncertainties of the individual realizations as a realistic estimate of the expected uncertainties in a real 50 ks observation. Using multiple realizations for each phase bin allows us to avoid issues with any one particular realization and shows that our uncertainties properly sample the Poisson noise inherent to counting statistics. The results are shown in Figure 10.

All parameters can be very well reconstructed, with the fundamental energy having uncertainties < 0.5 keV for phases where its strength is > 0.4 keV (i.e., phases 0.5-1.0). Even for weaker lines (despite having larger uncertainties of about ±1 keV), the line can still be significantly detected. This is a significant improvement over NuSTAR, for which Fürst et al. (2018) found that the energy was basically unconstrained for line strengths < 0.5 keV.

The n 2 harmonic line energy can be very well constrained (with uncertainties < 2 keV) for central energies of the line < 57 keV. Due to the exposure time of only 5 ks per phase bin and the low cutoff energy of ∼25 keV, the line appears at the very edge of the useful range of the spectrum. Therefore, higher energies become more difficult to constrain, as most of the line is outside the useful passband; however, we still find typical constraints of ±3-4 keV, even for a simulated line energy of 61 keV.

These results represent a significant improvement over previous phase-resolved studies of Vela X-1. For example, using RXTE, Kreykenbohm et al. (1999) found variations of the CRSF energies for both the fundamental and the n 2 harmonic line, but could only use 10 phase bins and still had average uncertainties of 2-3 keV for the fundamental line and 5-10 keV for the n 2 harmonic. Recently, Liu et al. (2022) published results obtained on Vela X-1 with Insight/HXMT. Using 16 phase bins for a ∼100 ks exposure, they found average uncertainties comparable to our HEX-P simulations for the n 2 harmonic line, but with significantly larger uncertainties (a few keV) for the fundamental line. We also note that Liu et al. (2022) found the n 2 harmonic line at energies between 40-50 keV, i.e., at much lower energies than simulated here and therefore in a part of the spectrum with a much higher signal-to-noise ratio.
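The sinusoidal input curves described above are easy to reproduce. The sketch below generates the assumed phase dependence of the five varied parameters (with random relative phase shifts) and the per-bin scatter one would estimate from repeated realizations. The Gaussian noise model and the assumed per-fit errors, as well as the mean strength of the n 2 line, are stand-ins, since the full fakeit-and-fit loop cannot be reproduced without the HEX-P response files.

```python
import numpy as np

rng = np.random.default_rng(42)
phase = (np.arange(10) + 0.5) / 10.0  # centers of the 10 phase bins

# Mean values and sinusoidal amplitudes following the text; random phase shifts.
params = {                       # name: (mean, amplitude)
    "E1 [keV]": (25.0, 3.5),
    "E2 [keV]": (55.0, 5.0),
    "d1 [keV]": (0.4, 0.3),
    "d2 [keV]": (10.0, 5.0),     # mean strength assumed for illustration
    "NH [1e22/cm2]": (32.1, 5.0),
}
truth = {
    name: mean + amp * np.sin(2 * np.pi * (phase + rng.uniform()))
    for name, (mean, amp) in params.items()
}

# Stand-in for "simulate and fit 100 spectra per bin": perturb the truth with
# assumed per-fit measurement errors and report the sample SD per bin.
sigma_fit = {"E1 [keV]": 0.5, "E2 [keV]": 2.0, "d1 [keV]": 0.1,
             "d2 [keV]": 1.5, "NH [1e22/cm2]": 1.0}  # assumed errors
for name, curve in truth.items():
    fits = curve + rng.normal(0.0, sigma_fit[name], size=(100, curve.size))
    print(name, np.round(fits.std(axis=0, ddof=1), 2))
```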
HEX-P observations would therefore be a big step forward in obtaining phase-resolved spectroscopy of bright HMXBs, for which we can constrain the CRSF parameters, in particular the energy, much better than with existing instruments and in shorter exposure times. This reduction in exposure time means that we can either observe more sources in less time or slice the phase-resolved spectra more finely to obtain a more detailed look at the emission and magnetic field geometry of the accretion column (Nishimura, 2003; Schönherr et al., 2007; Schwarm et al., 2017).

Figure 10. Absorption column and cyclotron line variability in Vela X-1, calculated from simulated phase-resolved HEX-P spectra using the results of Diez et al. (2022) with an average flux of 6.7×10 −9 erg s −1 cm −2 in the 3-80 keV band. The parameters from top to bottom are: absorption column N H , fundamental line energy (E 1 ), fundamental line strength (d 1 ), n 2 harmonic line energy (E 2 ), and n 2 harmonic line strength (d 2 ). Note that the fundamental line width is set to half of the n 2 harmonic width. The black uncertainties show the standard deviation for a sample of 100 simulations per phase bin. The orange dashed histogram indicates the expected 90% uncertainties on each individual realization. The green line indicates the input values for each phase bin. For details about the simulation set-up see § 3.2.2 and Table S3 in the Supplemental Material.

Constraining the surface magnetic field strength from quiescent observations of Be X-ray binaries

Details regarding the formation of cyclotron lines in the spectra of accreting NSs in HMXBs are highly debated in the literature. The observed positive and negative correlations of the cyclotron energy with luminosity are often interpreted as a change in the dynamics of the accretion process in different accretion regimes, distinguished by a critical luminosity above which a radiation-dominated shock forms in the column above the NS surface. For high-luminosity regimes, when it is assumed that L X > L crit , several mechanisms for the formation of cyclotron lines have been suggested that explain the observed negative line energy versus luminosity correlation. These scenarios involve different locations for the line production: above the NS surface in an accretion column that is growing with increasing luminosity (see, e.g., Becker et al., 2012) or in the illuminated atmosphere of the NS at lower magnetic latitudes, where the column radiation is reprocessed (as suggested by Poutanen et al. 2013; see, however, Kylafis et al. 2021). For lower luminosities, L X ≲ L crit , the formation of cyclotron lines is usually attributed to comparatively lower heights in the accretion column. In this case, a positive line energy versus luminosity correlation can be explained by the formation of a collisionless shock (see, e.g., Rothschild et al., 2017; Vybornov et al., 2017) that moves closer to the NS surface at higher luminosities, or by the redshift due to the bulk motion of the accretion flow (Nishimura, 2014; Mushtukov et al., 2015). However, to apply these models in a consistent way and to distinguish between their predictions, it is necessary to know the surface field at the magnetic pole of the NS. This can then serve as a reference to estimate, for example, the characteristic height of the line-forming region in the column or the velocity of the accretion flow near the surface.
Recent evidence of accretion in Be X-ray binaries in quiescence, i.e., with L X ≪ L crit , and the simple emission region geometry expected in this case (i.e., a hot spot on the NS surface) together represent a unique opportunity to observe cyclotron lines at the energy corresponding to the surface value of the magnetic field. From current observational examples it seems that, for this accretion state to occur, the magnetic field of the source might have to be sufficiently high, B ≳ 5 × 10 12 G, and the spin period comparatively long, ≳ 100 s (as, e.g., for GX 304−1 and GRO J1008−57: Tsygankov et al., 2019b; Lutovinov et al., 2021). The low flux level, however, makes the detection of cyclotron lines at high energies very challenging. GX 304−1 is one of the first sources which unambiguously exhibited stable quiescent accretion (Rouco Escorial et al., 2018) with a cyclotron line known from outburst observations (Mihara et al., 2010; Malacaria et al., 2015). The corresponding flux level in the 2-10 keV energy band, ∼0.4 mCrab, is below the sensitivity of Insight/HXMT. With NuSTAR we can access the high-energy emission, but are typically unable to constrain the turnover of the second hump of the characteristic double-hump spectrum (see §3.1) and the cyclotron line (Tsygankov et al. 2019b; Zainab et al., in prep.).

The best way to probe the above-mentioned science cases is to observe the source during the quiescent accretion regime, i.e., at a low-luminosity state that is typically below the detection threshold of X-ray all-sky monitors and accessible only with pointed observations subsequent to an X-ray outburst. Such a regime is also of importance since the simplified physics of plasma stopping in the nearly static NS atmosphere allows for more detailed modeling of emission processes. The tenuous flow of matter is stopped in the NS atmosphere by Coulomb collisions, resulting in a steep temperature gradient from ∼30 keV at the top down to ∼2 keV in the lower layers, separated by only ∼10 m (see, e.g., Sokolova-Lapa et al., 2021). The intrinsic emission is mainly produced by magnetic bremsstrahlung (for sufficiently low magnetic fields, cyclotron photons are produced as well from collisional excitations of electrons moving in bulk; Mushtukov et al., 2021), which is then modified by Compton scattering. This regime of accretion allows, for the first time, modeling of the temperature and density structure of the emission region to be combined with a joint simulation of the continuum and cyclotron line formation (Mushtukov et al., 2021; Sokolova-Lapa et al., 2021).

In this way, existing physically motivated models in principle provide access to information about the field strength at the poles of accreting highly magnetized NSs. However, the limitations of current high-energy missions do not permit us to constrain the corresponding cyclotron lines in the spectra. HEX-P will open a new window onto this regime.

In order to demonstrate HEX-P's capabilities to measure the surface magnetic field strength of a NS in quiescence, we simulate observations of the quiescent state of the Be X-ray binary GX 304−1, the first system for which a transition to the two-component spectrum was observed by NuSTAR (Tsygankov et al., 2019b). We use the physical model polcap presented by Sokolova-Lapa et al. (2021), which describes emission in the low-luminosity accretion regime. Here, we adopt an updated parametrization of the model (using the same underlying pre-calculated spectra; E. Sokolova-Lapa, priv. comm.).
The parameters are the mass flux, Ṁ/πr 2 0 , where Ṁ is the mass-accretion rate and r 0 is the polar cap radius; the cyclotron energy, E cyc , corresponding to the polar magnetic field strength, B; and the normalization, given in terms of r 2 0 /D 2 , where D is the distance to the source. We set the normalization and the mass flux based on the flux level and the shape of the spectrum as observed previously by NuSTAR. The magnetic field strength is set to B = 5.35 × 10 12 G, derived from the centroid energy of the cyclotron line observed during the previous outburst (∼50 keV; Jaisawal et al., 2016) and corrected for the gravitational redshift near the surface of the NS (z g ≈ 0.24, assuming standard NS parameters). The corresponding spectrum calculated with the polcap model and the exact values of the parameters used for the simulations are shown in Figure 11. Due to internal averaging over the emission angles to obtain the total flux in the NS rest frame (see details in Sokolova-Lapa et al., 2021), the cyclotron line in the corresponding spectrum is located at ≈60 keV. Taking into account the gravitational redshift, the cyclotron line in the observed spectrum is therefore expected at ≈48 keV. We simulate a 60 ks observation for all three HEX-P instruments, combining the polcap model with tbabs (Wilms et al., 2000) to account for interstellar absorption (i.e., tbabs*polcap), fixing N H to the Galactic value of 1.1 × 10 22 cm −2 in the direction of the source. The resulting luminosity in the 1-80 keV range is ∼8 × 10 33 erg s −1 .

We compare the HEX-P spectra against a NuSTAR observation of the same exposure. We first fit both the simulated HEX-P and the existing NuSTAR observations using a model that includes two independent Comptonized components (comptt; Titarchuk, 1994), tbabs*(comptt 1 + comptt 2 ). This model, with or without a multiplicative Gaussian-like cyclotron line (gabs), is commonly used to describe the two-component spectra of low-luminosity states (see, e.g., Tsygankov et al., 2019a; Lutovinov et al., 2021; Doroshenko et al., 2021). Similarly to earlier analyses (Tsygankov et al., 2019b), we obtain a good description of the NuSTAR data with this model, with χ 2 red = 136.93/107 = 1.28 for the best fit. For the simulated HEX-P data, the same model consisting of two absorbed Comptonized components provides a formally satisfactory fit (χ 2 red = 322.64/275 = 1.17); however, the residuals are flat only at low and intermediate energies and indicate a dip at around 40-60 keV, where the cyclotron line is expected from the underlying physical model (see Figure 11). The best fit for a model which includes an additional Gaussian absorption line at high energies to describe the cyclotron line yields a line centroid energy of 46.5 +4.1 −2.7 keV, width 8.2 +3.1 −2.1 keV, and strength 54 +41 −21 keV, with χ 2 red = 261.23/276 = 0.95. The centroid energy corresponds (within its uncertainties) to the redshifted cyclotron line from the physical model, which is expected at ≈60/(1 + z g ) keV ≈ 48 keV, assuming standard NS parameters to estimate the redshift, z g = 0.24.
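The redshift bookkeeping in this paragraph is compact enough to check numerically. The sketch below computes z g for assumed "standard" NS parameters (1.4 M ⊙ and a 12 km radius are this sketch's assumptions) and reproduces the two conversions used above: the field implied by the ∼50 keV outburst line and the observed-frame energy of the ≈60 keV rest-frame line.

```python
import math

G = 6.674e-8      # cgs gravitational constant
C = 2.998e10      # speed of light, cm/s
MSUN = 1.989e33   # solar mass, g

def z_grav(m_msun=1.4, r_km=12.0):
    """Gravitational redshift at the NS surface: z = (1 - 2GM/Rc^2)^(-1/2) - 1."""
    rs = 2 * G * m_msun * MSUN / C**2  # Schwarzschild radius, cm
    return 1.0 / math.sqrt(1.0 - rs / (r_km * 1e5)) - 1.0

zg = z_grav()
print(f"z_g = {zg:.2f}")  # ~0.24 for these assumed parameters
# ~5.3e12 G; the text quotes 5.35e12 G using z_g = 0.24 exactly:
print(f"B = {50.0 * (1 + zg) / 11.6:.2f}e12 G")
# ~48 keV observed-frame energy for the ~60 keV rest-frame line:
print(f"E_obs = {60.0 / (1 + zg):.1f} keV")
```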
Figure 12 shows the resulting spectra and the corresponding best-fit models for the real NuSTAR and the simulated HEX-P observations. This example shows that even at this exposure, which is relatively low for the quiescent state of Be X-ray binaries, we can constrain the cyclotron line energy, and thus the surface magnetic field, to within ∼20%. This capability is crucial for systems which have never been observed in high-luminosity states, that is, for which cyclotron lines have never been observed in their spectra. This class of Be X-ray binaries accreting in quiescence is actively growing, e.g., through NuSTAR follow-up of Be X-ray binary candidates found during the first eROSITA All Sky Survey (see, e.g., Doroshenko et al., 2022). More similar discoveries are expected as deeper X-ray survey data from eROSITA become available. The spectral shape of the latter sources, in particular the location of the high-energy excess associated with energies below the cyclotron resonance, suggests a magnetic field strength similar to that of GX 304−1. The search for high-energy cyclotron lines, which makes it possible to unambiguously constrain the surface magnetic field of an accreting neutron star, requires superior sensitivity at hard X-rays, as will be provided by HEX-P. For a discussion of NS science that can be done with HEX-P for B-fields in excess of ∼10 14 G (i.e., magnetars), we refer the reader to Alford et al. (2023, in prep.).

Constraining super-critical accretion and the critical luminosity via cyclotron line evolution in extragalactic sources

A correlation between cyclotron line energy and X-ray luminosity has been reported for a handful of X-ray pulsars (see Staubert et al., 2019, and above). The dependence of the line energy on luminosity can be attributed to changes in the height of the accretion column and can be positive or negative for low and high X-ray luminosity, respectively (Becker et al., 2012). However, in a couple of sources a secondary effect has been reported: at equal levels of luminosity, the energy of the cyclotron line can differ (by up to 10%) between the rise and the decay of an outburst or between different outbursts (e.g., V 0332+53: Cusumano et al. 2016, and SMC X-2: Jaisawal et al. 2023). Doroshenko et al. (2017) proposed that this effect is likely caused by a change in the emission region geometry (e.g., different combinations of height and width of the accretion column), while an alternative explanation is accretion-induced decay of the NS's magnetic field (Cusumano et al., 2016; Jaisawal et al., 2023).
Given this complex observational behavior, it is critical to detect CRSFs in more transient systems and to study their evolution over luminous outbursts. Although a significant population of HMXBs is found in the Milky Way, interpreting results from observations of its members is often hampered by their uncertain distances and large foreground absorption. In that sense, nearby galaxies like the star-forming Large and Small Magellanic Clouds (LMC and SMC) offer a unique laboratory to complement our studies of luminous Galactic HMXBs. Sources in these galaxies have well determined distances of ∼50 kpc (LMC) or ∼60 kpc (SMC) and low Galactic foreground absorption (∼10 20 cm −2 ), making them ideal targets for spectral and temporal studies during major outbursts. Based on past observations and recent statistics, an outburst that peaks above 2 × 10 38 erg s −1 occurs in the Magellanic Clouds every few years (e.g., Vasilopoulos et al., 2014; Koliopanos and Vasilopoulos, 2018; Maitra et al., 2018; Vasilopoulos et al., 2020). Such outbursts are brighter than the Eddington luminosity for a typical NS; more precisely, they are brighter than the critical luminosity, L crit ∼10 37 erg s −1 , for a typical accretion column (see §3.1). They are therefore called super-Eddington or super-critical outbursts. SMC X-2 is a good example to demonstrate HEX-P's capabilities to detect and follow the evolution of a CRSF during a bright outburst. This BeXRB pulsar has exhibited two outbursts that reached luminosities above the Eddington limit over the last decade, one in 2015 and another in 2022. Both outbursts were followed up with Swift/XRT, as well as with NuSTAR ToOs (three observations in 2015 and one in 2022), covering a broad range in luminosity of (2 − 6) × 10 38 erg s −1 (see Fig. 13, left panel). When comparing the CRSF energy for the two outbursts at the same luminosity level, the line energy was about 2 keV higher during the 2022 outburst than observed previously in 2015 (see Fig. 13, right panel). This corresponds to a difference in magnetic field strength of ∼2 × 10 11 G, akin to the difference reported by Cusumano et al. (2016) for V 0332+53.
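Because the Magellanic Cloud distances are well determined, converting between observed flux and luminosity is essentially exact, which is what makes clean cyclotron-line-versus-luminosity tracks possible there. A minimal sketch of the conversion, assuming isotropic emission and ignoring the bolometric and absorption corrections that the text applies:

```python
import math

KPC_CM = 3.086e21  # centimeters per kiloparsec

def flux_from_lum(l_erg_s, d_kpc):
    """Isotropic flux (erg/cm^2/s) for luminosity L at distance d."""
    return l_erg_s / (4.0 * math.pi * (d_kpc * KPC_CM) ** 2)

# SMC distance ~60 kpc; an outburst at L = 2e38 erg/s:
print(f"{flux_from_lum(2e38, 60.0):.1e} erg/cm^2/s")  # ~4.6e-10
```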
In order to test HEX-P's capability to detect CRSF features in extragalactic sources and determine whether the data would be sufficient to distinguish trends with luminosity, we performed several simulations based on the spectral properties of SMC X-2 as measured by NuSTAR (Jaisawal et al., 2023). For all simulations we used both the HET and LET detectors. For the continuum we used an absorbed (tbabs with N H = 10 20 cm −2 ; Wilms et al., 2000) cut-off power-law model (i.e., cutoffpl) and a cyclotron absorption feature modeled with the gabs model. We limited our simulations to the super-critical regime and a bolometric luminosity of 1.5-6×10 38 erg s −1 , which yields an absorbed flux range of 2-8×10 −10 erg cm −2 s −1 (1-50 keV). We simulated spectra where the CRSF energy varied with luminosity assuming two arbitrary inverse linear energy dependencies (trend 1 and trend 2 in Fig. 13). We then fit the spectra and derived the best-fit parameters for the line with corresponding uncertainties. The results are shown in Figure 13 (right panel). HEX-P can constrain the CRSF parameters (here the central line energy) at least two times better than NuSTAR with an exposure time (10 ks) that is only between half and a fourth that of NuSTAR, as well as discern between the different trend inputs. For a longer exposure time of 50 ks the line parameters can be measured with unparalleled accuracy at the distance of the SMC. Thus, HEX-P will be able to detect and follow cyclotron line evolution during super-critical outbursts in the Magellanic Clouds and thereby test competing physical models (i.e., similar to V 0332+53; Cusumano et al., 2016; Vybornov et al., 2018). Note that the flux regime for which these trends are simulated (see Table S1) and the science cases of the previous subsections suggest that trends in CRSF energy can also be probed at sub-critical through super-critical luminosities in Galactic sources. For science that can be executed with HEX-P regarding pulsations and cyclotron lines in extragalactic ultraluminous X-ray (ULX) sources, which reach even higher luminosities of > 100× their Eddington limit, we refer the reader to Bachetti et al. (accepted).
In addition, it is expected that accreting XRPs crossing the critical luminosity should show a reversal of the cyclotron line energy versus luminosity correlation. To date, only marginal evidence has been observed for such a trend reversal, and in only a couple of sources (Doroshenko et al., 2017; Malacaria et al., 2022). As mentioned above, the Magellanic Clouds host comparatively many transient accreting pulsars displaying high-luminosity outbursts. HEX-P's ability to constrain the cyclotron line energy with unprecedented accuracy across orders of magnitude in luminosity (and thus over distinct accretion regimes) will provide us with the opportunity to trace more, and better defined, trend reversals. This will allow us to constrain the critical luminosity for a given source which, in turn, places constraints on physical parameters of the observed system, such as the NS magnetic field strength or the source distance (Becker et al., 2012).

CONCLUSION

We have presented open questions about NSs and accretion onto strongly magnetized sources, and demonstrated HEX-P's unique ability to address these questions. The broad X-ray passband, improved sensitivity, and low X-ray background make HEX-P ideally suited to understanding accretion in X-ray soft sources, down into low accretion rate regimes. In particular, for LMXBs we have shown that HEX-P observations will discriminate between competing continuum emission models that diverge above 30 keV, an energy band inaccessible to current focusing X-ray telescopes for these soft-spectrum sources. Additionally, leveraging the improved sensitivity and broad X-ray passband, HEX-P will achieve tighter constraints on NS radius measurements through reflection modeling. These measurements are complementary to, and independent of, other methodologies, and will narrow the allowed region on the NS mass-radius plane for viable EoS models of ultra-dense, cold matter, providing fundamental physics information in a regime inaccessible to terrestrial laboratories.

For the case of NSs in HMXBs, HEX-P will vastly improve our ability to: 1) detect multiple cyclotron line features in a single observation, aiding our understanding of the magnetic field strength in different regions of the accretion column and of the poorly known physical mechanisms behind the formation of higher harmonic features; 2) obtain detailed phase-resolved spectra to track the dependence of CRSFs on pulse phase in order to explore the geometric configuration of the accretion column emission in unprecedented detail; 3) constrain the surface magnetic field strength of accreting NSs in the lowest accretion regimes and characterize the continuum spectral formation to provide updated physically motivated spectral models for future observations; and 4) identify CRSFs in extragalactic sources and follow the evolution of their line energy as a function of luminosity in order to distinguish between competing theories regarding changing emission region geometry or accretion-induced decay of the NS B-field.

HEX-P will provide a new avenue for testing the magnetic field configuration, emission pattern, and accretion column physics close to the surface of NSs, as well as enhance our understanding of extreme accretion physics in both LMXBs and HMXBs.

ACKNOWLEDGMENTS

California Institute of Technology, under a contract with NASA. E.S.L. and J.W. acknowledge partial funding under Deutsche Forschungsgemeinschaft grant WI 1860/11-2 and Deutsches Zentrum für Luft- und Raumfahrt grant 50 QR 2202. R.M.L. and A.W.S. would like to thank Dr.
Kazuhiko Shinki for providing consultation on statistical methods of §2.2.2. The authors thank the reviewers for their detailed comments that enhanced the science cases presented within the manuscript.

SUPPLEMENTAL MATERIAL

Here we provide the input values used for creating the various HEX-P simulations that are presented in the accompanying manuscript. Additionally, we specify the flux levels of the simulations, since the introduction mentions a pile-up limit for the LET in units of mCrab.

Figure 2. Position of the innermost stable circular orbit (ISCO) in units of gravitational radii versus the dimensionless spin parameter a of the CO. The hatched region indicates where the NS surpasses the rotational limit and would break apart. The horizontal dot-dashed lines indicate the corresponding values of 10 km and 12 km radius for a canonical NS mass of 1.4 M ⊙ . The horizontal dotted lines indicate the same radius values for a more massive NS of 2.0 M ⊙ .

Figure 3. Comparison of a simulated 20 ks HEX-P (LET+2 HETs) observation for two different continuum models typically used to describe NS LMXB soft state spectra: a double thermal continuum model with weak Comptonization (Model 1: black) and an accretion disk with blackbody Comptonization (Model 2: blue). The models diverge above 30 keV, where currently available missions quickly become background dominated for the same exposure time. To emphasize this point, the same S/N near 30 keV could be achieved with NuSTAR in an exposure time of 96.6 ks. The broad X-ray passband and improved sensitivity of HEX-P will differentiate the models.

Figure 4. Posterior distribution of a and R in for 10 4 iterations of simulating HEX-P spectra for three spin values: (a) a = 0.0, (b) a = 0.17, and (c) a = 0.3. The 1σ, 2σ, and 3σ contours are shown. The white points indicate the highest probability value. The dotted grey line indicates a constant line of 6 R g , which corresponds to the ISCO for a = 0. The distribution tightens at high values of spin as relativistic effects become stronger. However, the distributions show that the data are able to trace out unique values of inner disk radii by trending towards lower values of R g at higher a.

Figure 5. Mass and radius constraints from reflection modeling compared to NS gravitational wave events and NICER pulsar light curve modeling. The darker solid orange region indicates the improved radius constraints for reflection modeling of HEX-P data based on Cygnus X-2 for the case of high a. Note that both R in and a are free parameters when fitting the HEX-P data, while the NuSTAR plus NICER analysis fixes a = 0 (lighter solid orange region in panel a; Ludlam et al. 2022). The solid gray region indicates where causality is violated (i.e., the sound speed within the NS exceeds the speed of light). Pulsar light curve modeling of NICER data for PSR J0740+6620 is indicated in teal (dashed lines: Miller et al. 2021; solid lines: Riley et al. 2021). Maroon indicates the results for light curve modeling of PSR J0030+0452 (dashed lines: Miller et al. 2019; solid lines: Riley et al. 2019). The black dotted line denotes the mass-radius constraints from the combined GW170817 (Abbott et al., 2019) and GW190425 (The LIGO Scientific Collaboration et al., 2020) signatures using a piece-wise polytropic model as reported in Raaijmakers et al.
(2021). Confidence contours correspond to the 68% and 95% credible regions. Panel (b) shows select EoS models from Lattimer and Prakash (2001) to demonstrate the behavior of different internal compositions on the mass-radius plane.

Figure 6. Simplified depiction of the formation of cyclotron lines in the spectra of accreting highly magnetized neutron stars due to photon scattering off electrons, whose motion perpendicular to the field lines is restricted by Landau quantization. The latter leads to strong cyclotron resonances in the Compton scattering cross section, broadened by the electron thermal motion parallel to the magnetic field. The centroid energies of the cyclotron lines in observed spectra approximately correspond to the redshifted energies of the resonances (with some correction for the scattering redistribution effects during the lines' formation). The gravitational redshift depends on the height of the line-forming region, h, above the surface of the neutron star.

Figure 7. Simulated 20 ks HET spectra for Cen X-3, as well as the best-fit model, are shown in panel (a). Panels (b), (c), and (d) show the residuals obtained when we set the strength of the fundamental, n 2 harmonic, and n 3 harmonic to zero, respectively. We re-binned the data visually and provide arrows for the cyclotron line components to aid with clarity. For comparison, we provide a simulated NuSTAR spectrum using the same model and exposure time, which is shown in panel (e). The NuSTAR data are strongly background dominated at these flux levels, which makes constraining the higher energy harmonics difficult even at much longer exposure times (see text for more details).

Figure 11. Model used to simulate the observation of the Be X-ray binary GX 304−1 in quiescence. The vertical orange line indicates the center of the cyclotron line. The cyclotron line and the continuum are calculated together using polarized radiative transfer simulations (Sokolova-Lapa et al., 2021). The low-energy "thermal" hump and the high-energy hump below the cyclotron line (the red wing of the cyclotron line) are typically observed as a "two-hump" spectrum from Be X-ray binaries in quiescence.

Figure 12. NuSTAR observation (left, a) and simulated HEX-P (right, a) spectra of a 60 ks observation of the Be X-ray binary GX 304−1 in quiescence and corresponding best-fit models. Both panels b show residuals for the best-fit models. For the simulated HEX-P spectra, we also show the residuals for the case when the cyclotron line strength is set to zero (left, c). The high-energy cyclotron line can be constrained from the HEX-P data, with a centroid energy of 46.5 +4.1 −2.7 keV.
Figure 13. Left: The 2015 and 2022 super-critical outbursts of SMC X-2 observed by Swift/XRT. The vertical lines mark the epochs of the four NuSTAR observations of the system. Right: The cyclotron line dependence on luminosity. The star symbols mark the measured CRSF values for SMC X-2 for the NuSTAR observations (Jaisawal et al., 2023). Note that the NuSTAR points on the trend 1 line are from the 2015 outburst, while the point on the trend 2 line was measured during the 2022 outburst; hence, different cyclotron line energies can be measured even when the source is at nearly the same luminosity during outburst, indicating an inherent difference between outbursts. We demonstrate HEX-P's capability to discern between different possible trends in cyclotron line energy as a function of luminosity in these types of systems in as little as 10 ks, though this is greatly improved for 50 ks exposures.

Table S2. Input spectral parameters for the two continuum models used to demonstrate HEX-P's capabilities to distinguish continuum shapes in NS LMXBs in section 2.2.1. The inp type parameter of nthcomp in Model 2 is set to 0 so that the seed photons come from a single-temperature blackbody.
Model Averaging for Accelerated Failure Time Models with Missing Censoring Indicators

Model averaging has become a crucial statistical methodology, especially in situations where numerous models vie to elucidate a phenomenon. Over the past two decades, there has been substantial advancement in the theory of model averaging. However, a gap remains in the field regarding model averaging in the presence of missing censoring indicators. Therefore, in this paper, we present a new model-averaging method for accelerated failure time models with right censored data when censoring indicators are missing. The model-averaging weights are determined by minimizing the Mallows criterion. Under mild conditions, the calculated weights exhibit asymptotic optimality, leading to the model-averaging estimator achieving the lowest squared error asymptotically. Monte Carlo simulations demonstrate that the method proposed in this paper has lower mean squared errors compared to other model-selection and model-averaging methods. Finally, we conducted an empirical analysis using the real-world Acute Myeloid Leukemia (AML) dataset. The results of the empirical analysis demonstrate that the method proposed in this paper outperforms existing approaches in terms of predictive performance.

Introduction

In some practical scenarios, we often need to select useful models from a candidate model set. A popular approach to address this issue is model selection. Methods such as the Akaike Information Criterion (AIC) [1], Mallows' Cp [2] and Bayesian Information Criterion (BIC) [3] are designed to identify the best model. However, in cases where a single model does not receive strong support from the data, these model-selection methods may overlook valuable information from other candidate models, leading to issues of model-selection uncertainty and bias [4]. To tackle these challenges and enhance prediction accuracy, several model-averaging techniques have been developed to leverage all information from the candidate models. Taking inspiration from AIC and BIC, Buckland et al. [5] proposed smoothed AIC (SAIC) and smoothed BIC (SBIC) methods based on AIC and BIC, respectively. Hansen [6] introduced the Mallows model-averaging (MMA) estimator, obtaining weights through the minimization of Mallows' Cp criterion. The MMA estimator asymptotically attains the minimum squared error among the model-averaging estimators in its class. Subsequently, Wan et al. [7] relaxed the constraints of Hansen [6], allowing for non-nested candidate models and continuous weights. In practical applications, many datasets exhibit heteroscedasticity. Therefore, it is essential to explore model-averaging methods tailored for heteroscedastic settings. Firstly, Hansen and Racine [8] proposed Jackknife model averaging (JMA), which determines weights by minimizing a cross-validation criterion. JMA significantly reduces Mean Squared Error (MSE) compared to MMA when errors are heteroscedastic. Secondly, Liu and Okui [9] modified the MMA method proposed by Hansen [6] to make it suitable for heteroscedastic scenarios. Furthermore, Zhao et al. [10] extended [6]'s work by estimating the covariance matrix based on the weighted average of squared residuals corresponding to all candidate models. This approach improves the model-average estimator under heteroskedasticity settings.
In survival analysis, the accelerated failure time (AFT) model provides a straightforward description of how covariates directly impact survival time and has consequently garnered widespread attention. Several parameter-estimation methods exist for the AFT model, including Miller's estimator [11], the Buckley-James estimator [12], the Koul-Susarla-Van Ryzin (KSV) estimator [13], and the WLS estimator [14]. However, all these methods assume that the censoring indicator is observable. Therefore, Wang and Dinse [15] improved the KSV estimator to make it adaptable to situations where the censoring indicator is missing.

Under practical conditions, it is common to encounter situations where only the observed time is available and it is uncertain whether the event of interest has occurred. In such cases, the data suffer from missingness in the censoring indicator. For example, in a clinical trial for lung cancer, a patient may die for unknown reasons; while the survival time is observed, it is uncertain whether the patient died specifically of lung cancer. This situation leads to missingness in the censoring indicator. Previous studies have mainly addressed missingness in the censoring indicator under a specific model, and research on model averaging for right-censored data typically assumes that the censoring indicator is observable. Therefore, this paper adopts the inverse probability weighting method proposed by [15] to construct the response variable; through appropriate weight-selection criteria, weights are chosen to build the model-averaged estimator for the accelerated failure time model. This significantly enhances the predictive performance of the model and mitigates the bias introduced by the selection of a single model. Compared with previous research, this paper makes two main contributions: first, it introduces a novel model-averaging method for the case of missing censoring indicators; second, it allows for heteroscedasticity and employs model-averaging techniques to estimate the variance.

The remaining sections of this paper are organized as follows. In Section 2, we introduce the notation and progressively delineate the methodology and associated theoretical properties of the proposed model-averaging approach. In Section 3, we report the Monte Carlo simulation results. In Section 4, we assess the predictive performance of the proposed model-averaging method against other approaches using the real-world Acute Myeloid Leukemia (AML) dataset. In Section 5, we provide a comprehensive summary of the entire paper and suggest future research directions. All theorem proofs are presented in Appendix A.

Methodology and Theoretical Property

We denote Y_i = log(T_i), where T represents the survival time and V denotes the censoring time. X = (X_1', X_2', ..., X_n')' denotes the covariate matrix for n independent observations, where X_i = (x_i1, x_i2, ..., x_ip)'. The accelerated failure time model can be expressed as Y_i = X_i'β + e_i, where e_i is the random error with E(e_i | X_i) = 0 and E(e_i² | X_i) = σ_i². We assume that there are M candidate models in the candidate model set, where the mth candidate model contains p_m covariates. Following [7], these candidate models are non-nested. The mth candidate model is Y = X_m β_m + e_m, where X_m is an n × p_m full column-rank matrix, β_m = (β_m1, ..., β_mp_m)', and e_m = (e_m1, ..., e_mn)'.
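For readability, the model and the mth candidate model can be restated compactly in LaTeX (the equation layout is ours; all symbols are taken from the definitions above):

```latex
% AFT model and the m-th candidate model, restated from the text
\begin{align}
  Y_i &= \log(T_i) = X_i'\beta + e_i,
      \qquad E(e_i \mid X_i) = 0,\quad E(e_i^2 \mid X_i) = \sigma_i^2,\\
  Y   &= X_m \beta_m + e_m,
      \qquad X_m \in \mathbb{R}^{n \times p_m}\ \text{of full column rank},
      \quad m = 1,\dots,M.
\end{align}
```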
In the case of right-censored data, the response variable Y_i might be censored, making it unobservable. We only observe (Z_i, X_i, δ_i), where Z_i = min(Y_i, C_i) and the censoring indicator δ_i = I(Y_i ≤ C_i). Define a missingness indicator ξ_i, which is 1 if δ_i is observed and 0 otherwise. When the censoring indicators are missing, the observed data are {Z_i, X_i, ξ_i, ξ_i δ_i}. For simplicity, we set U_i = (Z_i, X_i)'. In this paper, similar to [15], we assume that the missingness of δ depends on the data only through U. This assumption is more stringent than the missing at random (MAR) condition yet less restrictive than the assumption of missing completely at random (MCAR).

Koul et al. [13] introduced a method that uses synthetic data to construct linear regression models. Wang and Dinse [15] extended [13]'s method to the situation where censoring indicators are missing. In our work, we follow the approach proposed by [15] to construct a response Y_Wi in inverse-probability-weighted form. It is easy to verify that, under the missing-data mechanism of this paper, E(Y_Wi | X_i) = E(Y_i | X_i). Analogously to Equation (2), we have Y_Wi = X_i'β + e_Wi, where E(e_Wi | X_i) = 0 and σ²_Wi = var(e_Wi | X_i); in matrix form, Y_W = Xβ + e_W. The weighted least squares estimator of β_m then follows, and the model-averaging estimator of μ is defined as μ̂(w) = Σ_{m=1}^{M} w_m μ̂_m for any w ∈ H_M, the set of weight vectors w ∈ [0, 1]^M with Σ_m w_m = 1. Let the squared loss be L(w) = ||μ̂(w) − μ||², where ||·|| denotes the Euclidean norm; the risk function is then defined as the conditional expectation of L(w).

Regarding the choice of weights, a natural approach is to minimize the risk function to obtain the optimal weights. However, as the derivation of the risk shows, the risk function involves the unknown μ, which makes it infeasible to minimize directly. Therefore, we replace μ with Y_W and seek an unbiased estimator of the risk function as the criterion for weight selection. Define the weight-selection criterion C_n^G(w); by disregarding a term that is independent of w, C_n^G(w) serves as an unbiased estimator of the risk function.

In practice, m(·), π(·), and G_n(·) are usually unknown, so we need to estimate them. Firstly, regarding the estimation of m(u), it is usually estimated by a logit model: suppose m(u) is estimated by the parametric model m_0(u; θ) = e^{Uθ}/(1 + e^{Uθ}); by maximum likelihood we obtain the parameter estimate θ_n. π(z) can be estimated nonparametrically by a kernel regression estimator π_n(z), where W(·) is a kernel function and b_n is a bandwidth sequence. Next, we define u(z) = E(δ | Z = z), estimated nonparametrically by a kernel estimator with kernel function K(·) and bandwidth sequence h_n. For G_n(z) we adopt a product-limit-type estimator, where R_i denotes the rank of Z_i.

Next, replacing m(·), π(·), and G_n(·) with m_0(·, ·), π_n(·), and G_n(·) gives the feasible weight-selection criterion, and the weights are obtained by minimizing it over H_M.

Then we enumerate the regularity conditions (C1)-(C7) needed for asymptotic optimality; their formal statements involve the M × 1 unit vector whose mth element is 1 and the others 0, a moment bound of order J on the errors for some integer 1 ≤ J < ∞ and some positive constant κ, and bounds involving the diagonal elements ρ_ii of the projection matrices P_m. Condition (C1) is utilized in [16] and ensures that 1 − G_n(t) is bounded away from 0.
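The displayed formulas for Y_Wi and the estimators of π(·) and G_n(·) did not survive extraction. Purely to illustrate the inverse-probability-weighting idea described above, the sketch below builds a doubly robust completion of the censoring indicator and a KSV-type synthetic response; the exact functional form used by Wang and Dinse [15] may differ, so treat every formula in the code as an assumption.

```python
import numpy as np

def synthetic_response(Z, xi, delta_obs, pi_hat, m_hat, G_hat):
    """Sketch of an IPW-style synthetic response for right-censored data
    with missing censoring indicators (assumed construction):

      1. Doubly robust completion of the censoring indicator:
         delta_dr = xi*delta/pi(Z) - (xi - pi(Z))/pi(Z) * m(U)
      2. KSV-type synthetic response:
         Y_W = delta_dr * Z / (1 - G(Z))

    pi_hat : estimated P(xi = 1 | Z)   (kernel regression in the paper)
    m_hat  : estimated E(delta | U)    (logit model in the paper)
    G_hat  : estimated censoring distribution at Z (product-limit type)
    """
    delta_dr = xi * delta_obs / pi_hat - (xi - pi_hat) / pi_hat * m_hat
    return delta_dr * Z / (1.0 - G_hat)
```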
Condition (C2) mandates that the conditional expectation of μ_i remains within bounded limits, in line with assumptions seen in prior research, including [7,17]. Condition (C3) is a requirement commonly found in the model-averaging literature (e.g., [7,18]). Condition (C4) mandates the non-degeneracy of the covariance matrix Ω as n → ∞; similar assumptions can be found in [9,10]. Similar to [15], Conditions (C5) and (C6) impose constraints on the bounds of m(·) and π(·) and on the bandwidths, respectively. Condition (C7) is frequently employed in the analysis of the asymptotic optimality of cross-validation methods, as in prior works like [8].

Theorem 1 establishes the asymptotic optimality of the model-averaging procedure employing the weights ŵ: its squared loss converges to that of the infeasible best possible model-averaging estimator.

In most cases, Ω is unknown and needs to be estimated. We estimate Ω using residuals derived from the model-averaging process, with σ̂²_Wi = var(ê_Wi). In the existing literature on model averaging, variance estimates are predominantly derived from the largest candidate model, as exemplified by works such as [6,16]. In contrast, our approach, following [10], leverages information from all candidate models rather than relying on a single model; such an estimator is more robust. Replacing Ω by Ω̂(w) in (13) yields the feasible criterion, and the weights are those that minimize it. Note that this weight-selection criterion is a cubic function of w.

Simulation

In the simulation study, we generate data from the accelerated failure time (AFT) model log(T_i) = Y_i = Σ_{j=1}^{1000} β_j x_ij + e_i, where β_j = 1/j²; the observations of (x_i1, ..., x_i1000) are generated from a multivariate normal distribution with zero mean and covariance matrix Σ = (σ_ij) with σ_ij = 0.5^{|i−j|}. The errors e_i follow the normal distribution N(0, γ²(x⁴_i2 + 0.01)). By varying the value of γ, we allow R² to range from 0.1 to 0.9. This variance specification closely resembles that of [8]; however, we introduce a small constant, 0.01, to ensure that the variances remain strictly positive. The censoring time C_i is generated from N(C_0, 7). By varying the value of C_0, we achieve censoring rates (CRs) of approximately 20% and 40%. We set the sample sizes to n = 150, 300. Our candidate models are set in a nested form, meaning the mth model includes the first m regressors. The number of candidate models M was set to ⌈3n^{1/3}⌉, where ⌈x⌉ denotes the smallest integer greater than x.

Based on the missing mechanism described in this paper, we assume that the probability of a missing censoring indicator, 1 − π(z), is determined via a logistic model. Following [15], we employed the uniform kernel function W(x) = 1/2 for |x| ≤ 1 and W(x) = 0 otherwise. Additionally, we used the biweight kernel function K(x) = (15/16)(1 − 2x² + x⁴) for |x| ≤ 1 and K(x) = 0 otherwise. The bandwidths were b_n = h_n = n^{−1/3} max(Z). We estimated m(u) under the logistic model log{m(u)/(1 − m(u))} = γ_1 + γ_2 z + γ_3 x. As highlighted by [19], when the data on δ are completely (or quasi-completely) separated, the maximum likelihood estimate of γ = (γ_1, γ_2, γ_3) does not exist. In our simulation setup, the number of covariates significantly exceeds the sample size; therefore, we employ the lasso method to estimate the parameters.
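A minimal sketch of this data-generating process follows. The mapping from γ to R² and from C_0 to the censoring rate is left to trial and error here, and we read the 7 in N(C_0, 7) as a variance; both are assumptions.

```python
import numpy as np

def simulate_aft(n=150, p=1000, gamma=1.0, c0=3.0, seed=0):
    """One dataset from the paper's design:
    Y = sum_j (1/j^2) x_j + e,  e ~ N(0, gamma^2 * (x_2^4 + 0.01)),
    C ~ N(c0, 7), Z = min(Y, C), delta = I(Y <= C)."""
    rng = np.random.default_rng(seed)
    idx = np.arange(p)
    Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])   # sigma_ij = 0.5^|i-j|
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    beta = 1.0 / (idx + 1.0) ** 2
    e = rng.normal(0.0, gamma * np.sqrt(X[:, 1] ** 4 + 0.01))
    Y = X @ beta + e                       # Y = log(T)
    C = rng.normal(c0, np.sqrt(7.0), n)    # censoring times (7 read as variance)
    return X, np.minimum(Y, C), (Y <= C).astype(int)
```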
We compare the proposed Model-Averaging method for Missing Censoring Indicators in the Heteroscedastic setting (HCIMA) with other classical model-selection and model-averaging methods. Brief descriptions of these methods are provided below:

• Model-selection methods based on AIC and BIC, where AIC(m) and BIC(m) denote the usual information criteria for the mth model.
• Model-averaging methods based on SAIC and SBIC: the weight for the mth candidate model is w_m = exp(−AIC_m/2)/Σ_{k=1}^{M} exp(−AIC_k/2), and analogously for SBIC, where AIC_m = AIC(m) − min(AIC) and BIC_m = BIC(m) − min(BIC).
• Additionally, we compare our approach with the method that estimates the variance using the largest candidate model (MCIMA).

In the simulations, we use the mean squared error (MSE) to evaluate the performance of the various methods, and we present the mean of the MSEs over 500 replications.

Figures 1 and 2, respectively, show the MSE values of the various methods over 500 repetitions under different censoring rates and sample sizes, with missing rates of 20% and 40%. In terms of MSE, our proposed HCIMA method outperforms the other approaches. Additionally, the MCIMA method performs better than the existing methods in all cases except when compared with HCIMA. Furthermore, SAIC and SBIC outperform their respective AIC and BIC counterparts, further highlighting the advantages of model-averaging methods. Comparing Figures 1 and 2, the MSE at MR = 20% is slightly higher than at MR = 40%. The reason is that when ξ_i = 1 and δ_i = 0, the signs of Y_Wi and Z_i are opposite, and as the MR increases, the occurrence of the ξ_i = 1, δ_i = 0 situation decreases. Although this result may seem counterintuitive, it does not affect the performance of the method proposed in this paper, which retains its advantages in this case.

Real Data Analysis

In this section, we assess the predictive performance of our proposed HCIMA method using the real Acute Myeloid Leukemia (AML) dataset. This dataset contains 672 samples and 97 variables, such as patient age, survival time, gender, race, and mutation count. For more specific information about this dataset, we refer the reader to https://www.cbioportal.org/study/clinicalData?id=aml_ohsu_2018 (accessed on 13 December 2023).

We selected ten variables for analysis: Cause of Death, Age, Sex, Overall Survival Status, Overall Survival Months (survival time), Number of Cumulative Treatment Stages, Cumulative Treatment Regimen Count, Mutation Count, Platelet Count, and WBC (White Blood Cell Count). After removing rows with missing values, we retained 396 samples. We treat samples with unknown causes of death as having missing censoring indicators. Among these 396 samples, 76 have unknown causes of death and 167 samples were still alive when the clinical trial ended; the missing rate is therefore approximately 19% and the censoring rate 42%. We focus on the impact on survival time of the seven variables excluding "Cause of Death" and "Overall Survival Status", so we can construct 2⁷ − 1 = 127 non-nested candidate models.
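The SAIC/SBIC weights above are simple to compute; a small sketch follows, centering by the minimum exactly as in the definition of AIC_m (which also avoids numerical overflow and cancels in the ratio):

```python
import numpy as np

def smoothed_ic_weights(ic_values):
    """Smoothed information-criterion weights (SAIC/SBIC, Buckland et al.):
    w_m = exp(-IC_m / 2) / sum_k exp(-IC_k / 2),
    with IC_m = IC(m) - min(IC)."""
    ic = np.asarray(ic_values, dtype=float)
    ic -= ic.min()
    w = np.exp(-ic / 2.0)
    return w / w.sum()

print(smoothed_ic_weights([103.2, 101.7, 101.9, 110.4]))
```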
We randomly select n_0 samples as the training dataset, while the remaining n_1 = n − n_0 samples are used as the testing dataset. We set the training dataset size to 50%, 60%, 70%, and 80% of the total dataset size, respectively, and repeat the random split 1000 times. Following [16,20], we employed the normalized mean squared prediction error (NMSPE) as the performance metric, where μ̂_i represents the predicted value and μ̂_mi denotes the prediction of the mth model. We calculate the mean, the standard deviation, and the optimal rate of each method over these 1000 repetitions; specifically, the optimal rate is the frequency with which a method achieves the minimum value across the 1000 repetitions.

Table 1 displays the mean, optimal rate, and standard deviation of the NMSPE for each method over the 1000 repetitions. Consistent with the simulation results, the HCIMA method exhibits the lowest average NMSPE, the lowest standard deviation, and the highest optimal rate. The MCIMA method also performs well, ranking second after HCIMA. This indicates that the model-averaging methods proposed in this paper deliver superior predictive performance compared with the other approaches.

Discussion

To address the uncertainty in model selection and enhance predictive accuracy, this paper proposes a novel model-averaging approach for the accelerated failure time model with missing censoring indicators, and we establish its asymptotic optimality under certain mild conditions. In Monte Carlo simulations, the proposed method exhibits lower mean squared errors than other model-selection and model-averaging methods. The empirical results demonstrate that it also attains a lower NMSPE than the other approaches, indicating superior predictive performance and underscoring its applicability to real-life data with missing censoring indicators.

In this paper, we adopt the inverse-probability-weighted form of the response variable proposed in [15]. The primary advantage of this form lies in its double robustness, which makes it less susceptible to model misspecification (if π(·) or m(·) is misspecified). However, as mentioned in [15], its drawback compared with the synthetic response of [13] and the regression calibration and imputation approaches of [15] is a larger variance. Yet in practical scenarios the harm caused by model misspecification often outweighs the harm of higher variance; we therefore follow the recommendation of [15] and use the inverse-probability-weighted form of the response variable. A future research direction is to further enhance this response variable for better applicability in the context of missing censoring indicators.

As far as we know, there is currently very limited research on model averaging with missing censoring indicators, so many questions deserve further investigation. Our approach could be extended to high-dimensional data, and, on the modeling side, partially linear models, generalized linear models, and other extensions could be pursued.

Appendix A

Under our specific conditions, the preceding lemma can be proved using the same techniques as the proof of (A.3) in [7]; we therefore omit the proof here.

Lemma A3. Under the stated conditions, with K a constant, Condition (C2) gives (1/n)μ'μ = O_p(1) and (1/n)e_W'e_W = O_p(1). Combined with Condition (C1), results from [15], and an argument similar to the proof of Lemma 6.2 in [16], the remaining bounds follow.
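The exact NMSPE formula was lost in extraction. The sketch below uses one common normalization (each method's mean squared prediction error divided by the smallest MSPE across methods in the same replication, so the best method scores 1), which is consistent with the "optimal rate" bookkeeping described above but should be treated as an assumption.

```python
import numpy as np

def nmspe_table(preds_by_method, y_test):
    """preds_by_method: dict name -> (n_rep, n_test) array of predictions.
    Returns per-method normalized MSPE arrays of shape (n_rep,):
    MSPE divided by the per-replication minimum across methods
    (assumed normalization)."""
    mspe = {k: np.mean((p - y_test) ** 2, axis=1)
            for k, p in preds_by_method.items()}
    floor = np.min(np.column_stack(list(mspe.values())), axis=1)
    return {k: v / floor for k, v in mspe.items()}
```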
With the three lemmas above, we can now prove Theorem 1. According to Lemma A3, combining the relevant bounds shows that the remainder term (A9) is of order o_p(1), which establishes (A3) and completes the proof.

Figure 1. Mean squared errors (MSEs) of the various methods under different sample sizes and censoring rates at MR = 20%.

Figure 2. Mean squared errors (MSEs) of the various methods under different sample sizes and censoring rates at MR = 40%.

Table 1. The mean, optimal rate, and standard deviation of the NMSPE.
4,726.4
2024-02-22T00:00:00.000
[ "Mathematics" ]
Research on the Ranked Searchable Encryption Scheme Based on an Access Tree in IoTs: With the continuous development of the Internet of Things (IoTs), data security and privacy protection in the IoTs are becoming increasingly important. To address the huge volume, redundancy, and heterogeneity of data in the IoTs, this paper proposes a ranked searchable encryption scheme based on an access tree. First, the solution introduces parameters such as word position and word span into the calculation of keyword relevance scores to build a more accurate document index. Second, it builds a semantic relationship graph based on mutual information to expand the query semantics, effectively improving the precision and recall of retrieval. Third, it uses an access tree structure to control user authority, realizing fine-grained access management of data by data owners in the IoTs. Finally, a security analysis of the scheme and an efficiency comparison with other existing schemes are given.

Introduction

With the rapid development of IoTs technology, it has been applied in all walks of life and widely adopted in fields such as medical care, smart transportation, government work, smart homes, and environmental monitoring [1][2][3][4][5]. At the same time, the information generated by users is growing by orders of magnitude. Cloud storage is widely used due to its low cost and good scalability, and it handles the storage and management of this electronic data for the IoTs. However, frequent privacy-data leakage incidents have caused severe social impacts and disrupted economic development [6][7][8]. Therefore, how to protect user privacy and data security has become a technical bottleneck restricting the further development of IoT applications [9][10][11][12][13].

An effective way to prevent data privacy leakage is to encrypt data before storing it on a cloud server. This prevents unauthorized servers from accessing user data and also protects user data when the server is attacked. However, when users want to access their own data, the cloud server holds only encrypted data that no longer retains the plaintext structure, so the server cannot effectively return the data the user searches for. The simplest workaround is to download all encrypted data locally and decrypt it item by item before searching, but this fails to exploit the computing power of the cloud server and wastes a great deal of time, bandwidth, and power, so it cannot meet the actual needs of IoTs cloud storage. Therefore, securely retrieving ciphertext data is an urgent problem in the IoTs [14].

Searchable encryption (SE) is a special encryption technology that enables keyword retrieval over ciphertext while ensuring that attackers cannot obtain the queried keyword information from the keyword ciphertext or search trapdoors [15]. Current searchable encryption technologies mainly comprise symmetric searchable encryption (SSE) and asymmetric searchable encryption (ASE) [16,17]. In 2000, Song [18] proposed a single-keyword searchable encryption scheme based on symmetric-key encryption, which searches the ciphertext for related keywords by linearly scanning the entire ciphertext document. In 2004, Dan et al.
[19] proposed an asymmetric searchable encryption scheme, Public-Key Encryption with Keyword Search (PEKS), for mail-routing application scenarios. Researchers have since done a great deal of work on this basis. In practical applications, users are usually more interested in finding the top-K documents most relevant to multiple keywords, and various multikeyword ranked searchable encryption schemes have been proposed in recent years to meet this demand. In 2011, Cao et al. [20], building on the secure KNN technique [21], first proposed a multikeyword ranked searchable encryption scheme based on vector inner-product computation. The scheme represents each document and query as a 0/1 vector and obtains a document's relevance score by counting the positions where both vectors have the value 1, but it does not consider the importance of different keywords within a document. Sun et al. [22] therefore extended the scheme, introducing keyword weights when constructing the document and query vectors and calculating correlation via the vector cosine to improve ranking accuracy. The ranked searchable encryption schemes above all focus on exact keyword search and do not consider semantic expansion of keywords, so many documents that satisfy the query conditions are never retrieved. Yang et al. [23] proposed a fast multikeyword semantic ranked search scheme, which introduced domain-weighted scoring into document scoring and semantically expanded the search keywords to improve the accuracy of the document index; in addition, the document vector is divided into blocks to filter out a large number of irrelevant documents, which effectively improves the efficiency of the scheme. However, this solution does not involve access control, is limited to queries by a single legitimate user, and is unsuited to real environments where keywords are queried by multiple users. Sun et al. [24] proposed an attribute-based keyword search scheme that returns only authorized documents to search users, but the search results cannot be ranked. Li et al. [25] proposed an authorized multikeyword ranked search scheme over encrypted cloud data using an attribute-based encryption strategy and symmetric searchable encryption. This scheme satisfies file confidentiality, trapdoor unlinkability, and resistance to collusion attacks, and it enables the same data to be queried by multiple users, but it does not consider the semantic relevance of search keywords.

Therefore, research on searchable encryption for the IoTs cloud storage environment must not only protect data privacy to achieve secret search but also ensure search efficiency, and it must consider multiple users accessing and searching data in this special application scenario. In our solution, the access tree is used to set user access permissions, so that only authorized users can retrieve cloud data and obtain the K most relevant documents, and the index is encrypted based on the secure KNN method to ensure the security and correctness of index creation and trapdoor generation.
The main contributions of this paper are as follows:

(1) This paper introduces parameters such as word position and word span into the calculation of keyword relevance scores, assigning more accurate weights to keywords at different positions in a document and thereby constructing a more accurate document index.
(2) This paper builds a semantic relationship graph based on mutual information to expand the query vector semantically, which effectively improves precision and recall during retrieval.
(3) This paper combines multikeyword search with access control. The access tree is used to control user access rights: only users whose attributes satisfy the access policy defined by the data owner can search the encrypted data with multiple keywords, realizing fine-grained access management of the data by the data owner in the IoTs.

Preliminaries

2.1. Vector Space Model. The vector space model [23] represents the document set in a common vector space. Each document corresponds to a document vector whose dimension equals the size of the keyword collection; each dimension corresponds to a keyword, and its value equals the weight of that keyword. The user's query is regarded as a vector in the same space, called the query vector; the keywords corresponding to its dimensions are consistent with the document vector, and its dimension is the same as that of the document vector. The relevance score between the query and each file equals the inner product of the document vector and the query vector.

2.2. Word Span. Word span [19] refers to the distance between the first and last occurrence of a word or phrase in a document. The larger the word span, the more important the word is to the topic of the document. Word span can effectively reduce the impact of locally concentrated keywords on document keyword extraction, because local keywords often become keywords of the entire document merely through their high-frequency advantage, reducing extraction accuracy. The word span formula is shown in formula (1), where first_ij is the location identifier where the keyword w_j first appears in the document f_i, last_ij is the location identifier where w_j last appears in f_i, and sum_ij is the total number of keywords in f_i obtained after word-segmentation processing.

2.3. Word Position. Word position [26] refers to the area of a document in which a keyword appears, which is of great value for judging the keyword's importance. The title and abstract express the central ideas the author extracts by summarizing the whole article, so keywords appearing in these two positions are more important than those appearing in the main text. This article divides the word position into three parts: title, abstract (or first paragraph), and body, and sets the position value area_ij of keyword w_j in the different areas of document f_i to 3, 2, and 1, respectively. When a keyword appears multiple times, two situations arise: (1) if it appears multiple times in the same area, it is not recorded repeatedly; (2) if it appears in different areas, the highest value is used. The word position formula is shown in formula (2).

2.4. Relevance Score.
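Formulas (1) and (2) did not survive extraction. The sketch below implements the word-span and word-position features exactly as the prose describes them (span as the distance between first and last occurrence, position as the maximum of the area values 3/2/1); normalizing the span by the total token count is our assumption about formula (1).

```python
def word_span(tokens, keyword):
    """Span of `keyword` in a tokenized document: distance between its
    first and last occurrence, normalized by the total token count
    (normalization assumed from the prose around formula (1))."""
    positions = [i for i, t in enumerate(tokens) if t == keyword]
    if not positions:
        return 0.0
    return (positions[-1] - positions[0] + 1) / len(tokens)

def word_position(keyword, title, abstract, body):
    """Position value per formula (2): 3 if the keyword appears in the
    title, 2 if in the abstract/first paragraph, 1 if only in the body;
    appearances in multiple areas take the highest value."""
    if keyword in title:
        return 3
    if keyword in abstract:
        return 2
    return 1 if keyword in body else 0
```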
This paper evaluates the importance of keywords in documents based on the TF-IDF (term frequency-inverse document frequency) calculation method. TF represents the frequency with which the keyword appears in the document, and IDF is the inverse document frequency: the fewer the documents containing the keyword, the greater the IDF value, indicating that the keyword has strong distinguishing ability. The TF-IDF formula is shown in formula (3), tfidf_ij = tf_ij · log(N/n_j), where tf_ij is the frequency of keyword w_j in document f_i, N is the total number of documents, and n_j is the number of documents containing w_j. When calculating the relevance score of a keyword, the word position and word span factors should also be considered; the relevance score formula used in this paper is therefore given in formula (4).

2.5. Semantic Relation Graph. Mutual information allows users to analyze the correlation between keywords. Constructing a semantic relationship graph [27] based on mutual information to expand the query semantics can effectively improve precision and recall during retrieval. For keywords x and y, their mutual information I(x, y) [28] is given in formula (5), I(x, y) = log(p(x, y)/(p(x)p(y))), where p(x) is the probability that a document contains x and p(x, y) is the probability that a document contains both x and y. The mutual information is then normalized as e(x, y) = I(x, y)/I_max, where I_max is the maximum mutual information value over all I(x, y). Figure 1 shows a small-scale semantic relationship graph G(V, E), where a node v represents a keyword and the edge weight e_ij is the normalized mutual information value of two related keywords v_i, v_j.

2.6. Access Tree. The scheme in this paper uses the access tree defined by the CP-ABE [29] scheme to represent the access structure. The access tree can be applied flexibly and efficiently to access-authority control and is defined as follows. Let ϒ denote the access tree; each non-leaf node in ϒ represents a threshold gate. If node x has num_x child nodes and its threshold is k_x, then 0 < k_x ≤ num_x: when k_x = 1, the node represents an OR gate, and when k_x = num_x, an AND gate. Each leaf node in ϒ represents an attribute, and leaf nodes have k_x = 1. To check whether a user's authority satisfies the access tree ϒ, let R be the root node of ϒ and ϒ_x the subtree rooted at node x. If the attribute set Att satisfies the policy represented by ϒ_x, we write ϒ_x(Att) = 1, computed by the following recursive algorithm. If x is a non-leaf node, compute ϒ_x'(Att) for each child node x' of x; only when the number of child nodes satisfying ϒ_x'(Att) = 1 is greater than or equal to k_x do we set ϒ_x(Att) = 1, and otherwise it is NULL. If x is a leaf node, ϒ_x(Att) = 1 only if the node's attribute attr(x) ∈ Att, and otherwise it is NULL.

2.7. Bilinear Mapping. Let G, G_T be two multiplicative cyclic groups of prime order p and g a generator of G. A map e : G × G → G_T is a bilinear map [30] if three properties are satisfied: (1) Bilinearity: for a, b ∈ Z_p and ∀g_1, g_2 ∈ G, e(g_1^a, g_2^b) = e(g_1, g_2)^{ab}. (2) Non-degeneracy: e(g, g) ≠ 1. (3) Computability: there is an efficient algorithm computing e(g_1, g_2) for any g_1, g_2 ∈ G. Then e is an effective bilinear mapping from G to G_T.

Problem Description

3.1. System Model.
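Formulas (3), (5), and (6) are straightforward to put together in code; a sketch follows. How formula (4) combines TF-IDF with span and position is not recoverable from the text, so the multiplicative combination below is an assumption.

```python
import math
from collections import Counter

def relevance_score(tokens, keyword, n_docs, n_docs_with_kw, span, position):
    """TF-IDF per formula (3), scaled by word span and word position
    (multiplicative combination assumed for formula (4))."""
    tf = Counter(tokens)[keyword] / len(tokens)
    idf = math.log(n_docs / n_docs_with_kw)
    return tf * idf * span * position

def normalized_mutual_information(p_x, p_y, p_xy, i_max):
    """Pointwise mutual information per formula (5), normalized by the
    maximum MI value I_max per formula (6); used as the edge weight e_ij
    in the semantic relation graph G(V, E)."""
    return math.log(p_xy / (p_x * p_y)) / i_max
```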
The entities in this scheme are the data owner (DO), data users (DUs), the IoT cloud server (CS), and a trusted authority (TA). The system model is shown in Figure 2.

(1) Data Owner. The data owner is responsible for encrypting the original documents, establishing a secure index, and uploading the ciphertext documents together with the secure index. First, the data owner extracts the keyword collection from the original document collection and encrypts it according to the keyword collection and the data-access strategy to generate a secure index; the owner then uses a symmetric key to encrypt the original document collection and generate the ciphertext document collection. Finally, the ciphertext document collection and the secure index are uploaded and stored on the cloud server together.

(2) IoT Cloud Server. The IoT cloud server is mainly responsible for receiving and storing the data uploaded by the data owner and for serving the query requests of authorized users. On receiving a user's query request, the cloud server first reviews the user's permissions. If the user is authorized, it uses the stored secure index and the trapdoor to calculate document similarity scores, searches for related documents, ranks the query results, and returns the most relevant top-K documents to the user. Note that only authorized users can perform a correct search; unauthorized users cannot obtain search results.

(3) Trusted Authority. The TA is mainly responsible for generating the system keys and for generating user private keys based on user attribute sets.

(4) Data User. The data user submits a query request to the IoT cloud storage server to find files of interest. The user sends their attribute set to the trusted authority to obtain a private key, then uses the private key and the query keywords to generate a trapdoor and permission tag and uploads them to the cloud server. Finally, authorized users receive the most relevant top-K query results from the cloud server.

Safety Requirements. This paper assumes that the trusted authority is completely trustworthy. The cloud server is semi-honest but curious: it correctly executes users' query requests according to the scheme and does not delete or modify the data uploaded by the data owner, but it may try to obtain additional information from the secure index and trapdoors. The solution therefore mainly considers the following four types of security requirements:

(1) Confidentiality of Documents. The data owner does not want unauthorized entities (cloud servers or data users) to learn the content of the documents, so the documents must be encrypted before being sent to the cloud server, and unauthorized entities hold no decryption keys.
(2) Anonymity of Indexes and Trapdoors. The cloud server knows the ciphertext information stored by the data owner, including the ciphertext document collection, secure indexes, and trapdoors, but does not know the keys.
(3) Anonymous Access. Data users can access IoT data without revealing their detailed identity information.
(4) Collusion Resistance. No two or more data users can collude to access a document.

The scheme consists of the following algorithms:

(1) Setup(1^κ). The TA runs the initialization algorithm; on input the system security parameter κ, it generates the system master key MSK, the index key IK, and the system public parameters PK.
(2) KeyGen(MSK, Att).
This is the user's private-key generation algorithm, executed by the TA. It takes as input the system master key MSK and a user attribute set Att and outputs the user private key SK.

(3) Encrypt(IK, PK, FF, ϒ). The data owner executes the encryption algorithm. It takes as input the index key IK, the system public parameters PK, the plaintext document collection FF, and the access tree ϒ, and outputs the secure index I and the ciphertext document collection CC.

(4) Trapdoor(W_Q, IK, PK, SK). The data user uses this algorithm to generate search credentials for the keywords to be queried. It takes as input the query keyword set W_Q, the index key IK, the system public parameters PK, and the user private key SK, and outputs the search credentials TP.

(5) Search(I, TD, K). The keyword search algorithm is executed by the cloud server. It takes as input the secure index I, the search credentials TP, and the parameter K, and outputs the top-K documents most relevant to the query keyword set. Note that only users who satisfy the access-control policy obtain correct results; otherwise, the search fails.

Scheme Description

(1) Setup(1^κ) → {MSK, IK, PK}. The TA selects a large prime p and lets G, G_T be multiplicative cyclic groups of order p with g a generator of G. The TA generates a bilinear map e : G × G → G_T and a hash function H_1 : {0, 1}* → G. In addition, the TA randomly generates an (m + ε)-dimensional splitting vector S and two (m + ε) × (m + ε) invertible matrices {M_1, M_2}, where ε is the number of confusion bits and m is the number of keywords, and forms the index key IK = {S, M_1, M_2}. Finally, the TA randomly selects α, β ∈ Z_p and generates the system master key MSK = {α, β} and the system public parameters PK = {g, G, G_T, e(g, g), e(g, g)^α, H_1, g^α, g^β}.

(2) KeyGen(MSK, Att) → SK. The TA selects a random number r ∈ Z_p, randomly selects r_j ∈ Z_p for each attribute a_j in the attribute set Att, and finally generates the user's private key SK = {K = g^{(α+r)/β}, ∀a_j ∈ Att : D_j = g^r H_1(a_j)^{r_j}, D_j' = g^{r_j}}. The system transfers the private key SK to the data user.

(3) Encrypt(IK, PK, FF, ϒ) → {I, CC}. The data owner extracts keywords from the plaintext document collection FF = {f_1, f_2, ..., f_m} to obtain the keyword collection W = {w_1, w_2, ..., w_n} and uses the symmetric key ek to encrypt each document, obtaining the ciphertext set CC = {c_1, c_2, ..., c_m}. Based on the vector space model, the data owner generates a document vector D_i for each document f_i: if the document contains the keyword w_j, formula (4) is used to compute the relevance score D_i[j] = score_ij; otherwise, D_i[j] = 0. The data owner extends each document vector D_i from m dimensions to m + ε dimensions and sets D_i[m + t] = η_t, where 1 ≤ t ≤ ε and the η_t are random numbers drawn from the normal distribution N(μ, σ²). The data owner then splits each document vector D_i into two vectors {D_i', D_i''} according to the splitting vector S. According to the access tree ϒ, a polynomial q_x is chosen for each node x in ϒ, generated as follows: starting from the root node R of ϒ, a recursive algorithm runs from top to bottom, and for each node x the degree d_x of the polynomial q_x is set to one less than the threshold k_x of the node, that is, d_x = k_x − 1.
First, for the root node R, a random s ∈ Z_p is selected and q_R(0) = s is set, with the remaining coefficients of q_R chosen at random. For every other node x, define the functions parent(x) and index(x), where the former denotes the parent of node x and the latter the position of x among its parent's children; set q_x(0) = q_parent(x)(index(x)) and choose the remaining coefficients of q_x at random. With these polynomials, C_v and C_v' are generated for all nodes in ϒ, namely I_ϒ = {C = ek · e(g, g)^{αs}, W_1 = g^{βs}, ∀v ∈ ϒ : C_v = g^{q_v(0)}, C_v' = H_1(attr(v))^{q_v(0)}}. Finally, the secure index I = {I_i, I_ϒ} is generated, and the data owner uploads the secure index I and the ciphertext document collection CC to the cloud server.

(4) Trapdoor(W_Q, IK, PK, SK) → TP. First, the data user semantically expands the keyword set W_Q according to the semantic relationship graph to obtain the expanded keyword set W_Q'. Based on the vector space model, the query vector Q is constructed: if w_j ∈ W_Q, then Q[j] = 1; if the expansion word w_j corresponds to one original keyword w_i, then Q[j] = e_ij; similarly, if the expansion word w_j corresponds to multiple original keywords w_i, then Q[j] = {e_ij}_max. Finally, the query vector Q is extended from m dimensions to m + ε dimensions with Q[m + t] = τ_t, where τ_t is a random number and 1 ≤ t ≤ ε. The data user then splits Q into two vectors {Q', Q''} according to the splitting vector S, in the opposite way to the document vector D_i: if S[j] = 1, j = 1, 2, ..., m + ε, then Q'[j] = Q''[j] = Q[j]; if S[j] = 0, then Q'[j] and Q''[j] are random values with Q'[j] + Q''[j] = Q[j]. The data user encrypts Q' and Q'' with the index key to obtain the trapdoor TD = {M_1^{-1} Q', M_2^{-1} Q''}. Then a random θ ∈ Z_p is selected and the search credentials TP = {TD, T_1 = K^θ = g^{(α+r)θ/β}, ∀a_j ∈ Att : the θ-randomized attribute components of SK} are generated. Finally, the data user sends the search credentials TP to the cloud server.

(5) Search(I, TD, K). When the cloud server receives a query request from a data user, it performs the following steps. First, it checks whether the user's attributes satisfy the access tree defined by the data owner. For a node x in the access tree ϒ, the cloud server executes the following recursive algorithm. If x is a leaf node, let a_j = attr(x) be the attribute corresponding to x: if a_j ∈ Att, DecryptNode(x) is computed from the pairings of the credential and index components; otherwise, DecryptNode(x) = NULL. If x is a non-leaf node, compute F_z = DecryptNode(z) for every child node z of x, and let S_x be a set of k_x child nodes satisfying F_z ≠ NULL. If no such set S_x exists, DecryptNode(x) = NULL, meaning the access requirements are not met; otherwise F_x is computed by combining the F_z. Using the Lagrange interpolation theorem, V = DecryptNode(R) = e(g, g)^{θrs} can be obtained, which shows that the data user is an authorized user who may query the data. The cloud server uses formula (9) to calculate the correlation between the secure index I_i and the query trapdoor TD and returns the top-K documents most relevant to the query keyword set to the authorized user; if the user does not meet the access rights, the search fails and NULL is returned. The document relevance is calculated by formula (9). If the user's attribute set Att satisfies part or all of ϒ, the user obtains V = e(g, g)^{θrs} according to formula (8) and computes the document encryption key by formulas (10) and (11).
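The inner-product matching that formula (9) relies on can be checked numerically. The sketch below is a bare-bones secure kNN encryption in the style of [21] as described above (random split by S, multiplication by M_1, M_2 for the index and by their inverses for the trapdoor); it ignores the attribute-based access-control layer and the statistics of the confusion dimensions, and is illustrative only, not secure as written.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 6                                   # m + epsilon dimensions
S = rng.integers(0, 2, dim)               # secret splitting vector
M1 = rng.normal(size=(dim, dim))          # invertible w.p. 1
M2 = rng.normal(size=(dim, dim))

def split(vec, is_query):
    """Secure kNN split: document entries with S[j]=1 are split randomly
    (summing to vec[j]) and S[j]=0 entries are copied; the query vector
    uses the opposite rule, as stated in the scheme."""
    a, b = vec.copy(), vec.copy()
    mask = (S == 0) if is_query else (S == 1)
    r = rng.normal(size=dim)
    a[mask], b[mask] = r[mask], vec[mask] - r[mask]
    return a, b

D = rng.random(dim)                       # document relevance-score vector
Q = rng.random(dim)                       # (expanded) query vector
D1, D2 = split(D, is_query=False)
Q1, Q2 = split(Q, is_query=True)
index = (M1.T @ D1, M2.T @ D2)            # encrypted index I_i
trap = (np.linalg.inv(M1) @ Q1, np.linalg.inv(M2) @ Q2)  # trapdoor TD
score = index[0] @ trap[0] + index[1] @ trap[1]
assert np.isclose(score, D @ Q)           # equals the plaintext inner product
```

The assertion holds because the matrix transforms cancel pairwise, so the server can rank documents by D·Q without ever seeing D or Q in the clear.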
Finally, the user uses ek to decrypt the obtained documents and recover the plaintext document collection.

Safety Analysis

4.1. Confidentiality of Documents. Each document is encrypted with a symmetric key before being uploaded to the cloud server, and only data users who satisfy the access policy defined by the data owner can search for the document and further obtain the decryption key to decrypt the returned ciphertext. This solution therefore guarantees the confidentiality of the documents.

4.2. Anonymity of Indexes and Query Vectors. The segmentation vector S and the two invertible matrices M_1, M_2 are the encryption keys of this scheme. D_u' and D_u'' are (m + ε)-dimensional vectors and M_1, M_2 are (m + ε) × (m + ε) matrices (for simplicity, take ε = 0 here). A set containing n documents thus yields 2mn equations, whereas there are 2m² unknowns in M_1, M_2 and 2mn unknowns in D_u', D_u''. Solving a system in which the number of equations is smaller than the number of unknowns is infeasible, so the cloud server cannot deduce M_1, M_2, D_u', or D_u''. Similarly, the query vector Q can be regarded as two m-dimensional vectors {Q', Q''}, that is, 2m unknowns, in addition to the 2m² unknowns in M_1, M_2; however, the number of equations available for solving the query vector is only 2m, so the query vector Q and the invertible matrices M_1, M_2 cannot be recovered either. This scheme therefore ensures the anonymity of the indexes and query vectors.

4.3. Anonymous Access. The solution uses attributes as the minimum granularity of access control. When an access request is made, the IoTs system does not care about the user's identity: it only verifies whether the user's attributes satisfy the access structure and decides whether to provide the user with decrypted data.

4.4. Collusion Resistance. Collusion resistance means that users with different attributes cannot decrypt the corresponding ciphertext even if they combine their private keys; in a searchable encryption scheme, it requires that colluding users still cannot search unauthorized keyword ciphertexts. In this scheme, the system selects a random number r ∈ Z_p for each user's key. Since r is randomly distributed, the private-key components for the same attribute differ across users, so the recoverable secret value e(g, g)^{rθs} differs as well. The scheme therefore resists collusion.

4.5. Function and Safety Comparison. In this section, we compare the expressiveness and supported functions of the proposed scheme with some existing schemes; the summary is shown in Table 1.

Efficiency Analysis

The following analyzes the computational cost of this scheme in the private-key generation, indexing, trapdoor, and search stages, compares its efficiency with the scheme in the literature [31], and then reports an experimental simulation; the time spent in the following operations is negligible.

(1) Index Generation Stage. To encrypt each document index, the data owner performs multiplications of two (m + ε)-dimensional vectors with (m + ε) × (m + ε) matrices, with complexity O((m + ε)²), where m + ε is the number of keywords after extension. Compared with the exponentiation and bilinear pairing operations on the groups G, G_T, the time spent on these matrix multiplications is negligible.

(2) Trapdoor Generation Stage.
To compute the encrypted query vector, the data user performs multiplications between two (m + ε)-dimensional vectors and (m + ε) × (m + ε) matrices, and the time spent on these multiplications can likewise be ignored.

(3) Search Stage. If the user meets the access rights, the cloud server performs the search. The main operation is the inner product of two (m + ε)-dimensional vectors, with computational complexity O(m(m + ε)), where m here denotes the number of documents in the collection; the time spent on the vector inner products is again negligible.

Let T_g and T_gt denote exponentiation in the groups G and G_T, respectively, T_p the bilinear pairing operation, and T_h the hash operation; let n be the number of attributes in the system, s the number of user attributes, |F| the number of files, and |W| the number of keywords. The efficiency comparison between the scheme in the literature [31] and the scheme in this paper is shown in Table 2.

To verify the effectiveness of the scheme, this paper compares the performance of the scheme in the literature [31] with that of this scheme. We conducted real experiments on a Windows 10 64-bit operating system with an Intel(R) Core(TM) i7-7700 CPU @ 3.60 GHz and 8 GB RAM to measure the true execution time. Here, we set the number of keywords in the dictionary to be the same as the number of query keywords in the trapdoor (|F| = |W|) and the number of attributes in the system equal to the number of user attributes (n = s), with n = s = 10 and |F| = |W| = 30. As shown in Figure 3, compared with the computational cost in [31], the algorithm in this scheme is more computationally efficient in a large-scale data-sharing system, which means that this scheme is more effective and practical. As shown in Figure 4, we compare the execution time of the search operation on a single subindex. The computational overhead of the search phase is mainly affected by the number of user attributes, and the overhead of both schemes increases linearly with the number of user attributes.

Conclusion

Aiming at the special application scenario of the IoTs environment, this paper proposes an attribute-based multikeyword ranked search scheme. The scheme realizes not only a keyword search function based on semantic expansion but also user access control. It takes into account the weight differences of keywords at different positions, introducing parameters such as word position and word span into the keyword relevance score to build a more accurate document index. Second, the scheme expands the query keywords semantically according to the semantic relationship graph to find more keywords with similar meanings, effectively improving precision and recall during retrieval. Third, the solution uses an access tree structure to control the access authority of data users and realizes attribute-based fine-grained management by data owners. Finally, the functional and security analyses and comparisons show that the scheme provides document confidentiality, index and trapdoor anonymity, anonymous access, and resistance to collusion attacks.
In addition, the efficiency of the scheme is analyzed theoretically, and the analysis results show that this scheme has advantages over other schemes.

Data Availability. The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest. The authors declare that they have no conflicts of interest.
7,786.8
2021-10-31T00:00:00.000
[ "Computer Science" ]
STL-ATTLSTM: Vegetable Price Forecasting Using STL and Attention Mechanism-Based LSTM: It is difficult to forecast vegetable prices because they are affected by numerous factors, such as weather and crop production, and the time-series data have strongly non-linear and non-stationary characteristics. To address these issues, we propose the STL-ATTLSTM (STL-Attention-based LSTM) model, which integrates the seasonal-trend decomposition using Loess (STL) preprocessing method and an attention mechanism based on long short-term memory (LSTM). The proposed STL-ATTLSTM forecasts monthly vegetable prices using various types of information, such as vegetable prices and weather information of the main production areas.

Introduction

Agricultural products account for a large proportion of the market as necessities for daily consumption, and their prices play a critical part in consumer spending and agricultural household income (Statistics FAO, 2018) [1]. Agricultural product prices are determined by the supply and demand of the relevant year [2]. An oversupply of agricultural products causes vegetable prices to plummet, resulting in financial losses to agricultural households, whereas an undersupply increases prices, placing a burden on consumers. This imbalance of supply and demand affects both farmers and consumers, and it is therefore difficult for the government to make decisions that balance these factors [3]. The Ministry of Agriculture, Food and Rural Affairs (MAFRA), a governmental agency in South Korea, has been making efforts.

Related Work

Time-series prediction has been used in many practical applications, such as financial forecasting and agricultural price forecasting [3,9-11]. Traditional statistical and deep learning methods are commonly used for this forecasting. In this section, we investigate technology trends and shortcomings through studies on traditional vegetable price forecasting.

Agricultural Price Forecasting Using Statistical Methods

Various regression methods are used as traditional statistical methods; models such as the autoregressive integrated moving average (ARIMA), generalized ARIMA, and seasonal ARIMA are typical. Assis and Remali [12] compared the prediction performance of various time-series methods for cocoa bean price forecasting; the experimental results showed that the generalized ARIMA model achieved the best performance. Adanacioglu and Yercan [13] forecast tomato prices in Turkey using seasonal ARIMA, removing the high seasonality of tomatoes with a seasonal index. Ge and Wu [6] forecast corn prices using a multivariate linear regression model; the main effect of the supply-demand relationship was incorporated into the model, but performance remained fairly limited with respect to corn price changes. BV and Dakshayini [14] forecast tomato prices and demand using Holt-Winters' model and compared its performance with benchmark models, simple linear regression and multiple linear regression. Their results showed large deviations between the forecast and real values, and Holt-Winters' model, which considers seasonality, showed the best performance. Apart from these studies, Darekar and Reddy [15], Jadhav et al. [16], and Pardhi et al. [17] forecast agricultural prices using the ARIMA model.
Studies on agricultural price forecasting using statistical methods can handle general linear problems but have the disadvantage that performance is not stable for non-linear price series.

Agricultural Price Forecasting Using Machine Learning and Deep Learning Methods

Machine learning and deep learning-based algorithms are newer approaches to time-series prediction problems and have been found to produce more accurate results than traditional regression-based models [18,19]. In recent years, with the increase in agricultural price volatility, powerful learning models have been used to forecast prices. Minghua [18] forecast time-series price data of agricultural products using a back-propagation neural network and demonstrated the superiority of the proposed artificial neural network (ANN) model by comparing it with a statistical model. Wang et al. [20] forecast garlic prices, which have non-linear properties, using a hybrid ARIMA-support vector machine (SVM) model; the proposed hybrid model achieved higher prediction accuracy than a single ARIMA or SVM. Nasira and Hemageetha [21] forecast weekly and monthly tomato prices using the back-propagation neural network (BPNN) algorithm. Hemageetha and Nasira [22] forecast tomato prices using a radial basis function (RBF) neural network and demonstrated the superiority of the proposed model by comparing its performance with the BPNN model. Li et al. [23] forecast weekly egg prices in China using a chaotic neural network and compared it with the ARIMA model; the chaotic neural network achieved a higher non-linear fitting ability and better performance than ARIMA.

In addition, hybrid models that integrate methods such as time-series preprocessing and optimization are often used in research on agricultural price forecasting instead of a single model. Luo et al. [24] forecast Beijing Lentinus edodes mushroom prices by proposing four models: a BPNN, an RBF neural network, a neural network based on a genetic algorithm (GA), and an integrated model. The BPNN performed worst, the GA-based neural network outperformed the RBF neural network, and the integrated model achieved the best performance. Zhang et al. [25] forecast soybean prices in China by proposing a quantile regression-based RBF (QR-RBF) neural network model and improved its performance by applying a gradient descent with genetic algorithm (GDGA) for optimization; this finding was also in agreement with previous studies [26,27]. Subhasree and Priya [28] forecast five crop prices in the Chinese market using a BPNN, an RBF neural network, and a GA-based neural network and found that the GA-based neural network achieved the highest performance. Xiong et al. [3] proposed a seasonal-trend decomposition using Loess (STL)-based extreme learning machine method and forecast cabbage, hot pepper, cucumber, kidney bean, and tomato prices in China. That study preprocessed the time-series data with the STL method to account for the seasonal characteristics of the various vegetables and, as a result, successfully forecast vegetable prices with high seasonality. Li et al. [29] forecast vegetable prices using a model that combined a Hodrick-Prescott (H-P) filter and a neural network.
That study improved forecasting accuracy by decomposing the trend and cyclical components of the time-series data with the H-P filter and recombining the forecast values. In a previous study, Jin et al. [30] forecast five monthly crop prices in the Korean market using an STL-LSTM (long short-term memory) model, showing that forecasting performance improved when the high seasonality of vegetable prices was removed. Liu et al. [31] divided hog price data into trend and cyclical components, forecast them using a most-similar-subseries search method, recombined them, and then forecast hog prices with a support vector regression (SVR) model; the SVR algorithm can be used for non-linear time-series prediction and works well on small datasets [32,33]. Yoo [4] forecast Korean cabbage prices using the vector autoregressive method and a Bayesian structural time-series model. Climate factors and production were used along with the trend and seasonality of the price data, and the importance of meteorological data was emphasized because Korean cabbage is grown in open fields. Chen et al. [34] forecast cabbage prices in the Chinese market with a wavelet-analysis-based LSTM model; the wavelet method achieved higher forecasting accuracy than a single LSTM by removing noise from the time-series data.

Most studies on vegetable price forecasting using machine learning or deep learning algorithms use an ANN or LSTM as the prediction model. However, these models assign an equal contribution to all input variables during training. The attention mechanism [35] emerged to address this issue: it can quantify the importance of input variables by assigning higher weights to the important ones during learning. Recently, the attention mechanism has shown good performance in fields such as image classification, machine translation, and multimedia recommendation and has begun to be applied to time-series data analysis. Qin et al. [10] applied a dual-stage attention-based recurrent neural network to stock market time-series data; using feature attention and temporal attention, they predicted prices efficiently and made it possible to explain the correlations between input variables and results. Zhang et al. [36] efficiently addressed the long-term dependence issue by automatically selecting important input variables with an attention-based LSTM model for financial time-series prediction. Ran et al. [37] performed travel-time prediction using an attention-mechanism-based LSTM; the proposed model achieved better performance than other baseline models in various experiments, and the attention mechanism focused well on the differences between input features. Li et al. [11] proposed an evolutionary attention-based LSTM model and applied it to Beijing particulate matter (PM2.5, µg/m³) data, resolving the relationships between local features across time steps. Table 1 shows the models, plant types, input variable types, processing type (seasonal or trend), and whether a feature-engineering method was used in traditional studies on agricultural price forecasting.

Summary and Contribution

Studies on time-series prediction using conventional statistical methods show that the volatility and periodicity of time-series data can be effectively captured and explained.
However, statistical methods generally have the disadvantage of being unable to analyze the non-stationary and non-linear relationships in time-series data [36] or to handle numerous input variables. Machine learning and deep learning algorithms generally have the advantage of handling non-stationary and non-linear data well. Studies that applied conventional machine learning or deep learning show that these algorithms achieved better performance than conventional statistical methods in time-series prediction. In addition, when analyzing vegetable prices with high volatility, seasonality, and trend characteristics, preprocessing steps, such as filters and STL, are known to play a crucial role compared to using raw data directly, and they have recently appeared in time-series analyses. The contributions of this study are as follows: (1) Vegetable prices are affected by many factors, such as weather and import/export volume, but 15 out of 21 studies mainly used the price alone as an input variable. In this study, we used not only price but also weather, trading volume, and import/export data. (2) Previous studies used several prediction models, such as ARIMA, seasonal ARIMA, ANN, and SVR, but only two studies used LSTM, which achieved excellent performance in time-series prediction. In this study, we used the LSTM model to forecast vegetable prices. (3) Most of the studies were applied to the Chinese and Indian markets; of these, the number of studies conducted on the Chinese market, 11, was the largest. In this study, we verified the performance of the proposed model by applying it to five crops, cabbage, radish, onion, hot pepper, and garlic, in the South Korean market. (4) Vegetable price data include seasonality and trend components, yet only 8 of 21 previous studies addressed them. In this study, we dealt with the seasonality and trend of price data using the STL method. Further, we resolved the prediction lag that arises when a model fails to learn highly volatile series and proved the importance of the STL method by comparing performance with and without it. (5) The input variables used in a prediction model differ in importance, but only two previous studies calculated the importance of input variables and applied it to the prediction model. In this study, the importance of each input variable is calculated using the attention mechanism, and vegetable prices are forecast based on this importance. The attention mechanism has recently begun to be used for time-series prediction, but it has not previously been used in research on agricultural price forecasting. Time-Series Data Decomposition Using STL STL is a time-series decomposition method that decomposes time-series data $Y_t$ into trend ($T_t$), seasonal ($S_t$), and remainder ($R_t$) components, expressed as $Y_t = T_t + S_t + R_t$. The STL algorithm consists of an outer loop and an inner loop. In the outer loop, robustness weights are assigned to each data point according to the remainder, reducing the influence of outliers. In the inner loop, the trend and seasonal components are updated as follows. Step 1: Detrending. The trend component calculated in the previous pass of the inner loop is removed, giving $Y_t - T_t^{(k)}$. Step 2: Cycle-subseries smoothing. The detrended series is broken into cycle-subseries (for example, all values belonging to the same month), and each cycle-subseries is smoothed with the LOESS smoother to obtain the preliminary seasonal component $C_t^{(k+1)}$.
Step 3: Low-pass filtering of the smoothed cycle-subseries. Any remaining trend $L_t^{(k+1)}$ is extracted from the smoothed cycle-subseries by applying moving averages followed by a LOESS smoother. Step 4: Detrending of the smoothed cycle-subseries. The seasonal component is obtained as $S_t^{(k+1)} = C_t^{(k+1)} - L_t^{(k+1)}$. Step 5: De-seasonalizing. The seasonal component is removed from the original series, giving $Y_t - S_t^{(k+1)}$. Step 6: Trend smoothing. The trend component $T_t^{(k+1)}$ is obtained by applying the LOESS smoother to the de-seasonalized series from Step 5. STL has several advantages. First, the STL method can handle any type of seasonality, unlike the seasonal extraction in ARIMA time series (SEATS) [38] and X11 [39] methods. Second, although the seasonal component changes over time, the user can control the rate of change. Third, because outliers do not greatly impact the decomposed trend and seasonal components, STL is safe to use when outliers are present.
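For concreteness, the decomposition above can be reproduced with an off-the-shelf STL implementation. The following is a minimal sketch, assuming monthly price data in a pandas Series; the file and column names are illustrative, not taken from the paper.

```python
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Monthly average prices indexed by date (hypothetical file/column names).
prices = pd.read_csv("cabbage_monthly.csv", index_col="date",
                     parse_dates=True)["price"]

# period=12: monthly data with yearly seasonality.
# robust=True: enables the outer-loop robustness weights that damp outliers.
result = STL(prices, period=12, robust=True).fit()

# Y_t = T_t + S_t + R_t; derived price variables are built from the remainder.
trend, seasonal, remainder = result.trend, result.seasonal, result.resid
```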
LSTM Model Long short-term memory (LSTM) is a special type of recurrent neural network (RNN). RNNs have been successfully applied in various fields, such as speech recognition, language modeling, machine translation, image captioning, and text recognition. One of the advantages of an RNN is that it can use information from previous steps to solve the problem at the current step [40]. However, as the predicted sequence becomes longer, the gap between the relevant pieces of information grows, and the RNN suffers from a long-term dependency problem that makes it difficult to connect contexts [41]. LSTM was proposed by Hochreiter et al. [42] to solve the long-term dependency and vanishing gradient problems. LSTM has an input gate, a forget gate, an output gate, and a cell state that interact within a single neural network layer. The structure of LSTM is shown in Figure 1. The forget gate performs the operation shown in Equation (1): $f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$ (1). It receives the hidden state $h_{t-1}$ of the previous time step and the input $x_t$ of the current time step, performs matrix multiplication with the learnable forget-gate weights $W_f$, adds the learnable forget-gate bias $b_f$, and obtains the output $f_t$ through the sigmoid function. Because the sigmoid function produces a value between 0 and 1, the closer the calculated $f_t$ value is to 1, the more information from the previous cell state $C_{t-1}$ is retained, and the closer it is to 0, the more information in $C_{t-1}$ is discarded. The input gate performs the operations shown in Equations (2) and (3) to determine the new information to be stored in the cell state: $i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$ (2) and $\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$ (3). This process consists of two parts. In Equation (2), the first part, $W_i$ is the weight of the input gate and $b_i$ is its bias; these two values determine which values to update. Matrix multiplication of $[h_{t-1}, x_t]$ with the weight $W_i$ is performed, the bias $b_i$ is added, and then the input-gate output $i_t$ is obtained through the sigmoid function. Equation (3), the second part, produces a candidate vector $\tilde{C}_t$ that is added to the cell state: matrix multiplication of $[h_{t-1}, x_t]$ with the weight $W_C$ is performed, the bias $b_C$ is added, and then $\tilde{C}_t$ is obtained through the tanh function. The cell state of the previous time step, $C_{t-1}$, is updated using the calculated $i_t$ and $\tilde{C}_t$ as in Equation (4): the part to be forgotten is removed by multiplying $f_t$, calculated in the forget gate, by the cell state of the previous step, and the new candidate $i_t \times \tilde{C}_t$ is added, giving $C_t = f_t \times C_{t-1} + i_t \times \tilde{C}_t$ (4). The dimensions of all gate and state variables at time step t are $\mathbb{R}^h$, where h is the number of hidden units in the LSTM. Finally, the output gate determines the output, as shown in Equations (5) and (6). In Equation (5), $o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$, matrix multiplication of $[h_{t-1}, x_t]$ with the learnable output-gate weights $W_o$ is performed, the learnable output-gate bias $b_o$ is added, and $o_t$ is obtained through the sigmoid function. For the cell state $C_t$, the tanh function is used to map it to a value in $[-1, 1]$; this value and the $o_t$ obtained in Equation (5) are combined to produce the hidden state passed to the next time step, $h_t = o_t \times \tanh(C_t)$ (6).
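The six gate equations can be written compactly in code. The following NumPy sketch of a single LSTM step follows Equations (1)-(6) directly; the weight and bias containers are illustrative, assuming input dimension d and h hidden units.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM time step. W maps gate name -> (h, h + d) weight matrix;
    b maps gate name -> (h,) bias vector."""
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])       # Eq. (1): forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])       # Eq. (2): input gate
    C_tilde = np.tanh(W["C"] @ z + b["C"])   # Eq. (3): candidate cell state
    C_t = f_t * C_prev + i_t * C_tilde       # Eq. (4): cell-state update
    o_t = sigmoid(W["o"] @ z + b["o"])       # Eq. (5): output gate
    h_t = o_t * np.tanh(C_t)                 # Eq. (6): new hidden state
    return h_t, C_t
```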
Attention Mechanism The attention mechanism was introduced in the sequence-to-sequence model for machine translation. The basic idea of the attention mechanism is to refer back to the entire input sentence in the encoder each time the decoder predicts an output word. However, instead of weighting the entire input sentence equally, it focuses more on the words that are related to the word to be predicted at that time. In this study, the attention layer was implemented taking inspiration from the attention mechanism used in the seq2seq model. The operations performed in the attention layer are shown in Equations (7) and (8). First, matrix multiplication is performed by multiplying the three-dimensional input X by the weight $W_a$ and adding the bias $b_a$. Here, the dimensions of the input X are the batch size (number of samples to which the attention mechanism is applied), the time-step, and the feature number. The shape of $W_a$ is set to (feature_num, feature_num) to obtain the same number of outputs as the feature number, which is the third dimension of the input X; thus, $W_a X + b_a$ can be considered an attention score. Next, the attention weight $A_w$ is obtained by applying the softmax function to the attention score, as in Equation (7): $A_w = \mathrm{softmax}(W_a X + b_a)$ (7). $A_w$ is three-dimensional data with shape (batch size, time-step, feature number) and holds a probability distribution in which the sum over the feature-number dimension is 1. The average is then calculated over the time-step dimension, the second dimension of $A_w$, yielding data with shape (batch size, 1, feature number). Next, to match the shape of the input X, this averaged weight is repeated as many times as the number of time-steps along the second dimension, producing $\bar{A}_w$, which has the same shape as $A_w$. The final attention weight $\bar{A}_w$ obtained in this way is multiplied element-wise by the input X, as shown in Equation (8), to obtain the weighted result $A_o = \bar{A}_w \times X$ (8). $A_o$, the result of applying the attention weights obtained through the learning of $W_a$ and $b_a$ to each input variable, was used as the input to the LSTM model. To identify the importance of each feature before inputting it to the LSTM model, a dot-product attention operation was thus added to calculate the attention weights. By adding the attention layer, it is possible to identify which input variables have a significant impact on model prediction through the weight of each input variable. Proposed STL-ATTLSTM Method The STL-ATTLSTM model proposed in this study is composed of data preprocessing, price prediction, and output; its structure is shown in Figure 2. In the data preprocessing step, vegetable price data are decomposed into seasonality, trend, and remainder components using the STL method; the derived price variables are created from the remainder component. Next, the input variables pass through the attention layer, and attention weights are assigned to all input variables. The input variables weighted by attention are learned through the LSTM model, and the vegetable prices for the next month are forecast. As output, the model produces the forecast vegetable prices for the next month and the attention weights trained in the attention layer. The structure and hyperparameters of the attention and LSTM models used in this study are shown in Table 2. The proposed model is composed of attention, LSTM, and fully connected layers. In the attention layer, the weights for the input variables are output through the softmax activation function. The number of cell units of the LSTM layer connected behind the attention layer was set to six, and tanh was used as the activation function. To avoid overfitting, a dropout layer was added with the rate set to 0.2. The proposed model uses two fully connected layers; the number of neurons is set to 10 in the first layer and 1 in the second layer, whose node outputs the forecast vegetable price. The model was trained for 1000 epochs and retrained with the best epoch, defined as the epoch with the lowest validation loss. We used the Adam optimizer with a learning rate of 0.001, beta_1 = 0.9, and beta_2 = 0.999.
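Putting the pieces together, the following Keras sketch shows one plausible implementation of the feature-attention layer of Equations (7) and (8) and the layer stack and hyperparameters listed above. The layer and variable names, the feature count, and the ReLU activation of the first dense layer are assumptions for illustration, not taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

class FeatureAttention(layers.Layer):
    """Feature attention of Eqs. (7)-(8): softmax scores over features,
    averaged over time steps, then multiplied element-wise with the input."""
    def build(self, input_shape):
        n_feat = input_shape[-1]
        self.W_a = self.add_weight(shape=(n_feat, n_feat), name="W_a")
        self.b_a = self.add_weight(shape=(n_feat,), initializer="zeros",
                                   name="b_a")

    def call(self, X):                                 # X: (batch, time, feat)
        score = tf.matmul(X, self.W_a) + self.b_a      # attention score
        A_w = tf.nn.softmax(score, axis=-1)            # Eq. (7)
        A_w_bar = tf.reduce_mean(A_w, axis=1, keepdims=True)  # average over time
        A_w_bar = tf.repeat(A_w_bar, tf.shape(X)[1], axis=1)  # repeat to match X
        return A_w_bar * X                             # Eq. (8): weighted input

time_steps, n_features = 4, 10   # L = 4 was found optimal; feature count assumed
model = models.Sequential([
    layers.Input(shape=(time_steps, n_features)),
    FeatureAttention(),
    layers.LSTM(6, activation="tanh"),
    layers.Dropout(0.2),
    layers.Dense(10, activation="relu"),   # dense-layer activation is an assumption
    layers.Dense(1),                       # next-month price
])
model.compile(optimizer=optimizers.Adam(learning_rate=0.001,
                                        beta_1=0.9, beta_2=0.999),
              loss="mse")
```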
Research Design This section describes the data used and the performance evaluation criteria and presents the experimental method for measuring the performance of the proposed model. We conducted two experiments in this study. In the first experiment, we determined the optimal time-step value for the proposed STL-ATTLSTM model. In the second experiment, we compared the performance of the proposed STL-ATTLSTM to three benchmark models: LSTM, attention LSTM, and STL-LSTM. Dataset Description In this study, we forecast the monthly prices of five crops, cabbage, radish, onion, hot pepper, and garlic, using vegetable prices, weather information about the main production areas, and import/export data of vegetables from January 2012 to December 2019. The price trend of each crop is shown in Figure 3. The data collected from January 2012 to June 2019 were used as training data, and the data from July 2019 to December 2019 were used as test data. Vegetable price data were downloaded from the Outlook and Agricultural Statistics Information System (KREI OASIS) [43] and the Korea Agricultural Marketing Information Service (aT KAMIS) [44]. As the vegetable price data are daily data, we grouped them on a monthly basis and used the average values as our monthly data. Vegetable prices are closely related to the relevant year's agricultural production. However, because production statistics are released after the year ends, it is difficult to use production data directly for monthly forecasting. To address this issue, we used the trading volume in the vegetable market. The trading volume refers to the volume of vegetables brought into the market; it can serve as a proxy for production. The trading volume data are provided daily by the Outlook & Agricultural Statistics Information System (KREI OASIS) [43]. We also grouped the trading volume data on a monthly basis and used the accumulated values. The meteorological data used in this study were collected from the Korea Meteorological Administration (KMA) [45]. The weather information we used comprises the average temperature, average minimum temperature, average humidity, cumulative precipitation, minimum temperature days, maximum temperature days, typhoon advisories, and typhoon warnings in the main production areas. A day with a typhoon advisory or typhoon warning was indicated as 1, and the cumulative value grouped by month was used.
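The monthly aggregation described above is straightforward to express with pandas. The following sketch assumes a daily table with hypothetical column names: prices are averaged per month, trading volumes are summed, and the 0/1 typhoon flags are accumulated.

```python
import pandas as pd

# Daily data indexed by date (file and column names are illustrative).
daily = pd.read_csv("daily_data.csv", parse_dates=["date"]).set_index("date")

monthly = pd.DataFrame({
    "price": daily["price"].resample("MS").mean(),              # monthly average
    "trading_volume": daily["volume"].resample("MS").sum(),     # accumulated volume
    "typhoon_days": daily["typhoon_flag"].resample("MS").sum()  # cumulative 0/1 flags
})
```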
As the main production areas of vegetables can change from year to year, we designed the model to account for these factors. We selected the three main production areas for each vegetable crop type and used weather information for the harvest time instead of the entire cultivation time. For example, the cultivation time for highland cabbages is usually from March to September, but in this study, we used the meteorological information from the three main production areas from July to September, which is the harvest time. Table 3 shows a summary of the harvest times and main production areas of cabbage and radish by crop type. Here, the cultivation time for vegetables by crop type was provided by aT, and the cultivation area data by crop type were collected from the Korean Statistical Information Service (KOSIS) [46]. In this study, we used meteorological data only for the prediction of cabbage and radish prices, not for the other crops. The reason is that cabbage and radish are brought into the market immediately after they have been harvested in the field. When it rains during the harvest period, they are dried in warehouses for two or three days and then brought to the market. Conversely, as hot pepper, onion, and garlic are not immediately brought into the market and instead are stored in warehouses, they are expected to be less affected by the weather at harvest time. Vegetable prices are also closely related to import/export volumes. With an increase in the import volume, vegetable prices decrease. In recent times, for various reasons, the cultivation area has been decreasing, and the volume of cheap imported vegetables has been increasing. Therefore, we used import/export volume information in this study. Import/export data are provided monthly by the Korea Agro-fisheries & Food Trade Corporation (aT NongNet) [47] and are applied to the cabbage, radish, and onion prices. Table 4 shows the descriptions and formulas of the input variables used in this study: price variables, incoming volume variables, meteorological variables, and other variables. To prevent prediction lag in time-series prediction, we generated all variables except the current price using the remainder component value. Measurement Criteria In this study, we used two performance indices to measure the prediction performance of the model: the root mean square error (RMSE) and the mean absolute percentage error (MAPE). RMSE is an index that measures the difference between the real value and the predicted value, and it is expressed as shown in Equation (9): $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}(y_t - \hat{y}_t)^2}$ (9). To obtain the RMSE, the predicted value is first subtracted from the real value of each data sample; the squared differences are summed, the sum is divided by the number of samples, and the square root of the result is taken. Here, $\hat{y}_t$ in Equation (9) refers to the predicted value for data sample t, and $y_t$ refers to the real value for data sample t. The RMSE value is always non-negative, and the closer it is to 0, the smaller the errors. MAPE is an index used in statistics to measure the accuracy of a prediction model, and it is expressed as shown in Equation (10): $\mathrm{MAPE} = \frac{1}{n}\sum_{t=1}^{n}\left|\frac{A_t - F_t}{A_t}\right|$ (10). In Equation (10), $A_t$ refers to an actual measured value, and $F_t$ refers to a predicted value. To obtain the MAPE, the difference between $A_t$ and $F_t$ is calculated and then divided by $A_t$; the absolute values of these ratios are summed, and the sum is divided by the number of samples to obtain the average. A percentage error is obtained by multiplying this value by 100%. MAPE is more intuitive than RMSE because the error rate is expressed as a percentage, independent of domain knowledge.
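A minimal NumPy implementation of the two metrics in Equations (9) and (10); array names are illustrative.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, Eq. (9)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    """Mean absolute percentage error in percent, Eq. (10)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```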
Optimal Time-Step Search LSTM is an algorithm that handles time-series data, and the user must set a time-step value that determines how many consecutive data points make up a single instance. It is a highly important hyperparameter because the composition of the time-series data varies with the time-step value, which directly affects model training and performance. The optimal time-step may vary depending on the data of the task to be solved. In studies by Liu et al. [48] and Li et al. [11], experiments were conducted with a grid search to find the optimal time-step. We designed our experiment to determine the optimal time-step for the five crop data sets used in this study. In this experiment, we measured the performance of the proposed STL-ATTLSTM model while changing the time-step value L. To approximate the best performance of the model, we conducted a grid search over L ∈ {1, 2, 4, 6, 8, 12, 16}, training the model for each value of L and measuring its average performance on the last six test data points. Performance Comparison between the Proposed Method and Benchmark Models In this section, we discuss the performance of the proposed STL-ATTLSTM model and compare it with three benchmark models (LSTM, attention LSTM, and STL-LSTM) to determine the effect of each component. The first benchmark model is a single LSTM model that uses neither the STL method nor the attention mechanism. The second benchmark model is the attention-mechanism-based LSTM model; by comparing its performance with that of the simple LSTM model, we investigate the effect of the attention mechanism. The third benchmark model is STL-LSTM, with which we intend to prove the importance of the STL method. Results and Discussions Using the aforementioned research design, we conducted an experiment to find the optimal time-step value for the LSTM model and measured the monthly price prediction performance of the proposed model for the five vegetable crops. Table 5 shows the performance of the proposed model when the time-step L is set to L ∈ {1, 2, 4, 6, 8, 12, 16}. In this experiment, we used the monthly data for the five crops and calculated the MAPE and RMSE. The experimental results show that the lowest RMSE and MAPE were recorded at L = 4 for all vegetables except onion. Although onion recorded the lowest RMSE at L = 12, its MAPE was lowest at L = 4. When the results for L ∈ {1, 2, 4} are analyzed, the performance is better at a time-step of 2 than at a time-step of 1. The reason is that with L = 1, each instance consists of a single data point, so the input is no longer a sequence and the relationship between successive data points cannot be expressed. In the experiment, the best performance was achieved at L = 4; for L ∈ {4, 6, 8, 12, 16}, the error rate increased with the time-step value. It can be seen that the larger the time-step, the less effective it is for model training. Further, a large time-step reduces the number of training data points, so the model is likely not sufficiently trained.
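The grid search over L can be sketched as follows: the feature matrix is cut into sliding windows of length L with the next month's price as the target, one model is trained per candidate L, and the errors on the last six months are compared. The function build_model is assumed to construct the network sketched earlier; all names are illustrative.

```python
import numpy as np

def make_windows(X, y, L):
    """Each instance is L consecutive time steps of features; the target is
    the next month's price (one-step-ahead forecasting)."""
    Xs = np.stack([X[i:i + L] for i in range(len(X) - L)])
    ys = y[L:]
    return Xs, ys

def grid_search_time_step(X, y, build_model,
                          candidates=(1, 2, 4, 6, 8, 12, 16)):
    scores = {}
    for L in candidates:
        Xw, yw = make_windows(X, y, L)
        # Last six months held out as the test set, as in the paper.
        Xtr, ytr, Xte, yte = Xw[:-6], yw[:-6], Xw[-6:], yw[-6:]
        model = build_model(L, X.shape[1])
        model.fit(Xtr, ytr, epochs=1000, verbose=0)
        pred = model.predict(Xte).ravel()
        scores[L] = float(np.sqrt(np.mean((pred - yte) ** 2)))  # RMSE per L
    return scores  # pick the L with the smallest error
```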
Table 6 shows a performance comparison between the STL-ATTLSTM model proposed in this study and the three benchmark models. As can be seen from Table 6, the proposed STL-ATTLSTM model recorded the lowest average RMSE and MAPE. Examining the performance of the simple LSTM and the attention LSTM, we see that the attention LSTM has an RMSE approximately 300 lower and a MAPE approximately 4 percentage points lower than the simple LSTM. Li et al. [49] argued that, by assigning different weights to multiple inputs using the attention mechanism, greater weights are given to important inputs while non-essential inputs are ignored. Qin et al. [10] also showed that the attention mechanism efficiently selects input variables. Through this experiment, we confirmed the effectiveness of the attention mechanism. Next, we examine the performance of the LSTM model using the STL method (STL-LSTM). The RMSE and MAPE of the STL-LSTM model were 598 and 12%, respectively, a very low error rate compared to the LSTM and attention LSTM models. Although the STL-LSTM model does not use the attention mechanism, its MAPE was approximately 7 percentage points lower than that of the attention LSTM model. These results demonstrate that the STL preprocessing method used in this study plays an essential role. Without such preprocessing, the models are not expected to train well because the five vegetable prices are highly volatile. According to Fan et al. [50], with the STL method, the subsequences are more regular and easier to learn and predict. Through this experiment, it can be seen that the STL preprocessing method was well suited to the time-series vegetable price data. The aforementioned experiments thus proved the effectiveness of the attention mechanism and the STL method. The STL-ATTLSTM model proposed in this study achieved the best performance, with an average RMSE of 380 and an average MAPE of 7%. In this study, prediction lag was found to occur for specific crops in the process of building the models for the five monthly crop prices. The prediction lag when predicting the monthly radish data is shown in Figure 4 (top). Similar prediction lag occurred in other crops, but not as distinctly as in radish. As seen in the red box in the figure, the predicted value follows the true value with a gap of one month. Jin et al. [30] also found this prediction lag and explained its cause as follows. The purpose of a deep learning model is to learn in the direction of decreasing mean error. However, when time-series data with high volatility are learned, this volatility is not learned well, and the model assigns the highest weight to the value at t−1, which has the least volatility. Jin et al. [30] solved this prediction lag by decomposing the time-series data using the STL method. Similarly, in this study, we generated the input variables for the price using the remainder value obtained by applying the STL method, thereby resolving the prediction lag. As seen in Figure 4 (bottom), the lag clearly visible in the boxed section disappears; hence, the prediction performance of the model is also improved. Conclusions and Future Research In this study, we predicted five monthly vegetable prices using the STL-ATTLSTM model, which integrates the STL method and an attention-mechanism-based LSTM. We applied the proposed model to cabbage, radish, onion, garlic, and hot pepper, classified as the "five major supply-and-demand-sensitive vegetables" in the Korean market, using information such as vegetable prices, trading volumes, and weather information about the main production areas.
In this study, using the STL method, we effectively solved the prediction lag caused by poor learning of the model, which was attributed to the high volatility sometimes found in time-series data. Further, we proved the importance of the proposed STL method and attention mechanism through experiments. The experimental results show that the proposed STL-ATTLSTM model achieved approximately 5-16% higher prediction accuracy than the three benchmark models, with an average RMSE of 380 and an average MAPE of 7%. In this study, we obtained the average performance using monthly test data for each vegetable. However, when comparing the monthly radish and onion forecast data with the actual data, we confirmed that there was still a section with high volatility. In the future, we will conduct research in the direction of reducing high volatility by adding some variables that influence the sharp rise and fall in vegetable prices into the forecast model. Additionally, we will conduct research on estimating the production of vegetables by using climate information to stabilize the price of vegetables.
Search for the Standard Model Higgs boson produced in association with top quarks and decaying into $b\bar{b}$ in pp collisions at $\sqrt{s}$ = 8 TeV with the ATLAS detector A search for the Standard Model Higgs boson produced in association with a pair of top quarks, $t\bar{t}H$, is presented. The analysis uses 20.3 fb$^{-1}$ of pp collision data at $\sqrt{s}$ = 8 TeV, collected with the ATLAS detector at the Large Hadron Collider during 2012. The search is designed for the H to $b\bar{b}$ decay mode and uses events containing one or two electrons or muons. In order to improve the sensitivity of the search, events are categorised according to their jet and b-tagged jet multiplicities. A neural network is used to discriminate between signal and background events, the latter being dominated by $t\bar{t}$+jets production. In the single-lepton channel, variables calculated using a matrix element method are included as inputs to the neural network to improve discrimination of the irreducible $t\bar{t}$+$b\bar{b}$ background. No significant excess of events above the background expectation is found and an observed (expected) limit of 3.4 (2.2) times the Standard Model cross section is obtained at 95% confidence level. The ratio of the measured $t\bar{t}H$ signal cross section to the Standard Model expectation is found to be $\mu$ = 1.5 $\pm$ 1.1 assuming a Higgs boson mass of 125 GeV. Introduction The discovery of a new particle in the search for the Standard Model (SM) [1-3] Higgs boson [4-7] at the LHC was reported by the ATLAS [8] and CMS [9] collaborations in July 2012. There is by now clear evidence of this particle in the H → γγ, H → ZZ^(*) → 4ℓ, H → WW^(*) → ℓνℓν and H → ττ decay channels, at a mass of around 125 GeV, which has strengthened the SM Higgs boson hypothesis for the observed particle [10-15]. To determine all properties of the new boson experimentally, it is important to study it in as many production and decay modes as possible. In particular, its coupling to heavy quarks is a strong focus of current experimental searches. SM Higgs boson production in association with a top-quark pair (ttH) [16-19], with subsequent Higgs decay into bottom quarks (H → bb), addresses heavy-quark couplings in both production and decay. Due to the large measured mass of the top quark, the Yukawa coupling of the top quark (y_t) is much stronger than that of the other quarks. The observation of the ttH production mode would allow a direct measurement of this coupling, to which other Higgs production modes are only sensitive through loop effects. Since y_t is expected to be close to unity, it is also argued to be the quantity that might give insight into the scale of new physics [20]. The H → bb final state is the dominant decay mode in the SM for a Higgs boson with a mass of 125 GeV. This decay mode has not yet been observed. While a search for this decay via the gluon fusion process is precluded by the overwhelming multijet background, Higgs boson production in association with a vector boson (VH) [21-23] or a top-quark pair (tt) significantly improves the signal-to-background ratio for this decay. This paper describes a search for the SM Higgs boson in the ttH production mode and is designed to be primarily sensitive to the H → bb decay, although other Higgs boson decay modes are also treated as signal. Figure 1a, b show two examples of tree-level diagrams for ttH production with a subsequent H → bb decay.
A search for the associated production of the Higgs boson with a top-quark pair using several Higgs decay modes (including H → bb) has recently been published by the CMS Collaboration [24], quoting a ratio of the measured ttH signal cross section to the SM expectation, for a Higgs boson mass of 125.6 GeV, of μ = 2.8 ± 1.0. The main source of background to this search comes from top-quark pairs produced in association with additional jets. The dominant source is tt+bb production, resulting in the same final-state signature as the signal. An example is shown in Fig. 1c. A second contribution arises from tt production in association with light-quark (u, d, s) or gluon jets, referred to as the tt+light background, and from tt production in association with c-quarks, referred to as tt+cc. The size of the second contribution depends on the misidentification rate of the algorithm used to identify b-quark jets. The search presented in this paper uses 20.3 fb^−1 of data collected with the ATLAS detector in pp collisions at √s = 8 TeV during 2012. The analysis focuses on final states containing one or two electrons or muons from the decay of the tt system, referred to as the single-lepton and dilepton channels, respectively. Selected events are classified into exclusive categories, referred to as "regions", according to the number of reconstructed jets and of jets identified as b-quark jets by the b-tagging algorithm (b-tagged jets or b-jets for short). Neural networks (NN) are employed in the regions with a significant expected contribution from the ttH signal to separate it from the background. Simpler kinematic variables are used in regions that are depleted of the ttH signal, which primarily serve to constrain uncertainties on the background prediction. A combined fit to signal-rich and signal-depleted regions is performed to search for the signal while simultaneously obtaining a background prediction. ATLAS detector The ATLAS detector [25] consists of four main subsystems: an inner tracking system, electromagnetic and hadronic calorimeters, and a muon spectrometer. The inner detector provides tracking information from pixel and silicon microstrip detectors in the pseudorapidity range |η| < 2.5 and from a straw-tube transition radiation tracker covering |η| < 2.0, all immersed in a 2 T magnetic field provided by a superconducting solenoid. (ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the z-axis coinciding with the axis of the beam pipe. The x-axis points from the IP to the centre of the LHC ring, and the y-axis points upward. Cylindrical coordinates (r, φ) are used in the transverse plane, φ being the azimuthal angle around the beam pipe. The pseudorapidity is defined in terms of the polar angle θ as η = −ln tan(θ/2). Transverse momentum and energy are defined as p_T = p sin θ and E_T = E sin θ, respectively.) The electromagnetic sampling calorimeter uses lead and liquid argon (LAr) and is divided into barrel (|η| < 1.475) and end-cap (1.375 < |η| < 3.2) regions. Hadron calorimetry employs the sampling technique, with either scintillator tiles or liquid argon as the active medium, and with steel, copper, or tungsten as the absorber material. The calorimeters cover |η| < 4.9. The muon spectrometer measures muon tracks within |η| < 2.7 using multiple layers of high-precision tracking chambers located in a toroidal field of approximately 0.5 T and 1 T in the central and end-cap regions of ATLAS, respectively.
The muon spectrometer is also instrumented with separate trigger chambers covering |η| < 2.4. Object reconstruction The main physics objects considered in this search are electrons, muons, jets and b-jets. Whenever possible, the same object reconstruction is used in both the single-lepton and dilepton channels, though some small differences exist and are noted below. Electron candidates [26] are reconstructed from energy deposits (clusters) in the electromagnetic calorimeter that are matched to a reconstructed track in the inner detector. To reduce the background from non-prompt electrons, i.e. from decays of hadrons (in particular heavy flavour) produced in jets, electron candidates are required to be isolated. In the single-lepton channel, where such background is significant, an η-dependent isolation cut is made, based on the sum of transverse energies of cells around the direction of each candidate, in a cone of size ΔR = √((Δφ)² + (Δη)²) = 0.2. This energy sum excludes cells associated with the electron and is corrected for leakage from the electron cluster itself. A further isolation cut is made on the scalar sum of the track p_T around the electron in a cone of size ΔR = 0.3 (referred to as p_T^cone30). The longitudinal impact parameter of the electron track with respect to the selected event primary vertex defined in Sect. 4, z_0, is required to be less than 2 mm. To increase efficiency in the dilepton channel, the electron selection is optimised by using an improved electron identification method based on a likelihood variable [27] and the electron isolation. The ratio of p_T^cone30 to the p_T of the electron is required to be less than 0.12, i.e. p_T^cone30/p_T^e < 0.12. The optimised selection improves the efficiency by roughly 7% per electron. Muon candidates are reconstructed from track segments in the muon spectrometer, matched with tracks found in the inner detector [28]. The final muon candidates are refitted using the complete track information from both detector systems and are required to satisfy |η| < 2.5. Additionally, muons are required to be separated by ΔR > 0.4 from any selected jet (see below for details on jet reconstruction and selection). Furthermore, muons must satisfy a p_T-dependent track-based isolation requirement that has good performance under conditions with a high number of jets from other pp interactions within the same bunch crossing, known as "pileup", or in boosted configurations where the muon is close to a jet: the track p_T scalar sum in a cone of variable size ΔR < 10 GeV/p_T^μ around the muon must be less than 5% of the muon p_T. The longitudinal impact parameter of the muon track with respect to the primary vertex, z_0, is required to be less than 2 mm. Jets are reconstructed from calibrated clusters [25,29] built from energy deposits in the calorimeters, using the anti-k_t algorithm [30-32] with a radius parameter R = 0.4. Prior to jet finding, a local cluster calibration scheme [33,34] is applied to correct the cluster energies for the effects of dead material, non-compensation and out-of-cluster leakage. The jets are calibrated using energy- and η-dependent calibration factors, derived from simulations, to the mean energy of stable particles inside the jets. Additional corrections to account for the differences between simulation and data are applied [35]. After energy calibration, jets are required to have p_T > 25 GeV and |η| < 2.5. To reduce the contamination from low-p_T jets due to pileup, the scalar sum of the p_T of tracks matched to the jet and originating from the primary vertex must be at least 50% of the scalar sum of the p_T of all tracks matched to the jet. This is referred to as the jet vertex fraction. This criterion is only applied to jets with p_T < 50 GeV and |η| < 2.4. During jet reconstruction, no distinction is made between identified electrons and jet candidates. Therefore, if any of the jets lie ΔR < 0.2 from a selected electron, the single closest jet is discarded in order to avoid double-counting of electrons as jets. After this, electrons which are ΔR < 0.4 from a jet are removed to further suppress background from non-isolated electrons. Jets are identified as originating from the hadronisation of a b-quark via an algorithm [36] that uses multivariate techniques to combine information from the impact parameters of displaced tracks with topological properties of secondary and tertiary decay vertices reconstructed within the jet. The working point used for this search corresponds to a 70% efficiency to tag a b-quark jet, with a light-jet mistag rate of 1% and a charm-jet mistag rate of 20%, as determined for b-tagged jets with p_T > 20 GeV and |η| < 2.5 in simulated tt events. Tagging efficiencies in simulation are corrected to match the results of the calibrations performed in data [37]. Studies in simulation show that these efficiencies do not depend on the number of jets.
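The ΔR-based electron-jet overlap removal described above can be sketched compactly. The following is a minimal illustration, assuming simple objects with pt, eta and phi attributes; it is not the experiment's actual software.

```python
import math
from collections import namedtuple

Obj = namedtuple("Obj", "pt eta phi")

def delta_r(a, b):
    # Wrap delta-phi into (-pi, pi] before combining with delta-eta.
    dphi = math.atan2(math.sin(a.phi - b.phi), math.cos(a.phi - b.phi))
    deta = a.eta - b.eta
    return math.sqrt(dphi ** 2 + deta ** 2)

def overlap_removal(electrons, jets):
    # Step 1: for each electron, discard the single closest jet within
    # DeltaR < 0.2 (electrons are also reconstructed as jets).
    jets = list(jets)
    for e in electrons:
        close = [j for j in jets if delta_r(e, j) < 0.2]
        if close:
            jets.remove(min(close, key=lambda j: delta_r(e, j)))
    # Step 2: remove electrons within DeltaR < 0.4 of a remaining jet,
    # suppressing non-isolated electrons inside jets.
    electrons = [e for e in electrons
                 if all(delta_r(e, j) >= 0.4 for j in jets)]
    return electrons, jets
```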
Event selection and classification For this search, only events collected using a single-electron or single-muon trigger under stable beam conditions, and for which all detector subsystems were operational, are considered. The corresponding integrated luminosity is 20.3 fb^−1. Triggers with different p_T thresholds are combined in a logical OR in order to maximise the overall efficiency. The p_T thresholds are 24 or 60 GeV for electrons and 24 or 36 GeV for muons. The triggers with the lower p_T threshold include isolation requirements on the lepton candidate, resulting in an inefficiency at high p_T that is recovered by the triggers with the higher p_T threshold. The triggers use selection criteria looser than the final reconstruction requirements. Events accepted by the trigger are required to have at least one reconstructed vertex with at least five associated tracks, consistent with the beam collision region in the x-y plane. If more than one such vertex is found, the vertex candidate with the largest sum of squared transverse momenta of its associated tracks is taken as the hard-scatter primary vertex. In the single-lepton channel, events are required to have exactly one identified electron or muon with p_T > 25 GeV and at least four jets, at least two of which are b-tagged. The selected lepton is required to match, within ΔR < 0.15, the lepton reconstructed by the trigger. In the dilepton channel, events are required to have exactly two leptons of opposite charge and at least two b-jets. The leading and subleading leptons must have p_T > 25 GeV and p_T > 15 GeV, respectively. Events in the single-lepton sample with additional leptons passing this selection are removed from the single-lepton sample to avoid statistical overlap between the channels. In the dilepton channel, events are categorised into ee, μμ and eμ samples. In the eμ category, the scalar sum of the transverse energies of the leptons and jets, H_T, is required to be above 130 GeV.
In the ee and μμ event categories, the invariant mass of the two leptons, m_ℓℓ, is required to be larger than 15 GeV in events with more than two b-jets, to suppress contributions from the decay of hadronic resonances such as the J/ψ and ϒ into a same-flavour lepton pair. In events with exactly two b-jets, m_ℓℓ is required to be larger than 60 GeV due to poor agreement between data and prediction at lower m_ℓℓ. A further cut on m_ℓℓ is applied in the ee and μμ categories to reject events close to the Z boson mass: |m_ℓℓ − m_Z| > 8 GeV. [Figure 2 caption: Single-lepton channel: (a) S/√B ratio for each of the regions assuming SM cross sections and branching fractions, and m_H = 125 GeV. Each row shows the plots for a specific jet multiplicity (4, 5, ≥6), and the columns show the b-jet multiplicity (2, 3, ≥4). Signal-rich regions are shaded in dark red, while the rest are shown in light blue. The S/B ratio for each region is also noted. (b) The fractional contributions of the various backgrounds to the total background prediction in each considered region. The ordering of the rows and columns is the same as in (a).] After all selection requirements, the samples are dominated by tt+jets background. In both channels, selected events are categorised into different regions. In the following, a given region with m jets of which n are b-jets is referred to as "(mj, nb)". The regions with a signal-to-background ratio S/B > 1% and S/√B > 0.3, where S and B denote the expected signal for a SM Higgs boson with m_H = 125 GeV and the background, respectively, are referred to as "signal-rich regions", as they provide most of the sensitivity to the signal. The remaining regions are referred to as "signal-depleted regions". They are almost purely background-only regions and are used to constrain systematic uncertainties, thus improving the background prediction in the signal-rich regions. The regions are analysed separately and combined statistically to maximise the overall sensitivity. In the most sensitive regions, (≥ 6j, ≥ 4b) in the single-lepton channel and (≥ 4j, ≥ 4b) in the dilepton channel, H → bb decays are expected to constitute about 90% of the signal contribution, as shown in Fig. 20 of Appendix A. In the dilepton channel, a total of six independent regions are considered. The signal-rich regions are (≥ 4j, 3b) and (≥ 4j, ≥ 4b), while the signal-depleted regions are (2j, 2b), (3j, 2b), (3j, 3b) and (≥ 4j, 2b). Figure 2a shows the S/√B and S/B ratios for the different regions under consideration in the single-lepton channel based on the simulations described in Sect. 5. The expected proportions of the different backgrounds in each region are shown in Fig. 2b. The same is shown for the dilepton channel in Fig. 3a, b.
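In sketch form, the region bookkeeping amounts to labelling each event by its jet and b-tag multiplicities and flagging a region as signal-rich by the quoted criteria. The yields s and b per region are assumed inputs from simulation; the helper below is illustrative only.

```python
import math

def region_label(n_jets, n_btags, max_jets=6, max_btags=4):
    """Label an event as, e.g., "(5j, 3b)" or "(>=6j, >=4b)"; max_jets is 6
    in the single-lepton channel and 4 in the dilepton channel."""
    j = f">={max_jets}j" if n_jets >= max_jets else f"{n_jets}j"
    b = f">={max_btags}b" if n_btags >= max_btags else f"{n_btags}b"
    return f"({j}, {b})"

def is_signal_rich(s, b):
    """Signal-rich criterion quoted in the text: S/B > 1% and S/sqrt(B) > 0.3."""
    return s / b > 0.01 and s / math.sqrt(b) > 0.3
```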
Background and signal modelling After the event selection described above, the main background in both the single-lepton and dilepton channels is tt+jets production. In the single-lepton channel, additional background contributions come from single top quark production, followed by the production of a W or Z boson in association with jets (W/Z+jets), diboson (WW, WZ, ZZ) production, as well as the associated production of a vector boson and a tt pair, tt+V (V = W, Z). Multijet events also contribute to the selected sample via the misidentification of a jet or a photon as an electron or the presence of a non-prompt electron or muon, referred to as the "lepton misID" background. The corresponding yield is estimated via a data-driven method known as the "matrix method" [38]. In the dilepton channel, backgrounds containing at least two prompt leptons other than tt+jets production arise from Z+jets, diboson, and Wt-channel single top quark production, as well as from the ttV processes. There are also several processes which may contain either non-prompt leptons that pass the lepton isolation requirements or jets misidentified as leptons. These processes include W+jets, tt production with a single prompt lepton in the final state, and single top quark production in the t- and s-channels. Their yield is estimated using simulation and cross-checked with a data-driven technique based on the selection of a same-sign lepton pair. In both channels, the contribution of the misidentified-lepton background is negligible after requiring two b-tagged jets. In the following, the simulation of each background and of the signal is described in detail. For all MC samples, the top quark mass is taken to be m_t = 172.5 GeV and the Higgs boson mass is taken to be m_H = 125 GeV. The tt+jets sample is generated inclusively, but events are categorised depending on the flavour of the partons that are matched to particle jets not originating from the decay of the tt system. The matching procedure uses the requirement ΔR < 0.4. Particle jets are reconstructed by clustering stable particles, excluding muons and neutrinos, using the anti-k_t algorithm with a radius parameter R = 0.4, and are required to have p_T > 15 GeV and |η| < 2.5. Events where at least one such particle jet is matched to a bottom-flavoured hadron are labelled as tt+bb events. Similarly, events which are not already categorised as tt+bb, and where at least one particle jet is matched to a charm-flavoured hadron, are labelled as tt+cc events. Only hadrons not associated with b- and c-quarks from top quark and W boson decays are considered. Events labelled as either tt+bb or tt+cc are generically referred to as tt+HF events (HF for "heavy flavour"). The remaining events are labelled as tt+light-jet events, including those with no additional jets. Since Powheg+Pythia only models tt+bb via the parton shower, an alternative tt+jets sample is generated with the Madgraph5 1.5.11 LO generator [52] using the CT10 PDF set and interfaced to Pythia 6.425 for showering and hadronisation. It includes tree-level diagrams with up to three extra partons (including b- and c-quarks) and uses settings similar to those in Ref. [24]. To avoid double-counting of partonic configurations generated by both the matrix element calculation and the parton-shower evolution, a parton-jet matching scheme ("MLM matching") [53] is employed. Fully matched NLO predictions with massive b-quarks have recently become available [54] within the Sherpa with OpenLoops framework [55,56], referred to in the following as SherpaOL. The SherpaOL NLO sample is generated following the four-flavour scheme using the Sherpa 2.0 prerelease and the CT10 PDF set. The renormalisation scale (μ_R) is set to $\mu_R = \prod_{i=t,\bar{t},b,\bar{b}} E_{T,i}^{1/4}$, where $E_{T,i}$ is the transverse energy of parton i, and the factorisation and resummation scales are both set to $(E_{T,t} + E_{T,\bar{t}})/2$.
Labels "tt+MPI" and "tt+FSR" refer to events where heavy flavour is produced via multiparton interaction (MPI) or final state radiation (FSR), respectively. These contributions are not included in the Sher-paOL calculation. An arrow indicates that the point is off-scale. Uncertainties are from the limited MC sample sizes For the purpose of comparisons between tt+jets event generators and the propagation of systematic uncertainties related to the modelling of tt+HF, as described in Sect. 8.3.1, a finer categorisation of different topologies in tt+HF is made. In particular, the following categories are considered: if two particle jets are both matched to an extra b-quark or extra c-quark each, the event is referred to as tt+bb or tt+cc; if a single particle jet is matched to a single b(c)-quark the event is referred to as tt+b (tt+c); if a single particle jet is matched to a bb or a cc pair, the event is referred to as tt+B or tt+C, respectively. Figure 4 shows the relative contributions of the different tt+bb event categories to the total tt+bb cross section at generator level for the Powheg+Pythia, Mad-graph+Pythia and SherpaOL samples. It demonstrates that Powheg+Pythia is able to reproduce reasonably well the tt+HF content of the Madgraph tt+jets sample, which includes a LO tt+bb matrix element calculation, as well as the NLO SherpaOL prediction. The relative distribution across categories is such that SherpaOL predicts a higher contribution of the tt + B category, as well as every category where the production of a second bb pair is required. The modelling of the relevant kinematic variables in each category is in reasonable agreement between Powheg+Pythia and SherpaOL. Some dif-ferences are observed in the very low regions of the mass and p T of the bb pair, and in the p T of the top quark and tt systems. The prediction from SherpaOL is expected to model the tt+bb contribution more accurately than both Powheg +Pythia and Madgraph+Pythia. Thus, in the analysis tt+bb events are reweighted from Powheg+ Pythia to reproduce the NLO tt+bb prediction from SherpaOL for relative contributions of different categories as well as their kinematics. The reweighting is done at generator level using several kinematic variables such as the top quark p T , tt system p T , R and p T of the dijet system not coming from the top quark decay. In the absence of an NLO calculation of tt+cc production, the Madgraph+Pythia sample is used to evaluate systematic uncertainties on the tt+cc background. Since achieving the best possible modelling of the tt+jets background is a key aspect of this analysis, a separate reweighting is applied to tt+light and tt+cc events in Powheg+Pythia based on the ratio of measured differential cross sections at √ s = 7 TeV in data and simulation as a function of top quark p T and tt system p T [57]. It was verified using the simulation that the ratio derived at √ s = 7 TeV is applicable to √ s = 8 TeV simulation. It is not applied to the tt+bb component since that component was corrected to match the best available theory calculation. Moreover, the measured differential cross section is not sensitive to this component. The reweighting significantly improves the agreement between simulation and data in the total number of jets (primarily due to the tt system p T reweighting) and jet p T (primarily due to the top quark p T reweighting). This can be seen in Fig. 
Other backgrounds The W/Z+jets background is estimated from simulation, reweighted to account for the difference in the W/Z p_T spectrum between data and simulation [58]. The heavy-flavour fraction of these simulated backgrounds, i.e. the sum of the W/Z+bb and W/Z+cc processes, is adjusted to reproduce the relative rates of Z events with no b-tags and with one b-tag observed in data. Samples of W/Z+jets events, and of diboson production in association with jets, are generated using the Alpgen 2.14 [59] leading-order (LO) generator and the CTEQ6L1 PDF set. Parton showers and fragmentation are modelled with Pythia 6.425 for W/Z+jets production and with Herwig 6.520 [60] for diboson production. The W+jets samples are generated with up to five additional partons, separately for W+light-jets, Wbb+jets, Wcc+jets, and Wc+jets. Similarly, the Z+jets background is generated with up to five additional partons, separated into different parton flavour categories; the cross sections used for normalisation are evaluated using the MSTW2008 NNLO PDF set [67,68]. Samples of tt+V are generated with Madgraph 5 and the CTEQ6L1 PDF set. Pythia 6.425 with the AUET2B tune [69] is used for showering. The ttV samples are normalised to the NLO cross-section predictions [70,71]. Signal model The ttH signal process is modelled using NLO matrix elements obtained from the HELAC-Oneloop package [72]. Powheg-Box serves as an interface to shower Monte Carlo programs. The samples created using this approach are referred to as PowHel samples [73]. They are inclusive in Higgs boson decays and are produced using the CT10nlo PDF set and factorisation (μ_F) and renormalisation scales set to μ_F = μ_R = m_t + m_H/2. The PowHel ttH sample is showered with Pythia 8.1 [74] with the CTEQ6L1 PDF set and the AU2 underlying-event tune [75]. The ttH cross section and Higgs boson decay branching fractions are taken from (N)NLO theoretical calculations [19,76-82], collected in Ref. [83]. In Appendix A, the relative contributions of the Higgs boson decay modes are shown for all regions considered in the analysis. Common treatment of MC samples All samples using Herwig are also interfaced to Jimmy 4.31 [84] to simulate the underlying event. All simulated samples utilise Photos 2.15 [85] to simulate photon radiation and Tauola 1.20 [86] to simulate τ decays. Events from minimum-bias interactions are simulated with the Pythia 8.1 generator with the MSTW2008 LO PDF set and the AUET2 [87] tune. They are superimposed on the simulated MC events, matching the luminosity profile of the recorded data. The contributions from these pileup interactions are simulated both within the same bunch crossing as the hard-scattering process and in neighbouring bunch crossings. Finally, all simulated MC samples are processed through a simulation [88] of the detector geometry and response, either using Geant4 [89] or through a fast simulation of the calorimeter response [90]. All simulated MC samples are processed through the same reconstruction software as the data. Simulated MC events are corrected so that the object identification efficiencies, energy scales and energy resolutions match those determined from data control samples. Figure 6a, b show a comparison of the predicted yields to data, prior to the fit described in Sect. 9, in all analysis regions in the single-lepton and dilepton channels, respectively.
The data agree with the SM expectation within the uncertainties of 10-30 %. Detailed tables of the event yields prior to the fit and the corresponding S/B and S/√B ratios for the single-lepton and dilepton channels can be found in Appendix B. When requiring high jet and b-tag multiplicity in the analysis, the number of available MC events is significantly reduced, leading to large fluctuations in the resulting distributions for certain samples. This can negatively affect the sensitivity of the analysis through the large statistical uncertainties on the templates and unreliable systematic uncertainties due to shape fluctuations. In order to mitigate this problem, instead of tagging the jets by applying the b-tagging algorithm, their probabilities to be b-tagged are parameterised as functions of jet flavour, p_T, and η (illustrated in the sketch below). This allows all events in the sample before b-tagging is applied to be used in predicting the normalisation and shape after b-tagging [91]. The tagging probabilities are derived using an inclusive tt+jets simulated sample. Since the b-tagging probability for a b-jet coming from top quark decay is slightly higher than that of a b-jet with the same p_T and η but arising from other sources, they are derived separately. The predictions agree well with the normalisation and shape obtained by applying the b-tagging algorithm directly. The method is applied to all signal and background samples.

(Fig. 6 caption: Comparison of prediction to data in all analysis regions before the fit to data in (a) the single-lepton channel and (b) the dilepton channel. The signal, normalised to the SM prediction, is shown both as a filled red area stacked on the backgrounds and separately as a dashed red line. The hashed area corresponds to the total uncertainty on the yields.)

Analysis method

In both the single-lepton and dilepton channels, the analysis uses a neural network (NN) to discriminate signal from background in each of the regions with a significant expected ttH signal contribution, since the S/√B is very small and the uncertainty on the background is larger than the signal. Those regions are (5j, ≥ 4b), (≥ 6j, 3b) and (≥ 6j, ≥ 4b) in the single-lepton channel, and (≥ 4j, 3b) and (≥ 4j, ≥ 4b) in the dilepton channel. In the dilepton channel, an additional NN is used to separate signal from background in the (3j, 3b) channel. Despite a small expected S/√B, it nevertheless adds sensitivity to the signal due to a relatively high expected S/B. In the single-lepton channel, a dedicated NN is used in the (5j, 3b) region to separate the tt+light from the tt+HF backgrounds. The other regions considered in the analysis have lower sensitivity, and use H_T^had in the single-lepton channel and the scalar sum of the jet and lepton p_T (H_T) in the dilepton channel as a discriminant. The NNs used in the analysis are built using the NeuroBayes [92] package. The choice of the variables that enter the NN discriminant is made through the ranking procedure implemented in this package, based on the statistical separation power and the correlation of variables. Several classes of variables were considered: object kinematics, global event variables, event shape variables and object pair properties. In the regions with ≥6 (≥4) jets, a maximum of seven (five) jets are considered to construct the kinematic variables in the single-lepton (dilepton) channel, first using all the b-jets, and then incorporating the untagged jets with the highest p_T.
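To make the parameterised-tagging idea above concrete, the following minimal sketch shows how per-jet tagging probabilities could be combined into an event weight for a required b-tag multiplicity, assuming independent per-jet tagging. The efficiency map, its numerical values and all names are illustrative placeholders, not the actual parameterisation of Ref. [91].

```python
import numpy as np

def tag_probability(flavour, pt, eta):
    # Placeholder per-jet b-tagging efficiency as a function of jet
    # flavour, pT and eta; in the analysis such maps are derived from
    # an inclusive tt+jets simulated sample.
    base = {"b": 0.70, "c": 0.20, "light": 0.01}[flavour]
    return base * np.exp(-abs(eta) / 10.0) * min(pt / 100.0, 1.0)

def weight_exactly_n_tags(jets, n):
    # Probability that exactly n jets are b-tagged, assuming the jets
    # are tagged independently; 'jets' is a list of (flavour, pt, eta).
    w = np.zeros(len(jets) + 1)   # w[k] = P(exactly k tags so far)
    w[0] = 1.0
    for jet in jets:
        p = tag_probability(*jet)
        w[1:] = w[1:] * (1.0 - p) + w[:-1] * p
        w[0] *= (1.0 - p)
    return w[n]

jets = [("b", 80.0, 0.5), ("b", 60.0, 1.1), ("c", 45.0, 0.3), ("light", 30.0, 2.0)]
print(weight_exactly_n_tags(jets, 2))   # weight for the 2-b-tag category
```

Weighting every pre-tag event in this way reproduces the normalisation and shape that direct tagging would give, while avoiding the loss of MC statistics at high b-tag multiplicity.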
All variables used for the NN training and their pairwise correlations are required to be well described by the simulation in multiple control regions. In the (5j, 3b) region in the single-lepton channel, the separation between the tt+light and tt+HF events is achieved by exploiting the different origin of the third b-jet in tt+light compared to tt+HF events. In both cases, two of the b-jets originate from the tt decay. However, in the case of tt+HF events, the third b-jet is likely to originate from one of the additional heavy-flavour quarks, whereas in the case of tt+light events, the third b-jet is often matched to a c-quark from the hadronically decaying W boson. Thus, kinematic variables, such as the invariant mass of the two untagged jets with minimum ΔR, provide discrimination between tt+light and tt+HF events, since the latter presents a distinct peak at the W boson mass which is not present in the former. This and other kinematic variables are used in the dedicated NN for this region. In addition to the kinematic variables, two variables calculated using the matrix element method (MEM), detailed in Sect. 7, are included in the NN training in the (≥ 6j, 3b) and (≥ 6j, ≥ 4b) regions of the single-lepton channel. These two variables are the Neyman-Pearson likelihood ratio D1 (Eq. (4)) and the logarithm of the summed signal likelihoods (SSLL) (Eq. (2)). The D1 variable provides the best separation between the ttH signal and the dominant tt+bb background in the (≥ 6j, ≥ 4b) region. The SSLL variable further improves the NN performance. The variables used in the single-lepton and dilepton channels, as well as their ranking in each analysis region, are listed in Tables 1 and 2, respectively. For the construction of variables in the (≥ 4j, ≥ 4b) region of the dilepton channel, the two b-jets that are closest in ΔR to the leptons are considered to originate from the top quarks, and the other two b-jets are assigned to the Higgs candidate. Figures 7 and 8 show the distribution of the NN discriminant for the ttH signal and background in the single-lepton and dilepton channels, respectively, in the signal-rich regions. In particular, Fig. 7a shows the separation between tt+HF and tt+light-jet production achieved by the dedicated NN in the (5j, 3b) region in the single-lepton channel. The distributions of the highest-ranked input variables from each of the NN regions are shown in Appendix C. For all analysis regions considered in the fit, the ttH signal includes all Higgs decay modes; they are also included in the NN training. The analysis regions have different contributions from the various systematic uncertainties, allowing the combined fit to constrain them. The highly populated (4j, 2b) and (2j, 2b) regions in the single-lepton and dilepton channels, respectively, provide a powerful constraint on the overall normalisation of the tt background. The (4j, 2b), (5j, 2b) and (≥ 6j, 2b) regions in the single-lepton channel and the (2j, 2b), (3j, 2b) and (≥ 4j, 2b) regions in the dilepton channel are almost pure in tt+light-jets background and provide an important constraint on tt modelling uncertainties, both in terms of normalisation and shape. Uncertainties on c-tagging are reduced by exploiting the large contribution of W → cs decays in the tt+light-jets background populating the (4j, 3b) region in the single-lepton channel.
Finally, the consideration of regions with exactly 3 and ≥ 4 b-jets in both channels, having different fractions of tt+bb and tt+cc backgrounds, provides the ability to constrain the uncertainties on the tt+bb and tt+cc normalisations.

The matrix element method

The matrix element method [94] has been used by the D0 and CDF collaborations for precision measurements of the top quark mass [95,96] and for the observations of single top quark production [97,98]. Recently this technique has been used for the ttH search by the CMS experiment [99]. By directly linking theoretical calculations and observed quantities, it makes the most complete use of the kinematic information of a given event. The method calculates the probability density function of an observed event to be consistent with a physics process i described by a set of parameters α. This probability density function P_i(x|α) is defined as

P_i(x|α) = (1/σ_i^exp) ∫ dΦ(y) [f(x_1) f(x_2)/F] |M_i(y|α)|² W(x|y),   (1)

and is obtained by numerical integration over the entire phase space of the initial- and final-state particles. In this equation, x and y represent the four-momentum vectors of all final-state particles at reconstruction and parton level, respectively, f denotes the parton distribution functions evaluated at the momentum fractions x_1 and x_2 of the incoming partons, M_i is the matrix element of process i, and W(x|y) are the transfer functions mapping parton-level to reconstruction-level quantities. The flux factor F and the Lorentz-invariant phase space element dΦ describe the kinematics of the initial and final states, and the expected cross section σ_i^exp normalises P_i to unity, taking acceptance and efficiency into account. The assignment of reconstructed objects to final-state partons in the hard process contains multiple ambiguities. The process probability density is calculated for each allowed assignment permutation of the jets to the final-state quarks of the hard process. A process likelihood function can then be built by summing the process probabilities over the N_p allowed assignment permutations,

L_i(x|α) = Σ_{p=1}^{N_p} P_i^(p)(x|α).   (2)

The process probability densities are used to distinguish signal from background events by calculating the likelihood ratio of the signal and background processes contributing with fractions f_bkg,

D(x) = L_sig(x) / [L_sig(x) + Σ_bkg f_bkg L_bkg(x)].   (3)

This ratio, according to the Neyman-Pearson lemma [100], is the most powerful discriminant between signal and background processes. In the analysis, this variable is used as input to the NN along with other kinematic variables. The matrix elements are generated with Madgraph 5 at LO. The transfer functions are obtained from simulation following a procedure similar to that described in Ref. [101]. For the modelling of the parton distribution functions, the CTEQ6L1 set from the LHAPDF package [102] is used. The integration is performed using VEGAS [103]. Due to the complexity and high dimensionality, adaptive MC techniques [104], simplifications and approximations are needed to obtain results within a reasonable computing time. In particular, only the numerically most significant contributing helicity states of a process hypothesis for a given event, identified at the start of each integration, are evaluated. This does not perceptibly decrease the separation power but reduces the calculation time by more than an order of magnitude. Furthermore, several approximations are made to improve the VEGAS convergence rate. Firstly, the dimensionality of the integration is reduced by assuming that the final-state object directions in η and φ, as well as the charged lepton momenta, are well measured, so that the corresponding transfer functions are represented by δ functions. Total momentum conservation and the negligible transverse momentum of the initial-state partons allow for a further reduction.
Secondly, kinematic transformations are utilised to optimise the integration over the remaining phase space by aligning the peaks of the integrand with the integration dimensions. The narrow-width approximation is applied to the leptonically decaying W boson. This leaves three b-quark energies, one light-quark energy, the hadronically decaying W boson mass, and the invariant mass of the two b-quarks originating from either the Higgs boson (for the signal) or a gluon (for the background) as the remaining parameters defining the integration phase space. The total integration volume is restricted based upon the observed values and the widths of the transfer functions and of the propagator peaks in the matrix elements. Finally, the likelihood contributions of all allowed assignment permutations are coarsely integrated, and only for the leading twelve assignment permutations is the full integration performed, with a required precision decreasing according to their relative contributions. The signal hypothesis is defined as a SM Higgs boson produced in association with a top-quark pair, as shown in Fig. 1a, b. Hence no coupling of the Higgs boson to the W boson is accounted for in |M_i|², to allow for a consistent treatment when performing the kinematic transformation. The Higgs boson is required to decay into a pair of b-quarks, while the top-quark pair decays into the single-lepton channel. For the background hypothesis, only the diagrams of the irreducible tt+bb background are considered. Since it dominates the most signal-rich analysis regions, the inclusion of other processes does not improve the separation between signal and background. No gluon radiation from the final-state quarks is allowed, since such diagrams are kinematically suppressed and difficult to treat in any kinematic transformation aiming for phase-space alignment during the integration process. In the definition of the signal and background hypotheses, the LO diagrams are required to have a top-quark pair as an intermediate state, resulting in exactly four b-quarks, two light quarks, one charged lepton (electron or muon) and one neutrino in the final state. Assuming lepton universality and invariance under charge conjugation, diagrams of only one lepton flavour and of only negative charge (electron) are considered. The probability density function calculation of the signal and background is only performed in the (≥ 6j, 3b) and (≥ 6j, ≥ 4b) regions of the single-lepton channel. Only six reconstructed jets are considered in the calculation: the four jets with the highest value of the probability to be a b-jet returned by the b-tagging algorithm (i.e. the highest b-tagging weight) and two of the remaining jets with an invariant mass closest to the W boson mass of 80.4 GeV. If a jet is b-tagged, it cannot be assigned to a light quark in the matrix element description. In the case of more than four b-tagged jets, only the four with the highest b-tagging weight are treated as b-tagged. Assignment permutations between the two light quarks of the hadronically decaying W boson, and between the two b-quarks originating from the Higgs boson or gluon, result in the same likelihood value and are thus not considered. As a result, there are in total 12 and 36 assignment permutations in the (≥ 6j, ≥ 4b) and (≥ 6j, 3b) regions, respectively, which need to be evaluated in the coarse integration phase. Using the ttH process as the signal hypothesis and the tt+bb process as the background hypothesis, a slightly modified version of Eq. (3) is used to define the likelihood ratio D1:

D1 = L_ttH(x) / [L_ttH(x) + α L_ttbb(x)],   (4)

where α = 0.23 is a relative normalisation factor chosen to optimise the performance of the discriminant given the finite bin sizes of the D1 distribution. In this definition, signal-like and background-like events have D1 values close to one and zero, respectively.
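A minimal sketch of this likelihood ratio, assuming the per-event process likelihoods have already been obtained from the integration described above; the numerical values below are invented purely for illustration.

```python
def d1(l_tth, l_ttbb, alpha=0.23):
    # Eq. (4): signal-like events give D1 near 1, background-like near 0.
    return l_tth / (l_tth + alpha * l_ttbb)

# Toy likelihood values (not analysis numbers):
print(d1(l_tth=2.0e-18, l_ttbb=4.0e-18))   # more signal-like
print(d1(l_tth=1.0e-19, l_ttbb=5.0e-18))   # more background-like
```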
The logarithm of the summed signal likelihoods defined by Eq. (2) and the ratio D1 are included in the NN training in both the (≥ 6j, 3b) and (≥ 6j, ≥ 4b) regions.

Systematic uncertainties

Several sources of systematic uncertainty are considered that can affect the normalisation of signal and background and/or the shape of their final discriminant distributions. Individual sources of systematic uncertainty are considered uncorrelated. Correlations of a given systematic effect are maintained across processes and channels. Table 3 presents a summary of the sources of systematic uncertainty considered in the analysis, indicating whether they are taken to be normalisation-only, shape-only, or to affect both shape and normalisation. In Appendix D, the normalisation impact of the systematic uncertainties is shown on the tt background as well as on the ttH signal. In order to reduce the degradation of the sensitivity of the search due to systematic uncertainties, they are fitted to data in the statistical analysis, exploiting the constraining power of the background-dominated regions described in Sect. 4. Each systematic uncertainty is represented by an independent parameter, referred to as a "nuisance parameter", and is fitted with a Gaussian prior for the shape differences and a log-normal prior for the normalisation. The priors are centred around zero with a width that corresponds to the given uncertainty.

Luminosity

The uncertainty on the integrated luminosity for the data set used in this analysis is 2.8 %. It is derived following the same methodology as that detailed in Ref. [105]. This systematic uncertainty is applied to all contributions determined from the MC simulation.

Leptons

Uncertainties associated with the lepton selection arise from the trigger, reconstruction, identification, isolation, and lepton momentum scale and resolution. In total, the uncertainties associated with electrons (muons) include five (six) components.

(Table 3 caption: List of systematic uncertainties considered. An "N" means that the uncertainty is taken as normalisation-only for all processes and channels affected, whereas an "S" denotes systematic uncertainties that are considered shape-only in all processes and channels. An "SN" means that the uncertainty is taken on both shape and normalisation. Some of the systematic uncertainties are split into several components for a more accurate treatment; this is the number indicated in the column labelled "Comp.")

Jets

Uncertainties associated with the jet selection arise from the jet energy scale (JES), the jet vertex fraction requirement, the jet energy resolution and the jet reconstruction efficiency. Among these, the JES uncertainty has the largest impact on the analysis. The JES and its uncertainty are derived by combining information from test-beam data, LHC collision data and simulation [35]. The jet energy scale uncertainty is split into 22 uncorrelated sources, which can have different jet p_T and η dependencies. In this analysis, the largest jet energy scale uncertainty arises from the η dependence of the JES calibration in the end-cap regions of the calorimeter; it is the second leading uncertainty overall.
Heavy- and light-flavour tagging

A total of six (four) independent sources of uncertainty affecting the b(c)-tagging efficiency are considered [37]. Each of these uncertainties corresponds to an eigenvector resulting from diagonalising the matrix containing the information about the total uncertainty per jet p_T bin and the bin-to-bin correlations. An additional uncertainty is assigned due to the extrapolation of the b-tagging efficiency measurement to the high-p_T region. Twelve uncertainties are considered for the light-jet tagging, and they depend on jet p_T and η. These systematic uncertainties are taken as uncorrelated between b-jets, c-jets, and light-flavour jets. No additional systematic uncertainty is assigned due to the use of parameterisations of the b-tagging probabilities instead of applying the b-tagging algorithm directly, since the difference between these two approaches is negligible compared to the other sources.

tt+jets modelling

An uncertainty of +6.5 %/−6 % is assumed for the inclusive tt production cross section. It includes uncertainties from the top quark mass and the choices of PDF and α_S. The PDF and α_S uncertainties are calculated using the PDF4LHC prescription [106] with the MSTW2008 68 % CL NNLO, CT10 NNLO [107] and NNPDF2.3 5f FFN [108] PDF sets, and are added in quadrature to the scale uncertainty. Other systematic uncertainties affecting the modelling of tt+jets include uncertainties due to the choice of parton shower and hadronisation model, as well as several uncertainties related to the reweighting procedure applied to improve the tt MC model. Additional uncertainties are assigned to account for the limited knowledge of tt+HF jets production; they are described later in this section. As discussed in Sect. 5, to improve the agreement between data and the tt simulation, a reweighting procedure is applied to tt MC events based on the difference in the top quark p_T and tt system p_T distributions between data and simulation at √s = 7 TeV [57]. The nine largest uncertainties associated with the experimental measurement of the top quark and tt system p_T, representing approximately 95 % of the total experimental uncertainty on the measurement, are considered as separate uncertainty sources in the reweighting applied to the MC prediction. The largest uncertainties on the measurement of the differential distributions include the radiation modelling in tt events, the choice of generator to simulate tt production, uncertainties on the components of the jet energy scale and resolution, and flavour tagging. Because the measurement is performed for the inclusive tt sample and the size of the uncertainties applicable to the tt+cc component is not known, two additional uncorrelated uncertainties are assigned to tt+cc events, consisting of the full difference between applying and not applying the reweightings of the tt system p_T and the top quark p_T, respectively. An uncertainty due to the choice of parton shower and hadronisation model is derived by comparing events produced by Powheg interfaced with Pythia or Herwig. The effects on the shapes are compared, symmetrised and applied to the shapes predicted by the default model. Given that the change of the parton shower model leads to two separate effects, a change in the number of jets and a change of the heavy-flavour content, the parton shower uncertainty is represented by three parameters: one acting on the tt+light contribution and two others on the tt+cc and tt+bb contributions.
These three parameters are treated as uncorrelated in the fit. Detailed comparisons of tt+bb production between Powheg+Pythia and an NLO prediction of tt+bb production based on SherpaOL have shown that the cross sections agree within 50 % of each other. Therefore, a systematic uncertainty of 50 % is applied to the tt+bb component of the tt+jets background obtained from the Powheg+Pythia MC simulation. In the absence of an NLO prediction for the tt+cc background, the same 50 % systematic uncertainty is applied to the tt+cc component, and the uncertainties on tt+bb and tt+cc are treated as uncorrelated. The large available data sample allows the determination of the tt+bb and tt+cc normalisations with much better precision, approximately 15 and 30 %, respectively (see Appendix D). Thus, the final result does not significantly depend on the exact value of the assumed prior uncertainty, as long as it is larger than the precision with which the data can constrain it. However, even after this reduction, the uncertainties on the tt+bb and tt+cc background normalisations are still the leading and the third leading uncertainties in the analysis, respectively. Four additional systematic uncertainties in the tt+cc background estimate are derived from the simultaneous variation of the factorisation and renormalisation scales, the matching threshold and the c-quark mass in the Madgraph+Pythia tt simulation, and from the difference between the tt+cc simulation in Madgraph+Pythia and Powheg+Pythia, since Madgraph+Pythia includes the tt+cc process in the matrix element calculation while it is absent in Powheg+Pythia. For the tt+bb background, three scale uncertainties are evaluated: changing the functional form of the renormalisation scale to μ_R = (m_t m_bb)^{1/2}, changing the functional form of the factorisation (μ_F) and resummation (μ_Q) scales to one built from the transverse energies E_T,i of the final-state particles, and varying the renormalisation scale μ_R by a factor of two up and down. Additionally, the shower recoil model uncertainty and two uncertainties due to the PDF choice in the SherpaOL NLO calculation are quoted. The effect of these variations on the contributions of the different tt+bb event categories is shown in Fig. 9. The renormalisation scale choice and the shower recoil scheme have a large effect on the modelling of tt+bb; they produce large shape variations of the NN discriminants, resulting in the fourth and sixth leading uncertainties in this analysis. Finally, two uncertainties due to tt+bb production via multiparton interaction and final-state radiation, which are not present in the SherpaOL NLO calculation, are applied. Overall, the uncertainties on the tt+bb normalisation and modelling result in about a 55 % total uncertainty on the tt+bb background contribution in the most sensitive (≥ 6j, ≥ 4b) and (≥ 4j, ≥ 4b) regions.

W/Z+jets modelling

As discussed in Sect. 5, the W/Z+jets contributions are obtained from the simulation and normalised to the inclusive theoretical cross sections, and a reweighting is applied to improve the modelling of the W/Z boson p_T spectrum. The full difference between applying and not applying the W/Z boson p_T reweighting is taken as a systematic uncertainty, which is then assumed to be symmetric with respect to the central value. Additional uncertainties are assigned due to the extrapolation of the W/Z+jets estimate to high jet multiplicity.
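Several of the uncertainties above are built from a single one-sided variation that is then symmetrised about the nominal prediction. The sketch below shows one common bin-by-bin way of doing this; the text does not spell out the exact procedure used in the analysis, so this is illustrative only.

```python
import numpy as np

def symmetrise(nominal, variation):
    # Mirror a one-sided shape variation about the nominal, bin by bin,
    # to obtain up/down templates; negative yields are clipped to zero.
    nominal = np.asarray(nominal, dtype=float)
    delta = np.asarray(variation, dtype=float) - nominal
    up = nominal + delta
    down = np.clip(nominal - delta, 0.0, None)
    return up, down

nom = [100.0, 80.0, 40.0, 10.0]   # toy discriminant bins
var = [95.0, 85.0, 43.0, 9.0]     # e.g. an alternative-shower shape
print(symmetrise(nom, var))
```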
Misidentified lepton background modelling

Systematic uncertainties on the misidentified lepton background estimated via the matrix method [38] in the single-lepton channel receive contributions from the limited number of data events, particularly at high jet and b-tag multiplicities, from the subtraction of the prompt-lepton contribution, and from the uncertainty on the lepton misidentification rates, estimated in different control regions. The statistical uncertainty is uncorrelated among the different jet and b-tag multiplicity bins. An uncertainty of 50 % associated with the lepton misidentification rate measurements is assumed, which is taken as correlated across jet and b-tag multiplicity bins, but uncorrelated between the electron and muon channels. Uncertainty on the shape of the misidentified lepton background arises from the prompt-lepton background subtraction and the misidentified lepton rate measurement. In the dilepton channel, since the misidentified lepton background is estimated using both the simulation and same-sign dilepton events in data, a 50 % normalisation uncertainty is assigned to cover the maximum difference between the two methods. It is taken as correlated among the different jet and b-tag multiplicity bins. An additional uncertainty is applied to cover the difference in shape between the predictions derived from the simulation and from same-sign dilepton events in data.

Electroweak background modelling

Uncertainties of +5 %/−4 % and ±6.8 % are used for the theoretical cross sections of single top production in the single-lepton and dilepton channels [64,65], respectively. The former corresponds to the weighted average of the theoretical uncertainties on s-, t- and Wt-channel production, while the latter corresponds to the theoretical uncertainty on Wt-channel production, the only single top process contributing to the dilepton final state. The uncertainty on the diboson background rates includes an uncertainty on the inclusive diboson NLO cross section of ±5 % [62] and uncertainties to account for the extrapolation to high jet multiplicity. Finally, an uncertainty of ±30 % is assumed for the theoretical cross sections of the tt+V background [70,71]. An additional uncertainty on tt+V modelling arises from variations in the amount of initial-state radiation. The tt+Z background with the Z boson decaying into a bb pair is an irreducible background to the ttH, H → bb signal and, as such, has kinematics and an NN discriminant shape similar to those of the signal. The uncertainty on the tt+V background normalisation is the fifth leading uncertainty in the analysis.

Uncertainties on signal modelling

Dedicated NLO PowHel samples are used to evaluate the impact of the choice of factorisation and renormalisation scales on the ttH signal kinematics. In these samples the default scale is varied by a factor of two up and down. The effect of the variations on ttH distributions was studied at particle level, and the nominal PowHel ttH sample was reweighted to reproduce these variations. In a similar way, the nominal sample is reweighted to reproduce the effect of changing the functional form of the scale. Additional uncertainties on the ttH signal are due to the choice of generator, the parton shower and hadronisation model, and the PDF choice (these components are summarised in Appendix D).

Statistical methods

The distributions of the discriminants from each of the channels and regions considered are combined to test for the presence of a signal, assuming a Higgs boson mass of m_H = 125 GeV.
The statistical analysis is based on a binned likelihood function L(μ, θ), constructed as a product of Poisson probability terms over all bins considered in the analysis. The likelihood function depends on the signal-strength parameter μ, defined as the ratio of the ttH production cross section to the SM expectation, and on θ, the set of nuisance parameters that encode the effects of systematic uncertainties on the signal and background expectations. The nuisance parameters are implemented in the likelihood function with Gaussian or log-normal priors; therefore, the total number of expected events in a given bin depends on μ and θ. The nuisance parameters θ adjust the expectations for signal and background according to the corresponding systematic uncertainties, and their fitted values correspond to the amounts that best fit the data. This procedure allows the impact of systematic uncertainties on the search sensitivity to be reduced by taking advantage of the highly populated, background-dominated control regions included in the likelihood fit. It requires a good understanding of the systematic effects affecting the shapes of the discriminant distributions. The test statistic q_μ is defined as the profile likelihood ratio: q_μ = −2 ln[L(μ, θ̂_μ)/L(μ̂, θ̂)], where μ̂ and θ̂ are the values of the parameters that maximise the likelihood function (with the constraint 0 ≤ μ̂ ≤ μ), and θ̂_μ are the values of the nuisance parameters that maximise the likelihood function for a given value of μ. This test statistic is used to measure the compatibility of the observed data with the background-only hypothesis (i.e. for μ = 0), and to make statistical inferences about μ, such as setting upper limits using the CL_s method [112-114] as implemented in the RooFit package [115,116]. To obtain the final result, a simultaneous fit to the data is performed on the distributions of the discriminants in 15 regions: nine analysis regions in the single-lepton channel and six regions in the dilepton channel. Fits are performed under the signal-plus-background hypothesis, where the signal-strength parameter μ is the parameter of interest; it is allowed to float freely but is required to be the same in all 15 fit regions. The normalisation of each background is determined from the fit simultaneously with μ. Contributions from the tt, W/Z+jets, single top, diboson and tt+V backgrounds are constrained by the fit. Statistical uncertainties in each bin of the discriminant distributions are taken into account by dedicated parameters in the fit. The performance of the fit is tested using simulated events by injecting a ttH signal with a variable signal strength and comparing it to the fitted value. Good agreement between the injected and measured signal strengths is observed.

Results

The results of the binned likelihood fit to data described in Sect. 9 are presented in this section. Figure 10 shows the yields after the fit in all analysis regions in the single-lepton and dilepton channels. The post-fit event yields and the corresponding S/B and S/√B ratios are summarised in Appendix E. Figures 11, 12, 13, 14 and 15 show a comparison of data and prediction for the discriminating variables (either H_T^had, H_T, or the NN discriminants) for each of the regions considered in the single-lepton and dilepton channels, both pre- and post-fit to data. The uncertainties decrease significantly in all regions due to the constraints provided by the data and the correlations between different sources of uncertainty introduced by the fit.
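For orientation, a toy illustration of the profile likelihood ratio defined above, reduced to a single counting bin with no nuisance parameters (so profiling degenerates to a bounded minimisation over μ̂); the analysis itself fits 15 regions with many nuisance parameters, which this sketch does not attempt.

```python
import numpy as np
from scipy.optimize import minimize_scalar

n, s0, b = 25, 5.0, 20.0   # toy observed count, signal template, background

def nll(mu):
    # Negative log Poisson likelihood, up to a mu-independent constant.
    lam = mu * s0 + b
    return lam - n * np.log(lam)

def q_mu(mu):
    # q_mu = -2 ln[L(mu, theta-hat_mu) / L(mu-hat, theta-hat)], with the
    # constraint 0 <= mu-hat <= mu as in the text.
    res = minimize_scalar(nll, bounds=(0.0, mu), method="bounded")
    return 2.0 * (nll(mu) - nll(res.x))

print(q_mu(1.0), q_mu(3.4))
```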
In Appendix F, the most highly discriminating variables in the NN are shown post-fit, compared to data. Table 4 shows the observed μ values obtained from the individual fits in the single-lepton and dilepton channels, and their combination. The signal strength from the combined fit for m_H = 125 GeV is μ = 1.5 ± 1.1. The expected uncertainty for the signal strength (μ = 1) is ±1.1. The observed (expected) significance of the signal is 1.4 (1.1) standard deviations, which corresponds to an observed (expected) p-value of 8 % (15 %). The probability, p, to obtain a result at least as signal-like as observed if no signal is present is calculated using q_0 = −2 ln[L(0, θ̂_0)/L(μ̂, θ̂)] as a test statistic. The fitted values of the signal strength and their uncertainties for the individual channels and their combination are shown in Fig. 16. The observed limits, and those expected with and without assuming a SM Higgs boson with m_H = 125 GeV, for each channel and their combination are shown in Fig. 17. A signal 3.4 times larger than predicted by the SM is excluded at 95 % CL using the CL_s method. A signal 2.2 times larger than for the SM Higgs boson is expected to be excluded in the case of no SM Higgs boson, and 3.1 times larger in the case of a SM Higgs boson. This is also summarised in Table 5. In particular, the last bin of Fig. 18 includes the two last bins from the most signal-rich region of the NN distribution in (≥ 6j, ≥ 4b) and the two last bins from the most signal-rich region of the NN in (≥ 4j, ≥ 4b) from the fit. The signal is normalised to the fitted value of the signal strength (μ = 1.5) and the background is obtained from the global fit. A signal strength 3.4 times larger than predicted by the SM, which is excluded at 95 % CL by this analysis, is also shown. Figure 19 demonstrates the effect of the various systematic uncertainties on the fitted value of μ and the constraints provided by the data. The post-fit effect on μ is calculated by fixing the corresponding nuisance parameter at θ̂ ± σ_θ, where θ̂ is the fitted value of the nuisance parameter and σ_θ is its post-fit uncertainty, and performing the fit again. The difference between the default and the modified μ, Δμ, represents the effect on μ of this particular systematic uncertainty. The largest effect arises from the uncertainty on the normalisation of the irreducible tt+bb background. This uncertainty is reduced by more than one half from the initial 50 %. The tt+bb background normalisation is pulled up by about 40 % in the fit, resulting in an increase in the observed tt+bb yield with respect to the Powheg+Pythia prediction. Most of the reduction in uncertainty on the tt+bb normalisation is the result of the significant number of data events in the signal-rich regions dominated by the tt+bb background. With no Gaussian prior considered on the tt+bb normalisation, as described in Sect. 8, the fit still prefers an increase in the amount of tt+bb background by about 40 %. The tt+bb modelling uncertainties affecting the shape of this background also have a significant effect on μ. These systematic uncertainties affect only the tt+bb modelling and are not correlated with the other tt+jets backgrounds. The largest of these uncertainties is given by the renormalisation scale choice, which drastically changes the shape of the NN discriminant for the tt+bb background, making it appear more signal-like.
The tt+cc normalisation uncertainty is ranked third (Fig. 19) and its pull is slightly negative, while the post-fit yields for tt+cc increase significantly in the four- and five-jet regions of the single-lepton channel and in the two- and three-jet regions of the dilepton channel (see Tables 10 and 11 of Appendix E). It was verified that this effect is caused by the interplay between the tt+cc normalisation uncertainty and several other systematic uncertainties affecting the tt+cc background yield.

(Fig. 19 caption: The fitted values of the nuisance parameters with the largest impact on the measured signal strength. The points, which are drawn conforming to the scale of the bottom axis, show the deviation of each of the fitted nuisance parameters, θ̂, from θ_0, the nominal value of that nuisance parameter, in units of the pre-fit standard deviation Δθ. The error bars show the post-fit uncertainties, σ_θ, which are close to 1 if the data do not provide any further constraint on that uncertainty. Conversely, a value of σ_θ much smaller than 1 indicates a significant reduction with respect to the original uncertainty. The nuisance parameters are sorted according to the post-fit effect of each on μ (hashed blue area), conforming to the scale of the top axis, with those with the largest impact at the top.)

The noticeable effect of the light-jet tagging (mistag) systematic uncertainty is explained by the relatively large fraction of the tt+light background in the signal region with four b-jets in the single-lepton channel. The tt+light events enter the 4-b-tag region through a mistag, as opposed to the 3-b-tag region, where tagging a c-jet from a W boson decay is more likely. Since the amount of data in the 4-b-tag regions is not large, this uncertainty cannot be constrained significantly. The tt+Z background with Z → bb is an irreducible background to the ttH signal, as it has the same number of b-jets in the final state and similar event kinematics. Its normalisation has a notable effect on μ (dμ/dσ(tt+V) = 0.3), and the uncertainty arising from the tt+V normalisation cannot be significantly constrained by the fit. Other leading uncertainties include b-tagging and some components of the JES uncertainty. Uncertainties arising from the jet energy resolution, jet vertex fraction, jet reconstruction and the JES components that affect primarily low-p_T jets, as well as the tt+light-jet background modelling uncertainties, are constrained mainly in the signal-depleted regions. These uncertainties do not have a significant effect on the fitted value of μ.

Summary

A search has been performed for the Standard Model Higgs boson produced in association with a top-quark pair (ttH) using 20.3 fb⁻¹ of pp collision data at √s = 8 TeV collected with the ATLAS detector during the first run of the Large Hadron Collider. The search focuses on H → bb decays, and is performed in events with either one or two charged leptons. To improve sensitivity, the search employs a likelihood fit to data in several jet and b-tagged jet multiplicity regions. Systematic uncertainties included in the fit are significantly constrained by the data. Discrimination between signal and background is obtained in both final states by employing neural networks in the signal-rich regions. In the single-lepton channel, discriminating variables calculated using the matrix element technique are used, in addition to kinematic variables, as input to the neural network. No significant excess of events above the background expectation is found for a Standard Model Higgs boson with a mass of 125 GeV.
An observed (expected) 95 % confidence-level upper limit of 3.4 (2.2) times the Standard Model cross section is obtained. By performing a fit under the signal-plus-background hypothesis, the ratio of the measured signal strength to the Standard Model expectation is found to be μ = 1.5 ± 1.1.

Appendix A: Higgs boson decay modes

Figure 20 shows the contributions of the different Higgs boson decay modes in each of the analysis regions in the single-lepton and dilepton channels. The H → bb decay is the dominant contribution in the signal-rich regions.

Appendix B: Event yields prior to the fit

The event yields prior to the fit for the combined e+jets and μ+jets samples for the different regions considered in the analysis are summarised in Table 6. The event yields prior to the fit for the combined ee+jets, μμ+jets and eμ+jets samples for the different regions considered in the dilepton channel are summarised in Table 7.

Appendix C: Discrimination power of input variables

Figures 21, 22, 23, 24, 25, 26 and 27 show the discrimination between signal and background for the top four input variables in each region where an NN is used, in the single-lepton and dilepton channels, respectively. In Fig. 21, the NN is designed to separate tt+HF from tt+light.

Appendix D: Tables of systematic uncertainties in the signal region

Tables 8 and 9 show the pre-fit and post-fit contributions of the different categories of uncertainties (expressed in %) for the ttH signal and the main background processes in the (≥ 6j, ≥ 4b) region of the single-lepton channel and the (≥ 4j, ≥ 4b) region of the dilepton channel, respectively. The "Lepton efficiency" category includes the systematic uncertainties on electrons and muons listed in Table 3. The "Jet efficiency" category includes the uncertainties on the jet vertex fraction and jet reconstruction. The "tt heavy-flavour modelling" category includes the uncertainties on the tt+bb NLO shape and on the tt+cc p_T reweighting and generator. The "Theoretical cross sections" category includes the uncertainties on the single top, diboson, V+jets and tt+V theoretical cross sections. The "ttH modelling" category includes contributions from the ttH scale, generator, hadronisation model and PDF choice. The details of the evaluation of the uncertainties can be found in Sect. 8.

Appendix E: Post-fit event yields

The post-fit event yields for the combined single-lepton channel for the different regions considered in the analysis are summarised in Table 10. Similarly, the post-fit event yields for the combined dilepton channels for the different regions are summarised in Table 11.

(Table 8 caption: Single-lepton channel: normalisation uncertainties (expressed in %) on the signal and main background processes for the systematic uncertainties considered, before and after the fit to data in the (≥ 6j, ≥ 4b) region of the single-lepton channel. The total uncertainty can be different from the sum in quadrature of the individual sources due to anti-correlations between them.)
15,832.6
2015-03-17T00:00:00.000
[ "Physics" ]
Exponentiated Transmuted Generalized Rayleigh Distribution: A New Four Parameter Rayleigh Distribution

This paper introduces a new four parameter Rayleigh distribution which generalizes the transmuted generalized Rayleigh distribution introduced by Merovci (2014). The new model is referred to as the exponentiated transmuted generalized Rayleigh (ETGR) distribution. Various mathematical properties of the new model, including ordinary and incomplete moments, the quantile function, the generating function and the Rényi entropy, are derived. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. Two real data sets are used to compare the flexibility of the new model versus other models.

Introduction

Burr (1942) introduced twelve different forms of cumulative distribution functions for modeling lifetime data. Among those twelve distribution functions, Burr Type X and Burr Type XII received the maximum attention. For more detail about those two distributions see Johnson et al. (1994). Recently, Surles and Padgett (2001) introduced the two-parameter Burr Type X distribution, correctly named the generalized Rayleigh (GR) distribution. The procedure of expanding a family of distributions for added flexibility, or to construct covariate models, is a well-known technique in the literature. In many applied sciences such as medicine, engineering and finance, amongst others, modeling and analyzing lifetime data are crucial. Several lifetime distributions have been used to model such kinds of data. The quality of the procedures used in a statistical analysis depends heavily on the assumed probability model or distribution. Because of this, considerable effort has been expended in the development of large classes of standard probability distributions along with relevant statistical methodologies. However, there still remain many important problems where the real data do not follow any of the classical or standard probability models. Merovci (2014) introduced the transmuted generalized Rayleigh (TGR) distribution. In this article we present a new generalization of the TGR distribution, called the exponentiated transmuted generalized Rayleigh (ETGR) distribution. The cumulative distribution function (cdf) of the TGR distribution is given by

F(x) = (1 − e^{−(βx)²})^α [1 + λ − λ(1 − e^{−(βx)²})^α], x > 0,

where α, β > 0 and |λ| ≤ 1; β is a scale parameter, α is a shape parameter and λ is the transmuted parameter. The corresponding probability density function (pdf) is given by

f(x) = 2αβ²x e^{−(βx)²} (1 − e^{−(βx)²})^{α−1} [1 + λ − 2λ(1 − e^{−(βx)²})^α].

Recently, exponentiated distributions have been shown to have a wide domain of applicability, in particular in the modeling and analysis of lifetime data.

Definition 1: Let F be an absolutely continuous cdf with support on (a, b), where the interval may be unbounded, and let δ be a positive real number. The random variable X has an exponentiated distribution if its cdf G is given by

G(x) = [F(x)]^δ, δ > 0,

which is the δth power of the baseline distribution function F(x); the corresponding pdf of X is given by

g(x) = δ f(x) [F(x)]^{δ−1}.

The class of exponentiated distributions contains certain well-known distributions for which the cdfs have closed forms (see, e.g., Gupta and Kundu (1999, 2000, 2001, 2007) and Nadarajah (2011)). Shakil and Ahsanullah (2012) introduced some distributional properties of order statistics and record values from exponentiated distributions.
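A small numerical sketch of the two constructions just defined, composing the quadratic-rank transmutation map with the exponentiation of Definition 1 on top of the GR baseline; the parameter values are arbitrary placeholders.

```python
import numpy as np

def gr_cdf(x, alpha, beta):
    # Generalized Rayleigh (Burr Type X) baseline cdf:
    # F(x) = (1 - exp(-(beta*x)^2))^alpha, x > 0.
    return (1.0 - np.exp(-(beta * x) ** 2)) ** alpha

def transmute(F, lam):
    # Quadratic-rank transmutation: G(x) = (1 + lam)F(x) - lam F(x)^2, |lam| <= 1.
    return lambda x: (1.0 + lam) * F(x) - lam * F(x) ** 2

def exponentiate(F, delta):
    # Exponentiated (Definition 1) construction: G(x) = F(x)^delta.
    return lambda x: F(x) ** delta

# ETGR cdf as the composition used in this paper:
alpha, beta, lam, delta = 2.0, 1.5, 0.5, 3.0
etgr_cdf = exponentiate(transmute(lambda x: gr_cdf(x, alpha, beta), lam), delta)
print(etgr_cdf(np.array([0.5, 1.0, 2.0])))
```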
Recently, various generalizations have been introduced based on the above definition. We aim in this paper to define and study the ETGR distribution. The rest of the paper is organized as follows. In Section 2, we define the new distribution and provide some plots of its pdf. Section 3 derives some statistical properties, including the quantile function, random number generation, moments, generating functions, incomplete moments, mean deviations and the Rényi entropy. In Section 4, the order statistics are discussed. In Section 5, we present the reliability function (rf), hazard rate function (hrf), reversed hazard rate function (rhrf), cumulative hazard rate function (chrf), moments of the residual life and moments of the reversed residual life. The maximum likelihood estimates (MLEs) of the model parameters and the observed information matrix are provided in Section 6. In Section 7, the ETGR distribution is applied to two real data sets to illustrate its usefulness. Finally, some concluding remarks are given in Section 8.

The ETGR Distribution

The ETGR distribution and its sub-models are presented in this section. The cdf of the ETGR distribution (for x > 0) is given by

F(x) = (1 − e^{−(βx)²})^{αδ} [1 + λ − λ(1 − e^{−(βx)²})^α]^δ,   (3)

where β is a scale parameter representing the characteristic life, α and δ are shape parameters representing the different patterns of the ETGR distribution, and λ is the transmuted parameter. Using the generalized binomial series expansion, the cdf in (3) can be expressed as a linear combination of powers of the GR cdf (Eq. (4)). The pdf corresponding to (4) is given by

f(x) = 2αβ²δ x e^{−(βx)²} (1 − e^{−(βx)²})^{αδ−1} [1 + λ − λ(1 − e^{−(βx)²})^α]^{δ−1} [1 + λ − 2λ(1 − e^{−(βx)²})^α].   (5)

Using the series expansion again, the pdf in (5) can be expressed in a mixture form (Eq. (6)). Plots of the pdf for selected parameter values are given in Figure 1. The ETGR distribution is a very flexible model that approaches different distributions when its parameters are changed, and it includes several well-known probability distributions as special cases, as illustrated in Corollary 1.

Corollary 1: If X is a random variable with pdf in (5), then several sub-models arise as special cases; for instance, setting δ = α = 1 yields the transmuted Rayleigh distribution TR(β, λ).

Statistical Properties

The statistical properties of the ETGR distribution, including the quantile function and random number generation, moments, the moment generating function, incomplete moments, mean deviations and the Rényi entropy, are discussed in this section.

Quantile and Random Number Generation

The quantile function (qf), say x_q, of X is the real solution of the equation F(x_q) = q. Inverting (3) gives (for λ ≠ 0; for λ = 0, u = q^{1/δ})

x_q = (1/β) √(−ln(1 − u^{1/α})), with u = [(1 + λ) − √((1 + λ)² − 4λ q^{1/δ})] / (2λ).   (7)

Putting q = 0.5 in Equation (7) gives the median of X. Simulating the ETGR random variable is straightforward: if U is a uniform variate on the unit interval (0, 1), then the random variable X = x_U at q = U follows (5).

Moments

The rth moment of X, denoted by μ'_r, is given by the following theorem.

Theorem 1: If X is a continuous random variable with the ETGR(α, β, λ, δ) distribution, then its rth moment is given by Equation (8). The proof follows by substituting the mixture form (6) into the definition of μ'_r and integrating term by term (Equations (9) and (10)). The first and second moments of the ETGR random variable are obtained by setting r = 1, 2, respectively, in Equation (8), and the variance follows from the relation Var(X) = μ'_2 − (μ'_1)². Based on Theorem 1, the coefficient of variation, coefficient of skewness and coefficient of kurtosis of the ETGR(α, β, λ, δ) distribution can be obtained according to the well-known relations.

Corollary 2: Using the relation between the central and the non-central moments, we can obtain the nth central moment, denoted by M_n, of an ETGR random variable.

Generating Function

The moment generating function (mgf) of the ETGR distribution is given by the following theorem.
Theorem 2: If X is a continuous random variable with the ETGR(α, β, λ, δ) distribution, then its mgf M_X(t) is obtained by substituting Equation (8) into Equation (11), which completes the proof. The measures of central tendency and dispersion, the coefficient of variation, the coefficient of skewness and the coefficient of kurtosis of X can be obtained from the relation in Theorem 2.

Incomplete Moments

The main application of the first incomplete moment refers to the Bonferroni and Lorenz curves. These curves are very useful in economics, reliability, demography, insurance and medicine. The answers to many important questions in economics require more than just knowing the mean of the distribution, but its shape as well. This is obvious not only in the study of econometrics but in other areas as well. The sth incomplete moment of X can be computed using Equation (6) and the lower incomplete gamma function. Another application of the first incomplete moment is related to the mean residual life and the mean waiting time. The amount of scatter in a population is evidently measured to some extent by the totality of deviations from the mean and the median. The mean deviations about the mean and about the median are obtained from the first incomplete moment in (12) by setting s = 1 and evaluating at t = μ'_1 and at t = M, where M is the median of X.

Rényi and q-Entropies

Entropy refers to the amount of uncertainty associated with a random variable. The Rényi entropy has numerous applications in information theoretic learning, statistics (e.g. classification, distribution identification problems, and statistical inference), computer science (e.g. average case analysis for random databases, pattern recognition, and image matching) and econometrics; see Källberg et al. (2014). The Rényi entropy of a random variable X represents a measure of variation of the uncertainty and is defined by

I_R(ρ) = (1/(1 − ρ)) log ∫ f(x)^ρ dx, ρ > 0, ρ ≠ 1.

Therefore, using Equation (6), the Rényi entropy of the ETGR random variable follows. The q-entropy, say H_q(X), is defined by

H_q(X) = (1/(q − 1)) log(1 − ∫ f(x)^q dx), q > 0, q ≠ 1.

Order Statistics

The pdf of the jth order statistic, the pdf of the smallest order statistic, the joint pdf of two order statistics and, in particular, the joint pdf of the minimum and maximum order statistics of the ETGR distribution can be written in terms of the mixture form (6); the explicit expressions follow from the standard order-statistics formulas.

Reliability Analysis

In this section we introduce the reliability function, the hazard rate function, the cumulative hazard rate function, the reversed hazard rate function, and the moments of the residual life and of the reversed residual life for the ETGR(α, β, λ, δ) distribution.

The Reliability, Hazard Rate, Reversed Hazard Rate and Cumulative Hazard Rate Functions

The rf, also known as the survival function, is the probability of an item not failing prior to some time t; it is defined by R(x) = 1 − F(x), with F(x) the ETGR cdf in (3). The other characteristic of interest of a random variable is the hrf, also known as the instantaneous failure rate. The hrf, h(x), is an important quantity characterizing life phenomena; it can be loosely interpreted as the conditional probability of failure, given that the item has survived to time t. The hrf of the ETGR distribution is defined by

h(x) = f(x) / R(x) = f(x) / [1 − F(x)].

It is important to note that the units of h(x) are the probability of failure per unit of time or cycles. These failure rates are defined with different choices of parameters. Plots of the hazard rate function of the ETGR distribution for selected parameter values are provided in Figure 2.
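As a concrete illustration of the quantile function and the inverse-transform simulation of Section 3, the sketch below inverts the ETGR cdf in closed form; it assumes the cdf of Eq. (3) as written above.

```python
import numpy as np

def etgr_quantile(q, alpha, beta, lam, delta):
    # Invert F(x) = q: undo the exponentiation, solve the quadratic
    # lam*u^2 - (1 + lam)*u + p = 0 coming from the transmutation
    # (u is the GR baseline cdf value), then invert the GR kernel.
    p = q ** (1.0 / delta)
    if abs(lam) < 1e-12:
        u = p
    else:
        u = ((1.0 + lam) - np.sqrt((1.0 + lam) ** 2 - 4.0 * lam * p)) / (2.0 * lam)
    return np.sqrt(-np.log(1.0 - u ** (1.0 / alpha))) / beta

def etgr_sample(n, alpha, beta, lam, delta, seed=0):
    # Inverse-transform sampling: X = x_U with U ~ Uniform(0, 1).
    rng = np.random.default_rng(seed)
    return np.array([etgr_quantile(u, alpha, beta, lam, delta)
                     for u in rng.uniform(size=n)])

print(np.round(etgr_sample(5, alpha=2.0, beta=1.5, lam=0.5, delta=3.0), 4))
```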
Moments of the Residual Life

Several functions are defined related to the residual life: the failure rate function, the mean residual life function and the left censored mean function, also called the vitality function. It is well known that these three functions uniquely determine F(x) (see Gupta (1975), Kotz and Shanbhag (1980)). The kth moment of the residual life of the ETGR distribution follows from the mixture form (6). Another interesting function is the mean residual life function (MRL), which represents the expected additional life length for a unit which is alive at age x. The MRL of the ETGR distribution can be obtained by setting k = 1 in the corresponding moment expression. Guess and Proschan (1988) gave an extensive coverage of possible applications of the mean residual life. The MRL has many applications in survival analysis in the biomedical sciences, life insurance, maintenance and product quality control, economics and social studies, demography and product technology (see Lai and Xie (2006)).

Moments of the Reversed Residual Life

The kth moment of the reversed residual life of X follows similarly. The mean waiting time (MWT), also known as the mean reversed residual life function, represents the waiting time elapsed since the failure of an item, on condition that this failure had occurred in (0, x). The MWT of the ETGR distribution can be obtained by setting k = 1 in the corresponding expression.

Estimation and Inference

The maximum likelihood estimators (MLEs) of the parameters of the ETGR distribution are discussed in this section. Consider a random sample X_1, X_2, …, X_n of size n from this distribution with unknown parameter vector Θ = (α, β, λ, δ). Then the log-likelihood function, say ℓ = ln L(Θ), becomes

ℓ = n(ln 2 + ln α + ln δ + 2 ln β) + Σ_i ln x_i − Σ_i (βx_i)² + (αδ − 1) Σ_i ln(1 − e^{−(βx_i)²}) + (δ − 1) Σ_i ln[1 + λ − λ(1 − e^{−(βx_i)²})^α] + Σ_i ln[1 + λ − 2λ(1 − e^{−(βx_i)²})^α].   (13)

Equation (13) can be maximized either directly by using R (the optim function), SAS (PROC NLMIXED) or the Ox program (sub-routine MaxBFGS), or by solving the nonlinear likelihood equations obtained by differentiating (13). The score vector is U(Θ) = (∂ℓ/∂α, ∂ℓ/∂β, ∂ℓ/∂λ, ∂ℓ/∂δ)ᵀ. The maximum likelihood estimator Θ̂ = (α̂, β̂, λ̂, δ̂) of Θ = (α, β, λ, δ) is obtained by solving the nonlinear system of equations (14) through (17). These equations cannot be solved analytically, and statistical software can be used to solve them numerically by means of iterative techniques such as the Newton-Raphson algorithm. For the four-parameter ETGR distribution all the second-order derivatives exist. For interval estimation of the model parameters, we require the 4 × 4 observed information matrix J(Θ) = {J_ij} with J_ij = −∂²ℓ/∂θ_i∂θ_j for θ_i, θ_j ∈ {α, β, λ, δ}, whose elements are derived in Appendix A.
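A minimal sketch of the direct maximisation of (13), analogous to the R optim route mentioned above, using the log-density implied by the pdf (5) as reconstructed earlier; the data values and starting point are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def etgr_logpdf(x, alpha, beta, lam, delta):
    # Log of the pdf in Eq. (5); z is the GR kernel 1 - exp(-(beta*x)^2).
    z = 1.0 - np.exp(-(beta * x) ** 2)
    return (np.log(2.0 * alpha * delta) + 2.0 * np.log(beta) + np.log(x)
            - (beta * x) ** 2
            + (alpha * delta - 1.0) * np.log(z)
            + (delta - 1.0) * np.log1p(lam - lam * z ** alpha)
            + np.log1p(lam - 2.0 * lam * z ** alpha))

def neg_loglik(theta, data):
    alpha, beta, lam, delta = theta
    if min(alpha, beta, delta) <= 0.0 or abs(lam) > 1.0:
        return np.inf   # outside the parameter space
    return -np.sum(etgr_logpdf(data, alpha, beta, lam, delta))

data = np.array([1.9, 2.2, 2.5, 2.7, 3.0, 3.1, 3.3])   # placeholder sample
res = minimize(neg_loglik, x0=[1.0, 1.0, 0.0, 1.0], args=(data,),
               method="Nelder-Mead")
print(res.x)   # MLEs of (alpha, beta, lam, delta)
```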
Applications

In this section we provide two applications of the ETGR distribution to two real data sets. The first data set is strength data originally reported by Badar and Priest (1982); it represents the strength, measured in GPa, of single carbon fibers and impregnated 1000-carbon fiber tows. Single fibers were tested under tension at gauge lengths of 10 mm (data set 1) and 20 mm (data set 2), with sample sizes n = 63 and m = 74, respectively. Several authors have analyzed these data sets. Surles and Padgett (1998, 2001) and Raqab and Kundu (2005) observed that the generalized Rayleigh distribution works quite well for these strength data. Kundu and Gupta (2006) analyzed these data sets using the two-parameter Weibull distribution after subtracting 0.75 from both data sets; with this shift, they fitted Weibull distributions with equal shape parameters to both data sets. These two data sets were also studied by Rao (2014) for the estimation of reliability in a multicomponent stress-strength model based on the generalized Rayleigh distribution. Here I would like to mention that the exact sample size of data set 2 is 74, not 69 as mentioned in Kundu and Gupta (2009). We use these data to compare the proposed ETGR model with the TGR, GR and R distributions. The first data set (gauge lengths of 10 mm) is taken from Kundu and Raqab (2009). The fitted models are compared using the goodness-of-fit statistics AIC = −2ℓ̂ + 2k, BIC = −2ℓ̂ + k ln n and CAIC = −2ℓ̂ + 2kn/(n − k − 1), where k is the number of parameters and n is the sample size.

Figure 1: Plots of the density function for some parameter values.

Figure 2: Plots of the hrf for some parameter values.

The observed information matrix can be used to construct approximate confidence intervals for the model parameters, using the appropriate percentile of the standard normal distribution. Table 1 lists the MLEs of the model parameters for the ETGR, TGR, GR and R distributions for the first data set; the corresponding standard errors are given in parentheses. Table 2 lists the MLEs of the model parameters for the ETGR, TGR, GR and R distributions for the second data set, with the corresponding standard errors in parentheses. In these tables we compare our new model with its sub-models. Tables 1 and 2 compare the ETGR model with the TGR, GR, and Rayleigh distributions. We note that the ETGR model gives the lowest values of the AIC, BIC and CAIC statistics among all fitted models. So, we conclude that the ETGR distribution provides a better fit to these data sets.

Appendix A

The elements of the observed information matrix are given by the second-order derivatives of ℓ with respect to the parameters.
3,654
2015-04-06T00:00:00.000
[ "Mathematics", "Computer Science" ]
Creation of Tissue-Engineered Urethras for Large Urethral Defect Repair in a Rabbit Experimental Model
Introduction: Tissue engineering is a potential source of urethral substitutes to treat severe urethral defects. Our aim was to create tissue-engineered urethras by harvesting autologous cells obtained by bladder washes and then using these cells to create a neourethra in a chronic large urethral defect in a rabbit model. Methods: A large urethral defect was first created in male New Zealand rabbits by resecting an elliptic defect (70 mm²) in the ventral penile urethra and then letting it settle as a chronic defect for 5-6 weeks. Urothelial cells were harvested noninvasively by washing the bladder with saline and isolating the urothelial cells. Neourethras were created by seeding urothelial cells on a commercially available decellularized intestinal submucosa matrix (Biodesign® Cook-Biotech®). Twenty-two rabbits were divided into three groups. Group-A (n = 2) was a control group (urethral defect unrepaired). Group-B (n = 10) and group-C (n = 10) underwent on-lay urethroplasty, with unseeded matrix (group-B) and urothelial cell-seeded matrix (group-C). Macroscopic appearance, radiology, and histology were assessed. Results: The chronic large urethral defect model was successfully created. Stratified urothelial cultures attached to the matrix were obtained. All group-A rabbits kept the urethral defect size unchanged (70 ± 2.5 mm²). All group-B rabbits presented urethroplasty dehiscence, with a median defect of 61 mm² (range 34-70). In group-C, five presented complete correction and five almost total correction with fistula, with a median defect of 0.3 mm² (range 0-12.5), a significantly better result (p = 7.85 × 10⁻⁵). Urethrography showed more fistulas in group-B (10/10, versus 5/10 in group-C) (p = 0.04). No strictures were found in any of the groups. Group-B histology identified the absence of the ventral urethra in unrepaired areas, with squamous cell metaplasia at the edges toward the defect. In group-C repaired areas, a ventral multilayer urothelium was identified, with cells staining for the urothelial cell marker cytokeratin-7. Conclusions: The importance of this study is that we used a chronic large urethral defect animal model and clearly found that cell-seeded transplants were superior to nonseeded ones. In addition, bladder washing was a feasible method for harvesting viable autologous cells in a noninvasive way. There is a place for considering tissue-engineered transplants in the surgical armamentarium for treating complex urethral defects and hypospadias cases.

INTRODUCTION
Hypospadias is a common congenital malformation caused by a default in normal penis development, resulting in a defect of the ventral urethra, with the urethral meatus located below its normal position, usually associated with a deficiency of the ventral prepuce and an abnormal penile curvature (1). It occurs in approximately 1 in 150 to 300 live births (1-3). Hypospadias treatment is surgical. Mild case (distal hypospadias) repair is usually successful with numerous techniques, but severe case (proximal hypospadias) treatment may be challenging due to the lack of healthy tissue for the urethral reconstruction. In these cases, multiple urethral tissue substitutes have so far been described for creating a neourethra, such as the inner prepuce, buccal mucosa, bladder mucosa, and postauricular grafts, among others (4-9). These substitutes have well-documented side effects such as donor site morbidity, present mechanical and biological differences compared with the native urethra, and sometimes may not even be available due to multiple previous surgical interventions. In theory, urethral tissue engineering could be a solution to the problems related to traditional urethral substitutes, by offering an off-the-shelf neourethra with the same properties as the native urethra. Neourethras have been created using two main strategies involving scaffolds (10-12). In one strategy, acellular scaffolds, either of natural origin or synthetic (or a hybrid), have been used in both animal and clinical studies, with favorable results for the correction of small defects surrounded by a good vascular bed (13-17).
In a second strategy for neourethra regeneration, cell-seeded scaffolds, created by seeding autologous cells on scaffolds, have demonstrated superior results for the correction of bigger defects compared with acellular scaffolds, with increased vascularization and decreased inflammation and fibrosis (12, 18-20). Several cell types have been used for urethral reconstruction, the most common being autologous urothelial cells (21), buccal mucosa cells (22), keratinocytes (23), fibroblasts (24), and smooth muscle cells (25). These cells can be obtained by biopsy of the tissue or, as in the case of urothelium, by a noninvasive method such as bladder washes (26-28). In a clinical study comprising six patients with severe hypospadias, bladder washing was used as the source for autologous urothelial cell harvesting (29, 30). However, the application of tissue-engineered neourethras in clinical practice is still scarce. Translation to the clinic is limited by the lack of an ideal neourethra and the complex methods involved in its development (11, 12, 31). Simplification of its creation processes could allow its wide application in patients. Our aim was to create a straightforward tissue-engineered urethral construct by seeding a porcine small intestine submucosa scaffold with urothelial cells obtained by bladder washes and to test it in a chronic large urethral defect in a rabbit model.

Ethical Considerations
This study was approved by the ethical committee at the Hospital Universitario La Paz (CEBA-08-2016) and the Madrid Community Environmental Concierge (PROEX-186/16).

Creation of the Chronic Large Urethral Defect Model
Twenty-two adult (16 weeks old) giant New Zealand rabbits (Oryctolagus cuniculus), weighing 4.5 (4, 5) kg, were operated on under general anesthesia to create a chronic urethral defect simulating the large urethral defects of proximal hypospadias. An elliptic segment of approximately 70 mm² (18 mm at its longest and 5 mm at its widest) was resected from the ventral urethra, including the urethral mucosa, subcutaneous tissue, and skin, preserving the glans (the glans was left intact in the model to avoid the need for a urethral catheter due to edema and the risk of acute urinary retention after the urethral repair). The edges of the defect were sutured to join the skin with the urethral mucosa and create a stable elliptic defect (Figure 1). The well-being of the subjects was evaluated with a specific animal supervision protocol. A welfare scale of 0-12 points was used, where 0 points corresponded to a normal status and 12 points was the endpoint criterion requiring early termination of the animal according to principles of animal ethics (Table 1). After 5-6 weeks, the reproducibility of the model and its stability over time were studied by macroscopic evaluation of the presence of inflammation and infection in the penis and by measuring the area of the urethral defect. This area was calculated using the SketchAndCalc™ application, which assesses the surface from a photograph with a reference scale. In two of the rabbits, the model was left intact to evaluate long-term stability after 3 months. In this group, the area of the defect was remeasured with the application, and voiding cystourethrogram and histological analysis were performed.

Creation of the Tissue-Engineered Urethra
Urethral tissues were created with autologous urothelial cells obtained with a noninvasive method as described before (26-28).
In brief, autologous cells were harvested from bladder washes, under general anesthesia, just before performing the surgery to create the urethral defect. To achieve the bladder washes, the urethra was catheterized with an 8 Ch Foley catheter, the urine in the bladder was extracted, and several bladder lavages were performed by introducing 50 ml of physiological saline into the bladder and extracting the same volume, until a total of 300 ml of fluid was collected (six to eight times). The bladder wash fluid was centrifuged at 1,500 rpm for 10 min and washed twice using Dulbecco's modified Eagle's medium (DMEM). Thereafter, the pellet was resuspended in 3 ml of CnT-Prime® culture medium (Cell-n-Tech, Bern, Switzerland) supplemented with fetal bovine serum (Fisher Scientific S.L., Madrid, Spain) and antibiotics (Penicillin-Streptomycin Mixture 5,000 U/5,000 µg, Lonza, Barcelona, Spain). The final cell suspension was cultured in a 10-cm² cell culture well coated with a Human Recombinant Laminin Mixture (Biolaminin 521 LN 5 µg/ml and Biolaminin 511 LN 5 µg/ml, Biolamina AB, Sundbyberg, Sweden). At subconfluence (70-80% of confluence), the cells were detached using accutase (CnT-Accutase-100, Cell-n-Tech, Bern, Switzerland) and seeded on a scaffold of decellularized porcine small intestinal submucosa matrix (SIS matrix, Biodesign® 1-layer tissue graft, Cook Biotech Europe APS, Bjaeverskov, Denmark) with CnT-Prime® culture medium without FBS. After 24 h, the tissue constructs were transferred onto an air-liquid interface using Transwell inserts (Falcon® Permeable Supports for 6-well Plate with 1.0 µm Transparent PET Membrane, Corning Optical Communications, S.L.U., Madrid, Spain) in order to stimulate the stratification of the epithelium, and thereafter cultivated for 3 weeks. The constructs created had a rectangular shape of approximately 30 × 7 mm, according to the size of the SIS matrix used. A small piece of each construct (10 × 7 mm) was used for histology and immunoassay, and the remaining (20 × 7 mm) segment of each construct was used to perform reconstructive urethroplasty in the hypospadias model.

Urethral Defect Repair
The 22 rabbits were divided into three groups (Table 2): Group-A or control group (two rabbits), in which no reconstructive urethroplasty was performed and the model was left intact; Group-B or SIS matrix urethroplasty group (10 rabbits), in which reconstructive urethroplasty was performed with the porcine small intestine submucosa matrix (Biodesign® 1-layer tissue graft, Cook Biotech Europe APS, Bjaeverskov, Denmark) without cells; and Group-C or urethral tissue urethroplasty group (10 rabbits), in which reconstructive urethroplasty was performed with the urethral tissue constructs created by seeding the SIS matrix with autologous urothelial cells. Urethroplasty repair was performed approximately 5-6 weeks after the creation of the urethral defect model, to ensure good healing and to allow the cell culture to be carried out. A prophylactic antibiotic, ceftriaxone (20 mg/kg), was administered before the procedure. During the reconstruction, an 8 Ch urethral catheter was placed into the bladder, and the penile skin was mobilized.
The material chosen for urethroplasty according to the treatment group (SIS matrix alone or urethral tissue) had dimensions of approximately 20 mm × 7 mm, corresponding approximately to the size of the urethral defect. The material was implanted in an "on-lay" fashion, placing it on the ventral side of the hypospadias defect to complete the urethral cylinder. Fixation of the material was performed with a 6-0 poliglecaprone absorbable monofilament suture (Monocryl®). The urethroplasty was covered with the adjacent penile skin, sutured in the ventral midline, avoiding suture overlapping. The urethral catheter was removed at the end of the procedure, due to the tendency of rabbits to bite and extract catheters. Intramuscular analgesia with meloxicam (1 mg/kg) was administered every 24 h for 2 days. The welfare scale specific to the supervision of the rabbits was applied during the postoperative period. The results of the urethroplasty were evaluated 4 weeks after the reconstructive surgery. The rabbits were examined under general anesthesia. Macroscopic examination of the penis was done by a blinded external evaluator to determine whether correction of the urethral defect had been achieved, to assess the existence of curvatures of the penis, and to evaluate the aesthetic appearance using a subjective aesthetic scale of 0-10 points (0 being the worst appearance, 10 the best). Photographs were taken to measure the area of the urethral defect in cases of urethral fistula or dehiscence, and this area was calculated using the SketchAndCalc™ application. Voiding cystourethrogram was also performed by filling the bladder with iodinated contrast, to determine and document the caliber of the urethra and the presence of strictures or fistulas. After full bladder filling, the animals presented with penile erection, which allowed us to confirm the absence of penile curvatures. Afterward, the animals were terminated, and histological and immunohistochemical examinations were carried out.

Histology and Immunohistochemistry
The urethral tissue constructs and the rabbit penises were fixed in buffered formalin, dehydrated in an ascending series of ethanol, and finally embedded in paraffin. Transversal sections of 7 µm of the urethral tissues and axial sections of 5 µm of the rabbit penises were processed and stained with hematoxylin and eosin (H&E). The presence of urothelial cells and epithelial stratification were evaluated. Urothelial cells were characterized using a monoclonal mouse anti-cytokeratin 7 antibody (CK7 Clone OV-TL 12/30; Dako Omnis®, Dako, Glostrup, Denmark). Immunolabeling was performed using the horseradish peroxidase (HRP) detection system, visualized with 3,3′-diaminobenzidine (DAB) chromogen (EnVision FLEX+ mouse, high pH K8002 secondary antibody; Dako Omnis®), and sections were counterstained with hematoxylin. In total, two H&E slides and two CK7 slides per tissue construct and per penis were analyzed.

Statistical Analysis
Numeric data were analyzed using SPSS (SPSS Inc., Chicago, IL, USA). Outcomes of groups A, B, and C were compared. Categorical data were compared using the chi-squared test. Continuous data were presented as median and range and compared using the Mann-Whitney U-test. Normally distributed continuous data were presented as mean and standard deviation. Differences were considered statistically significant at p-values < 0.05.

Chronic Large Urethral Defect Model
All subjects survived the intervention.
All presented with mild hematuria, self-limited to the first postoperative hours, and minimal discomfort [1 (0-2) point on the welfare scale]. No change in weight was observed. There were no signs of inflammation or infection in the surgical area, or injuries to the adjacent tissues. In all rabbits, the urethral defect was maintained without reclosure. The mean area of the defect at 5 weeks was 70.3 ± 2.5 mm². In the two rabbits of group-A, in which the model was left intact, there were no changes in the created defect, neither in its size (p = 0.35) nor in the characteristics of the tissues, at 3 months after surgery (Figure 1C). In the cystourethrogram, a large urinary leak was identified at the site of the defect, with a normal posterior urethra and no signs of strictures. In the histological study with H&E, a wide hypospadias-like defect was found on the ventral aspect of the penis, with a keratinized stratified epithelium on the edges of the defect and a urothelial-type epithelium on the dorsal aspect and lateral faces of the native urethra, with a transition to the keratinized epithelium next to the edges of the defect. No signs of inflammation were evident. The vascularization of the edges of the defect was adequate, and there was no significant fibrosis.

Establishment of the Tissue-Engineered Urethra
Urothelial cultures were established in all rabbits by the bladder washing method. In 7 of the 22 rabbits (31.82%), it was necessary to repeat the washing once, since no colonies were formed within the first 14 days. The average number of washes per rabbit was 1.32 (±0.48). No more than two washes were necessary in any case. The final cell suspension from the bladder washing contained a mixture of different cell types (red blood cells, white blood cells, and urothelial cells, among others). A volume of 2 ml of cell suspension was seeded in the laminin-coated wells, achieving good adhesion of the urothelial cells to the well. The remaining cells were washed out with the medium changes. The first urothelial cell colonies appeared at a median of 8 (range 4-14) days after seeding. The cultures reached cell subconfluence (Figure 2A) at a median of 15 (range 11-21) days. The cells were passaged and seeded on the SIS matrix, with good adhesion to the matrix in all samples. The culture was carried out at an air-liquid interface for a median of 21 (range 18-23) days and developed stratification of the epithelium in all cases (Figures 2B,C). In all cases, a stratified epithelium of three to five cell layers was found, similar to the epithelium of the native rabbit urethra (Figure 2E), without metaplasia or cell atypia (Figure 2C). The immunoassay study demonstrated CK7-stained cells (Figure 2D).

Urethroplasty
Urethroplasty was performed in group-B (10 rabbits) and group-C (10 rabbits), a median of 40 (range 30-49) days after the first intervention to create the chronic large urethral defect (Figure 3). No urethral repair was performed in group-A (control). The procedure was well tolerated by the subjects, requiring analgesia for the first three postoperative days due to minor discomfort [score of 1 point (range 0-3) on the welfare scale]. All presented with mild and self-limited hematuria. There were no episodes of urine retention. No animals needed preterm termination. Surgical handling characteristics and resistance to manipulation were comparable between the SIS matrix alone and the tissue-engineered urethra (TEU).
There were no breaks, tears, or deformations of either material during the procedure. The face of the construct containing the cells was easily identified with magnifying glasses and was faced toward the lumen of the created neourethra. In the macroscopic examination of the penises 4 weeks after surgery, all rabbits in group-B (10/10) presented dehiscence of the urethroplasty or a large fistula (Figure 3C). In group-C, 5/10 rabbits presented a complete urethral repair without fistulas or stenosis, and 5/10 presented an almost total repair but with a small urethral fistula (Figure 3C′). The median calculated area of the urethral defect after urethroplasty was 61.1 mm² (range 34.0-70.5) in group-B and 0.3 mm² (range 0-12.5) in group-C (Table 3), a statistically significant difference (p = 7.85 × 10⁻⁵). No penile curvatures were identified in any samples. Regarding the aesthetic aspect, this was deficient in group-B, with a score of 2 points (range 1-4) on the subjective aesthetic scale. However, in group-C, the aesthetic result was favorable, with a median score of 8 points (range 5-9) (p = 7.85 × 10⁻⁵). In the rabbits of group-A, in which the model was left intact, there were no changes in the created defect, neither in its size nor in the characteristics of the tissues, at 3 months after model creation. In the radiological evaluation using voiding cystourethrogram, the cases with fistula were confirmed. In group-B, 10/10 rabbits presented with urinary fistula (Figure 3D), compared with 5/10 in group-C (Figure 3D′); the difference was statistically significant (p = 0.04). No strictures or diverticula were identified in any animals. In the histological examination, all rabbits in group-B had an absence of the ventral urethra in the areas of uncorrected defect, with continuity between the dorsal urethra and the penile skin and areas of squamous metaplasia in the lateral sites of the urethra (Figures 3E,F). In group-C, rabbits with total correction of the urethral defect had a ventral urethra with multilayered urothelial tissue and subcutaneous and cutaneous tissue; in the rabbits with fistula, the ventral urethra had some squamous metaplasia, with the rest of the urethra repaired with normal urothelium (Figures 3E′,F′). The presence of vascular and connective tissues in the repaired urethral areas was identified. No atypia or tumors were recognized in any of the samples.

Immunoassay

DISCUSSION
In this study, we can summarize three main findings: we were successful in establishing a reproducible, chronic, large urethral defect animal model in rabbits; tissue-engineered urethras (TEUs) were successfully created, using the bladder washing technique as the source of cells and the SIS matrix as scaffold; and reconstructive urethroplasty using TEUs was superior to the use of acellular SIS matrix scaffolds in the rabbit model. In the model of the study, a large ventral urethral defect was created and found to be stable in the long term (during 5 weeks in all rabbits and up to 3 months in the control rabbits of group-A). In addition, the variation in the size of the urethral defect between animals was minimal, demonstrating the reproducibility of the model. The rabbit model is one of the best animal models for the study of urethroplasty techniques due to its anatomical characteristics (32-34), but there is a lack of an ideal animal model that truly resembles human congenital urethral defects (12).
In human hypospadias, the urethral deficiency is usually associated with a defect of the corpus spongiosum, which results in poor vascularization of the wound bed (35). Hypospadias animal models usually have plenty of urethral tissue with good vascularization, which could give better results in experimental studies compared with the clinical practice. For this reason, in this study we tried to create a hypospadias animal model with a large urethral defect, to reduce the abundant urethral tissue. None of the animals presented urethral strictures, which demonstrates that the urethral defect was large enough to avoid re-closure of the defect by the adjacent tissue. Compared with other experimental studies of urethral reconstruction, our model is unique, as the urethral defect in our study was prepared 4-5 weeks before the planned reconstruction to allow the urethral defect to settle as chronic and to simulate the conditions under which urethral repair normally takes place. Previously, in most tissue-engineered urethral experimental studies, the urethral defect was created intraoperatively at the time of reconstruction (10-12). The stratified TEUs created in this study have some advantages compared with other types of tissue-engineered urethral constructs previously described (10, 36-38) that make them ideal for translation to the clinical setting. The SIS matrix scaffold is a commercially available biomaterial approved for human use since 2004, and its safety in clinical practice has been extensively proven (39-43). Cell harvesting by the bladder washing procedure has the advantages of low morbidity, avoiding donor site lesions, and offering the possibility of repeating the procedure if needed (26-30). In clinical practice, bladder washes could be accomplished in the outpatient clinic in older patients. The cell culture and expansion techniques, which avoid feeder cells or any other substance not approved for human application, are also an advantage of our construct. The only substance that would need to be substituted in a human setting is the fetal bovine serum (Fisher Scientific S.L., Madrid, Spain) in the cell culture medium, which may be replaced by the patient's serum. Furthermore, with our simple culture techniques, a multilayer construct was accomplished without requiring expensive bioreactors or complex laboratory machines. The cultures could be prepared in standard laboratories accredited for Good Manufacturing Practice for human application. Reconstructive urethroplasty using TEUs was superior to the use of acellular scaffolds in our study. The urethroplasty using our cell-seeded constructs allowed complete repair in half of the cases, with only small fistulas in the other half, even though no catheter was left in the animals. In a clinical scenario, where a urethral catheter may be used, fistula appearance may be further reduced. Also, the cell-seeded constructs allowed better healing of the urethral defect, which also improved the cosmetic appearance of the repair. This result is concordant with previous evidence on this topic in preclinical studies, in which cell seeding of tissue-engineered constructs leads to a significant reduction in side effects after urethral repair and gives superior results (10-12, 18-20, 37, 38).
However, the translation of cell-seeded constructs to clinical practice has been scarce, and interpretation of the results of clinical studies is difficult due to small cohort sizes, short-term results, and the fact that most are low-evidence studies without randomization or a control group (24, 25, 29, 30, 44, 45). In other animal studies, a fully tubularized repair has been described (18, 20), while in clinical scenarios a common method would be inlay or onlay procedures (24, 25, 29, 30, 44, 45). It is important to mention that we performed the urethral repair in an onlay fashion to resemble the situation of poor vascularization found in real human hypospadias, but in clinical practice we would recommend using this type of construct in an inlay fashion, as a two-staged procedure, to ensure optimal vascularization of the construct. We believe that tubularized tissue-engineered constructs would only be an option if a vascular layer or arterial pedicle were incorporated into the construct; otherwise, the construct would have difficulties surviving in a human scenario with a poorly vascularized wound bed. Strictures were not found in any of our specimens. It is possible that the SIS matrix, a naturally derived decellularized matrix, reduces the development of fibrosis compared with synthetic materials. In studies with dermal matrices, it has been demonstrated that natural matrices have superior results compared with synthetic matrices, as the latter seem to induce more foreign body reactions, including giant cell formation (46). A limitation of the study was the lack of long-term results, something that cannot be obtained in a translational manner in an animal study. Further elucidation would need to take place in a human model. The safety of tissue transplants must be proven to ensure the absence of development of atypia or neoplasia. Also, the stability of the urethral repair over a longer period must be proven. Another limitation is that the animal model does not fully resemble human hypospadias. The corpus spongiosum is normal in our model, and the glans was preserved. This provided better vascularization than in clinical practice, in which most of the theoretical target patients for tissue-engineered urethral constructs are complex cases with crippled penises and a poor vascular bed. Finally, the avoidance of a urethral catheter in our model was due to technical reasons and could have increased the occurrence of urine leaks and fistula formation in all study groups.

CONCLUSIONS
The chronic large urethral defect model created was harmless to the rabbit, reproducible, and stable over time, and may therefore be suitable for the further development of urethroplasty models and tissue engineering techniques. Bladder washing was a reliable source of viable urothelial cells for culture. Urothelial cell seeding on an SIS matrix allowed the creation of urethral tissues with a stratified epithelium similar to a native urethra and suitable for urethroplasty. The use of tissue-engineered urethras for urethroplasty in the rabbit model was feasible and presented macroscopically, radiologically, and histologically superior results compared with the SIS matrix alone.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by the ethical committee at Hospital Universitario La Paz (CEBA-08-2016) and the Madrid Community Environmental Concierge (PROEX-186/16).

AUTHOR CONTRIBUTIONS
MA, MF, and PL contributed to the conception and design of the study. MA, BS, CC, and MF developed the tissue-engineered urethras. MA, SR, RL, and MM created the hypospadias animal model and performed the urethral repair procedures in the animal model. MA and BS performed the histology and immunohistochemistry. MA, SR, RL, and MM organized the database. MA performed the statistical analysis. MA, MF, and PL wrote the first draft of the manuscript. CC, MM, SR, and RL wrote sections of the manuscript. All authors contributed to manuscript revision and read and approved the submitted version.

FUNDING
The project described was supported by the Foundation for Investigation in Urology (Fundación para la Investigación en Urología-FIU) of the Spanish Association of Urology, through the Pedro Cifuentes Diaz grant. The Novo Nordisk Foundation (NNFSA170030576) supported authors MF and CC.
6,753.6
2021-06-22T00:00:00.000
[ "Medicine", "Engineering" ]
Probabilistic prediction and context tree identification in the Goalkeeper game
In this article we address two related issues on the learning of probabilistic sequences of events. First, which features make the sequence of events generated by a stochastic chain more difficult to predict. Second, how to model the procedures employed by different learners to identify the structure of sequences of events. Playing the role of a goalkeeper in a video game, participants were told to predict step by step the successive directions—left, center or right—to which the penalty kicker would send the ball. The sequence of kicks was driven by a stochastic chain with memory of variable length. Results showed that at least three features play a role in the first issue: (1) the shape of the context tree summarizing the dependencies between present and past directions; (2) the entropy of the stochastic chain used to generate the sequences of events; (3) the existence or not of a deterministic periodic sequence underlying the sequences of events. Moreover, evidence suggests that the best learners rely less on their own past choices to identify the structure of the sequences of events.

Introduction
The aim of this work is to model the performance of a player trying to guess successive choices displayed by an electronic video game called the Goalkeeper Game (https://game.numec.prp.usp.br/). In this game, the participant, playing the role of a goalkeeper, has to guess at each trial the next direction to which the penalty kicker will send the ball. An animation feedback then shows to which direction the ball was actually sent. The sequence of kicks is selected by a stochastic chain with memory of variable length.

Stochastic chains with memory of variable length were introduced by Rissanen (1983) [1] as a universal model for data compression. Rissanen observed that very often in experimental datasets composed of sequences of symbols, each new symbol appears to be randomly selected by taking into account a sequence of past units whose length is variable and changes as a function of the sequence of past units itself. Rissanen called a context the smallest sequence of past symbols required to generate the next one. The set of contexts can be represented by a rooted and labeled oriented tree, henceforth called a context tree. The procedure to generate the sequence of symbols is defined by the context tree and an associated family of transition probabilities used to choose each next symbol, given the context associated to the sequence of past symbols at each time step. From now on, stochastic chains with memory of variable length will be called context tree models. Under suitable continuity conditions, stationary stochastic chains can be well approximated by a context tree model [2]. For that reason, they have been largely used to model biological and linguistic phenomena [3-11]. In the experimental protocol considered here, the sequences of directions chosen by the kicker have been generated by context tree models.
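To make this definition concrete, the following sketch generates a sequence from a small context tree model over {0, 1, 2}; the contexts and transition probabilities are hypothetical and chosen only for illustration, not one of the four models used in the experiment.

```python
import random

# Hypothetical context tree: contexts are the shortest past suffixes needed
# to choose the next symbol; each maps to a distribution over (0, 1, 2).
CONTEXTS = {
    (0,): (0.1, 0.1, 0.8),
    (2,): (0.2, 0.7, 0.1),
    (0, 1): (0.9, 0.05, 0.05),   # a past ending in 1 needs one more symbol
    (1, 1): (0.3, 0.3, 0.4),
    (2, 1): (0.05, 0.9, 0.05),
}

def context_of(past):
    """Return the smallest suffix of the past that is a context."""
    for depth in (1, 2):
        suffix = tuple(past[-depth:])
        if suffix in CONTEXTS:
            return suffix
    raise ValueError("no matching context")

def generate(n, seed=0):
    """Generate n symbols, redrawing each one from the distribution
    attached to the current context."""
    rng = random.Random(seed)
    seq = [1, 1]                                  # arbitrary initial past
    while len(seq) < n + 2:
        probs = CONTEXTS[context_of(seq)]
        seq.append(rng.choices((0, 1, 2), weights=probs)[0])
    return seq[2:]

print(generate(30))
```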
In the Goalkeeper game, playing the role of the goalkeeper, the volunteer was instructed to stop the penalty kicks. Obviously, the intrinsic randomness of the algorithm used by the kicker to choose the directions makes it impossible to stop all the penalty kicks. However, the full identification of the context tree and the associated family of probability distributions used by the kicker is an important asset to increase the goalkeeper's success rate. Moreover, adopting a good strategy to face the randomness of the kicker's choices might maximize the goalkeeper's success rate.

Actually, two strategies have been proposed to address the problem of making correct guesses in sequences produced by stochastic chains. The kernel of the problem is to identify the structure of the chain in spite of the intrinsic randomness of its realization (see, for instance, Schulze et al. [12] and Koehler et al. [13]). The first strategy, known as the maximizing strategy, corresponds to always choosing the outcome with the highest probability. In the second strategy, called the matching strategy, the participant tries to emulate the selection procedure used to generate the sequences of events.

In our experimental protocol an extra difficulty appears, namely the fact that the probability distributions used by the kicker depend on the successive contexts occurring in the sequence of his previous choices. This means that the goalkeeper must deal simultaneously with the problem of identifying the contexts and their associated transition probabilities, as well as with the problem of choosing a strategy. A double problem of this type was already considered by Wang et al. [14].

In this article we address two related issues. First, which features of the stochastic chain generating the sequences of events make it more difficult to predict. Second, how to model the procedures employed by different learners to identify the structure of sequences of events. This is done through a rigorous statistical procedure that identifies both the context tree and the strategy used by the goalkeeper to make his guesses. We collected data from 122 participants, each one playing the role of the goalkeeper against a kicker that used one out of four different context tree models. By analyzing their sequences of responses, we investigate whether they correctly identify the context tree model used by the kicker and which strategy they use to face the randomness of the kicker's choices.

Results
The aim of the experiment was to model the performance of a player trying to guess successive symbols displayed by an electronic video game called the Goalkeeper Game (https://game.numec.prp.usp.br/demo). Playing the role of a goalkeeper, the participant was told to guess one of the three directions to which the penalty kicker could send the ball: left, center, or right, hereafter represented by the numbers 0, 1, and 2, respectively. An animation feedback showed in which direction the ball was effectively sent (Figure 1A).

The sequences of shot directions were generated by four different context tree models (Figure 1B). Context tree models are characterized by two elements. The first element is a context tree and the second element is a family of transition probabilities indexed by the leaves of the context tree. In our experimental protocol, the four context tree models characterizing the sequences of the kicker's choices will be denoted by (τ^k_1, p^k_1), (τ^k_2, p^k_2), (τ^k_3, p^k_3) and (τ^k_4, p^k_4).
The upper index k in the above notation stands for kicker. These four context tree models are represented in Figure 1B. Sequences generated by using each of these context tree models are depicted in Figure 1C.

For a fixed context tree and two different associated families of transition probabilities, we conjecture that the context tree model with higher entropy would be more difficult to learn. For the first pair (Figure 1B, left panel), changes in the transition probabilities associated to the contexts 01 and 21 increased the entropy from 0.65 in (τ^k_1, p^k_1) to 0.81 in (τ^k_2, p^k_2). We also conjectured that, for a fixed context tree and two different associated families of transition probabilities, the one displaying a periodic structure would be easier to learn. For the second pair (Figure 1B, right panel), sequences generated by the context tree model (τ^k_3, p^k_3) can be described as a concatenation of strings 211 in which the symbol 1 is replaced by the symbol 0 with a small probability, in an i.i.d. way. For the context tree model (τ^k_4, p^k_4), the interchange of the transition probabilities associated to 01 and 21, as well as of the most probable outcome of context 2, disrupts the periodic structure displayed in (τ^k_3, p^k_3) without changing the entropy (0.54 for (τ^k_3, p^k_3) and 0.56 for (τ^k_4, p^k_4)). Finally, comparing the performance obtained with the context tree models (τ^k_1, p^k_1) and (τ^k_2, p^k_2) with that obtained with (τ^k_3, p^k_3) and (τ^k_4, p^k_4) might give an indication of whether augmenting the number of contexts increases the learning difficulty.

A total of 122 participants was divided into four groups of 30, 31, 31 and 30, respectively. Each context tree model in Figure 1B was played by a different group of participants (see section Methods). For each participant, a sample was constituted by collecting an ordered sequence of 1000 pairs in which the first element of each pair indicates the choice of the kicker at that step and the second element corresponds to that of the goalkeeper.
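From each such sequence of pairs, the performance measures analyzed in the next sections, namely the cumulative proportion of correct predictions and the proportion of correct predictions within sliding windows of length 250 paced at 150 trials (six windows for a 1000-trial session), can be computed directly; a minimal sketch, with illustrative names:

```python
import numpy as np

def cumulative_accuracy(kicker, goalkeeper):
    """Cumulative proportion of correct predictions after each trial."""
    hits = np.asarray(kicker) == np.asarray(goalkeeper)
    return np.cumsum(hits) / np.arange(1, hits.size + 1)

def windowed_accuracy(kicker, goalkeeper, width=250, step=150):
    """Proportion of correct predictions in each sliding window
    (width 250 paced at 150 trials gives 6 windows for 1000 trials)."""
    hits = np.asarray(kicker) == np.asarray(goalkeeper)
    starts = range(0, hits.size - width + 1, step)
    return [hits[s:s + width].mean() for s in starts]
```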
Time evolution of the performance per context tree model
Figure 2A shows the cumulative proportion of correct predictions across trials, per participant, for the four context tree models. An exploratory analysis of the cumulative proportion of correct predictions for models (τ^k_1, p^k_1) and (τ^k_2, p^k_2) reveals that the participants tend to lie mostly between the matching and the maximizing strategy scores as the number of trials increases. This is not the case for models (τ^k_3, p^k_3) and (τ^k_4, p^k_4). A sliding window approach was employed to further explore the temporal evolution of the participants' performance for each context tree model. Boxplots (Figure 2B) depict the distributions of the proportions of correct predictions across participants for each time window and each context tree model. For (τ^k_1, p^k_1) and (τ^k_2, p^k_2) the interquartile range is almost entirely above the matching strategy score from the third time window on, and the median of the proportion of correct predictions across participants is above the theoretical matching strategy score from the third time window on. Also, the interquartile range of the proportion of correct predictions for (τ^k_2, p^k_2) is larger than for (τ^k_1, p^k_1), suggesting a higher performance variability. For (τ^k_3, p^k_3) and (τ^k_4, p^k_4) the median of the proportion of correct predictions across participants is smaller than the theoretical matching strategy score in all time windows. In (τ^k_3, p^k_3), the third quartile of the boxplot almost reaches the theoretical matching strategy score from the fourth time window on. Results are even worse for (τ^k_4, p^k_4), as the third quartile is always clearly below the theoretical matching strategy score. Finally, there is much greater variability in the distribution of proportions of correct predictions across participants in (τ^k_3, p^k_3) and (τ^k_4, p^k_4) than in (τ^k_1, p^k_1) and (τ^k_2, p^k_2). Curiously, for (τ^k_1, p^k_1) there are some outliers in time window 6, suggesting that the performance of some participants deteriorated towards the end of the task.

Identifying the goalkeeper strategy
To identify the strategy to which a given participant was closest, we estimated, for each context tree model, a probability density of the proportion of correct predictions under the matching and under the maximizing strategy. For each participant and each window of analysis, we then compared the likelihoods that the participant's proportion of correct guesses was generated by each of the two distributions (matching vs. maximizing); see Figure 3A. Two samples of proportions of correct predictions, corresponding to a goalkeeper using the matching and the maximizing strategy, were simulated by generating 10000 kicker sequences of size 250 (the size of each window of analysis) and the corresponding response sequences. Then a kernel density estimator was used to obtain a probability density estimate for each strategy.
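A minimal sketch of this classification step, assuming the two samples of simulated strategy scores are available as arrays (the undermatching category of Figure 3B, covering scores falling below the matching regime, is omitted here for brevity):

```python
import numpy as np
from scipy.stats import gaussian_kde

def closest_strategy(score, matching_scores, maximizing_scores):
    """Assign a participant's windowed accuracy to the strategy whose
    estimated density gives it the higher likelihood (cf. Figure 3A).
    The *_scores arguments are accuracies of simulated goalkeepers
    playing each pure strategy on length-250 kicker sequences."""
    f_matching = gaussian_kde(np.asarray(matching_scores))
    f_maximizing = gaussian_kde(np.asarray(maximizing_scores))
    like_matching = float(f_matching(score)[0])
    like_maximizing = float(f_maximizing(score)[0])
    return "matching" if like_matching >= like_maximizing else "maximizing"
```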
Figure 3B depicts the proportion of participants per window of analysis that employed undermatching, matching, and maximizing strategies, per context tree model. For (τ^k_1, p^k_1) and (τ^k_2, p^k_2) the great majority of participants lay either at the matching or at the maximizing strategy in all time windows. Interestingly, for (τ^k_2, p^k_2) the proportion of participants employing the matching strategy was considerably reduced in favor of the maximizing strategy across time. For (τ^k_3, p^k_3) most participants started by employing an undermatching strategy, which was progressively succeeded by a matching strategy. Finally, for (τ^k_4, p^k_4) the undermatching strategy prevailed across time. Almost no participant achieved the maximizing strategy for (τ^k_3, p^k_3) and (τ^k_4, p^k_4).

Figure 3: A) A probability density of the proportion of correct predictions for the matching and maximizing strategies was estimated using a kernel density estimator on simulated data. For each participant and each window of analysis, the likelihood that the participant's proportion of correct guesses was generated by one of the two estimated distributions (matching, in red, vs. maximizing, in blue) is used to decide which strategy the participant is closest to. B) Proportion of participants per window of analysis that undermatched (left), matched (center), and maximized (right), per context tree model.

ANOVA of participants' performance across time windows
To assess the differences in performance between the context tree models across time windows, a statistical analysis was done using a two-way mixed ANOVA. The intrinsic randomness of each of the context tree models used to guide the choices of the kicker implies that the optimal performance associated with the maximizing strategy differs from one model to another (see top dashed lines in Figure 2A). Therefore, for the statistical analysis, the proportions of correct predictions obtained per participant and per time window were normalized using the theoretical maximizing strategy score of the corresponding context tree model. These normalized proportions of correct guesses were transformed using a logit transformation (see Supplementary Figure S1).

To eliminate outliers, a univariate linear regression model was fitted to each participant's normalized proportions of correct guesses (in logit scale) as a function of the time window. Participants displaying a negative slope in the estimated regression line were excluded from the subsequent analysis (see Supplementary Figure S2). As a consequence, the final numbers of participants per context tree model used in the analysis were 24, 24, 27 and 26, respectively.

The two-way mixed ANOVA of the participants' normalized proportions of correct guesses (in logit scale) considers the context tree model as a between-subject factor and the time window as a within-subject factor. In our case, the levels of the between-subject factor were (τ^k_1, p^k_1), (τ^k_2, p^k_2), (τ^k_3, p^k_3), (τ^k_4, p^k_4) and the levels of the within-subject factor were 1, 2, 3, 4, 5, 6. A significant interaction between the time window and the context tree model, F(8.22, 265.67) = 3.04, p = 0.003, indicated that the performance evolved differently across the four context tree models. Figure 4 shows the interaction graph of the two-way mixed ANOVA. The differences between the means at consecutive time windows per context tree model were tested to assess the performance evolution for that context tree model. A comparison of the means of the context tree models was performed per time window. To globally control the significance level of the test with multiple comparisons, the Benjamini & Hochberg correction was used [15]. For model (τ^k_1, p^k_1), the participants' performance strongly improved from the first to the second time window and then stabilized, with no further significant improvement (see Figure 4 and Supplementary Table S1 for exact p-values). Conversely, for model (τ^k_2, p^k_2), significant differences appeared up to the fourth time window, after which the performance stabilized and then presented a significant improvement at the step to the last time window (see Figure 4 and Supplementary Table S1 for exact p-values). Besides, a comparison of the (τ^k_1, p^k_1) and (τ^k_2, p^k_2) performance per time window revealed that the only significant difference occurs at the second time window. Changing the transitions associated to contexts 01 and 21 from deterministic in (τ^k_1, p^k_1) to random in (τ^k_2, p^k_2) increases the entropy of the corresponding stochastic chains from 0.65 to 0.81. As a consequence, the participants needed more time to learn the structure of the chain.
For model (τ^k_3, p^k_3), the performance of the participants improved significantly up to the fifth time window. For (τ^k_4, p^k_4), significant differences were detected only up to the third time window. Besides, (τ^k_3, p^k_3) significantly differed from (τ^k_4, p^k_4) in almost all time windows (see Figure 4 and Supplementary Table S1 for exact p-values). These results suggest that the changes made to model (τ^k_3, p^k_3) to obtain model (τ^k_4, p^k_4) imposed a significant learning difficulty on (τ^k_4, p^k_4) in comparison with (τ^k_3, p^k_3). Significant differences in performance also appeared between (τ^k_2, p^k_2) and (τ^k_3, p^k_3) for all time windows. Thus, differences in performance can be assumed to occur between {(τ^k_1, p^k_1), (τ^k_2, p^k_2)} and {(τ^k_3, p^k_3), (τ^k_4, p^k_4)}.

Does the goalkeeper identify the context tree used by the kicker?
To retrieve the structure of the context tree governing the goalkeeper's choices from the collected data, we introduce a statistical model selection procedure (see Methods, section Statistical model selection procedure), performed separately for each participant's data and each time window. Using this statistical procedure, we retrieved the context tree and the associated family of transition probabilities used by each goalkeeper.
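A minimal sketch of a BIC-based pruning of this kind is given below; it illustrates the general procedure (counting occurrences of candidate pasts, then pruning branches bottom-up whenever splitting does not pay the penalty), not the authors' exact implementation, and the penalty constant c would in practice be tuned to minimize the proportion of prediction errors, as described in the Figure 7 caption.

```python
from collections import defaultdict
from math import log

def fit_context_tree(kicker, goalkeeper, max_depth=4, alphabet=(0, 1, 2), c=1.0):
    """Prune the complete tree of candidate pasts (up to max_depth) with a
    BIC-style criterion; returns the estimated contexts (tree leaves)."""
    n = len(goalkeeper)
    counts = defaultdict(lambda: defaultdict(int))
    for t in range(max_depth, n):
        for d in range(0, max_depth + 1):
            w = tuple(kicker[t - d:t])        # suffix of length d; () is the root
            counts[w][goalkeeper[t]] += 1

    def loglik(w):                            # maximized log-likelihood at node w
        cw = counts[w]
        total = sum(cw.values())
        return sum(k * log(k / total) for k in cw.values() if k > 0)

    # BIC-style penalty per leaf; the constant c absorbs the usual 1/2 factor
    # and is the quantity tuned by prediction error in the paper.
    penalty = c * (len(alphabet) - 1) * log(n)

    def prune(w):
        """Keep the children of w only if their total penalized
        log-likelihood beats keeping w itself as a leaf."""
        leaf_score = loglik(w) - penalty
        if len(w) == max_depth:
            return leaf_score, [w]
        score, leaves = 0.0, []
        for a in alphabet:
            u = (a,) + w                      # one-symbol-longer suffix
            if counts[u]:
                s, ls = prune(u)
                score += s
                leaves += ls
        if leaves and score > leaf_score:
            return score, leaves
        return leaf_score, [w]

    return prune(())[1]
```

Applied to each participant's data within each sliding window, the returned leaves are the strings summarized by the mode context trees of Figure 5.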
For each context tree model (τ^k_i, p^k_i) and each time window j, we end up with a set of trees {τ̂^{v,j}_i : v ∈ V_i}, where V_i is the subset of participants that played against the kicker using the context tree model (τ^k_i, p^k_i). The mode context tree [11] of this set of trees is computed to summarize the results of the set of participants. Figure 5 presents the mode context tree computed per time window for each context tree model; it is highlighted within a tree structure that contains all possible past strings up to length 4 that can be identified as a context. It can be verified that the mode context tree matches the kicker's context tree as early as the first time window for models (τ^k_1, p^k_1) and (τ^k_2, p^k_2). Nevertheless, a greater consensus around the contexts used by the kicker is observed in (τ^k_1, p^k_1) than in (τ^k_2, p^k_2) for all time windows. This suggests that the context tree model (τ^k_2, p^k_2) is more difficult to learn than (τ^k_1, p^k_1). For models (τ^k_3, p^k_3) and (τ^k_4, p^k_4), the mode context tree matches the kicker's context tree in the third and the fourth time window, respectively. The fact that a higher number of participants misidentified the kicker's contexts indicates that these models are more difficult to learn.

Figure 1: (A) Acting as a goalkeeper, the participant must guess, at each step, to where the next penalty kick will be shot by pressing the corresponding keyboard arrow. The options are left, center or right, represented by the symbols 0, 1, and 2, respectively. An animation feedback shows to which direction the ball was effectively sent. (B) Context tree models governing the kicker's choices and their corresponding entropy values. (C) Examples of sequences selected by the kicker using each one of the four context tree models. (D) Graph representation of the context tree models governing the kicker's choices.

Figure 2: (A) Time evolution from trial 100 to trial 1000 of the cumulative proportion of correct guesses for each context tree model. (B) Boxplots of proportions of correct guesses across participants in a sliding window of length 250 pacing at 150 trials for each context tree model. The proportions of correct guesses that could be achieved by a goalkeeper using the matching (bottom line in (A) and black square marker in (B)) and the maximizing (top line in (A) and black circle in (B)) strategies are indicated.
Figure 4: Interaction graph corresponding to the two-way mixed ANOVA analysis using the logit transformation of the normalized proportions of correct predictions as dependent variable and the context tree model and time window as factors. Marginal means and 95% confidence intervals of the means are represented with dots and bars, respectively. For each context tree model, the significance level of the difference between successive time windows is indicated using the following convention: *** for a p-value in [0, 0.0001), ** for a p-value in [0.0001, 0.01), * for a p-value in [0.01, 0.05), • for a p-value in [0.05, 0.1), and no symbol for a p-value in [0.1, 1]. The same convention is used to indicate the significance level of the difference between the means of (τ^k_1, p^k_1) and (τ^k_2, p^k_2), (τ^k_2, p^k_2) and (τ^k_3, p^k_3), and (τ^k_3, p^k_3) and (τ^k_4, p^k_4), for each time window.

Figure 5: The context trees modeling the goalkeepers' choices are summarized for each of the four context tree models. To identify these models we used the responses of each goalkeeper and the kicker's choices within a sliding window of length 250 pacing at 150 trials. The nodes of each tree structure represent the strings that different goalkeepers identified as a context. Each node is colored from light pink to dark red according to the proportion of participants identifying the node as a context. Thick lines highlight the mode context tree. The leaves of the mode context tree are the strings that were identified as contexts most often across participants.

Figure 7: A) For each context tree and each participant, a sample consisted of an ordered sequence of 1000 pairs of events, each pair corresponding to the successive directions chosen by the kicker and the corresponding guesses of the goalkeeper. B) At each step, we use the string of past directions chosen by the kicker and the successive prediction made by the goalkeeper to estimate a transition probability. B1, B2) To retrieve the context tree used by the goalkeeper, we prune the tree of candidate contexts. Starting from the leaves, we prune the tree branches using the BIC criterion. B3) The penalty constant in the BIC is chosen so as to minimize the proportion of prediction errors [3]. C) For each time window, the mode context tree was estimated from the retrieved set of context trees.
5,810
2023-02-28T00:00:00.000
[ "Computer Science", "Psychology" ]
Performance of Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry for Identification of Scedosporium, Acremonium-Like, Scopulariopsis, and Microascus Species
Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) has emerged as a powerful microorganism identification tool, but research on MALDI-TOF MS identification of rare filamentous fungi is still lacking. This study aimed to evaluate the performance of MALDI-TOF MS in the identification of Scedosporium, Acremonium-like, Scopulariopsis, and Microascus species. Sabouraud broth cultivation and formic acid/acetonitrile protein extraction were used for MALDI-TOF MS identification with a Bruker Biotyper system. An in-house database containing 29 isolates of Scedosporium, Acremonium-like, Scopulariopsis, and Microascus spp. was constructed. A total of 52 clinical isolates were identified using the Bruker Filamentous Fungi Library v1.0 (FFL v1.0) alone and FFL v1.0 plus the in-house library, respectively. The mass spectrum profile (MSP) dendrograms of the 28 Scedosporium isolates, 26 Acremonium-like isolates, and 27 Scopulariopsis and Microascus isolates were constructed with MALDI Biotyper OC 4.0 software. The correct species identification rate significantly improved when using the combined databases compared with FFL v1.0 alone (Scedosporium spp., 75% versus 0%; Acremonium-like spp., 100% versus 0%; Scopulariopsis and Microascus spp., 100% versus 62.5%). The MSP dendrograms clearly differentiated the Acremonium-like species and the Scopulariopsis and Microascus species, but could not distinguish species within the Scedosporium apiospermum complex. In conclusion, with an expanded database, MALDI-TOF MS is an effective tool for the identification of Scedosporium, Acremonium-like, Scopulariopsis, and Microascus species.

INTRODUCTION
In recent years, with the development of organ transplantation and the widespread use of immunosuppressants and antibiotics, the number of cases of invasive infections caused by filamentous fungi has increased (Brown et al., 2012). In addition to infections caused by the most common filamentous fungus, Aspergillus, the incidence of infections caused by non-Aspergillus filamentous fungi such as Fusarium, Mucorales, Scedosporium, and other rare fungi is also increasing (Miceli and Lee, 2011). Scedosporium, Acremonium-like, Scopulariopsis, and Microascus spp. are saprobic fungi commonly found in the environment; some species have been reported as pathogens of humans, and most are opportunistic (Ramirez-Garcia et al., 2018; Perez-Cantero and Guarro, 2020a,b). Within the genus Scedosporium, the Scedosporium apiospermum species complex and Scedosporium aurantiacum are most related to human diseases; the former currently contains Scedosporium apiospermum, Scedosporium boydii, Scedosporium ellipsoideum, Scedosporium angustum, and Scedosporium fusoideum (Ramirez-Garcia et al., 2018). Acremonium-like spp. comprise a highly diverse group of morphologically and genetically related fungi, among which Acremonium egyptiacum and Sarocladium kiliense are the most commonly involved in human diseases (Perez-Cantero and Guarro, 2020b). Scopulariopsis-like spp. include a group of hyaline and dematiaceous fungi, and most of the clinically relevant species belong to the genera Scopulariopsis and Microascus (Perez-Cantero and Guarro, 2020a).
The above pathogenic fungi have raised concern in the field of medical mycology, as they show intrinsic resistance to multiple antifungal drugs (Ramirez-Garcia et al., 2018; Perez-Cantero and Guarro, 2020a,b). Additionally, differences in in vitro susceptibility have been reported among species within Scedosporium and Scopulariopsis-like spp. (Lackner et al., 2012; Yao et al., 2015). Therefore, rapid and accurate identification of these fungi is important for timely treatment, as well as for the supplementation of epidemiological data and drug susceptibility studies of a large range of species. However, due to the taxonomic complexity of these genera and the interspecies morphological similarities, morphological methods are often unable to identify these fungi at the species level (Perdomo et al., 2011; Ramirez-Garcia et al., 2018; Perez-Cantero and Guarro, 2020a). DNA sequencing allows accurate identification but is expensive, labor intensive, time consuming, and not suitable for routine laboratory testing. Over the last few years, matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) has been increasingly used for the laboratory identification of yeasts and filamentous fungi because of its high accuracy, simple operation, and low cost (Wilkendorf et al., 2020). The main obstacles to the application of this technique to the identification of filamentous fungi are the lack of a sufficient database and of a rapid and effective protein extraction method (Wilkendorf et al., 2020). Few studies have been conducted on the identification of Scedosporium (Del Chierico et al., 2012; Normand et al., 2013; Ranque et al., 2014; Sitterle et al., 2014; Sleiman et al., 2016; Zvezdanova et al., 2019), Acremonium-like (Becker et al., 2014; Rychert et al., 2018), and Scopulariopsis and Microascus species (Lau et al., 2013; Becker et al., 2014; Schulthess et al., 2014; Levesque et al., 2015; Riat et al., 2015; Stein et al., 2018; Sacheli et al., 2020) by MALDI-TOF MS. In this study, we developed an in-house database containing 29 strains covering 21 species of Scedosporium, Acremonium-like, Scopulariopsis, and Microascus and challenged the system with 52 clinical isolates to evaluate the identification performance of the Bruker Filamentous Fungi Library v1.0 (FFL v1.0) (Bruker Daltonics, Bremen, Germany) alone and FFL v1.0 plus the in-house database. Fungal Strains A total of 81 clinical isolates preserved at the Research Center for Medical Mycology of Peking University were included in this study. The sources and antifungal susceptibility data of these isolates were shown in previous studies (Wang et al., 2015; Jagielski et al., 2016; Yao et al., 2019). Twenty out of 28 Scedosporium isolates (four species) were identified by DNA sequencing of the partial β-tubulin (BT2, exons 2-4), calmodulin (CAL, exons 3, 4), second large subunit of RNA polymerase II (RPB2), superoxide dismutase (SOD), and actin (ACT) loci in a previous study (Wang et al., 2015). The remaining eight Scedosporium isolates were identified by BT2 locus sequencing using the same method, and their GenBank accession numbers were MW528357 to MW528364. Twenty-six Acremonium-like isolates (seven species) were identified by sequencing the internal transcribed spacer (ITS), ribosomal large subunit (LSU), and transcription elongation factor 1-α (EF1-α) loci in a previous study (Yao et al., 2019).
Twenty-seven Scopulariopsis and Microascus isolates (10 species) were identified by sequencing the ITS, LSU, TUB, and EF1-α loci as previously reported (Jagielski et al., 2016). Twenty-seven of the 81 isolates were selected as reference strains to build the novel Beijing Medical University (BMU) database (Table 1). Challenge isolates comprising 20 Scedosporium, 16 Acremonium-like, and 16 Scopulariopsis and Microascus isolates (Table 1) were used to assess the performance of FFL v1.0 alone versus that of the combined databases for species identification. Sample Preparation All strains were cultivated on potato dextrose agar (PDA) slants at 28 °C for 5-10 days until the colonies reached a size of approximately 1 cm in diameter. The spores and hyphae were scraped with a long, sterile, wet cotton swab; reinoculated in 5 mL of Sabouraud dextrose broth (SDB) (Becton Dickinson, Franklin Lakes, NJ, United States) at 28 °C on a Loopster digital rotator (IKA, Staufen, Germany); and incubated for 24-48 h to yield a high quantity of small mycelial balls. The ethanol/formic acid extraction procedure was performed according to the manufacturer's instructions. Briefly, 1.5 mL of mycelium was transferred into a 1.5-mL tube (Eppendorf, Hamburg, Germany) after settling and centrifuged at 15,870 × g for 2 min. Then, the pellet was washed twice with 1 mL of deionized water. The supernatant was discarded, and then 300 µL of deionized water and 900 µL of anhydrous ethanol (Sigma-Aldrich, St. Louis, MO, United States) were added and vortexed in sequence. After 10 min of incubation, the material was centrifuged at 15,870 × g for 2 min, and the supernatant was discarded. After drying at 37 °C, the pellet was thoroughly mixed in 25-100 µL of 70% formic acid (the volume of 70% formic acid depended on the size of the pellet), incubated for 10 min at ambient temperature, mixed with an equal volume of acetonitrile (Sigma-Aldrich, St. Louis, MO, United States), and incubated for another 10 min. This mixture was centrifuged at 15,870 × g for 2 min, and then 1 µL of the supernatant was transferred onto an MTP 384 polished steel MALDI target plate (Bruker Daltonik GmbH, Bremen, Germany), air dried, and overlaid with 1 µL of saturated α-cyano-4-hydroxy-cinnamic acid (HCCA) matrix solution (Bruker Daltonics, Bremen, Germany). Finally, the MALDI target was placed into an autoflex speed TOF instrument (Bruker Daltonik, Bremen, Germany). Beijing Medical University Database Construction The reference spectra of Scedosporium, Acremonium-like, Scopulariopsis, and Microascus species in FFL v1.0 include only 4 S. apiospermum [anamorph] P. boydii [teleomorph], 1 A. strictum (now named S. strictum), 7 S. brevicaulis, 1 S. acremonium, and 1 S. brumptii. The isolates used for in-house database construction were deposited in eight target spots. The acquisition settings were as follows: ion source 1 at 19.50 kV, ion source 2 at 18.22 kV, lens at 7.01 kV, and mass range from 2,000 to 20,000 Da. The other parameters were kept at the default settings. The process of in-house database building was in accordance with Bruker's procedures. Three fingerprints from each target spot were manually acquired with Bruker Daltonics FlexControl 3.4 software (summing to a signal strength > 10,000); MALDI Biotyper OC 4.0 software was used to evaluate the quality of the 24 fingerprints, poor-quality and poorly reproducible mass spectra were removed in GelView, and then 20-24 mass spectra were used to generate a reference main spectrum profile (MSP).
Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry Identification The challenge isolates were deposited in four target spots. Spectrum acquisition was performed automatically using MALDI Biotyper RTC 4.0 software (Bruker Daltonik, Bremen, Germany). [FIGURE 1 | Matrix-assisted laser desorption ionization-time-of-flight mass spectra (m/z 2,000 to 20,000) of Scedosporium species.] MALDI Biotyper 3.1 (Bruker Daltonics, Bremen, Germany) was applied for spectral acquisition and comparison with reference spectra from FFL v1.0 and FFL v1.0 plus the BMU database. Identification scores ≥ 2.0 and 1.7-1.99 indicated species- and genus-level identification, respectively, and scores < 1.7 indicated no reliable identification. If the best matches in at least three of the four spots tested were consistent, the MALDI-TOF MS identification was considered interpretable. Correct identification means that the results were consistent with the DNA sequencing. Composite Correlation Index Matrix We used the representative composite correlation index (CCI) tool of MALDI Biotyper 4.0 software to analyze the relationships between spectra. A CCI value near 1 represents high conformance of spectra, while a value near 0 indicates clear diversity of the spectra. The CCI matrix is translated into a heat map in which closely related spectra are represented in hot colors and unrelated spectra in cold colors. Phylogenetic Analysis and Mass Spectrum Profile Dendrograms The phylogenetic tree of the 28 Scedosporium isolates was constructed from the BT2 sequences using the Kimura two-parameter model and the neighbor-joining (NJ) method. The multilocus phylogenetic tree of the 26 Acremonium-like isolates was constructed by combining the ITS, LSU, and EF1-α sequences using the Tamura-Nei model and the NJ method. The multilocus phylogenetic tree of the 27 Scopulariopsis and Microascus isolates was constructed by combining the ITS, LSU, TUB, and EF1-α sequences using the Tamura-Nei model and the NJ method. The bootstrap method (1,000 replicates) was applied to test the phylogeny. The MSP dendrograms of the 28 Scedosporium isolates, 26 Acremonium-like isolates, and 27 Scopulariopsis and Microascus isolates were constructed with MALDI Biotyper 4.0 software. Statistical Analysis Statistical analysis was performed with SPSS 26. McNemar's test was used to compare the species identification rates using FFL v1.0 alone and the combined databases, and a p-value < 0.05 was considered statistically significant. RESULTS The mass spectra of Scedosporium, Acremonium-like, and Scopulariopsis and Microascus species are shown in Figures 1-3, respectively. The results of MALDI-TOF MS identification of the 52 Scedosporium, Acremonium-like, Scopulariopsis, and Microascus isolates by FFL v1.0 alone and by the combined databases are shown in Table 2. Using FFL v1.0 alone, 60% (12/20) of Scedosporium isolates were identified with scores ≥ 2.0, but all of them were identified as Scedosporium apiospermum [anamorph] Pseudallescheria boydii [teleomorph], which was the only Scedosporium species represented in FFL v1.0, so clear species identification was not available. Seven isolates (35%) were identified to the genus level and were also identified as S. apiospermum [ana] P. boydii [teleo]. Using the combination of FFL v1.0 and the BMU database, the correct species-level identification rate increased significantly to 75% (15/20) (p < 0.001), and all 10 S. boydii isolates were correctly identified.
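To make the interpretation rules above concrete, here is a minimal Python sketch of the score cutoffs and the quadruplicate consistency rule; the data layout and function names are illustrative assumptions, not Bruker's software interface.

```python
# Sketch of the rules quoted above: >= 2.0 species level, 1.7-1.99 genus
# level, < 1.7 unreliable; a call counts only if >= 3 of 4 spots agree.
from collections import Counter

def interpret_score(score: float) -> str:
    if score >= 2.0:
        return "species"
    if score >= 1.7:
        return "genus"
    return "unreliable"

def call_isolate(spot_results):
    """spot_results: list of (best_match, score) pairs, one per target spot."""
    matches = Counter(name for name, _ in spot_results)
    best, n_agree = matches.most_common(1)[0]
    if n_agree < 3:                      # quadruplicate consistency rule
        return None, "not interpretable"
    scores = [s for name, s in spot_results if name == best]
    return best, interpret_score(max(scores))

# Hypothetical example: three of four spots agree at species-level scores
spots = [("S. boydii", 2.12), ("S. boydii", 2.05),
         ("S. apiospermum", 1.85), ("S. boydii", 2.21)]
print(call_isolate(spots))               # ('S. boydii', 'species')
```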
The remaining five isolates were misidentified as species within the S. apiospermum species complex (Table 2). Using FFL v1.0 alone, none of the Acremonium-like isolates were reliably identified. When supplemented with the in-house database, all isolates were correctly identified to the species level. Even though FFL v1.0 contains one strain of Acremonium strictum (now named Sarocladium strictum), there was no consistent identification across the quadruplicate tests when challenged with one isolate of S. strictum used for BMU database construction. The species- and genus-level identification rates of Scedosporium, Acremonium-like, Scopulariopsis, and Microascus isolates using the BMU database alone were the same as those using the combined databases. The CCI matrix of the test species indicated that mass spectra within the S. apiospermum complex were highly similar, while those within Acremonium-like, Scopulariopsis, and Microascus spp. were more diverse (Figure 4). [FIGURE 4 | By comparing the spectra of different species, a numerical correlation index is obtained to form the CCI matrix, which is translated into a heat map. Closely related spectra are represented in hot colors with higher CCI values, and unrelated spectra in cold colors with lower CCI values. CCI values ≥ 0.5 are shown in the view.] The MSP dendrogram of the Scedosporium isolates could not differentiate species in the S. apiospermum complex (Figure 5). As for the Acremonium-like isolates, the MSP dendrogram showed clear separation of the different species, and the topology of the dendrogram appeared to be similar to that of the multilocus phylogenetic tree (Figure 6). The MSP dendrogram of Scopulariopsis and Microascus spp. clustered the 27 isolates into a Scopulariopsis clade and a Microascus clade (Figure 7). Isolates of M. gracilis and M. croci clustered in one subclade, whereas isolates of M. onychoides and M. intricatus clustered in another subclade, and these two pairs of species were also close in the phylogenetic tree. The other isolates belonging to the same species clustered together into respective subclades. However, the topology of the MSP dendrogram and that of the multilocus phylogenetic tree of Microascus isolates differed. DISCUSSION The identification of filamentous fungi by MALDI-TOF MS relies mainly on an available database and an efficient protein extraction method. Liquid culture can improve the accuracy of MALDI-TOF MS identification and is usually used for research purposes. In clinical practice, liquid culture can be added when the identification from solid culture is unsatisfactory. The main obstacle to MALDI-TOF MS identification of Scedosporium, Acremonium-like, and Scopulariopsis-like species is a lack of reference spectral data in commercial databases. Our study indicated that MALDI-TOF MS is a powerful technique for rapid and accurate identification of the above species when using the Bruker database complemented with an in-house database and liquid culture. Due to the non-updated nomenclature and a lack of reference species of Scedosporium in FFL v1.0, clear species identification is unavailable. With a supplementary database, the species identification rate of Scedosporium spp. ranged between 76 and 100% (Del Chierico et al., 2012; Normand et al., 2013; Ranque et al., 2014; Sleiman et al., 2016; Zvezdanova et al., 2019), and our study also found a correct identification rate of 75% using the combined databases. However, we found that the mass spectra within the S.
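As a rough illustration of what the CCI comparison computes, the sketch below builds a pairwise similarity matrix for spectra binned onto a common m/z grid and renders it as a heat map. Plain Pearson correlation stands in for Bruker's proprietary composite correlation index, and the data are simulated, so this is an analogy rather than the actual tool.

```python
# Pairwise spectral similarity rendered as a hot/cold heat map,
# mimicking the CCI display described in the text.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
spectra = rng.random((6, 500))                    # 6 binned spectra, 500 m/z bins
spectra[1] = spectra[0] + 0.05 * rng.random(500)  # make two spectra near-identical

cci = np.corrcoef(spectra)                        # pairwise correlation matrix
plt.imshow(np.clip(cci, 0, 1), cmap="hot", vmin=0, vmax=1)
plt.colorbar(label="correlation (CCI stand-in)")
plt.title("Spectral similarity heat map")
plt.show()
```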
apiospermum complex were highly similar, as indicated by the high CCI values between species, so the strains in the complex could not be accurately differentiated, which was also reported by Bernhard et al. (2016). Zvezdanova et al. (2019) also found that even though 9 strains of S. apiospermum were included in an in-house database, three strains of S. apiospermum were misidentified at the species level (as S. boydii [n = 1] or Lomentospora prolificans [n = 2]). However, Sleiman et al. (2016) found that 17 strains of Scedosporium spp. and L. prolificans were correctly identified with a supplementary database, although the strains in the S. apiospermum complex included only five S. apiospermum and one S. boydii isolates. Using an in-house database containing 47 reference strains, Sitterle et al. (2014) found that the Andromas system identified 64 strains of Scedosporium spp. (including S. boydii, S. apiospermum, S. aurantiacum, S. minutispora, and S. dehoogii) and L. prolificans to the species level using the direct smearing method (without protein extraction). This indicates that the ability of different MALDI-TOF MS systems to identify Scedosporium spp. may differ, and a larger number of strains for database construction may improve the ability of species differentiation. Due to the lack of species, we were unable to clarify whether MALDI-TOF MS can distinguish the S. apiospermum complex from other species in the genus. In addition, by comparing the MSP dendrogram with the drug sensitivity data (Wang et al., 2015), we did not find a relation between the spectral differences and the drug sensitivity of the test strains. In conclusion, our results showed that MALDI-TOF MS identification of Scedosporium spp. by the Bruker Biotyper was relatively reliable after protein extraction from liquid cultivation and database combination. To date, there have been no MALDI-TOF MS identification studies covering a wide range of Acremonium-like species. Becker et al. (2014) correctly identified five out of six isolates of S. strictum to the species level with a supplemented database on the Bruker Biotyper system. Rychert et al. (2018) correctly identified 30 Acremonium sclerotigenum (now named A. egyptiacum) (Summerbell et al., 2018) isolates to the species level using the Vitek MS v3.0 database. Due to the limited species in these studies, the identification ability of MALDI-TOF MS for Acremonium-like species could not be defined. We conducted the first MALDI-TOF MS identification study on multiple species and genera of Acremonium-like spp. Due to a lack of reference MSPs, FFL v1.0 could not identify any of the Acremonium-like isolates. Although one S. strictum MSP was included in the commercial database, no identification results were obtained when it was tested with one S. strictum isolate. However, three of the four isolates of S. kiliense were identified as S. strictum, though with low scores (between 1.464 and 1.671, data not shown), suggesting that the reference strain of S. strictum in FFL v1.0 might actually be S. kiliense. Some isolates previously reported as S. strictum in the literature were also reconfirmed as S. kiliense (Perdomo et al., 2011; Summerbell et al., 2018), indicating the possibility of confusion. After supplementation with the BMU database, the identification accuracy of MALDI-TOF MS for Acremonium-like species was similar to that of multilocus sequencing, and the former method is faster, simpler, and less expensive.
Only a few species and strains of Scopulariopsis-like fungi have so far been included in MALDI-TOF MS identification studies. Using FFL v1.0, S. brevicaulis was correctly identified in some cases (Schulthess et al., 2014; Levesque et al., 2015; Riat et al., 2015; Stein et al., 2018; Sacheli et al., 2020), while S. candida, S. cinerea, S. brumptii, and M. cirrosus could not be identified (Schulthess et al., 2014; Levesque et al., 2015; Stein et al., 2018). The identification of Scopulariopsis-like fungi can be improved by constructing an in-house database (Lau et al., 2013; Becker et al., 2014). Lau et al. (2013) correctly identified three strains of S. brevicaulis to the species level using the Bruker Biotyper system plus the self-built NIH database. Becker et al. (2014) also correctly identified 12 strains of S. brevicaulis and one strain of S. candida with an in-house database. Our study found that FFL v1.0 performed well in the identification of S. brevicaulis but was unable to identify other species due to a lack of reference MSPs. After adding the reference MSPs of ten species, the Bruker Biotyper system was able to accurately identify Scopulariopsis and Microascus species. The MSP dendrogram obtained by MALDI-TOF MS analysis indicates the relationships between strains based on protein fingerprint differences and can be applied to the classification of microorganisms. This study found that the MSP dendrogram of Acremonium-like isolates clearly distinguished 7 different species, with a topology similar to that of the multilocus phylogenetic tree. The MSP dendrogram of Scopulariopsis and Microascus spp. clustered isolates of the same or closely related species into one branch, while the topology of Microascus spp. was inconsistent with that of the multilocus phylogenetic tree. These results indicate that the MSP dendrogram is efficient for species differentiation; however, there are differences between molecular taxonomy at the nucleic acid and protein levels, which may be due to varied protein expression related to the growth conditions and life cycle of the fungi (Putignani et al., 2011). Shao et al. (2018) also found that the interspecific discrimination of MSP dendrograms and ITS-based trees of Rhizopus and Mucor species was consistent, while there were inconsistencies at the intergeneric and intraspecific levels. Research on Lichtheimia species by Schrodl et al. (2012) showed that the MSP dendrogram clearly discriminated different species, but the topology was different from that of the phylogenetic tree. Therefore, the MSP dendrogram is an excellent tool for interspecific differentiation but is limited in phylogenetic analysis (Shao et al., 2018). In addition, we found that the MSP dendrogram cannot differentiate species in the S. apiospermum complex. Shao et al. (2020) also found that the MSP dendrogram had difficulty distinguishing species in the Trichophyton mentagrophytes series. These results indicate that discrimination of close species in some species complexes by MSP dendrogram is difficult. This study showed that, with the supplementary database, MALDI-TOF MS represents a powerful tool to identify Scedosporium, Acremonium-like, Scopulariopsis, and Microascus species. As the test strains were all previously identified stored strains, and the species are rare and difficult to collect in a short time, a blind test was not conducted. In the future, the identification performance will become clearer with continuous database expansion and a larger set of test species and strains.
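Since MSP dendrograms recur throughout this discussion, the following sketch shows one way to build a dendrogram from binned spectra with SciPy's hierarchical clustering. Biotyper's actual distance measure is not public, so correlation distance, average linkage, and the isolate names are assumptions.

```python
# Agglomerative clustering of simulated spectra into an MSP-style dendrogram.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
spectra = rng.random((8, 500))                # 8 isolates, 500 m/z bins (simulated)
labels = [f"isolate_{i}" for i in range(8)]   # hypothetical isolate names

dist = pdist(spectra, metric="correlation")   # 1 - Pearson correlation
tree = linkage(dist, method="average")        # UPGMA-style agglomeration
dendrogram(tree, labels=labels)
plt.ylabel("correlation distance")
plt.show()
```

Two spectra that are nearly proportional end up joined near the bottom of the tree, which is exactly the behavior the text reports for species within the S. apiospermum complex.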
DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ncbi.nlm.nih.gov/genbank/, MW528357 to MW528364. AUTHOR CONTRIBUTIONS JY designed the study. ZW collected the samples. LW, JS, YS, LY, and HW participated in the performance of the research. LW participated in the writing of the manuscript and data analysis. All authors contributed to the article and approved the submitted version.
5,118.4
2022-03-02T00:00:00.000
[ "Biology" ]
An enhanced technique for digital watermarking using multilevel DWT and error correcting codes Digital watermarking has attracted researchers' attention because of its useful applications and, over the past decades, great efforts have been made to develop digital watermarking techniques and algorithms. Most studies use different transform techniques to enhance the robustness and quality of the extracted watermark. This paper presents an enhanced technique for digital image watermarking based on the multilevel Discrete Wavelet Transform (DWT) in conjunction with the well-known RS codes over finite fields. To observe and appreciate the significance of using the error correcting codes technique for enhancing digital watermarking performance against attacks, a series of experiments was conducted. The enhanced methodology, presented and implemented in this research, achieved a very good performance. Regarding the significance of using error correcting codes in conjunction with DWT-based digital image watermarking, it was shown that, in all cases investigated, for all the attacks considered, there was an increase in the robustness of the digital watermark in terms of the performance measure SSIM. In some cases, the SSIM improves to almost 27 times that of the case without error correcting codes. Among the codes considered, for all the attacks, the Reed-Solomon block code of length n = 255 over the Galois field GF(2^8) with k = 135 performs better than all others. Introduction Digital watermarking embeds a digital image, known as a watermark, into a host image with the aim of being able to detect its presence at a later stage [4]. Its main uses are, but are not limited to, establishing and proving ownership rights, tracking content usage, ensuring authorized access, facilitating content authentication, and preventing illegal replication [3]. Digital watermarking has many branches in which researchers are involved; these branches depend on how watermarking is classified, and there are many ways to classify it, as shown in Figure (1) [3,7,5]. In the transform domain, there are many types of transforms, such as the discrete Fourier transform (DFT), the discrete cosine transform (DCT), and the discrete wavelet transform (DWT). Our research addresses non-blind watermarking in the transform domain with an invisible watermark [6,5,8,11,9]. It focuses on using the DWT in conjunction with a family of RS cyclic error correcting codes of different rates, and on investigating their effect on the robustness of the watermark against various well-known watermarking attacks. The remainder of this paper is organized as follows: 1. Discrete Wavelet Transform The wavelet transform is one of the advanced mathematical transforms with many applications in different areas [1,6]. Due to its properties, the wavelet transform has become an important tool in image processing and watermarking. The basic idea of the discrete wavelet transform (DWT) is to separate the frequency details of the image; it is based on wavelets of varying frequency and limited duration. Each level of DWT decomposition separates an image into four sub-bands, namely a lower-resolution approximation component (LL) and three others corresponding to the horizontal (HL), vertical (LH), and diagonal (HH) detail components. The LL sub-band is the result of low-pass filtering both the rows and columns and contains an approximate description of the image.
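As a concrete illustration of this decomposition, the sketch below uses the PyWavelets package (an assumed tool; the paper only states that MATLAB was used for the experiments) to split an image into approximation and detail sub-bands and to iterate the decomposition to three levels.

```python
# One- and three-level 2-D DWT decomposition of a stand-in grayscale image.
import numpy as np
import pywt

img = np.random.rand(256, 256)             # stand-in for a grayscale image

# One decomposition level: approximation LL plus three detail sub-bands.
# PyWavelets orders the detail tuple (horizontal, vertical, diagonal).
LL, (LH, HL, HH) = pywt.dwt2(img, "haar")

# Three levels, as in Figure 1: repeatedly decompose the LL sub-band.
coeffs = pywt.wavedec2(img, "haar", level=3)
LL3 = coeffs[0]                            # 3rd-level approximation
print(LL.shape, LL3.shape)                 # (128, 128) (32, 32)
```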
The HH sub-band is high-pass filtered in both directions and contains the high-frequency components along the diagonals. The HL and LH images are the results of low-pass filtering in one direction and high-pass filtering in the other. After the image is processed by the wavelet transform, most of the information contained in the original image is concentrated into the LL image. LH contains mostly the vertical detail information, which corresponds to horizontal edges. HL represents the horizontal detail information from the vertical edges. The low-pass sub-band can be further decomposed to obtain another level of decomposition [9,12,13]. This process is continued until the desired number of levels, determined by the application, is reached. The result of three levels of decomposition of an image is shown in Figure 1. Error Correcting Codes In this subsection, an introduction to the well-known class of RS error correcting codes is presented [9]. RS codes are a subclass of cyclic codes and are characterized by their particular construction through minimal polynomials [2]. The construction of this kind of code is very similar to that of general cyclic codes, the only difference being the choice of the zeros of the generator polynomial g(X): we consider the elements of the Galois field GF(q^m) described by consecutive powers of the primitive element α. As a matter of fact, if we are required to correct t errors, we have to select 2t consecutive powers of the primitive element α so as to obtain a minimum distance of 2t + 1. Reed-Solomon codes are maximum distance separable codes; that is, a Reed-Solomon code with parameters (n, k), which will be denoted by RS(n, k), attains the largest possible minimum Hamming distance: dmin(RS(n, k)) = n − k + 1 [10]. 3. Proposed Methodology This paper proposes an enhanced digital image watermarking technique that is based on the multilevel Discrete Wavelet Transform (DWT) in conjunction with a family of powerful error correcting codes, RS codes over finite fields. Digital Watermarking Algorithm In general, a digital watermarking system consists of an embedding and an extracting procedure [1]. In the proposed scheme, an error correcting code is incorporated to correct the errors that occur due to the attacks on the watermarked image. The next subsections present the embedding and extracting methods as well as the encoding and decoding of the digital image watermark. 1. Digital Watermarking Embedding In this approach, the digital watermark image is encoded using RS block error correcting codes, then the DWT is applied to the encoded digital watermark. On the other hand, the digital host image is DWT transformed directly. The embedding of the encoded digital watermark is performed using the well-known alpha blending technique [13,14], as explained below. Alpha blending technique used for watermark embedding. The formula of the alpha blending embedding technique is as follows: WMI = k×(LLj) + q×(EWMj) (1) Where: ▪ EWMj is the j-level low frequency approximation of the encoded watermark image. ▪ LLj is the j-level low frequency approximation of the host image HI. ▪ WMI is the watermarked image. 2. Digital Watermarking Extraction In this research, various types of attacks on the watermarked image are considered. The addition of noise was the main focus of this research; the noise types considered are Salt & Pepper, Gaussian, and Speckle noise. In this subsection, the extraction method for the digital watermark is presented and illustrated.
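The following sketch pieces the embedding steps together in Python: the watermark bytes are RS-encoded (the reedsolo package stands in for the paper's unnamed RS implementation), both images are decomposed with a 3-level DWT, and the LL sub-bands are alpha-blended according to Eq. (1). The byte packing and size handling are illustrative assumptions, since the paper does not specify them.

```python
# RS-encode the watermark, DWT both images, alpha-blend the LL sub-bands.
import numpy as np
import pywt
from reedsolo import RSCodec

k_factor, q_factor = 0.99, 0.001          # scaling factors from the paper
rsc = RSCodec(255 - 135)                  # RS(255, 135): 120 parity bytes

host = np.random.rand(512, 512)           # stand-in host image
wm = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in LOGO

# Encode the watermark bytes and pack them back into a square "image".
encoded = np.frombuffer(bytes(rsc.encode(wm.tobytes())), dtype=np.uint8)
side = int(np.ceil(np.sqrt(encoded.size)))
ewm = np.zeros(side * side)
ewm[:encoded.size] = encoded
ewm = ewm.reshape(side, side) / 255.0

h_coeffs = pywt.wavedec2(host, "haar", level=3)
w_coeffs = pywt.wavedec2(ewm, "haar", level=3)
LL_h, LL_w = h_coeffs[0], w_coeffs[0]

# Alpha blending (Eq. 1): WMI = k*LLj + q*EWMj. The sub-band sizes differ
# in this toy setup, so only the overlapping corner is blended; in practice
# the watermark would be sized so the sub-bands match exactly.
m = min(LL_h.shape[0], LL_w.shape[0])
LL_h[:m, :m] = k_factor * LL_h[:m, :m] + q_factor * LL_w[:m, :m]
h_coeffs[0] = LL_h
watermarked = pywt.waverec2(h_coeffs, "haar")
```

Because q is three orders of magnitude smaller than k, the encoded watermark perturbs the LL coefficients only slightly, which is what keeps the watermark invisible in the host image.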
Watermark Extraction using the Alpha blending extraction technique: Let WMIA denote the attacked digital watermarked image. The formula of the alpha blending extraction technique, obtained by inverting Eq. (1), is as follows: ERW = (WMIA − k×(LLj)) / q (2) Where: ▪ ERW is the low frequency approximation of the recovered encoded watermark. ▪ LLj is the j-level low frequency approximation of the host image. ▪ WMIA is the attacked watermarked image. 3. Performance Evaluation & Measures For non-blind digital watermarking systems, the digital watermark image (WM) should not affect the appearance of the host image (HI); in image processing terminology this effect is known as distortion [9,15,16]. The Peak Signal to Noise Ratio is given by PSNR = 10·log10(MAX² / MSE), where MAX is the maximum possible pixel value and MSE is the mean squared error between the two images. The Structural Similarity Index (SSIM) is used to measure the robustness of the extracted watermark against attacks. The similarity between the inserted digital watermark and the extracted digital watermark is a good measure adopted in this type of application. Experimental Setup & Results Discussion To validate the enhanced proposed algorithm, a series of experiments was designed and implemented using the well-known software package MATLAB. To obtain the optimal values for the scaling factors k & q, extensive experiments were conducted. Below, it is shown that the optimal values are k = 0.99 and q = 0.001. For comparison, the worst-case values for k & q were recorded. Throughout the experimental setup, these optimal values are used. 2. Scenario 1: Comparing noise levels with and without the RS ECC In this experiment, three types of noise were added (to simulate the attack), at different levels. The resulting SSIM & PSNR were computed for two cases, case 1: without the RS ECC code, and case 2: with the RS ECC code. The results are shown in Table 1. Table 1 shows the values of SSIM for each noise level for both cases: with and without the RS ECC block code. It can be observed that an improvement of around 9 times is gained when using RS codes at the noise level (0.01); this is obtained from (0.4170 / 0.0442). Figure (5) illustrates, in bar plots, the significant increase in SSIM when using RS codes, for various noise levels. In this case, Gaussian noise is added to the watermarked image at different levels. Table 1 shows the values of SSIM for each noise level for both cases: with and without the RS ECC block code. It can be observed that an improvement of around 27 times is gained when using RS codes at the noise level (0.01); this is obtained from (0.3579 / 0.0128). Figure (6) illustrates, in bar plots, the significant increase in SSIM when using RS codes, for various noise levels. In this case, Speckle noise is added to the watermarked image at different levels. Table 1 shows the values of SSIM for each noise level for both cases: with and without the RS ECC block code. It can be observed that an improvement of around 4 times is gained when using RS codes at the noise level (0.01); this is obtained from (0.3729 / 0.0954). Figure (7) illustrates, in bar plots, the significant increase in SSIM when using RS codes, for various noise levels. 3. Scenario 2: Comparing RS ECC codes with different rates In this section, we investigate the robustness improvement gained by using the powerful family of error correcting codes known as Reed-Solomon block codes, of length n = 255 over the Galois field GF(2^8), with different code rates: k = 135, 165, 175, 183, 191, 205, 225, and 233. The watermarked image was subjected to three different attacks: the addition of Salt & Pepper, Gaussian, and Speckle noise at DWT level 3.
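Continuing the embedding sketch above, a minimal extraction-and-scoring pass might look as follows; Eq. (2) is applied to the LL sub-bands, and scikit-image's SSIM/PSNR implementations stand in for the MATLAB measures used in the paper.

```python
# Extract the encoded-watermark LL sub-band and score it against the
# attack-free extraction using SSIM and PSNR.
import numpy as np
import pywt
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def extract(image, host, k=0.99, q=0.001, level=3):
    """Recover the encoded-watermark sub-band via ERW = (WMIA - k*LLj)/q."""
    LL_img = pywt.wavedec2(image, "haar", level=level)[0]
    LL_host = pywt.wavedec2(host, "haar", level=level)[0]
    return (LL_img - k * LL_host) / q

# Simulated Salt & Pepper attack at density 0.01 (illustrative).
attacked = watermarked.copy()
mask = np.random.rand(*attacked.shape) < 0.01
attacked[mask] = np.random.choice([0.0, 1.0], size=int(mask.sum()))

recovered = extract(attacked, host)
reference = extract(watermarked, host)    # extraction from the clean image
rng_val = float(np.ptp(reference))
print("SSIM:", structural_similarity(reference, recovered, data_range=rng_val))
print("PSNR:", peak_signal_noise_ratio(reference, recovered, data_range=rng_val))
```

Note how sensitive the scheme is to noise: since q = 0.001, any perturbation of the LL coefficients is amplified a thousandfold on extraction, which is precisely why the error correcting code is needed downstream.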
Table 2 gives the results obtained when the watermarked image was subjected to the Salt & Pepper noise attack, the Gaussian noise attack, and the Speckle noise attack for different RS code rates (k), at DWT level 3. In order to compare the performance of each RS code under each type of noise attack, the following results were obtained. • Salt & Pepper Attack: For this type of noise, it can be observed from Table 2 that the eight RS ECC codes used can be ranked from best to worst; this result is illustrated in Figures 8a&b. • Gaussian Noise Attack: For this type of noise, it can be observed from Table 2 that the eight RS ECC codes used can be ranked from best to worst; this result is illustrated in Figures 9a&b. • Speckle Noise Attack: For this type of noise, it can be observed from Table 2 that the eight RS ECC codes used can be ranked from best to worst; this result is illustrated in Figures 10a&b. 4. Conclusions Many models and algorithms have been designed to address copyright problems. This paper dealt with invisible digital watermarking where the mark is a digital image (the LOGO of Aden University) and the cover is another commonly used digital image (Lenna). The research investigated the performance of the well-known Discrete Wavelet Transform (DWT) in conjunction with a class of block error control codes, RS codes, under various well-known attacks. The enhanced methodology, presented and implemented in this research, achieved a very good performance. Regarding the significance of using error correcting codes in conjunction with DWT-based digital image watermarking, it was shown that, in all cases investigated and for all the attacks considered, there was an increase in the robustness of the digital watermark in terms of the performance measure SSIM. In some cases, the SSIM improves to almost 27 times that of the case without error correcting codes. Among the codes considered, for all the attacks, the Reed-Solomon block code of length n = 255 over the Galois field GF(2^8) with k = 135 performs better than all others, which is supported by the theory of error correcting codes: the lowest-rate code has the largest minimum distance (dmin = n − k + 1 = 121) and hence the greatest error correcting capability (t = 60 byte errors per block).
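To connect the conclusion to coding theory, the snippet below checks the error correcting capability of RS(255, 135), namely t = (n − k)/2 = 60 byte errors per block; the reedsolo package is again an assumed stand-in, not the implementation used in the paper.

```python
# Verify that RS(255, 135) corrects 60 corrupted bytes in one codeword.
import numpy as np
from reedsolo import RSCodec

n, k = 255, 135
rsc = RSCodec(n - k)                       # 120 parity symbols, t = 60

msg = bytes(range(k))                      # one full-length message block
code = bytearray(rsc.encode(msg))

rng = np.random.default_rng(0)
for pos in rng.choice(n, size=60, replace=False):
    code[pos] ^= 0xFF                      # flip all bits of 60 bytes

res = rsc.decode(bytes(code))
decoded = res[0] if isinstance(res, tuple) else res  # newer reedsolo returns a tuple
assert bytes(decoded) == msg               # all 60 errors corrected
print("RS(255, 135) corrected 60 byte errors")
```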
2,815.4
2022-03-22T00:00:00.000
[ "Computer Science" ]
UNRAVELING THE ANTHROPOLOGICAL-EXISTENTIAL SIGNIFICANCE OF TRANSCENDENTAL PROPOSITIONS: Kant labels his transcendental propositions as “principles” instead of mathematical “theorems” because they have the quite peculiar property of “making possible their ground of proof (Beweisgrund), namely experience”. The paper introduces an original reading. Importantly, this reading does not conflict with established interpretations, as it does not touch on the core focus of Kant's first Critique, namely examining the possibility of cognition (Erkenntnis). The emphasis is on the anthropological sense of Kant's key question: “What is man?” The proposal suggests that “possible experience” can be anthropologically understood as the possibility of understanding ourselves as human beings. Our understanding of ourselves dispenses with concepts made a priori, such as mathematical and formal ones. In contrast, without categories (and thus without transcendental propositions), we cannot comprehend ourselves as inhabitants of a world of persistent objects and events that interact causally in space and time. According to this interpretation, a “synthetic a priori proposition”, in Kant's view, is one whose truth depends on the world, not conceptual relations. Nonetheless, it is a priori in a quite specific sense: it is essential for our understanding as human beings. Kant e-prints, Campinas, v. 18, pp. 50-63, 2023. [Kant distinguishes] between the dogmatic and mathematical uses of pure reason. Additionally, he identifies two categories of synthetic a priori propositions: mathematical and metaphysical. While the synthetic a priori propositions of mathematics are “theorems” (Lehrsätze) whose proof is based on a priori or pure intuitions, the synthetic a priori propositions of metaphysics are called “principles” (Grundsätze) because they have the special property of “making their ground of proof (Beweisgrund), namely experience, first possible, and must always be presupposed in this” (see KrV, A737/B765).[1] Kant argues that in “dogmatic” metaphysics, metaphysical propositions are misunderstood because they are equated with mathematical theorems (Lehrsätze) that can be derived from axioms by a priori intuitions. In contrast, in his critical philosophy, metaphysical propositions are referred to as “principles” (Grundsätze), i.e., propositions with a unique proof method, namely the proof that such principles are necessary for possible experience. In this regard, “transcendental” does not refer to any a priori or pure propositions whose truth is independent of experience but only to a priori propositions that make experience possible in the first place. The idea of a special method of transcendental proof gave rise to decades of debate about the nature of so-called “transcendental arguments”. It begins with a single mention in Strawson's (1959) book and Stroud's (1968) renowned refutation. Since then, the debate has remained heated.[2]
The discussion centers on multiple axes. The first question is whether the transcendental argument is a Kantian refutation of global skepticism. If this was Kant's intent, the second question is what form of global skepticism his arguments would have targeted had he achieved his goal. If the alleged transcendental argument is indeed anti-skeptical, the next question is whether or not it is effective. In the 1990s, a consensus arose regarding the following thesis: “world-directed” or “truth-directed” transcendental arguments are doomed to fail (see Peacocke, 1989, p. 4; and Cassam, 1999, p. 83).[3] At most, transcendental arguments could establish the essential connections between our conceptual scheme's primary concepts (see Strawson, 1984; Stroud, 1999; Stern, 2007). However, a minority still believes in the viability of world-directed transcendental proofs. Indeed, there are connections between Kant's special transcendental method of proof and the transcendental proposition since, in several cases, we characterize a proposition by its method of proof, and, as we shall see below, several readings of transcendental propositions rely on a prior understanding of “possible experience” as forms of transcendental argument. In any case, the present article is only concerned with such a “peculiar method” of proof to the extent that it facilitates comprehension of the transcendental proposition, which is the main topic. The multi-decade debate over the nature and efficacy of transcendental arguments is beyond our scope. We are interested in what Kant calls “transcendental propositions”. Kant referred to these propositions as synthetic a priori transcendental propositions. What are they exactly? [Footnote 1] The same idea appears in several passages of the Critique. For example, Kant claims that without a priori concepts (of transcendental propositions), “nothing is possible as an object of experience. The objective validity of the categories as a priori concepts rests on the fact that through them alone is experience possible” (KrV, A93/B126, emphasis in original). “The possibility of experience is, therefore, what gives us all our cognitions a priori objective reality” (KrV, A156/B185, emphasis in original). “The conditions of the possibility of experience, in general, are at the same time the conditions of the possibility of the objects of experience, and on this account have objective reality in synthetic judgment a priori” (KrV, A158/B197, emphasis in original). “Through concepts of the understanding, however, it certainly erects secure principles, not directly from concepts, rather always indirectly through the relation of these concepts to something contingent, namely possible experience” (KrV, A737/B765, emphasis in original). [Footnote 2] Considering what was published in the twenty-first century, the literature is enormous. See Bardon, 2005, 2006; Bell, 1999; Callanan, 2006, 2011; Caranti, 2017; Cassam, 2007; Chang, 2008; Dicker, 2008; D'Oro, 2019; Finnis, 2011; Franks, 2005; Giladi, 2016; Glock, 2003; Grundmann and Misselhorn, 2003; Houlgate, 2015; Lockie, 2018; McDowell, 2006; Mizrahi, 2012; Rähme, 2017; Rockmore & Breazeale, 2014; Russell & Reynolds, 2011; Stapleford, 2008; Stern, 2007; Vahid, 2011; Wang, 2012; Westphal, 2004.
There are several well-established interpretations of this. The paper introduces a completely original reading. Importantly, this reading does not conflict with established interpretations, as it does not touch on the core focus of Kant's first Critique, namely examining the possibility of cognition (Erkenntnis). The emphasis is on the anthropological sense of Kant's key question: “What is man?” (Log, 9: 25). The proposal suggests that “possible experience” can be anthropologically understood as the possibility of understanding ourselves as human beings. Our understanding of ourselves dispenses with concepts made a priori, such as mathematical and formal ones. In contrast, without categories (and thus without transcendental propositions), we cannot comprehend ourselves as inhabitants of a world of persistent objects and events that interact causally in space and time. According to this interpretation, a “synthetic a priori proposition”, in Kant's view, is one whose truth depends on the world, not conceptual relations. Nonetheless, it is a priori in a quite specific sense: it is essential for our understanding as human beings. This paper is organized as follows: In the section following this brief introduction, we will examine Kant's view on the tertium connecting the predicate concept with the subject concept in the case of synthetic a priori propositions, especially transcendental ones. After discarding several possible readings, we reach an aporetic conclusion: what Kant calls a tertium cries out for interpretation. In the third section, we examine the mainstream reading of “possible experience” as the possibility of objectively representing objects. We argue that this reading finds no support in Kant's writings and is at odds with what makes the transcendental deduction inevitable for Kant, namely, the metaphysical fact that we can already represent objects without categories or transcendental propositions, through our senses alone. The reading that best fits Kant's transcendental deduction is to assume that “possible experience” means the possibility of recognizing that what we represent through our senses exists objectively, as a precondition for Newtonian mechanics. In the fourth and final section, we present our alternative existential reading. This reading is not meant to exclude any other. It is compatible with the two interpretations considered last. It is based on Kant's distinction between concepts made a priori and concepts given a priori. The claim is that transcendental propositions are indispensable for understanding ourselves as human beings, i.e., as inhabitants of a world of persistent objects and events that interact causally in space and time. On the supreme principle of all synthetic judgments Given that, for Kant, all propositions have a categorical form, namely, a predicate concept is predicated of whatever a subject concept represents, all synthetic propositions require a tertium that connects the two main concepts of a synthetic proposition. What is this tertium in the particular case of transcendental propositions? We can rule out a priori three possible readings without much thinking. The first considers this tertium as an empirical or a posteriori sensory intuition representing something particular. A particular empirical intuition cannot be the required tertium because Kant talks about a priori and not a posteriori propositions. Empirical intuitions are the basis for justifying a posteriori propositions. Empirical intuitions are excluded a priori.
For equally obvious reasons, the tertium cannot be an a priori or a pure intuition, for, as we have seen, a transcendental proposition is not an a priori mathematical theorem whose proof rests on axioms, which in turn rest on a priori intuitions (the construction of concepts). They are principles (which first make experience possible). Let us now take stock and consider what Kant says in the section entitled “On the supreme principle of all synthetic judgments” (KrV, A154/B193). Kant names three candidates for the conditions of “possible experience”: If it is thus conceded that [in the case of synthetic a priori propositions] one must go beyond a given concept in order to compare it synthetically with another, a third thing is necessary in which alone the synthesis of two concepts can originate. But now, what is this third thing, as the medium of all synthetic judgments? There is only one totality in which all of our representations are contained, namely inner sense and its a priori form, time. The synthesis of representations rests on the imagination, but their synthetic unity (which is a requisite of the judgment) on the unity of apperception. (KrV, A155/B194, emphasis added) When the synthetic proposition is a posteriori, the tertium becomes an empirical sensory intuition of an object. But when the synthetic proposition is a priori and mathematical, the tertium takes the form of “pure intuition”. When, however, the synthetic proposition is a priori but transcendental, this tertium finally takes the enigmatic form of a “possible experience”. Kant calls this the inner sense, the synthesis of the imagination, and the unity of apperception (a possible experience). Nonetheless, instead of clarifying the expression “possible experience”, Kant's list cries out for interpretation. Let us now consider a third untenable reading. For those who think that the idea of an “a priori synthetic proposition” is an oxymoron, there is no tertium. The entire Critique is a complex conceptual analysis of the central concept of “possible experience”. Kant calls the inner sense, the synthesis of the imagination, and the unity of apperception only “partial concepts” (Merkmale) of the concept of possible experience. However, one may wonder why Kant speaks of an “a priori synthetic proposition” and not an analytic proposition. The usual answer is that Kant had a somewhat restrictive conception of analyticity, namely one whose negation reveals a self-contradiction or whose predicate concept is already contained in the subject concept (see Bennett, 1966). Suppose, however, that one can free oneself from Kant's restrictive understanding of analyticity. In that case, it is easy to see that what he calls synthetic a priori propositions are just highly complex analytic propositions. However, we need not waste our time refuting this possible reading of Kant's transcendental proposition since it finds no textual support in Kant's writings. In the literature, there are different interpretations of the three conditions. I will discuss only the three most plausible interpretations. The first considers possible experience as “possible perception”, where “perception” is understood as conscious intuition. In this interpretation, the truth of the transcendental propositions rests on the fact that they allow
introspection of our mental states in the inner sense, the synthesis of the imagination, and the unity of apperception. This reading is closely related to the idea of a transcendental argument that seeks to show that perception presupposes transcendental propositions. I will explain and reject this reading in the remainder of this section. In his Prolegomena, Kant gives two exclusionary meanings for experience in the same paragraph: “When I claim that experience teaches us something, I am thinking only of the perception it contains. On the contrary, experience is produced by the attribution of an intellectual concept to perception” (Prol, 4: 305). According to the first meaning, “experience” is nothing other than “perception”, namely, something essentially subjective, the consciousness of my sensory state. We find this to be the result of apprehension in the A-deduction. As a perception, experience requires only “running through [this manifold] and then taking it together” (KrV, A99). The problem is how it is possible, by starting from perception as a subjective synthesis of apprehension, to justify transcendental objective propositions such as those underlying Newtonian mechanics (conservation of mass, inertia, and equality of action and reaction). The original gap between sensible intuition and transcendental propositions remains. Worse, the scholar faces the following dilemma: On the one hand, the more content one builds into “possible experience”, the more one makes the synthetic proposition “quasi-analytic”. On the other hand, the less content one builds into “possible experience”, the more one widens the gap between “possible experience” and the transcendental propositions. The mainstream reading A third interpretation aligns with the mainstream viewpoint. This interpretation explains “possible experience” as the ability to create a representation of an object based on one's sensory input. Essentially, our sensory perception initially presents us with a disorganized sensory experience. But by using a priori concepts of synthesis called categories, we can construct a clear representation of an object from this chaotic sensory variety. Transcendental propositions play a crucial role in making this possible. They involve the inner sense, the synthesis of the imagination, and the unity of apperception. This interpretation is closely related to Strawson's idea of a transcendental argument against the skepticism of sense data (Strawson, 1966). To assess this mainstream interpretation, it is essential to examine these passages in §13: Objects can indeed appear to us without necessarily having to be related to functions of the understanding. (KrV, A89/B122. Emphasis added) Appearances would nonetheless offer objects to our intuition, for intuition by no means requires the function of thinking. (KrV, A90-1/B122-3. Emphasis added) According to the prevailing reading, the term “possible experience” in Kant's work refers to the ability to represent an object through the senses. Based on this interpretation, the quoted passages suggest that Kant is considering “skeptical scenarios”. These scenarios challenge the idea that objects can only be represented through categories. Kant's deduction aims to refute this skeptical challenge by proving that objects can only appear through categories. Therefore, the skeptical hypotheses are flawed. Following Strawson (1966) and Henrich (1969), Allison suggests that Kant entertains a radical skeptical scenario in A89/B122 and A90-1/B122-3 that is to be refuted at the end of the deduction (Allison, 2015, p.
54). His primary assumption is that our experience would be utterly disordered and haphazard without the categories. Allison believes that our understanding plays a vital role in synthesizing and organizing the sensory information our senses receive into coherent objects of perception. Understanding not only serves to comprehend what we represent by our senses; it is also a creative force that structures the sensory input, resulting in our representations of objects. Within the deduction, there are only a limited number of passages that, if misinterpreted, could imply the skeptical scenario proposed by Allison. One such passage is Kant's assertion in the Critique that inner perception is empirical and eternally variable (see KrV, A107). However, this statement does not imply that our self-knowledge derived from introspection is a disordered hodgepodge of sense impressions without apperception and categories. Nonetheless, the most misleading and misinterpreted passage is found in the A-Deduction. There Kant claims that without a transcendental ground of unity, “a swarm of appearances” could fill up our souls, suggesting that without categories, our sense experience would be senseless (see KrV, A111). Upon careful examination, Kant's concept of a “swarm of appearances” is not synonymous with disordered, meaningless, manifest sensory experiences. To be sure, Kant assumes that a multitude of appearances can populate our consciousness, suggesting that objects can reveal themselves to our senses independently of experience or cognition. However, the mainstream mistakenly treats experience and cognition as mere representations of objects. Instead, experience and cognition should be understood as technical terms. They do not mean the representation of objects or the representation of objective particulars; they are better understood as forms of recognizing the existence of objects rather than as prerequisites for representing objects' existence objectively. Our understanding as human beings The epistemic grounding or justification for Kant's transcendental proposition regarding “possible experience” is rooted primarily in cognition (Erkenntnis). In particular, it relates to our cognitive awareness that what we represent as existing objectively in space and time exists objectively. This cognition serves as the basis for validating the transcendental proposition. Kant's transcendental propositions encompass several aspects, including the “Analogies of Experience” and the “Postulates of Empirical Thinking”, to name a few. However, in exploring the meaning of “the possibility of experience”, it is essential to go beyond the experience or cognition of what is represented as an object. It is insufficient to limit understanding to cognitive recognition.
Instead, there is an ontological (in the phenomenological sense of “ontological”) or existential meaning that has been overlooked. This dimension extends the meaning of “the possibility of experience” beyond cognition in the epistemic sense. When we acknowledge this ontological meaning, we gain a fuller understanding of Kant's notion of the “possibility of experience” and its significance within his transcendental framework. In Kant's “Transcendental Doctrine of Method” and his books of Logic, he distinguishes between “given concepts” and “made concepts”, highlighting that a priori given concepts cannot be defined (KrV, A728/B756). On the other hand, “made concepts” include a priori concepts of mathematics or Logic that are not acquired through experience (a posteriori intuitions), as well as concepts that refer to artifacts (Sache der Kunst) (Refl, 16: 581). The crucial feature of made concepts is that they can be defined, either by an a priori intuition of their subject matter in the case of mathematics or by a functional analysis of their meaning in the case of artifacts. For instance, the concept of a triangle can be defined as a polygon whose interior angles add up to 180 degrees. This definition provides a clear understanding of the nature of a triangle. Similarly, the concept of a shovel can be defined as an artifact created for the purpose of digging. This definition provides a functional analysis of the meaning of “shovel”, elucidating its intended use. By contrasting given and made concepts, Kant emphasizes the distinction between concepts that cannot be defined and concepts that are constructed or associated with particular objects or functions, which allows clear definitions and analysis. Transcendental propositions, understood as the application of the categories to what appears to us, are an indispensable condition for us to understand ourselves as inhabitants of an objective world. But Kant's refutation of idealism provides additional support. As we have seen, transcendental propositions are those whose truth is the condition for our ability to recognize what we represent with our senses as objects, that is, as something that exists objectively. In his refutation of idealism, Kant provides evidence that the empirically determined consciousness of my existence in time entails the consciousness of something that exists objectively in space as something persistent. Now, whether we consider Kant's refutation successful or not (that is beyond the scope of this paper), the consciousness of something existing objectively in space rests, on all of Kant's accounts, on a transcendental proposition, the First Analogy of Experience: an a priori proposition that enables us to be conscious of our existence as inhabitants of an objective, spatiotemporal world of objects and events in causal interaction.
4,408.6
2023-12-30T00:00:00.000
[ "Philosophy" ]
Introducing a bibliometric index based on factor analysis This work applies a factor analysis with VARIMAX rotation to develop a bibliometric indicator, named the Weighted Factor Index, in order to derive a new classification for journals belonging to a certain category, alternative to the one provided by the Journal Impact Factor. For this, 16 metrics from three different databases (Web of Science, Scopus, and SCImago Journal Rank) are considered. The Weighted Factor Index entails the advantage of incorporating and summarizing information from all the indicators; to test its performance, it was applied to rank journals belonging to the category Information Science & Library Science. Introduction The Journal Impact Factor (JIF), introduced by Garfield (1955), is widely considered the reference indicator to establish the quality of a journal, and is hence used as a tool in several evaluation processes. Authors such as Roberts (2017) believe its usefulness is limited and that it should be replaced by more valid and informative indicators. Moreover, Malay (2013) suggests that journal editors might increase the JIF through coercive self-citation or even by omitting citations from competing journals. To overcome such drawbacks, other bibliometric indicators have been developed, such as the Eigenfactor score, the Article influence score, or the Immediacy index, among others. Nevertheless, the JIF continues to be used, and proof of its importance is that more than 2000 articles have analysed it or used it in their titles. A number of committees for the assessment of research activity in Spain and elsewhere consider publication in journals of the first quartile, according to the ranking established by the Journal Citation Reports (JCR) developed by Web of Science, as a priority criterion for positive evaluation in almost all fields of knowledge. Through the development of mathematical models, variables that influence the JIF can be analysed, both from the standpoint of numerical values (Valderrama et al., 2018a) and in terms of the position that a journal occupies within the ranking according to its JIF (Valderrama et al., 2018b, 2020). Approaches to the explanation and prediction of the JIF through statistical regression have been addressed by Park (2015), Qian et al. (2017), Ayaz et al. (2018), Bravo et al. (2018), and Abramo et al. (2019), among others. The purpose of this work is to define a new metric that summarizes and compiles the information contained in various indicators, specifically the main ones provided by three databases: Journal Citation Reports (JIF, 5-year JIF, JIF without self-citations, Eigenfactor score, Article influence score, Immediacy index, total number of citations, citable items, open access papers during 2015-19, number of times that other sources have cited articles from the journal between 2015 and 2019, cited half-life, and citing half-life), Scopus (journal's h-index, CiteScore, and Source Normalized Impact per Paper), and the SJR index by SCImago Journal Rank. An antecedent of the bibliometric application of this technique, in a different context, was developed by Bollen et al. (2009). They performed a Principal Component Analysis (PCA) of the rankings produced by 39 existing and proposed measures of scholarly impact, calculated on the basis of both citation and usage log data.
More recently, Ortega (2020) introduced, by means of PCA, two weight-based groups of altmetric impact indicators, using two metrics for different impact dimensions. Seiler and Wohlrabe (2012) also applied PCA to obtain weights for a set of 27 bibliometric indicators in journals from the field of Economics and derived, as an application, a world ranking of economists based on the PCA. Later, Bornmann et al. (2018) again used PCA to assign weights to a set of 22 indicators to obtain a meta-ranking of economic journals. The methodology developed in the current contribution relies on Factor Analysis with VARIMAX rotation, considering three factors of the model in such a way that they explain around 82.5% of the total variability; the new indicator proposed, which we call the Weighted Factor Index (WFI), affords an alternative classification of journals with respect to those provided by the JIF or by other metrics. It is applied to rank the journals belonging to the JCR category Information Science & Library Science, with an interpretation of the meaning of each factor. Methodology Factor Analysis is a classic statistical technique, introduced by Spearman (1904), that represents a set of variables through a linear combination of underlying common and unobservable factors, plus a variable that synthesizes the specific part of the original variables. The usual procedure considers orthogonal factors, though they could also be oblique. By selecting a suitable number of factors we can reduce the dimension of the initial problem. Later, Hotelling (1933) developed a factor extraction method based on the principal component technique. In PCA each component explains a percentage of the initial variance, and there are as many components as initial variables, so that by selecting those with the greatest variance, a high percentage of the total variability can be concentrated. Mathematically, the principal components are obtained by solving a matrix eigenvalue problem, the eigenvalues representing the variances of the components. The main problem associated with the factors lies in the interpretation of their meaning. It is common to link each factor to the variables of the combination that originates it that have the highest coefficients in absolute terms. The factor matrix representing the relationship between factors and initial variables can, however, make the factors difficult to interpret. To facilitate interpretation, so-called factorial rotations are carried out. They consist of rotating the coordinate axes representing the factors until they are as close as possible to the variables on which they are saturated. Rotation thereby transforms the initial factor matrix into another, called the rotated factor matrix, which is a linear combination of the first and explains the same amount of initial variance, but is easier to interpret. Ideally, no variable should be saturated on more than one factor, and the factors should have very high weights for some variables and very low weights for the others. In practice, this situation does not arise spontaneously; it is approached by performing a rotation of the factors. Although rotation transforms the factorial matrix and changes the variance explained by each factor, the communalities are not altered. Unless there is reason to believe that the factors are correlated, the usual technique is orthogonal rotation, the most widely used method being VARIMAX, introduced by Kaiser (1958).
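As a concrete illustration of this extraction-plus-rotation step, the following minimal Python sketch performs a three-factor analysis with VARIMAX rotation; the journal-by-indicator matrix is a random stand-in, since the real JCR/Scopus/SJR data are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Stand-in for the 73 x 16 journal-by-indicator matrix used in the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(73, 16))

# Standardize first: the indicators live on very different scales.
Z = StandardScaler().fit_transform(X)

# Extract three orthogonal factors and rotate them with VARIMAX.
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
scores = fa.fit_transform(Z)      # factor scores f1, f2, f3 for each journal
loadings = fa.components_.T       # 16 x 3 rotated loading matrix

# Variance attributed to each rotated factor (sum of squared loadings),
# analogous to the eigenvalues used to weight the WFI below.
explained = (loadings ** 2).sum(axis=0)
print(explained, explained.sum() / Z.shape[1])
```

With the real data, the three retained factors would account for about 82.5% of the total variability, as reported below.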
In this work we deal with 16 metrics selected from three databases: Web of Science (WoS), Scopus, and SCImago Journal Rank. From WoS the following metrics are considered:

• Journal Impact Factor (JIF): yearly average number of citations of articles published in the last two years in a given journal
• 5-year JIF: the same as the JIF but considering a window of five years instead of two
• JIF without self-citations: the same as the JIF but removing citations from articles in the same journal where they are published
• Eigenfactor score: number of times articles from the journal published in the past five years have been cited in the JCR year, calculated by an algorithm according to which citations from highly ranked journals have a greater weight than those from poorly ranked journals, and excluding self-citations
• Article influence score (AIS): determines the average influence of a journal's articles over the first five years after publication, again excluding self-citations
• Immediacy index: average number of times an article is cited in the year it is published
• Total cites
• SCImago Journal Rank index (SJR): a measure of the scientific influence of a journal that accounts for both the number of citations received and the importance or prestige of the journals such citations come from

The journal's h-index, CiteScore, and SNIP were obtained from Scopus; the SJR from SCImago Journal Rank; and the remaining values from the JCR. The 87 journals included in the category Information Science and Library Science of the 2019 edition of the JCR (Clarivate Analytics, 2020) were considered in this study, although fourteen of them were excluded due to a lack of some of the metrics. Let us denote by f_1, f_2, …, f_16 the factors obtained from the 16 bibliometric variables considered, with respective variances (eigenvalues) λ_1, λ_2, …, λ_16, so that V = λ_1 + λ_2 + ··· + λ_16 is the total variance. The percentage of variance explained by each component is given by λ_i/V; hence, if we want to explain V up to a certain level, it is necessary to accumulate the first k components so that (λ_1 + λ_2 + ··· + λ_k)/V reaches that level. We then define the Weighted Factor Index (WFI) as

WFI = λ_1 f_1 + λ_2 f_2 + ··· + λ_k f_k,

and this will be the tool to obtain the new ranking of journals within the field. Let us observe that the WFI is a sum of uncorrelated random variables, each collecting a piece of information from the analysis, and whose importance acts as a weighting factor. The statistical calculations were carried out using SPSS (version 26) licensed by the University of Granada. Results The journals of the category appear in Table A of the Appendix, sorted in descending order according to their JIF. As a step prior to the factor analysis, the main descriptive statistics of the bibliometric indicators were calculated; they are included in Table 1. Given the nature of each indicator, the descriptive statistics take very different values. As the median is a more robust statistic than the mean, the differences observed between the different indicators are smaller there, especially at very extreme values. Perhaps the most interesting aspect is reflected in the coefficient of variation, which shows that the indicators representing the volume of citations (Eigenfactor score, times cited, number of open access articles and total citations) present a much greater relative dispersion than the rest. After performing the corresponding Factor Analysis with VARIMAX rotation, the results shown in Table 2 were obtained.
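The weighting scheme can be made concrete with a short sketch. The helper below computes the WFI from factor scores and eigenvalues; the factor values in the first row are the ones quoted for Int. J. Inf. Manag. further below, while the other rows are hypothetical journals added for illustration.

```python
import numpy as np

def weighted_factor_index(factor_scores, eigenvalues):
    """WFI = sum_i lambda_i * f_i over the k retained factors.

    factor_scores: (n_journals, k) array of factor scores.
    eigenvalues:   (k,) variances of the retained rotated factors.
    """
    return factor_scores @ np.asarray(eigenvalues)

# Eigenvalues of the three rotated factors reported below.
lam = np.array([7.714, 3.521, 1.963])

# First row: factor values quoted for Int. J. Inf. Manag.;
# the remaining rows are made-up journals for illustration.
f = np.array([[3.043, 0.916, -2.396],
              [0.512, 1.204, 0.331],
              [-0.780, -0.110, 0.950]])

wfi = weighted_factor_index(f, lam)
new_order = np.argsort(-wfi)       # descending WFI gives the new ranking
print(wfi.round(3))                # first value ~ 21.997, as quoted below
```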
It is seen that the first three factors accumulate about 82.5% of the total variance, meaning our analysis is reduced to dimension 3. The factorial weights associated with these factors are shown in Table 3. We note that the first factor is mainly related to indicators representing averages or ratios of citations, that is, normalized metrics, while the second one is associated with variables that are expressed in terms of volume or quantity. In turn, the third factor represents the half-life of citations received and made by the journal. In view of these results, the indicator that we propose, called the Weighted Factor Index (WFI), which integrates the information contained in the initial 16 metrics, is given by

(1) WFI = 7.714 f_1 + 3.521 f_2 + 1.963 f_3,

that is, the sum of the three uncorrelated factors that accumulate 82.5% of the total variance, each weighted by its own variance. The distribution of the variances among the three factors after orthogonal rotation follows a regular pattern: the second accumulates approximately half the variance of the first, and the third, half that of the second. The application of the WFI to the journals of Information Science and Library Science gives rise to the values and orders gathered in Table 4, where the value of the WFI for a concrete journal is calculated by substituting into expression (1) the value of each factor corresponding to that journal. For example, in the case of Int. J. Inf. Manag., the factor values are f_1 = 3.043, f_2 = 0.916 and f_3 = -2.396, so that WFI = 21.997. As the average of the values of each factor over the sample individuals, in our case the journals of the category, is zero, the sign of a factor for a journal indicates whether that factor is above or below the mean over the total set of journals. In this way, positive values indicate that the factor considered is higher than the average, and negative values the opposite. This can therefore cause the WFI value to be negative. Table 5 displays the bivariate Pearson and Spearman correlation coefficients for the complete set of indicators, together with the WFI. It can be seen that, in general, there is a high degree of correlation between the different metrics. Finally, Table 6 shows the 5 journals with the highest and lowest values in the three factors and their positional changes. Interpretation of results and conclusion The new index introduced in this work, called the Weighted Factor Index (WFI), stems to some extent from the JIF and other indicators correlated with it, yet it incorporates the information contained in other metrics, and that information can be compartmentalized. In fact, the WFI can be expressed as the sum of three dimensions: Factor 1 contains the information related to standardized indicators representing an average or citation rate, such as the JIF and related indices (5-year JIF and JIF without self-citations), the AIS, the journal's h-index, the SJR index, the Immediacy index, CiteScore and SNIP. The classic idea of impact can be associated with them. Factor 2 represents quantity indicators such as the Eigenfactor score, total cites, citable items, times cited in the period 2015-19, and open access papers published by the journals in this same period. None of them are normalized; they represent volume.
Factor 3 represents the long-term citation dimension insofar as it includes the half-lives of the citations received and made by each journal, two indicators that respond to the same aging model described by Brookes (1970); they reflect the opposite of the Immediacy index included in the first component, which is the indicator having the narrowest window. Both are related to the aging process of the literature. In the sum that makes up the WFI, each factor is weighted by its respective contribution to the total variance through its corresponding eigenvalue. As noted above, the weight of the first factor is approximately twice that of the second, which in turn is twice that of the third. In the reordering provided by the WFI, shown in Table 4, some noteworthy changes are observed for certain journals, although overall the reordering is highly correlated with the JIF, both quantitatively (Pearson coefficient = 0.912) and by orders (Spearman coefficient = 0.945). The greatest differences appear mainly in connection with the second and third factors. Thus, on the one hand, the journals Scientometrics (ascending from position 21 to 7), Journal of Health Communication (from 41 to 27), and Qualitative Health Research (from 25 to 12) are seen to have a strong value for the second factor. In the opposite direction we can mention Learned Publishing (descending from position 26 to 40), with one of the lowest values in the half-life dimension, along with the Journal of Organizational and End User Computing (from 38 to 51) and the Malaysian Journal of Library & Information Science (from 46 to 59). Table 6 lists the five journals with the highest and lowest values in the three dimensions. Worth highlighting is the case of the International Journal of Information Management. It occupies the first position in terms of the JIF and the first factor, but the last with respect to the third factor, due to its short cited half-life of only 4.6 years, which makes it drop one position in the WFI order. It is displaced from the first position by MIS Quarterly, owing to the strength the latter presents in factors 1 and 3. Similarly, the Journal of Strategic Information Systems falls from position 4 to 9, also presenting a relatively low value in the second factor. There are two journals in the lower-middle zone of the ranking whose positions are significantly altered when comparing the two ranking criteria. The first is Information Research, which occupies the last position in terms of Factor 1 but is fourth with respect to Factor 2, allowing it to rise from position 63 (according to the JIF) to 52 (according to the WFI). The second is Social Science Information sur les Sciences Sociales, among the last five in terms of Factor 1 but occupying the second position in Factor 2, thereby rising from position 59 in the JIF ranking to position 47 according to the WFI. Based on the stated interpretation of the factors, an alternative to the approach proposed in this work, which integrates the three factors in a single index, would consist of considering only some of them, depending on the purpose of the analysis. Thus, for example, if when classifying journals we were interested only in the citation rate, we would consider only the first factor; or, if the interest were focused on the volume of citations, the analysis would be carried out taking into account the second factor.
An interesting point to consider is the claim that the more articles a journal publishes, the higher its impact factor, i.e., that there is a direct linear relationship between journal production and the impact factor (Rousseau & Van Hooydonk, 1996). This point can be debated and, in fact, in this work standardized indicators such as the JIF, the AIS or the Immediacy index (assigned to the first factor) are combined with others that are not, such as the Eigenfactor score, times cited or the number of open access papers (assigned to the second factor). Although this may seem like an erroneous methodological approach, Factor Analysis itself takes care, as we have seen, of configuring the model by giving each of the factors a homogeneous composition. A similar situation occurs when estimating a regression model for a certain response variable, where the explanatory variables can collect very diverse information and can even be qualitative. In conclusion, the indicator introduced in this article, called the Weighted Factor Index, allows the information from various metrics to be aggregated into a single indicator through terms that are not correlated with each other, so that such information does not overlap. Therefore, a more reliable and complete ranking of journals within a certain category can be obtained than when using indicators that are configured in isolated fashion. Of course, the results obtained in this article correspond to the field of Information Science and Library Science and may differ when studying other subject areas. Extrapolation to other categories included in the JCR would be interesting and will be approached in subsequent research efforts. Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/s11192-021-04195-4.
4,400.2
2021-11-13T00:00:00.000
[ "Computer Science" ]
Sparsity-based multi-height phase recovery in holographic microscopy High-resolution imaging of densely connected samples such as pathology slides using digital in-line holographic microscopy requires the acquisition of several holograms, e.g., at >6-8 different sample-to-sensor distances, to achieve robust phase recovery and coherent imaging of the specimen. Reducing the number of these holographic measurements would normally result in reconstruction artifacts and loss of image quality, which would be detrimental especially for biomedical and diagnostics-related applications. Inspired by the fact that most natural images are sparse in some domain, here we introduce a sparsity-based phase reconstruction technique implemented in the wavelet domain to achieve at least a 2-fold reduction in the number of holographic measurements for coherent imaging of densely connected samples, with minimal impact on the reconstructed image quality, quantified using a structural similarity index. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field-of-view of ~20 mm2 using 2 in-line holograms that are acquired at different sample-to-sensor distances and processed using sparsity-based multi-height phase recovery. This new phase recovery approach that makes use of sparsity can also be extended to other coherent imaging schemes, involving e.g., multiple illumination angles or wavelengths, to increase the throughput and speed of coherent imaging. Lensfree digital in-line holographic microscopy 1,2 is a rapidly emerging computational imaging technique, which allows highly compact and high-throughput microscope designs. It is enabled by leveraging constant advances and improvements in microscopy and image reconstruction techniques as well as in image sensor technology and computational power, mostly driven by the consumer electronics industry. Its current implementations can achieve gigapixel-level space-bandwidth products by employing cost-effective and field-portable imaging hardware [1][2][3][4]. In order to keep the imaging setup as compact as possible, the on-chip holographic image acquisition platform employs an in-line holography geometry 5, where the scattered object field and the un-scattered reference beam co-propagate in the same direction, and the intensity of the interference pattern between these two beams is recorded by a digital image sensor array. Because the recorded hologram only contains the intensity information of the complex optical field, direct back-propagation of this in-line hologram to the object plane will generate a spatial artifact called the twin image on top of the object's original image. Unlike an off-axis holographic imaging geometry 5, where the twin image artifact can be robustly removed by angled wave propagation, in-line holography is more susceptible to this twin-image-related artifact term 5. The negative impact of this artifact on image quality is further amplified owing to the small sample-to-sensor distances that are used in on-chip implementations of digital in-line holographic microscopy, where the sample field-of-view is equal to the sensor active area 2,3,6-10. The twin image artifact in digital in-line holography can also be computationally eliminated by imposing physical constraints that the twin image does not satisfy. Based on such constraints, a twin-image-free object can be retrieved through, e.g., an iterative error reduction algorithm 7,8.
One of the earliest explored constraints for this purpose is the object support, where a threshold defines a 2D object mask and the back-propagated field on the object plane outside the mask is considered as noise and iteratively removed 1,8. Although this simple approach requires a single hologram measurement, the constraint works better for relatively isolated objects, and its implementation is challenging for dense and connected samples, such as pathology slides, which are of significant importance in biomedical diagnosis. To address this phase retrieval problem 5,7,8 of in-line holography, it is common to apply measurement diversity, which can include, e.g., sample-to-sensor distances 9-12, illumination angles 13,14 and wavelengths 14,15. However, previous efforts have shown that for imaging of spatially connected and dense biological objects such as pathology slides and tissue samples, the measurements always need to be oversampled in one domain, with several additional images acquired with different physical parameters. For instance, in imaging pathology slides using multi-height measurements, usually 6-8 holograms at different sample-to-sensor distances are required to obtain high-quality and clinically relevant microscopic reconstructions (amplitude and phase images) of the object 10, i.e., the number of measurements is 3-4 times the number of variables, namely the amplitude and phase pixels that need to be retrieved in a complex image of the sample. This increase in the number of measurements also increases the data acquisition and processing times, limiting the throughput of the imaging system. Inspired by the fact that the images of most natural objects, such as biological specimens, can be sparsely represented in some wavelet domain 16, here we introduce the use of a sparsity constraint in the wavelet domain, improving multi-height based phase retrieval, to significantly reduce the required number of measurements while maintaining the quality of the reconstructed phase and amplitude images of the objects. We experimentally demonstrate that for densely connected biological samples, such as Papanicolaou smears and breast cancer tissue slides, 2 in-line holograms with different sample-to-sensor distances are sufficient for image reconstruction, when sparsity constraints are applied during the iterative reconstruction process. The resulting reconstructed object image quality is comparable to the ones that can be reconstructed from 4-8 different measurements using conventional multi-height phase recovery methods 2,9,10. This means that, by making use of a sparsity constraint in our reconstruction, we achieved at least a 2-fold decrease in the number of holograms that need to be acquired. Furthermore, if we consider the number of unknown variables to be 2N (i.e., the amplitude and phase images of the object, each with N pixels), the number of measurements is also 2N in this sparsity-based multi-height phase recovery approach, which means the physical measurement space is no longer oversampled, unlike in the previous multi-height phase and image recovery approaches. In fact, the additional sparsity constraint in the wavelet domain enables us to go below the 3N-2 measurement limit that has previously been shown to be the smallest number of measurements needed for robust phase retrieval 17,18.
Note that the sparsity constraint in our approach is quite different from the previously reported compressive holography efforts 19-22. These earlier reports imaged isolated objects and showed that free-space propagation in itself is an extremely efficient encoding mechanism for compressive sensing, allowing the inference of higher-dimensional data from traditionally undersampled projections 23-25. Here, we demonstrate the ability of wavelet-domain sparsity encoding for multi-height-based phase recovery and demonstrate its success for clinically relevant dense samples, including highly connected pathology slides of breast tissue and Papanicolaou (Pap) smears that are imaged over a large field-of-view of ~20 mm2, equal to the active area of our image sensor chip. In fact, as a result of this significant difference in the density and connectivity of the object to be imaged, the number of measurements that we need without losing image quality is two, rather than a single hologram. This sparsity-based phase recovery approach can also be extended to other coherent microscopy schemes, involving e.g., multi-angle 13 or multi-wavelength-based 26 phase retrieval. Furthermore, this technique can potentially be combined with a recently introduced phasor approach for high-resolution and wide field-of-view imaging 27 and/or multiplexed color imaging 28 to further reduce the number of measurements in these holographic microscopy approaches. Enabled by novel algorithmic processing, this sparsity-based holographic image reconstruction technique can be regarded as another step forward in making lensfree on-chip holography more efficient, higher in throughput and more appealing in microscopy-related applications. Methods Sparse object representation. Image recovery based on sparsity constraints is a paradigm which has been applied to many different imaging-related tasks such as denoising, inpainting, deblurring, compression and compressed sensing/sampling 29. In this framework, we wish to recover the discrete approximation of a complex sample, f, where 2N is the total number of pixels required to represent this complex-valued sample/object with independent phase and amplitude channels, each having N pixels 5. The main assumption of sparse image recovery is that the sought signal can be written as a linear combination of a small number of basis elements, f = Ψθ = Σ_i θ_i ψ_i, such that the number of significant coefficients of θ = (θ_1, …, θ_N) that are required for accurate signal representation, S, is much smaller than N, i.e., S ≪ N. For the dense biological samples, including tissue sections and smears, that we used in this manuscript, we found that the reconstructed coherent imaging results acquired by using 8 different sample-to-sensor distances (which we consider as our clinically relevant reference standard, as confirmed in an earlier study 10) can be accurately represented using a very small S, i.e., ρ = S/N ≈ 0.07-0.15, through a mathematical transformation such as the CDF 9/7 wavelet transform 30,31, which is one of the leading non-adaptive image compression techniques, also applied in the JPEG-2000 image compression standard. This observation is used as a loose constraint on the number of sparse coefficients to be utilized during our iterative object reconstruction process, which will be detailed below. Another dimension of sparsity-based image reconstruction involves nonlinear operators which can be applied to the signal.
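The sparsity level ρ can be estimated directly from a reference reconstruction. The sketch below does this with PyWavelets, whose 'bior4.4' filter pair corresponds to the CDF 9/7 transform; the input image is a synthetic stand-in for a reconstructed amplitude image, and the 99.9% energy threshold is an illustrative choice.

```python
import numpy as np
import pywt

def sparsity_ratio(img, wavelet="bior4.4", level=4, energy=0.999):
    """Fraction of wavelet coefficients (rho = S/N) needed to retain
    the given share of the total coefficient energy."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)
    mags = np.sort(np.abs(arr).ravel())[::-1]     # largest first
    cum = np.cumsum(mags ** 2)
    S = int(np.searchsorted(cum, energy * cum[-1])) + 1
    return S / arr.size

# Smooth synthetic stand-in for a reconstructed tissue image.
rng = np.random.default_rng(1)
img = rng.normal(size=(512, 512)).cumsum(axis=0).cumsum(axis=1)
print(sparsity_ratio(img))   # dense tissue samples gave rho ~ 0.07-0.15
```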
One of the common sparsity-promoting operators used in imaging-related applications is the total variation norm 32, TV(f), which quantifies the magnitude of the gradient of the signal:

(1) TV(f) = Σ_{k,l} sqrt( |f_{k+1,l} − f_{k,l}|² + |f_{k,l+1} − f_{k,l}|² ),

where k and l are pixel indexes in the reconstructed image. This operator has been shown to be extremely useful in image processing tasks such as denoising, deblurring and compressed sensing, specifically with holographically acquired data 19,20,33,34. A total variation norm based constraint aims at preserving the sharp boundaries of the object, with smooth spatial textures confined between them. In this work, we apply the total variation operator in the wavelet domain in order to suppress noise within our iterative reconstruction process, which will be detailed below. Lensfree on-chip imaging setup. A schematic of our lensfree holographic on-chip microscope is shown in Fig. 1. A broadband illumination source (WhiteLase micro, Fianium Ltd.) is filtered by an acousto-optic tunable filter to output partially coherent light within the visible spectrum with a spectral bandwidth of ~2-3 nm. The light is coupled into a single-mode optical fiber, and the light emitted from the fiber tip propagates a distance of ~5-15 cm before impinging on the sample plane, which is mounted on a 3D-printed sample holder. The sample is placed ~300-600 μm above the active area of a CMOS image sensor chip (IMX081, Sony, 1.12 μm pixel size, 16.4 Megapixels). In this on-chip imaging configuration, the sample field-of-view is equal to the sensor chip active area, i.e., ~20 mm2. The image sensor is attached to a positioning stage (MAX606, Thorlabs, Inc.), which is used for alignment, image sensor translation (to perform pixel super-resolution) and the acquisition of several holograms with different sample-to-sensor distances, z_i. The acquisition of several images at different sample-to-sensor distances generates a series of measurement constraints which are used for multi-height-based phase recovery, detailed in the next sub-sections. A custom-developed LabVIEW program coordinates the different components of this setup during the entire image acquisition stage. Hologram acquisition and pre-processing. In our experiments, a series of wide field-of-view and low-resolution (1.12 μm pixel size, before the pixel super-resolution step) holograms were acquired at each sensor-to-sample distance. For a given illumination wavelength λ, refractive index n, and sample-to-sensor distance z_i, the hologram formation at the sensor plane can be written as:

(2) I_i(x, y) = | ASP[ A + o(x, y); z_i ] |²,

where o is the complex-valued object function, A is the amplitude of the reference (plane) wave, and the ASP[·; z_i] operator is the angular-spectrum-based free-space propagation of the illuminated object over the distance z_i. It can be calculated by taking the spatial Fourier transform of the input signal and then multiplying it by the following filter (defined over the spatial frequency variables υ_x, υ_y):

(3) H(υ_x, υ_y; z_i) = exp[ j (2π n z_i / λ) sqrt( 1 − (λ υ_x / n)² − (λ υ_y / n)² ) ] for (λ υ_x / n)² + (λ υ_y / n)² ≤ 1, and 0 otherwise,

which is then followed by an inverse 2D Fourier transform. The hologram intensity, I_i(x, y), is sampled by the image sensor chip with a sampling interval that corresponds to the pixel pitch. In order to generate higher-resolution holograms, the stage that holds the sensor chip was programmed to move the sensor laterally on a 6 × 6 grid, and at each grid point a lower-resolution hologram was acquired. Applying a conjugate-gradient-based pixel super-resolution method 35,36 on these 6 × 6 holograms results in a new hologram with an effective pixel size of ~0.37 μm.
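A minimal numpy sketch of the ASP operator of Equations (2) and (3) is given below; the grid size, wavelength and distance are illustrative values close to those quoted for the setup, and evanescent frequencies are simply zeroed out.

```python
import numpy as np

def asp_propagate(field, z, wavelength, n, dx):
    """Angular-spectrum free-space propagation of a complex 2D field
    over a distance z; dx is the pixel pitch, n the refractive index."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Argument of the square root in Equation (3); negative values
    # correspond to evanescent waves and are suppressed (H = 0).
    arg = (n / wavelength) ** 2 - FX ** 2 - FY ** 2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative use: propagate a field by ~300 um at an assumed 530 nm
# wavelength, with the ~0.37 um effective (super-resolved) pixel size.
field = np.ones((256, 256), dtype=complex)
sensor_plane = asp_propagate(field, z=300e-6, wavelength=530e-9, n=1.0,
                             dx=0.37e-6)
```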
Next, the process is repeated for N_z different sensor-to-sample distances z_i in order to create the measurement diversity required for standard phase recovery. Following the hologram acquisition, these super-resolved holograms are digitally aligned with respect to each other and the estimated sample-to-sensor distance is refined using an auto-focusing algorithm 37. Initial phase estimation using the Transport-of-Intensity Equation (TIE). Previous reports 10 have demonstrated that the image reconstruction process using lensfree multi-height holograms can be substantially accelerated by solving the TIE 38-40 to obtain an initial phase guess. The TIE is a deterministic phase retrieval method that generates a solution from a set of two or more diffraction patterns or holograms acquired at different sample-to-sensor distances. Unfortunately, this analytical solution is a lower-resolution method and cannot in itself generate a high-resolution object reconstruction in lensfree on-chip microscopy, which is why it is followed by an iterative phase refinement method, as detailed below. Iterative multi-height phase recovery. The multi-height-based phase recovery approach 9,10,41 is an iterative error-reduction algorithm which uses the holograms acquired at various sample-to-sensor distances as a set of physical constraints to correct the estimated phase in each iteration. In this algorithm, the lower-resolution phase result obtained by the TIE method is used as the initial phase term to accompany the field amplitude that is acquired at the plane closest to the sensor chip. This newly formed complex field is then numerically propagated to the plane of the next sample-to-sensor distance, where its amplitude is averaged with the square root of the second hologram, and the phase is retained for the next step of the iteration. The same procedure is repeated for all the other acquired holograms at different sample-to-sensor distances, and then the process is repeated in a reverse fashion, i.e., from larger sample-to-sensor distances toward smaller ones, all the way back to the first plane, the one closest to the sensor chip. This iterative algorithm is terminated after typically ~10-30 iterations, or once a convergence criterion has been achieved. Following the termination of the iterative process, the refined complex field at the plane closest to the sensor is numerically back-propagated to the object plane. This complex-valued result, o_0, is used as the initial object guess and is fed to the sparsity-constrained reconstruction algorithm, which is discussed in the following sub-section. Sparsity-based multi-height phase recovery. The sparsity-based multi-height phase recovery algorithm, as summarized in the right panel of Fig. 2, can be described as follows: Step 1 - Perform numerical forward propagation of the current guess, o^q, to the i-th hologram plane, obtaining the field estimate E_i^q. Step 2 - Update the magnitude by averaging it with the measured amplitude, |E_i^q| ← α sqrt(I_i) + (1 − α) |E_i^q|, and keep the phase of E_i^q. The typical range of values for our update parameter is α ~ 0.5-0.9. Step 3 - Perform backward angular spectrum propagation of the updated complex field amplitude. Step 4 - Project the object function onto the sparsifying (wavelet) domain, which results in a coefficient set θ̃_i^q = Ψ^T o_i^q, where (·)^T refers to the transpose operation. Step 5 - Apply the object sparsity constraint to the result of Step 4: (a) Update the sparse support area for the magnitude by keeping the largest S coefficients and updating the remaining (N − S) coefficients.
To achieve this, we first define the sparsity support region Λ, which contains the most significant S coefficients within θ̃_i^q. (b) Keep all the elements within Λ unchanged, i.e., θ_i^{q+1}(Λ) = θ̃_i^q(Λ). (c) Reduce the error outside of the sparse support region, defined by Λ^C, by performing the following relaxed update: θ_i^{q+1}(Λ^C) = (1 − β) θ̃_i^q(Λ^C), where β is a relaxation coefficient, e.g., β ~ 0.7-0.9. (d) Perform total variation denoising in the wavelet domain to achieve two aims: (i) smooth out the regions where the low-frequency components of the twin image are more dominant; and (ii) preserve the edges (details) of the objects in the higher frequencies of the image, helping to reduce the measurement noise as well as the self-interference related terms. The total variation denoising algorithm can be implemented using either the original formulation of Rudin-Osher-Fatemi 31 or Chambolle's algorithm 42. We used:

(4) θ̂ = argmin_θ { ‖θ − θ_i^{q+1}‖₂² + λ TV(θ) },

where θ_i^{q+1} is the result of Step 5(c), θ is the variable of the denoising algorithm, TV(θ) is the total-variation norm defined in Equation (1), and ‖·‖₂² is the l2-norm, which serves as a fidelity term. The parameter λ is a tuning parameter, which can be adaptively refined 32 or selected a priori, and it controls the tradeoff between data fidelity and denoising. Generally, we perform ~1-2 iterations in order not to introduce spatial blurring into the reconstruction. Also, since large values of λ favor blurring, λ should be carefully chosen 43; we typically set it to ~0.002-0.01 for intensity-normalized biological samples. Step 6 - Update the object estimate by applying an inverse wavelet transform to the solution of Equation (4), to return to the object space for the next iteration. Following Step 6, Step 1 is repeated for all the N_z acquired holograms at different sample-to-sensor distances. Following the update of the solution, o_{N_z}^q, with the magnitude corresponding to the hologram measured at the N_z-th plane, we proceed to the next iteration by incrementing q ← q + 1. This algorithm is repeated for ~100 iterations, or until a convergence criterion is met, for example when an update smaller than a predefined tolerance between two consecutive iterations is achieved in either the object or hologram planes. The entire reconstruction algorithm is implemented using Matlab on a computer with an Intel Xeon E5-2667v3 3.2 GHz CPU (central processing unit) and 256 GB of RAM (Random Access Memory), running the Windows Server 2012 R2 operating system. For a field-of-view of 1.1 × 1.1 mm2, the entire reconstruction process took ~28 minutes for N_z = 2 and 6 × 6 = 36 raw holograms for pixel super-resolution at each height. Implementation of the presented algorithm using a dedicated parallel computing platform and programming environment on a GPU (Graphics Processing Unit) should yield a significant speed improvement 44. Evaluation of the image reconstruction quality. Our image reconstruction quality assessment is based on (i) visual inspection of the results in comparison to our clinically relevant reference coherent images 10 and (ii) quantitative application of the structural similarity index (SSIM) 45 to the reconstructed object image. The SSIM has been shown to be more consistent with the human visual system than the peak signal-to-noise ratio (PSNR) and mean square error (MSE) based image evaluation criteria 45. The SSIM quantifies the changes in structural information by inspecting the relationship among the image contrast, luminance, and structure components.
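The wavelet-domain constraint of Steps 4-6 can be sketched compactly in Python. The snippet below is a loose illustration, not the authors' implementation: it treats the real and imaginary channels separately, uses the shrinkage form of Step 5(c) given above (itself a reconstruction), substitutes scikit-image's denoise_tv_chambolle for the paper's TV denoiser, and assumes image dimensions compatible with the chosen decomposition level.

```python
import numpy as np
import pywt
from skimage.restoration import denoise_tv_chambolle

def sparsity_step(obj, S, beta=0.8, tv_weight=0.005,
                  wavelet="bior4.4", level=4):
    """Apply the wavelet-domain sparsity constraint (Steps 4-6)
    to a complex object estimate `obj`."""
    def constrain(comp):
        coeffs = pywt.wavedec2(comp, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        # Step 5(a,b): support = the S largest-magnitude coefficients.
        thresh = np.partition(np.abs(arr).ravel(), -S)[-S]
        outside = np.abs(arr) < thresh
        # Step 5(c): relaxed shrinkage outside the support region.
        arr[outside] *= (1.0 - beta)
        # Step 5(d): TV denoising performed in the wavelet domain.
        arr = denoise_tv_chambolle(arr, weight=tv_weight)
        # Step 6: inverse wavelet transform back to the object space.
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        return pywt.waverec2(coeffs, wavelet)
    return constrain(obj.real) + 1j * constrain(obj.imag)
```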
The contrast is evaluated as the standard deviation of an image,

σ_p = sqrt( (1/(N − 1)) Σ_{i,j} ( U_p(x_i, y_j) − μ_p )² ),

where μ_p is the luminance (mean) of the p-th image, U_p. The structural measurement is estimated using the cross-covariance between the two images that are compared to each other:

σ_{1,2} = (1/(N − 1)) Σ_{i,j} ( U_1(x_i, y_j) − μ_1 ) ( U_2(x_i, y_j) − μ_2 ).

Based on these definitions, the SSIM between two images is given by:

SSIM(U_1, U_2) = ( (2 μ_1 μ_2 + C_1)(2 σ_{1,2} + C_2) ) / ( (μ_1² + μ_2² + C_1)(σ_1² + σ_2² + C_2) ),

where C_1, C_2 are stabilization constants, which prevent division by a small denominator. These coefficients 45 are selected as C_1 = (K_1 L)² and C_2 = (K_2 L)², with K_1, K_2 ≪ 1, and L is the dynamic range of the image, e.g., 255 for an 8-bit grayscale image. Sample Preparation. A de-identified Pap smear slide was provided by the UCLA Department of Pathology (Institutional Review Board no. 11-003335) using the ThinPrep® preparation. De-identified Hematoxylin and Eosin (H&E) stained human breast cancer tissue slides were acquired from the UCLA Translational Pathology Core Laboratory. We used existing and anonymous specimens, where no subject-related information is linked or can be retrieved. Results and Discussion In order to experimentally test the proposed sparsity-based image reconstruction algorithm, we acquired a set of 8 super-resolved holograms at different sample-to-sensor distances (~300-600 μm) corresponding to stained Papanicolaou (Pap) smears as well as H&E stained breast cancer tissue slides. First, we applied the multi-height based iterative phase recovery algorithm on all of these N_z = 8 pixel super-resolved holograms in order to obtain clinically relevant 10 baseline reference images, which are shown in Figs 3(a,d) and 4(a,d). When we attempt to reconstruct the images of the same samples using N_z = 2 holograms acquired at different sample-to-sensor distances with the same iterative multi-height phase retrieval algorithm, spatial artifacts appear, as illustrated in Figs 3(b,e) and 4(b,e). However, using the same 2 holograms with the proposed sparsity-constrained multi-height phase retrieval algorithm, the reconstruction results, shown in Figs 3(c,f) and 4(c,f), improve significantly and become comparable in image quality to the reference images. In order to quantify our reconstruction quality, we also calculated the SSIM values for these images, with the results summarized in Table 1. For the Pap smear sample, the standard multi-height reconstruction of N_z = 2 pixel super-resolved holograms gave an SSIM value of 0.66, while the sparsity-based reconstruction using the same measurements gave an improved SSIM value of 0.89. Similarly, for the H&E stained breast cancer pathology slide, the SSIM value for the multi-height reconstruction (N_z = 2) was 0.73, while for the sparsity-based reconstruction the SSIM value increased to 0.83. Similar improvements in SSIM values using sparsity-based multi-height phase recovery were also observed for N_z = 4, as shown in Table 1. These results illustrate that we gain at least a 2-fold imaging speed improvement with a reduced number of measurements compared to the standard multi-height phase recovery approach, without compromising the spatial resolution or the field-of-view of our on-chip microscope. This, in turn, reduces the data bandwidth and storage related requirements, which is especially important for field-portable implementations of lensfree microscopy tools. The presented sparsity-based multi-height phase retrieval method could also work without using pixel super-resolution. Nevertheless, the usage of and the need for pixel super-resolution depend on the targeted resolution in the reconstructed image.
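For the quantitative comparison itself, an off-the-shelf SSIM implementation suffices; a minimal sketch using scikit-image is shown below, with random stand-ins for the reference (8-height) and 2-height reconstructions.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Stand-ins: `ref` plays the role of the 8-height reference amplitude
# image, `rec` a 2-height reconstruction of the same field of view.
rng = np.random.default_rng(2)
ref = rng.random((512, 512))
rec = np.clip(ref + 0.05 * rng.normal(size=ref.shape), 0.0, 1.0)

# data_range corresponds to the dynamic range L in the SSIM definition.
score = structural_similarity(ref, rec, data_range=1.0)
print(f"SSIM = {score:.2f}")   # values of 0.83-0.89 are reported above
```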
In this work, we used a CMOS image sensor with a pixel size of 1.12 μm, and to achieve a resolution comparable to a conventional benchtop microscope with, e.g., a 40X objective lens, we used the pixel super-resolution framework to digitally create effectively smaller pixels. As an alternative to lateral shifts between the hologram and the sensor array planes (which can be achieved by, e.g., source shifting, multi-aperture illumination, sample shifting or sensor shifting), wavelength scanning 26 over a narrow bandwidth (e.g., 10-30 nm) can also be used for a rapid implementation of pixel super-resolution, which also has the advantage of creating a uniform resolution improvement across all directions on the sample plane. While the presented approach has been demonstrated for multi-height holographic imaging and phase recovery, other types of physical measurement diversity can also be utilized in the same sparse signal recovery framework, such as multi-angle illumination and wavelength scanning 26,27, which might benefit various applications in the quantitative imaging of live biological samples, such as growing colonies of bacteria, fungi or other types of cells. It is also important to note that, in addition to the CDF 9/7 wavelet transform that we used in this work, other wavelet transforms can also be used in the same image reconstruction method. As described in the Methods Section, an effective way of doing this can be to apply several wavelet transforms to a known database of pre-acquired images, thus finding the best representation and obtaining a good approximation of the number of coefficients required for sparse signal recovery. Adaptive methods, such as dictionary learning 46 and optimal basis generation 47, which may yield over-complete linear signal representations, could also be considered within the same framework 48. Since one of the main goals of this work is to use fewer raw measurements while also preserving image quality, it is important to choose our measurements in a way that helps us converge to the correct result. Of great importance for a successful reconstruction is a careful initialization of the algorithm. Specifically, it has been shown that when the measurement operator is given by free-space propagation, special emphasis needs to be put on the low spatial frequencies 20, which contain most of the information about the sample, and therefore the low-frequency wavelet bands cannot be considered sparse. Since the low-frequency phase curvature changes slowly as a function of the sample-to-sensor distance 39, the distance between the first two measurements should be large enough to capture changes in these low frequencies. However, the distance should not be too large, since the signal-to-noise ratio (SNR) also decreases in proportion to the sample-to-sensor distance. Practically, we found that an axial distance of ~100-150 μm between the two holographic measurements gives the best result for our initialization. On the other hand, to better resolve high spatial frequencies, we should choose a small sample-to-sensor distance, typically ~300 μm for our setup. The closer the acquired hologram, the more suitable it is for sensing sparse frequency components 24, while as we take our measurements further away from the object, the reconstruction favors sparse objects, such as point sources and scatterers. Conclusions We developed a sparsity-based phase reconstruction algorithm for digital in-line holographic imaging of densely connected samples.
This algorithm is capable of reconstructing amplitude and phase images of biological samples using only 2 holograms acquired at different sample-to-sensor distances, at least 2-fold fewer than the number of holograms utilized in previous multi-height phase retrieval approaches. Stated differently, using this sparsity-based holographic phase retrieval method, we demonstrated that the number of reconstructed pixels (i.e., 2N, including the phase and amplitude channels of the sample) can be made equal to the number of measured intensity-only pixels. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field-of-view of ~20 mm2. This sparsity-based phase retrieval method is also applicable to other high-resolution holographic imaging techniques involving, e.g., multiple illumination angles or wavelengths, both of which can be used to enhance the space-bandwidth product of a coherent holographic microscope.
6,203.4
2016-11-30T00:00:00.000
[ "Engineering", "Medicine", "Physics" ]
Genetic structure of populations of Mugil cephalus using RAPD markers The genetic structure of four populations of Mugil cephalus from Gujarat, Maharashtra, Andhra Pradesh and Tamil Nadu in India was studied using randomly amplified polymorphic DNA (RAPD) markers. Five selected primers provided distinct and consistent RAPD profiles in all four populations. Bands in the range of 400 to 1200 bp were scored for consistent results. The RAPD profiles generated by all five primers revealed varying degrees of polymorphism, ranging from 50.76% (primer E03) to 72.41% (primer E05). Nei's genetic diversity (h) among the four populations varied from 0.3717 ± 0.1460 (Gujarat population) to 0.5316 ± 0.1720 (Maharashtra population). Nei's highest genetic distance (0.8556) was observed between the Tamil Nadu and Gujarat populations. INTRODUCTION Information on the genetic structure of fish is useful for optimizing the identification of stocks, stock enhancement, breeding programs, management of sustainable yield and the preservation of genetic diversity (Dinesh et al., 1993; Gracia and Benzie, 1995; Tassanakajon et al., 1997, 1998). DNA polymorphisms have been extensively employed as a means of assessing genetic diversity in aquatic organisms (Ali et al., 2004). Randomly amplified polymorphic DNA (RAPD) fingerprinting offers a rapid and efficient method for generating a new series of DNA markers in fishes (Foo et al., 1995). RAPD analysis is a technique based on the polymerase chain reaction (PCR) amplification of discrete regions of the genome with short oligonucleotide primers of arbitrary sequence (Welsh and McClelland, 1990; Williams et al., 1990). This method is simple and quick to perform and, most importantly, no prior knowledge of the genetic make-up of the organism is required (Hadrys et al., 1992). This technique has been used extensively to detect genetic diversity in plants (Williams et al., 1993), animals (Cushwa and Medrano, 1996) and microbes (Carretto and Marone, 1995). It has also been used to evaluate genetic diversity in various fish species such as tilapia (Naish et al., 1995), striped bass (Bielawski and Pumo, 1997), grouper (Asensio et al., 2002), murrel (Nagarajan et al., 2006), Clarias batrachus (Garg et al., 2010), Eutropiichthys vacha (Chandra et al., 2010) and Plectropomus maculatus. The striped mullet, Mugil cephalus, is the most widely distributed of the mullets and of considerable aquaculture importance. It is euryhaline and also fairly resistant to changing temperatures (Chondar, 1999). This species is one of the most popular warm water fishes cultured in tropical and subtropical regions (Pillai et al., 1984). Indian aquaculture is mainly restricted to carps and shrimps. To achieve higher aquaculture production, species diversification must be prioritized. M. cephalus is one of the candidate species for diversification in the aquaculture sector due to its euryhaline nature and the easy availability of seed along the coasts. Therefore, studying genetic variation in M. cephalus could provide baseline data for identifying stocks with superior traits for breeding programs and also for formulating management strategies for the sustainable utilization of the species.
Despite its aquaculture importance, no information is available on the genetic structure of this species. Hence, the present study was carried out to ascertain the genetic stock structure of M. cephalus populations using versatile RAPD markers. Extraction of genomic DNA Total genomic DNA was isolated from muscle tissue according to the DNA extraction method of Williams et al. (1990). Tissue (150 to 200 mg) was cut into smaller pieces in the presence of 1 ml lysis buffer (50 mM Tris-HCl, pH 8.0, 10 mM ethylenediaminetetra acetate (EDTA), 100 mM NaCl) and transferred to a 2 ml Eppendorf tube. Then, proteinase K (300 µg/ml), sucrose (1%) and sodium dodecyl sulfate (SDS) (2%) were added to the tube. After incubation at 60°C, the lysate was extracted with phenol and chloroform/isoamyl alcohol. The DNA was precipitated with isopropanol, and the pellet was washed with 70% ethyl alcohol, dried, and suspended in TE buffer (50 mM Tris-HCl, 10 mM EDTA). DNA quality and quantity were determined by 1.0% agarose gel electrophoresis and a biophotometer (Eppendorf, Germany). RAPD-PCR amplification and product analysis Five random primers (E02 to E06; Operon, USA) were selected, based on the presence of intense, well-distinguished and reproducible bands, for further analysis. PCR reactions were performed in a 25 µl volume containing 200 µmol/l of each dNTP, 2 mmol/l MgCl2, 1x standard Taq polymerase buffer, 0.2 µmol/l random primer, 40 ng genomic DNA, and 0.75 U Taq polymerase. PCR reactions were carried out with an initial denaturation of 4 min at 94°C, followed by 35 cycles of denaturation for 30 s at 94°C, annealing for 45 s at 36°C and extension for 2 min at 72°C, with one final 8 min cycle at 72°C for the final extension. Amplified products were separated on a 1.5% agarose gel stained with ethidium bromide, run in 1x TAE buffer at a constant 80 V (Sambrook and Russell, 2001). The gels were imaged using a Syngene gel documentation system (USA). Data analysis Only the reproducible and intense bands ranging from 400 to 1200 bp were scored, to maintain consistency across the samples of the different populations. The bands observed in each lane were compared with all the other lanes of the same gel, and reproducible bands were scored as present (1) or absent (0). Fragment sizes were estimated based on the 100 bp Plus DNA Ladder (Bangalore Genie, India) according to the algorithm provided in the Gene Tools software. Data were analyzed using the POPGENE version 1.31 software (Yeh et al., 1999), which was also used to construct dendrograms based on genetic distances (Nei, 1972; Sneath and Sokal, 1973; Reynolds et al., 1983). The robustness of the dendrogram was tested using 1000 bootstraps. RESULTS AND DISCUSSION RAPD profiles were generated for four geographically different populations of M. cephalus, from Navsari (Gujarat), Ratnagiri (Maharashtra), Kakinada (Andhra Pradesh) and Chennai (Tamil Nadu). RAPD fingerprinting of a total of 200 individuals of M. cephalus was carried out using the optimized RAPD-PCR conditions for the five selected primers. The polymorphism pattern obtained for the four populations is shown in Table 1. All five selected primers produced distinct and consistent RAPD profiles for M. cephalus from all four populations (Figures 1 and 2). The primers generated bands in the range of 200 to 2,200 bp. However, only the repeatable major bands ranging from 400 to 1200 bp were scored, for consistency. A total of 142 reproducible bands were obtained across the four populations for the five primers (Table 1).
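The presence/absence scoring lends itself to a simple binary matrix representation. The toy sketch below (hypothetical data, not the study's gels) shows how the percentage of polymorphic loci per population can be computed from such a matrix.

```python
import numpy as np

# Rows = individuals of one population; columns = scored RAPD loci
# (reproducible bands between 400 and 1200 bp); 1 = present, 0 = absent.
rng = np.random.default_rng(3)
bands = (rng.random((50, 30)) < 0.6).astype(int)

# A locus is polymorphic when both band states occur in the population.
freq = bands.mean(axis=0)
polymorphic = (freq > 0) & (freq < 1)
print(f"Polymorphic loci: {100 * polymorphic.mean():.1f}%")
```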
Generally, the number and size of the bands generated depend strictly upon the nucleotide sequence of the primer used and the source of the template DNA, resulting in genome-specific fingerprints of random DNA bands (Welsh et al., 1991). The RAPD profiles generated by all five primers revealed varying degrees of polymorphism, ranging from 50.76% (primer E03) to 72.00% (primer E05). The number of bands per primer ranged from 1 to 6, and band sizes from 416 to 1196 bp. The present study revealed a wide variation in polymorphic loci (70-88%) among the four populations. The highest level of polymorphism (88%) was exhibited by the Gujarat population, whereas the lowest level (70%) was exhibited by the Tamil Nadu population. Nei's (1973) genetic diversity (h) among the four populations varied from 0.3717 ± 0.1460 (Gujarat population) to 0.5316 ± 0.1780 (Maharashtra population) (Table 2). Interestingly, two population-specific bands were found, one in the Andhra Pradesh population (350 bp with primer E06) and one in the Gujarat population (1000 bp with primer E04). These population-specific unique bands can be used to detect any possible mixing of these populations, especially during selective breeding programmes (Ferguson et al., 1995). Comparable levels of RAPD polymorphism have been reported by Tassanakajon et al. (1998), Nagarajan et al. (2006), Mishra et al. (2009), Lakra et al. (2010) and Saad et al. (2012). Nei's genetic identities and distances between the populations are given in Table 3. The highest genetic identity (0.9214) was observed between the Gujarat and Maharashtra populations, and the highest genetic distance (0.8556) between the Tamil Nadu and Gujarat populations. A dendrogram based on Nei's genetic distance is shown in Figure 3. Two separate clades were identified in the dendrogram, with the Maharashtra and Gujarat populations forming one cluster, while the Tamil Nadu and Andhra Pradesh populations formed the other clade. In conclusion, the genetic stock structure of M. cephalus identified in this study using RAPD primers will be helpful in developing superior strains for aquaculture through selective breeding and in formulating stock-specific management measures for the conservation and sustainable utilization of the species.
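To make the distance and clustering steps concrete, the sketch below computes Nei's (1972) identity and distance from per-locus band frequencies (treating band frequencies directly as allele frequencies, a common shortcut for dominant RAPD markers) and builds a UPGMA dendrogram with scipy; the frequency vectors are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

def nei_distance(p, q):
    """Nei's (1972) standard genetic distance D = -ln(I), where I is
    the normalized genetic identity between two populations."""
    identity = np.sum(p * q) / np.sqrt(np.sum(p ** 2) * np.sum(q ** 2))
    return -np.log(identity)

# Hypothetical per-locus band frequencies for the four populations.
rng = np.random.default_rng(4)
names = ["Gujarat", "Maharashtra", "Andhra Pradesh", "Tamil Nadu"]
pops = {name: rng.random(30) for name in names}

D = np.array([[0.0 if a == b else nei_distance(pops[a], pops[b])
               for b in names] for a in names])

# UPGMA (average linkage) clustering, analogous to the POPGENE dendrogram.
tree = linkage(squareform(D, checks=False), method="average")
dendrogram(tree, labels=names, no_plot=True)   # pass no_plot=False to draw
```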
1,953
2013-10-30T00:00:00.000
[ "Environmental Science", "Biology" ]
Guidance Navigation and Control for Autonomous Multiple Spacecraft Assembly: Analysis and Experimentation Introduction The technical difficulties presented by the autonomous multiple spacecraft assembly problem relate to the development of robust and reliable guidance, navigation, and control techniques for on-orbit evolving systems. The main open challenges are: (1) propellant-efficient control of an assembling system (also known as an evolving system), the evolution occurring both in its mass and inertia properties and in its sensor and actuator configuration, and (2) accurate relative navigation among the spacecraft, especially in the event of low-frequency measurement updates and interruptions of measurements due, for example, to obstruction of the relative sensors' view by other spacecraft. The works of [1][2][3][4] specifically address the problem of a system's evolution and its control. In [5], more emphasis is given to a potential solution for the wireless connectivity of different parts intended for the assembly of a bigger spacecraft, where a Wi-Fi bridge acts as the only real "assembly." Furthermore, wireless capability is becoming a more relevant option for exchanging data amongst close-proximity spacecraft which eventually dock to each other (see [6]). Also, the high-risk nature of an assembly maneuver in space does not leave room for computationally intensive logic, such as optimal controllers (see [7]). Onboard CPUs must allocate most of their performance capabilities to platform safety issues. The use of Commercial Off-The-Shelf (COTS) relative sensors, such as low-cost cameras, justifies the need for robust relative navigation schemes. Many different filters for tracking a maneuvering target have been considered in the literature. Approaches based on the Kalman filter include the work of Singer [8], in which the target acceleration is modeled as a random process with a known exponential autocorrelation. The input estimation approach for tracking a maneuvering target is proposed by Chan and Couture [9]. In this approach, the magnitude of the acceleration is identified by least-squares estimation when a maneuver is detected. The estimated acceleration is then used in conjunction with a standard Kalman filter to compensate the state estimate of the target. The standard filter alone is used during periods when no maneuver takes place. The augmented filtering approach is proposed by Bar-Shalom et al. [10,11]. In this approach, the state model for the target is changed by introducing extra state components, the target's accelerations. The maneuver, modeled as an acceleration, is estimated recursively, along with the other states associated with position and velocity, while the target maneuvers. Bogler [12] applied this method to the tracking of highly maneuvering targets with maneuver detection. The input estimation filter and the augmented-dimension filter are commonly used in view of their computational efficiency and tracking performance. Among input estimation techniques, the augmented state estimation approach yields reasonable performance without constant-acceleration or small-sampling-time assumptions. Furthermore, it not only provides a fast initial convergence rate, but it can also track a maneuvering target with fairly good accuracy, as mentioned by Khaloozadeh and Karsaz [13]. Bahari et al.
[14] and Bahari and Pariz [15] propose an intelligent error covariance matrix resetting, via a fuzzy logic approach, necessary for high-maneuvering target tracking, to improve the estimation of the target state. In space applications, particularly in spacecraft relative navigation for autonomous rendezvous and assembly, each vehicle is both the target and the chaser for the other spacecraft. Here, an additional challenge is considered: the frequent loss of communications for data exchange when the application involves more than one spacecraft. Alternative means to perform relative navigation may include a vision-based system. These types of sensors require image processing and may result in low frequency measurement updates, especially for small spacecraft with limited computation capabilities. Such sensors suffer from problems such as limitations on the field of view and/or other spacecraft obstructing the view. Furthermore, each vehicle does not usually know the other vehicles' inputs; that is, it does not possess information about the maneuvers performed by its fellow spacecraft. This missing information needs to be reconstructed in the estimation scheme, which would otherwise diverge quickly. We here focus on the utilization of low frequency update and low-cost sensors, such as COTS devices. In particular, the spacecraft are envisioned to have range and line of sight measurements, and relative attitude measurements. The navigation algorithm here presented builds upon our preliminary work of [16]. In this work, we build upon known techniques in order to develop guidance, navigation, and control approaches to perform three-degree-of-freedom spacecraft assembly maneuvers. Furthermore, the suggested methodologies are validated via hardware-in-the-loop testing, using four robotic spacecraft simulators. In particular, the guidance and control problems are tackled by continuously linearizing the dynamics about the current relative state vector between two spacecraft, and employing a Linear Quadratic Regulator (LQR) to suboptimally limit propellant consumption. The LQR weighting matrices are computed in real time, depending on the relative state vector, acting as a feedback control. The LQR real-time solver developed for this research is an extension of the one used during a real on-orbit spacecraft test inside the International Space Station [17], where a simplified problem-targeted LQR was executed (a version of the LQR Simulink solver for both RTAI Linux and xPC Target is available for download; see [18]). While the system evolves, changing its mass properties and actuator configuration, the LQR-based approach remains unaltered, controlling the growing structure by the simple online modification of a few parameters when a new spacecraft is docked. As for the relative navigation, we here propose a design based on the augmented state estimation technique. Robustness to frequent signal loss and/or darkening of the sensors is achieved. Furthermore, the suggested approach reconstructs the information of the other vehicles' maneuvers. A spacecraft is envisioned to run a copy of the augmented state estimation algorithm for each other spacecraft in the group, every vehicle being chaser and target at the same time.
For the experimental part of this work, two dynamic models for the relative navigation filter are considered: (1) the classical Kalman filtering technique [19], in which the unknown input (the maneuver command) is modeled as a random process, and (2) the augmented state estimation technique, where the maneuver is estimated in real time, using a Kalman filter scheme [19], as an additional variable in an augmented state vector. Between the two approaches, the second one proves to be the more successful. It yields satisfactory performance without constant-acceleration or small-sampling-time assumptions. Furthermore, it not only provides a fast initial convergence rate, but it can also track a maneuvering target with good accuracy under unpredictable loss of the data link and slow data rates, allowing the spacecraft to perform critical maneuvers such as docking and multivehicle assembly. The successful results of the study presented here pave the way for further research and implementation of the new GNC techniques for the full six-degree-of-freedom spacecraft relative motion. The main contributions of this work to the state of the art for multiple spacecraft assembly GNC are as follows. (1) Development of a guidance and control approach flexible to mass and actuator configuration changes during the assembly. The methodology is based on a suboptimal LQR for propellant-efficient rendezvous and docking maneuvers. (2) Implementation of a spacecraft relative navigation scheme based on augmented state estimation, robust to low frequency measurement updates. In particular, the spacecraft are envisioned to have range and line of sight measurements, and relative attitude measurements. No relative velocity measurements are available. This is the first time, to our knowledge, that augmented state vector estimation is used for spacecraft relative navigation. (3) The first (to the best of the authors' knowledge) hardware-in-the-loop laboratory experiment involving four spacecraft simulators in a completely autonomous assembly maneuver. The paper is organized as follows. Section 2 presents the robotic spacecraft simulators employed for the experiments. Section 3 presents the equations of the three-degree-of-freedom motion for spacecraft relative maneuvering. Section 4 presents the augmented estimation approach and demonstrates the observability of the augmented state. Section 5 illustrates the guidance and control. Section 6 describes how navigation and control are performed once more spacecraft are assembled. Section 7 is dedicated to the experimental validation of the proposed methodologies. Section 8 concludes the paper. Third Generation Spacecraft Simulators at the Spacecraft Robotics Laboratory This section introduces the third generation of spacecraft simulators developed at the Spacecraft Robotics Laboratory (SRL) of the US Naval Postgraduate School. Figure 1 shows the fleet of operational spacecraft simulators. The simulators float using air bearings over a very smooth epoxy floor, reproducing a nearly frictionless and weightless environment in two dimensions and three degrees of freedom, that is, two degrees of freedom for translation and one for rotation. This experimental testbed allows for the partial verification of guidance, navigation, and control algorithms in a simulated in-plane close-proximity flight condition [20]. For more details on the different families of spacecraft simulators employed throughout the world, we address the reader to [6, 16, 20-23].
In order to perform docking experimentation, two separate custom-designed docking interfaces have been developed, and each is currently undergoing experimental testing (see Figure 2). The type 1 docking interfaces are designed to passively connect the spacecraft through electromagnetic mechanisms, and their design will allow data/power/fluid exchange (see Figure 3). Conversely, the type 2 design lacks the aforementioned characteristics but enhances the robustness of the docking concept by correcting residual translational and rotational errors developed during the final docking phase of the spacecraft assembly experiments. This second design hosts two small permanent magnets to provide a final docking force and to keep the robots physically connected. Other key features of the spacecraft simulators include the following. (1) Ad-hoc wireless communication. Continuous data exchange amongst the simulators and with the external environment over the wireless network provides for in situ communication. This greatly increases the robustness of data collection in the event of communication loss with one of the simulators. (2) Modularity. The simulators are divided into two modules where the payload can be disconnected from the consumables, thus allowing for a wide range of applications with virtually any kind of payload (Figure 4). (3) Small footprint. The 0.19 m length × 0.19 m width of each simulator allows the working area (∼5 m × 5 m) on the epoxy floor to be optimally exploited. (5) Rapid prototyping. The capability to rapidly reproduce further generations of simulators and improve existing designs via computer-aided design (CAD) with the in-house STRATASYS 3D printing machine. Most notably, point 1 of the previous list has provided an invaluable contribution to the success of our ongoing experimentation. The ad-hoc wireless communication system, currently employed onboard the simulators, was experimentally verified by a distributed computing test, which demonstrated the wireless communication real-time capability for the SRL (see [6]). Table 1 illustrates the characteristics of the electronics used onboard each spacecraft simulator. The PC104 (onboard computer), the sensors, and the actuators are described below (see [6]). Each robot performs absolute navigation in the laboratory environment employing an indoor pseudo-GPS for position, and a magnetometer and gyroscope for attitude (Table 1). The measurements from these sensors are processed by two separate linear digital Kalman filters, estimating position and velocity of the center of mass with respect to the laboratory reference frame, and heading and heading rate of the robot with respect to the laboratory frame. The details of the robots' absolute navigation are beyond the scope of this work and will not be discussed here; for additional information, the reader can refer to [20]. The maximum computational power of 400 MHz listed in Table 1 is not required for real-time recomputation of the LQR solution. In the SPHERES satellites [17], a Texas Instruments C6701 Digital Signal Processor is employed to solve a very comparable problem. Figure 5 depicts the main concept of the testbed at the SRL. The main components and their interfaces are illustrated onboard the robot at the bottom of the sketch. Furthermore, the figure emphasizes the fact that the configuration is scalable to an arbitrary number of robots depending on the application or mission.
The Wi-Fi capability of each robot is not only used to communicate with the other robots, but it is also necessary for receiving its own absolute position within the laboratory, as sensed by the pseudo-GPS indoor system. The onboard real-time operating system is RTAI-patched Linux (see [24]), running a Debian distribution. The xPC Target environment was also considered as an alternative; on the other hand, xPC Target has some disadvantages, which include support for a limited number of hardware components and no support for USB or FireWire devices. Furthermore, the inaccessibility of its source code, due to its proprietary commercial nature, makes it challenging to add or modify drivers for unsupported hardware. RTAI Linux has been successfully used as an onboard real-time OS. RTAI is a patch to the Linux kernel that allows for the execution of real-time tasks in Linux (see [26, 27]). The RTAI Linux solution is being widely exploited in several engineering areas (see [28-31]). In this work, we use RTAI Linux with a wide variety of hardware interfaces, including wireless ad-hoc radio communication using UDP, an RS232 interface with the sensor suite and power system, and a PC/104 relay board for actuating the compressed-air nozzles. RTAI Linux also allows for automatic generation of C code from Simulink models through Real-Time Workshop, with the executable file for the onboard computers being created outside MATLAB by simple compilation of the C code. The details of the ad-hoc wireless network and the hardware-software interfaces developed for the spacecraft simulators are available in [6]. S/C Relative Motion Dynamics and Problem Statement In this section, we provide the dynamics of spacecraft relative motion in the three-degree-of-freedom case. The dynamics encompasses both the relative translation (two degrees of freedom) and rotation (one degree of freedom). We will refer in the following to a Local Vertical Local Horizontal (LVLH) reference frame (Figure 6) that rotates with the orbital angular velocity ω_LVLH. The origin of the LVLH frame moves on a virtual orbit, conveniently chosen to remain in the vicinity of the maneuvering spacecraft. This point can also be chosen as coincident with one of the spacecraft. The x-axis points from the center of the Earth to the center of the LVLH frame, the y-axis is in the orbital plane in the direction of the motion along the orbit and perpendicular to the x-axis, and the z-axis completes the right-handed LVLH frame. The dynamics of such motion can be represented in the compact form Ẋ_rel = f(X_rel) + B(X_rel)u. (1) From now on, we will consider the specific application of hardware-in-the-loop testing using the three-degree-of-freedom spacecraft simulators at the Naval Postgraduate School. For the experimental setup, the state vector becomes X_rel = [x y θ ẋ ẏ ω]^T, (2) with u_ij, i = x, y, j = C, T the control force components of chaser and target, and M_j, j = C, T the control torque of chaser and target about the z-axis. It is common in the literature to linearize the relative motion dynamics and use the Clohessy-Wiltshire linear equations [32] ẍ − 2ω_LVLH ẏ − 3ω_LVLH² x = (u_xC − u_xT)/m, ÿ + 2ω_LVLH ẋ = (u_yC − u_yT)/m, (3) with the assumption that the spacecraft have the same mass m.
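As a quick numerical check of (3), and of the short-duration simplification introduced next, the following Python sketch integrates the Clohessy-Wiltshire relative dynamics alongside a pure double integrator under the same differential control. The orbital rate, mass, and thrust values are illustrative assumptions, not the paper's.

```python
import numpy as np

omega = 0.0011              # rad/s, typical LEO orbital rate (assumption)
m = 26.0                    # kg, simulator-like mass (assumption)
u = np.array([0.1, 0.05])   # N, constant differential control (assumption)
dt, T = 0.01, 60.0          # integration step and maneuver duration [s]

def step_cw(s, dt):
    """One Euler step of the planar Clohessy-Wiltshire relative dynamics."""
    x, y, vx, vy = s
    ax = 3 * omega**2 * x + 2 * omega * vy + u[0] / m
    ay = -2 * omega * vx + u[1] / m
    return np.array([x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt])

def step_di(s, dt):
    """One Euler step of the double-integrator simplification."""
    x, y, vx, vy = s
    return np.array([x + vx * dt, y + vy * dt,
                     vx + u[0] / m * dt, vy + u[1] / m * dt])

s_cw = s_di = np.zeros(4)
for _ in range(int(T / dt)):
    s_cw, s_di = step_cw(s_cw, dt), step_di(s_di, dt)

print("CW position:", s_cw[:2], " double integrator:", s_di[:2])
# Over this short maneuver the two trajectories agree to within a few percent.
```

Over durations comparable to the orbital period the orbital terms would dominate, which is why the simplification below is restricted to short maneuvers near the LVLH origin.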
For maneuvers confined to the vicinity of the LVLH origin and elapsing a short time in comparison to the orbital period, (3) can be further simplified into a double integrator for both x and y. A double-integrator dynamics also represents the dynamics of the spacecraft simulators in the laboratory inertial reference frame. For the above-mentioned reasons, (4) will be used for the remainder of the paper: ẍ = (u_xC − u_xT)/m, ÿ = (u_yC − u_yT)/m. (4) Assuming the spacecraft have the same moment of inertia about the z-axis, the attitude dynamics is also represented by a double integrator. The goal of this work is to develop a GNC approach for driving the state X_rel to perform assembly maneuvers. This requires accurate guidance, especially in the last phases of docking; optimized or suboptimized control, to minimize propellant consumption; and a robust relative navigation scheme. These requirements are addressed in the following sections. Relative Navigation: The Augmented State Estimation Method In this section, the theory for the three-degree-of-freedom augmented state relative navigation is presented. The controls of the other vehicle (the target) are treated as additional terms in the corresponding state equation, so that the model provides an augmented state vector. The measurements available on each spacecraft are the relative positions (from range and line of sight) and the relative attitude, and we assume knowledge of the controls of the chaser, onboard the chaser itself. No relative velocity measurements are available. An observability proof of the vector [x y θ ẋ ẏ ω u_xT u_yT M_zT]^T from the measurements [x y θ]^T is provided for the proposed estimation technique, demonstrating how the augmented state technique can reconstruct the relative velocities and the controls of the target. In the following developments, the estimated target controls are considered constant within every sample time interval. It is worth underlining that the control variables u_xT, u_yT, and M_zT do not represent the actual actuator control variables onboard the spacecraft simulator. The way u_xT, u_yT, and M_zT are realized by the target does not matter from the augmented state filter's point of view. These control variables are estimated in order to add robustness to the filtering technique; they represent the target's maneuvers, but not the specific way they are performed by the target's actuation subsystem. The same assumption will be used for the observability demonstration. The navigation algorithm is developed using the Kalman filter approach. The augmented state estimation approach presents numerical efficiency comparable to the standard Kalman filter applied to the state only. In fact, the augmented state approach introduces only a few more variables in the Kalman filter, without a significant increase in the numerical burden. Additional references with regard to the implementation of Extended Kalman Filters onboard real space missions can be found in [33, 34]. Relative Motion Estimation.
The assumption is made of independent estimation and control for the attitude and the position, so that we can proceed as follows. For the relative position, the state vector can be written as in (2). The discrete dynamics for the problem follows, and the expressions of the matrices G, H, ^Cj_Ti B(k), and Ψ as functions of the measurement update time T_s for this planar case can be written accordingly. The augmented dynamics adds the estimation of u_xT and u_yT, assuming knowledge of the chaser's controls u_xC and u_yC; the related state equation matrices follow correspondingly. 4.2. Relative Attitude Estimation. The same algorithm is implemented for the target's attitude and control torque estimation. Assuming the target rotates only about the vertical axis (z-axis), the ith target attitude state vector, with respect to the jth chaser spacecraft, collects the relative heading, heading rate, and the target's control torque. The discrete dynamics for the attitude problem and the principal dynamics matrices, as functions of the sampling time T_s, take the same pattern. The formulation of the augmented state dynamics adds the estimation of M_T, assuming knowledge of the chaser's control torque M_C; the related state equation matrices follow analogously. Observability of the Augmented Dynamics. For the sake of simplicity, considering that the controls are constant in each sample time, we provide, for the planar case, the proof of observability for the continuous models of the relative dynamics. The observability property holds for the discrete models as well [35]. The augmented relative motion dynamics can be expressed as a linear system with the constant target controls appended to the state, and the measurements are related to the state through the output matrix. It is of immediate demonstration that the observability matrix O = [C; CA; CA²; ...] = [I₂ 0₂ 0₂; 0₂ I₂ 0₂; 0₂ 0₂ (1/m)I₂; 0₂ 0₂ 0₂; ...] has full rank, where I₂ and 0₂ denote the 2 × 2 identity and zero matrices. Similar developments lead to observability for the relative attitude motion: the dynamics and output equations take the same form, and the corresponding observability matrix likewise has full rank. Guidance and Control for the Assembly Maneuver This section describes guidance and control for the autonomous assembly. Note that the illustrated arrangement only represents one possible configuration and that the docking ports do not have to be aligned with any particular body axis. r_goal is the vector originating from the center of a docking interface and terminating at the center of mass of the other spacecraft going to dock to it. r_rsw is the vector originating from a spacecraft's center of mass and terminating at the other spacecraft's center of mass. Guidance. The guidance problem is here expressed in terms of a desired state vector for each spacecraft, defined dynamically during the maneuver. The state vector error to minimize is the difference between the current and the desired relative state; the subscript "des" indicates a desired relative state vector component. The desired state is dynamically changed throughout the assembly maneuver according to the following two-phase guidance logic.
The center of mass trajectory is unconstrained and free to be optimized, except in the vicinity of the docking phase. As for the attitude, we reproduce a realistic condition in which the spacecraft has to show one particular side (usually the one with the docking port) towards the current target spacecraft. In other words, the docking port side is commanded to be perpendicular to either the r_rsw or the r_goal vector (Figure 7), depending on the phase. Each spacecraft in Figure 7 can be considered either a single agent or an already assembled structure; the following description applies to both scenarios. In the following, the vectors are always intended to be parallel to the xy plane. r_dock is a user-defined distance threshold specifying when the docking phase begins. (1) |r_rsw| > r_dock, RENDEZVOUS: the spacecraft is far away from its target docking port. The state vector error is x_err = [r_goalx r_goaly θ − θ_des ẋ ẏ ω]^T. The desired attitude θ_des is such as to align r_port with r_rsw (Figures 7 and 8). (2) |r_rsw| ≤ r_dock, DOCKING APPROACH: the spacecraft is close to its target docking port. The desired state vector to minimize is as follows: (a) If cos⁻¹((r_goal · r_port)/(|r_goal||r_port|)) < α, that is, the spacecraft is within the security docking cone, there are two subcases. SUBCASE 1. The distance between the spacecraft is greater than the chosen impingement stand-off range; then x_err = [r_goalx r_goaly θ − θ_des ẋ ẏ ω]^T. The desired attitude θ_des is such as to align r_port with r_rsw (Figures 7 and 8). SUBCASE 2. The distance between the spacecraft is less than the chosen impingement stand-off range; then any thrusters causing plume impingement on the other spacecraft are shut off, and only used if an emergency brake is needed, in the event of docking occurring at high velocity (above a chosen threshold). For the NPS spacecraft simulators, this means shutting off two thrusters, as will become clear later on. The remaining actuators will compensate for attitude alignment in the last phase of docking and will provide the required forces to push the spacecraft together.
(b) If cos⁻¹((r_goal · r_port)/(|r_goal||r_port|)) ≥ α, that is, the spacecraft is outside the security docking cone. In this case, referring to spacecraft 2 of Figure 7, the vehicle maneuvers around the one hosting its target docking port, moving along the direction perpendicular to the r_rsw vector, in whichever direction is the shortest to reach the safety corridor. The amount of commanded rotation at each time step, around the target docking port, is a chosen constant parameter β. In terms of the state vector error to minimize, defining a reference frame whose basis is the unit vectors r̂_rsw⊥, r̂_rsw, the vector r_rsw can be rotated by an angle β into r_rot and expressed as a function of the basis: r_rot = |r_rsw|(cos β r̂_rsw + sin β r̂_rsw⊥). The state error to minimize is x_err = [r_rotx − r_goalx r_roty − r_goaly θ − θ_des ẋ ẏ ω]^T. The desired attitude θ_des is such as to show the chosen side to the target docking spacecraft, that is, perpendicular to r_goal (Figure 7). In simple terms, the satellites circle around each other, in the direction of shortest angular displacement, to allow the docking interfaces to be in the mutual fields of view. Each spacecraft needs to be in the safety corridor of the other, with the respective docking interfaces' r_port vectors and r_goal vectors aligned. The respective r_port vectors of two satellites will need to be at 180 degrees (plus or minus the tolerance); the same applies, as a consequence, to the r_goal vectors. LQR Control. The LQR problem (23) is solved at each time step, with dynamically sized weighting matrices Q and R, adapting to the current situation and avoiding high control values when the state vector error is large, and vice versa. This choice results in a smoother behavior, in terms of requested control actions, with respect to a classical fixed-gain-matrix LQR. The cost function in (23) aims to minimize the control effort while reducing the relative state vector error between two satellites. The mutual relevance between state vector error and control effort is dictated by the relative values of the weighting matrices Q and R. The control vector u is chosen as a four-component vector of forces, expressed in the spacecraft body frame (Figure 9). The choice of u in the spacecraft body frame removes the need for thruster mapping [22]. For the phases described in the previous section, the weighting matrices for the LQR are chosen as in (24).
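To make the interplay between the guidance phases and the per-step LQR concrete, here is a hedged Python sketch. The threshold values, the diagonal structure of Q, and the exact error-dependent scaling law are stand-ins for the paper's Eq. (24), whose display form is not reproduced here; only the constants a and V are taken from the text.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

m, Jz = 26.0, 0.4                       # mass [kg], inertia [kg m^2] (assumptions)
a_max, V = 3.05e-2, 0.06                # constants a and V from the text
r_dock, alpha, beta = 1.0, np.deg2rad(20), np.deg2rad(2)   # assumptions

# Double-integrator planar dynamics, state [x, y, theta, vx, vy, omega].
A = np.zeros((6, 6)); A[0, 3] = A[1, 4] = A[2, 5] = 1.0
B = np.zeros((6, 3)); B[3, 0] = B[4, 1] = 1.0 / m; B[5, 2] = 1.0 / Jz

def position_error(r_rsw, r_goal, r_port):
    """Translational part of x_err for the two-phase guidance logic."""
    if np.linalg.norm(r_rsw) > r_dock:
        return r_goal                                   # (1) RENDEZVOUS
    c = r_goal @ r_port / (np.linalg.norm(r_goal) * np.linalg.norm(r_port))
    if np.arccos(np.clip(c, -1.0, 1.0)) < alpha:
        return r_goal                                   # (2a) inside the cone
    rot = np.array([[np.cos(beta), -np.sin(beta)],      # (2b) circle by beta
                    [np.sin(beta),  np.cos(beta)]])     # (shortest-direction
    return rot @ r_rsw - r_goal                         #  choice omitted)

def lqr_control(x_err):
    """Per-step LQR with error-dependent weights (assumed scaling law)."""
    Q = np.diag([1, 1, 1, 1 / V**2, 1 / V**2, 1 / V**2])
    R = (1e-3 + np.linalg.norm(x_err)) / a_max**2 * np.eye(3)
    X = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ X)                     # K = R^-1 B^T X
    return -K @ x_err
```

In the paper's loop, B is further replaced at every step by the attitude-linearized control distribution matrix of (25)-(27) discussed next.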
Each time step solution of the LQR generates a gain matrix K_LQR, used to implement the required suboptimal control vector. The values of the constants a = 3.05 × 10⁻² and V = 0.06 are chosen as in [23]. In particular, their values are chosen by looking at variables with physical meaning, but we do not assign dimensions to them here, their dimensions being the appropriate ones for consistency in the cost function (23). The constant a weighs the terms in the matrix R with respect to the maximum translational acceleration achievable on the spacecraft simulator. This value is computed considering two thrusters simultaneously activated on the same side of the vehicle. The thrust values and mass of the simulator can be found in Table 2, and the interested reader can find more details on the thrusters in [36]. The originating idea for scaling the R matrix as in (24) is the desire to maintain the controls required by the LQR solution below the maximum hardware-achievable controls. With regard to the constant V in the Q matrix, it is set to the maximum translation speed allowed for the simulator; V weighs the part of the state vector related to linear velocities. The choice to introduce the above-mentioned parameters does not a priori guarantee controls below the maximum onboard control authority and a limited translational velocity, but the scaling in (24) has proven very effective in mitigating high control requests and undesired fast maneuvers on the testbed. This result was previously found in numerical simulations and then experimentally verified [23]. Figure 10 shows the required inputs to the LQR solver, implemented in Simulink. The LQR solver employed for developing the proposed approach was downloaded from [18], adapted for automatic generation of code through Real-Time Workshop for RTAI Linux (it was originally only compatible with Windows operating systems), and uploaded again on the MathWorks file exchange website [18]. In specializing the design of Figure 9 to the SRL spacecraft simulators, we treat the eight body-fixed thrusters in couples, so that symmetric thrusters are reduced to one control variable, which can be either u_max, −u_max, or 0. Figure 11 shows the thruster couplings: 1-4, 2-7, 3-6, and 5-8. The control vector is u = [u₁ u₂ u₃ u₄]^T. The red arrows along the couples in Figure 11 show the positive directions assumed for the controls. Ultimately, thruster coupling allows the LQR to solve a reduced problem in which the control vector has four components instead of eight. Given the choice for the control vector, the control distribution matrix becomes nonlinear, as in (25). Equation (25) also shows the system dynamics matrices when the expression Ẋ = AX + BU, Y = CX + DU is used. The spacecraft orientation θ_C is replaced with θ for simplicity. In order to employ the LQR approach, the control distribution matrix is linearized at each time step, in the vicinity of the desired attitude.
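The idea of linearizing the attitude-dependent control distribution matrix can be illustrated with a toy two-channel example. The rotation-type B(θ) below is an illustrative assumption, not the paper's Eq. (25).

```python
import numpy as np

m = 26.0  # kg (assumption)

def B_of_theta(theta):
    """Toy control distribution: body-frame force channels rotated into
    inertial accelerations; nonlinear in the spacecraft orientation theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c / m, -s / m],
                     [s / m,  c / m]])

def B_linearized(theta_des):
    """Freeze B at the desired attitude for the current LQR solve; re-solving
    at every time step keeps the frozen linear model locally valid."""
    return B_of_theta(theta_des)
```

Because the LQR is re-solved at every time step, the error introduced by freezing B at θ_des stays bounded by how far the attitude drifts within one step.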
By inserting the matrices defined in (26) and (27) and the weighting matrices described in (24) into the LQR solver of Figure 10, the optimal four-component vector of forces is obtained at each time step during the maneuver. The obtained control vector will be a continuous signal. In order to drive the on/off thrusters from the continuous signal, Pulse Width Modulation (PWM) is used; the PWM collects the commanded controls over 10 sample times before actuating. Furthermore, a Schmitt trigger is implemented to filter out low commanded controls and reduce the amount of chattering. Navigation and Control of the Assembled Structure Once the S/Cs are assembled, the mass and inertia properties, along with the thruster configuration, change. Figure 12 shows an example, applied to the SRL testbed, in which thrusters six and seven of both spacecraft cannot be used anymore. The assembled new spacecraft has double the mass, a different moment of inertia, and four more thrusters, allocated differently with respect to the single spacecraft. In the assembled configuration, one of the robots acts as master and performs both navigation and control of the new bigger robot. In order to keep using the same logic employed for controlling a single simulator, the twelve thrusters of the new assembled spacecraft are associated according to the following sets: (1) u₁ is generated by firing either thruster 8 (u₁ < 0) or 3 (u₁ > 0), (2) u₂ is generated by firing either thruster 9 (u₂ < 0) or 2 (u₂ > 0), (3) u₃ is generated by firing either thrusters 6 and 7 synchronously (u₃ < 0) or 11 and 10 synchronously (u₃ > 0), (4) u₄ is generated by firing either thrusters 4 and 5 synchronously (u₄ < 0) or 1 and 12 synchronously (u₄ > 0). The input matrices to the LQR solver will be changed once an additional portion of the structure is connected. Also, the new control vector will have reduced maximum and minimum values, due to the increase of mass. For instance, the case represented in Figure 12 leads to the new dynamics matrix A = [0₃ I₃; 0₃ 0₃] (a double integrator in the three planar degrees of freedom), where J_z,comb, the inertia of the assembled system about the vertical axis, and m_comb, the new mass, enter the corresponding control distribution matrix. Linearization of the new control distribution matrix proceeds as before. The thrusters that remain available after docking will be commanded by either spacecraft one or two, thanks to the real-time wireless link (see [6]). Navigation for the assembled structure is performed onboard the robot acting as the master. For the two-robot configuration in Figure 12, it follows the rigid body equations, the attitude and attitude rate of the new spacecraft being those of the master, and its center of mass position and velocity deduced from those of the master (see also Figure 12).
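The thruster-set association above, together with the PWM and Schmitt trigger described at the start of this section, can be condensed into a short sketch. The force level and the trigger thresholds are assumptions; the set-to-thruster mapping is transcribed from the list above.

```python
import numpy as np

u_max = 0.159               # N per thruster set (assumption)
ON_T, OFF_T = 0.3, 0.1      # Schmitt trigger thresholds, fractions of u_max

# Thruster sets for the assembled structure: channel -> (negative, positive).
SETS = {0: ([8], [3]), 1: ([9], [2]),
        2: ([6, 7], [11, 10]), 3: ([4, 5], [1, 12])}

def pwm_schmitt(window, firing):
    """PWM a 10-sample command window into an on/off pulse train; hysteresis
    suppresses chattering from small commanded controls."""
    duty = np.clip(np.mean(window) / u_max, -1.0, 1.0)
    level = abs(duty)
    firing = level > ON_T if not firing else level >= OFF_T
    n_on = int(round(level * 10)) if firing else 0
    pulse = np.sign(duty) * u_max * np.r_[np.ones(n_on), np.zeros(10 - n_on)]
    return pulse, firing

def thrusters_for(channel, sign):
    """Physical thrusters to open for a signed command on one channel."""
    neg, pos = SETS[channel]
    return pos if sign > 0 else neg if sign < 0 else []
```

The same routine serves both the single simulator and the assembled structure; only the SETS mapping and u_max change when a new unit docks.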
Experimental Results: Four Spacecraft Simulators Assembly In this section, assembly maneuvers are employed to experimentally test the suggested guidance, navigation, and control schemes. For this experiment, we do not implement any collision avoidance algorithm, which has, however, been previously successfully tested [23]. At the time of writing this paper, the simulators do not have hardware dedicated to relative measurements. The relative measurements are assumed to be the range, line of sight, and relative attitude. This information is obtained via software, by having the robots exchange data over the ad-hoc wireless channel [6]. This feature has the benefit of flexibility in imposing the desired frequency of measurement update, by simple modification of the software. Furthermore, for the following experiments, we do not assume any particular noise or bias characteristics for the measurements; that is, the filter does not have that information. Noise is present, and it comes from the wireless communication. This assumption does not conflict with the previously stated contribution of our work in designing an estimation technique more robust than standard Kalman filtering in the presence of low frequency updates, as demonstrated in the following. Two experimental runs are presented. The first one demonstrates the unsuccessful relative navigation when classical Kalman filtering is employed, considering the other S/C's maneuvers as a random process. Only two simulators are involved. The second experiment involves the four vehicles, showing how augmented state estimation can handle low measurement update rates and unpredictable interruptions of updates, and still perform correct relative navigation, driving the mission to success. In particular, we are here imposing, via the wireless network, an update interval of 2 seconds. Once the two couples of robots are docked, each assembled structure is considered to be a new vehicle with new mass and geometry. For this reason, the augmented state estimator is reinitialized for the new structure with different mass and inertia as in Table 2. For this part of the experimentation, the software runs only onboard the master S/C, that is, one pre-chosen unit for each couple. The time step, or simulation sampling time, was chosen to be: (1) longer than the thrusters' minimum actuation time (Tables 3 and 4), (2) compatible with the CPU computational power, and (3) such as to maintain the dynamics within the linearity range. The choice was also justified by previous experience with the employed hardware and by prior computer numerical simulations. In fact, the experimental activities at the Spacecraft Robotics Laboratory are always preceded by high-fidelity numerical simulations of the testbed dynamics in Simulink, visualizing on the computer how the experiment will develop. This prototyping approach reduces the time-to-market and troubleshooting costs of newly developed GNC methodologies, by significantly cutting down the need for intermediate hardware prototypes and the number of experimental tests.
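Before turning to the two runs, a compact sketch of the augmented-state filter used in the second experiment may help. The discretization step mirrors the 2-second update interval; the noise levels are assumptions, and the rank computation mirrors the observability argument of Section 4.

```python
import numpy as np
from scipy.linalg import expm

# Augmented planar translation state [x, y, vx, vy, uxT, uyT] with
# position-only measurements; values below are assumptions for illustration.
m, Ts = 26.0, 2.0                       # mass [kg], update interval [s]
A = np.zeros((6, 6)); A[0, 2] = A[1, 3] = 1.0; A[2, 4] = A[3, 5] = 1.0 / m
C = np.zeros((2, 6)); C[0, 0] = C[1, 1] = 1.0

# Continuous-time observability: rank 6 confirms the velocities and the
# target's controls are reconstructible from relative position alone.
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(6)])
assert np.linalg.matrix_rank(O) == 6

G = expm(A * Ts)                        # exact discretization (A is nilpotent)

def kf_step(xhat, P, z, Qn=1e-6 * np.eye(6), Rn=1e-3 * np.eye(2)):
    """One predict/update cycle; the target controls are held constant over
    the sample interval, as assumed in Section 4."""
    xhat = G @ xhat
    P = G @ P @ G.T + Qn
    S = C @ P @ C.T + Rn
    K = P @ C.T @ np.linalg.inv(S)
    return xhat + K @ (z - C @ xhat), (np.eye(6) - K @ C) @ P
```

When a measurement is lost, only the prediction half of kf_step runs; the appended control states are what keep the propagated estimate from diverging, which is precisely the behavior contrasted in the two experiments below.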
The Classical Kalman Filter Technique. Figure 13 is the bird's eye view of the experiment demonstrating the infeasibility of the classical Kalman filter for spacecraft relative navigation when relative measurement updates occur at low frequency. Two spacecraft simulators start maneuvering, with the goal of docking, from a short distance. The sides opposite to the bolded lines are the designated docking sides. After approximately 1 minute of maneuvering, the accumulated error in the relative state vector (position) exceeds the tolerance of the docking interfaces (Figure 2), driving the vehicles into a failed docking maneuver. A video of the experiment can be found online at [37]. The Augmented State Estimation Technique. Figure 14 is the bird's eye view of the experiment demonstrating the feasibility of the augmented state estimation for spacecraft relative navigation. The main data for the filters are presented in Tables 3 and 4. Four spacecraft simulators start maneuvering, with the goal of assembling into a line-shaped structure, from short distances. The sides opposite to the bolded lines are the designated docking sides. After less than 3 minutes of maneuvering, the four vehicles successfully complete the given mission. The rectangular black and blue vehicles represent two spacecraft simulators docked and maneuvering as a single bigger unit. A video of the experiment can be found online at [38]. Once the simulators are assembled in couples, they maneuver as a single bigger unit. In particular, the augmented state estimation is reinitialized in order to switch to a new target vehicle in terms of relative navigation. In Figure 14, for the left couple, the cyan unit acts as master of the new assembled cyan-red spacecraft. Likewise, for the right couple, the green vehicle is the master in the green-magenta assembly. Conclusion In this work, we suggest a complete solution for guidance, navigation, and control of planar multiple spacecraft assembly maneuvers. Guidance is performed by dynamically defining a desired state vector, so that the spacecraft can prepare for docking and correctly connect. The control is based on a real-time LQR approach. As for the relative navigation, augmented state estimation is proposed, allowing for correct awareness of the other spacecraft's configuration, even in the event of low frequency measurement updates. The framework adapts itself to the evolving spacecraft, by switching among different values of mass properties and sensor and actuator configurations when a new unit joins the aggregate. Theoretical developments are presented for the three-degree-of-freedom case, considering a planar motion for the relative position and a single axis of rotation. The experimental validation of the proposed methodology is presented, via floating spacecraft simulators, using an assembly maneuver as baseline. Experiments show how the augmented state estimation can cope with low frequency measurement updates, correctly performing the relative navigation and driving the mission to success. On the other hand, classical Kalman estimation is not accurate at close distances with low frequency measurement updates, as demonstrated in the three-degree-of-freedom experimental section. The dynamic guidance and control demonstrate real-time feasibility and the capability of performing autonomous assembly. Figure 1: Multispacecraft testbed at the Spacecraft Robotics Laboratory of the Naval Postgraduate School.
Figure 2: (a) Patent-pending docking interface design (electromagnet and fluid transfer capability). (b) Concept (male/female) docking interface used for the experiments in this paper. Figure 3: Main components of the patent-pending docking interface. Figure 4: Detailed collocation of the hardware on the spacecraft simulators. Figure 5: Ad-hoc wireless network at the SRL testbed. Figure 6: Local vertical local horizontal and inertial frames. Figure 7: Relative vectors used in the alignment and assembly logic. All vectors are in the LVLH xy plane. Figure 9: Locations of controls for the planar assembly. Figure 10: LQR solver Simulink block [18]. This routine solves the complete algebraic Riccati equation, accepting the input matrices A (dynamics matrix), B (control matrix), C (state-output mapping matrix), D (control-output mapping matrix), and the Q and R weighting matrices. The outputs are the LQR gain matrix K, which is the solution to the associated algebraic Riccati equation, the matrix S, and a two-dimensional vector E, whose first element indicates an error when it is greater than zero or a somewhat unreliable result when it is negative. The second element of E is the condition number of the R matrix. Table 1: Electronics hardware description. Notation: attitude state vector of the ith target S/C with respect to the jth chaser S/C; ^Cj_Ti B_A(k): discretized augmented control matrix of the ith target S/C with respect to the jth chaser S/C; state vector of the ith target S/C with respect to the jth chaser S/C; T: target; Cartesian coordinates in the chaser S/C body frame; [ẋ ẏ ż]^T: target linear velocities in the chaser S/C body frame.
9,132.2
2011-03-13T00:00:00.000
[ "Engineering", "Physics" ]
Higgs Mechanism in Scale-Invariant Gravity We consider a Higgs mechanism in scale-invariant theories of gravitation. It is shown that in the spontaneous symmetry breakdown of scale invariance, gauge symmetries are also broken spontaneously, even without the Higgs potential, if the corresponding charged scalar fields couple to the scalar curvature in a non-minimal way. In this gravity-inspired new Higgs mechanism, the non-minimal coupling term, which is scale-invariant, plays a critical role. Various generalizations of this mechanism are possible, and in particular the generalizations to non-abelian gauge groups and to a scalar field with multiple components are presented in some detail. Moreover, we apply our finding to a scale-invariant extension of the standard model (SM) and calculate radiative corrections. In particular, we elucidate the coupling between the dilaton and the Higgs particle and show that the dilaton mass takes a value around the GeV scale owing to quantum effects, even if the dilaton is massless at the classical level. Introduction Current understanding of elementary particle physics is based on two celebrated fundamental principles: gauge symmetry and spontaneous symmetry breakdown of the gauge symmetry. Of the four interactions among elementary particles, the strong, weak and electromagnetic interactions are known to be described on the same footing in terms of a gauge theory, which is the standard model (SM) based on the SU(3)×SU(2)×U(1) gauge group, and the gravitational interaction is believed to be also described by a gauge theory whose final formalism is still far from complete at present. The gauge principle alone, however, cannot describe the known structure of elementary particles. The gauge principle requires elementary particles to be massless, so in order to generate masses for elementary particles the SU(2)×U(1) gauge symmetry must be spontaneously broken at any rate. The idea of spontaneous symmetry breakdown itself is not new to elementary particle physics but has emerged as a universal phenomenon in physics, in particular in condensed matter physics. An alternative and indeed older description of superconductivity, which was developed by Ginzburg and Landau, turned out to be a phenomenological representation of the BCS theory [2]. In this transcription, the complex "Ginzburg-Landau" scalar field is nothing but the Higgs boson representing a bound pair of electrons and holes, and its phase and amplitude components correspond to the massless Nambu-Goldstone boson and the massive Higgs type of excitations, respectively. As it happens, collective excitations of both types do exist in all phenomena of the superfluidity type. This universality of spontaneous symmetry breakdown, however, seems to have had no implication in gravity so far. The concept of mass is intimately connected with general relativity, since the right-hand side of Einstein's equations is constructed out of the energy-momentum tensor. In this article, we will investigate the idea that a scale-invariant gravity induces the spontaneous symmetry breakdown of gauge symmetry without assuming the existence of the Higgs potential. The SM based on the SU(3) × SU(2) × U(1) gauge group, together with classical general relativity, describes with amazing parsimony (only 19 parameters) our world over scales that have been explored by experiments: from the Hubble radius of 10^30 cm all the way down to scales of the order of 10^−16 cm.
In other words, with the help of cosmological initial conditions when the universe was much smaller, the SM is believed to encode the information needed to deduce all the physical phenomena observed so far. There are, however, some obvious chinks in the armor of the SM. In particular, the origin of the different scales in nature cannot be answered at all by the SM. One should recall that there is only one fundamental constant with the dimension of mass: gravity comes with its own mass scale, M_p = 2.4 × 10^18 GeV. All units of mass should be scaled to this fundamental scale. It is a source of great intellectual worry that the SM appears to be consistent at a scale which is so different from the Planck mass scale. The naive expectation is that all physical phenomena should occur at their natural scale, which is of course the Planck scale. The Coleman-Mandula theorem [3] allows the Poincare group to be generalized to two global groups: one is the super-Poincare group and the other is the conformal group. It is remarkable to notice that these two groups might yield a resolution of the gauge hierarchy problem by completely different ideas, and they also yield natural generalizations of the local gauge group: the former gives rise to the local super-Poincare group leading to supergravity, whereas the latter gives the local conformal group leading to conformal gravity. According to recent results from the LHC [4, 5], supersymmetry based on the super-Poincare group seems not to be taken by nature as the resolution of the gauge hierarchy problem. Then, it is natural to ask ourselves whether the conformal group, the other extension of the Poincare group, gives us a resolution of the gauge hierarchy problem. Indeed, inspired by an interesting idea by Bardeen [6], attempts have appeared to pursue the possibility of replacing supersymmetry with conformal symmetry near the TeV scale in order to solve the hierarchy problem [7, 8]. It is worth noting that the principle of conformal invariance is more rigid than supersymmetry in the sense that in many examples the conformal symmetry predicts the number of generations as well as a rich structure for the Yukawa couplings among the various families. This inter-family rigidity is a welcome feature of the conformal approach to particle phenomenology [9]. In the conformal approach, it is thought that the electroweak scale and the QCD scale, as well as the masses of the observed quarks and leptons, are all so small compared to the Planck scale that it is reasonable to believe that in some approximation they are exactly massless. If so, then the quantum field theory describing the massless fields should be a conformal theory, as it has no mass scale. In this scenario, the fact that there are no large mass corrections follows from the condition of conformal invariance. In other words, the 't Hooft naturalness condition [10] is satisfied in the conformal approach; namely, in the absence of masses there is an enhanced symmetry, which is the conformal symmetry. Of course, the breaking of conformal invariance should be soft in such a way that the idea of the conformal symmetry remains relevant for solving the hierarchy problem. In passing, in the present context, it seems to be of interest to consider the issue of renormalizability.
Usually, in quantum field theories, the condition of renormalizability is imposed on a theory as if it were a basic principle to make the perturbation method meaningful, but its real meaning is unclear, since there might exist theories to which only non-perturbative approaches apply without relying on the perturbation method at all. To put it differently, the concept of renormalizability means that even if one is unfamiliar with the true physics beyond some higher energy scale, one can construct an effective theory by confining one's ignorance to some parameters, such as coupling constants and masses, below that energy scale. Thus, from this point of view, it is unclear whether one should require renormalizability of theories holding at the highest energy scale, the Planck scale, such as quantum gravity and superstring theory. On the other hand, given a scale invariance in a theory, all the coupling constants must be dimensionless and the operators in an action are marginal ones whose coefficients are independent of a certain scale, which ensures that the theory is manifestly renormalizable. In this world, all masses of particles must then be generated by spontaneous symmetry breakdown. In previous works [11, 12], we have shown that, without resort to the Coleman-Weinberg mechanism [13], by coupling to the non-minimal term of gravity, the U(1) B-L gauge symmetry in the model of [8] is spontaneously broken in the process of spontaneous symmetry breakdown of global or local scale symmetry at the tree level, and as a result the U(1) B-L gauge field becomes massive via the Higgs mechanism. One of the advantages of this mechanism is that we do not have to introduce the Higgs potential into the theory. We then have the following questions about this mechanism of gauge symmetry breaking: 1. Is it possible to generalize it to non-abelian gauge groups? 2. Is it possible to generalize it to many scalar fields? 3. What happens when it is applied to the standard model, and what are its radiative corrections? In this article, we would like to answer these questions in order. The structure of this article is the following: In Section 2, we present the simplest model which accommodates global scale symmetry and abelian gauge symmetry, and explain our main idea. In Section 3, we generalize this simple model to a model with non-abelian gauge symmetry. In Section 4, we extend our idea to a model of a scalar field with many components. Moreover, we apply our finding to a scale-invariant extension of the standard model and calculate radiative corrections in Section 5. We conclude in Section 6. Two appendices are given, one of which explains the derivation of the dilatation current, while the other collects useful formulae for the calculation of radiative corrections. Review of a globally scale-invariant Abelian model We start with a brief review of the simplest model showing a gravitational Higgs phenomenon, which was previously discovered in the case of a global scale invariance and the abelian gauge group [11]. With a background curved metric g_μν, a complex (singlet) scalar field Φ and the U(1) gauge field A_μ, the Lagrangian takes the form (1), where ξ is a certain positive and dimensionless constant. The covariant derivative and field strength are defined as usual, with e being a U(1) real coupling constant. Let us note that the Lagrangian (1) is invariant under a global scale transformation.
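The display form of (1) did not survive extraction. A plausible reconstruction, assembled from the ingredients named in the text (the non-minimal coupling ξΦ†ΦR, the U(1) covariant derivative and field strength, and no potential term), is the following sketch; the overall signs are an assumption tied to the conventions of [14].

```latex
% Plausible reconstruction of the scale-invariant Lagrangian (1); the sign
% conventions are an assumption.
\mathcal{L} = \sqrt{-g}\,\Big[\,\xi\,\Phi^{\dagger}\Phi\,R
  \;-\; g^{\mu\nu}\,(D_{\mu}\Phi)^{\dagger}\,(D_{\nu}\Phi)
  \;-\; \tfrac{1}{4}\,F_{\mu\nu}F^{\mu\nu}\,\Big],
\qquad
D_{\mu}\Phi=(\partial_{\mu}-ieA_{\mu})\Phi,\quad
F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}.
```

Every term carries mass dimension four with dimensionless couplings ξ and e, which is what makes the global scale invariance manifest.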
In fact, with a constant parameter Ω = e^Λ ≈ 1 + Λ (|Λ| ≪ 1), the scale transformation is defined as in [15]. (We follow the notation and conventions of Misner et al.'s textbook [14], for instance for the flat Minkowski metric. In this article, we use the terminology such that a scale or conformal transformation means a global transformation, whereas its local version is called a local scale transformation or local conformal transformation.) Then, using the standard transformation formulae, the Noether dilatation current (4) follows. To prove that this current is conserved on-shell, it is necessary to derive the set of equations of motion from the Lagrangian (1). The variation of (1) with respect to the metric tensor produces Einstein's equations (5), where the d'Alembert operator ✷ is defined as usual by ✷ = g^μν ∇_μ ∇_ν. Here the energy-momentum tensors T^(A)_μν for the gauge field and T^(Φ)_μν for the scalar field are defined, respectively, with the usual notation of symmetrization. (The case ξ = −1/6 corresponds to conformal gravity, for which there is no dilatation current.) Finally, taking the variation with respect to the gauge field A_μ produces the "Maxwell" equations (7). Now we wish to prove that the current (4) for the scale transformation is indeed conserved on-shell by using these equations of motion. Before doing so, let us first take the divergence of the current, with the result (9). In order to show that the expression on the right-hand side of Eq. (9) vanishes on-shell, let us take the trace of Einstein's equations (5), giving (10). Next, multiplying Eq. (7) by Φ†, and then eliminating the term involving the scalar curvature, i.e., ξΦ†ΦR, with the help of Eq. (10), we obtain Eq. (11). At this stage, it is useful to introduce a generalized covariant derivative defined as 𝒟_μ = D_μ + Γ_μ, where Γ_μ is the usual affine connection. Using this derivative, Eq. (11) can be rewritten as Eq. (12). Then, adding its Hermitian conjugate to Eq. (12), we arrive at Eq. (13). Since the quantity Φ†Φ is a scalar and neutral under the U(1) charge, we obtain Eq. (14). Using this equation, the right-hand side of Eq. (9) indeed vanishes, by which we can prove that the current of the scale transformation is conserved on-shell, as promised. Now we are ready to explain our finding about the spontaneous symmetry breakdown of gauge symmetry in our model, where the coexistence of scale invariance and gauge symmetry plays a pivotal role. Incidentally, it is worthwhile to comment that in ordinary examples of spontaneous symmetry breakdown in the framework of quantum field theories, one is accustomed to dealing with a potential of the Mexican-hat type, which induces the symmetry breaking in a natural way, but the same recipe cannot be applied to general relativity because of the lack of such a potential. Let us note that a very interesting recipe which induces spontaneous symmetry breakdown of scale invariance via a local scale transformation is already known [15]. This recipe can be explained as follows: suppose that we start with a scale-invariant theory with only dimensionless coupling constants. In the process of a local scale transformation, one cannot refrain from introducing a quantity with mass dimension, which is the Planck mass M_p in the present context, to match the dimensions of an equation, and consequently scale invariance is spontaneously broken.
Of course, the absence of a potential which induces the symmetry breaking makes it impossible to investigate the stability of the selected solution, but the very existence of a solution including the Planck mass justifies the claim that this phenomenon is a sort of spontaneous symmetry breakdown. This fact can also be understood by using the dilatation charge, as seen shortly. The first technique for obtaining spontaneous symmetry breakdown of both scale and gauge invariances is to find a suitable local scale transformation which transforms dilaton gravity in the Jordan frame to general relativity with matter in the Einstein frame. Of course, note that our starting Lagrangian is invariant under not the local scale transformation but the global one, so the change of form of the Lagrangian after the local scale transformation is reasonable. Here it is useful to parametrize the complex scalar field Φ in terms of two real fields, Ω (or σ) and θ, in polar form, where Ω(x) = e^{ζσ(x)} is a local parameter field and the constants ζ, α will be determined later. Let us then consider the local scale transformation (16). Note that, apart from the local property of Ω(x), this local scale transformation differs from the scale transformation (3) in that the complex scalar field Φ is not transformed at all. Under the local scale transformation (16), the scalar curvature transforms accordingly, where we have defined f = log Ω = ζσ and ✷f = (1/√−g) ∂_μ(√−g g^{μν} ∂_ν f). With the critical choice (18), the non-minimal term in (1) yields the Einstein-Hilbert term (plus part of the kinetic term of the scalar field σ) up to a surface term, as in (19). Then, the second term in (1) is cast into the form (20), where we have chosen α = e for convenience and defined a new massive gauge field B_μ. In terms of this new gauge field B_μ, the Maxwell Lagrangian in (1) is described in the Einstein frame as in (22). It is worthwhile to stress again that in the process of the local scale transformation we have had to introduce a mass scale into a theory having no dimensional constants, thereby inducing the breaking of the scale invariance. More concretely, to match the dimensions on both sides of the equation, the Planck mass M_p must be introduced in the critical choice (18) (recovering the Planck mass). It is also remarkable to notice that in the process of spontaneous symmetry breakdown of the scale invariance, the Nambu-Goldstone boson θ is absorbed into the U(1) gauge field A_μ as a longitudinal mode, and as a result B_μ acquires a mass, which is nothing but the Higgs mechanism! In other words, the U(1) gauge symmetry is broken at the same time, and on the same energy scale, at which the scale symmetry is spontaneously broken. The size of the mass M_B of B_μ can be read off from (20) as M_B = (e/√ξ) M_p, which is also equal to the energy scale at which the scale invariance is broken. Putting (19), (20) and (22) together, and defining ζ^{−2} = 6 + 1/ξ (by which the kinetic term for the σ field becomes canonical), the Lagrangian (1) is reduced to its Einstein-frame form, where we have recovered the Planck mass M_p for clarity. Let us note that the first term coincides with the Einstein-Hilbert term of general relativity, the second term implies that the dilaton σ is massless at the classical level, and the last two terms mean that the gauge field becomes massive via the new Higgs mechanism.
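A one-line check of the quoted gauge boson mass is possible. Reading the critical choice (18) as ξΩ²Φ†Φ = M_p²/2 is our assumption; with it, the scalar kinetic term yields the mass term directly, as the following sketch shows.

```latex
% Sketch: with the phase of \Phi absorbed into B_\mu = A_\mu + \partial_\mu\theta
% and the assumed critical choice \xi\,\Omega^{2}\Phi^{\dagger}\Phi = M_p^{2}/2,
% the Einstein-frame scalar kinetic term contains a gauge-field mass term:
\Omega^{2}\left|D_{\mu}\Phi\right|^{2} \;\supset\;
  e^{2}\,\Omega^{2}\,\Phi^{\dagger}\Phi\; B_{\mu}B^{\mu}
  \;=\; \frac{e^{2}M_p^{2}}{2\,\xi}\; B_{\mu}B^{\mu}
\;\;\Longrightarrow\;\;
\tfrac{1}{2}\,M_B^{2}=\frac{e^{2}M_p^{2}}{2\,\xi},
\qquad M_B=\frac{e}{\sqrt{\xi}}\,M_p .
```

The same bookkeeping shows why M_B coincides with the scale at which the scale invariance is broken: both are set by the single dimensionful input M_p of the critical choice.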
As an interesting application of our finding to phenomenology, we can propose two scenarios at different energy scales. One scenario, which was adopted in the case of the classically scale-invariant B-L model [8, 11, 12], is spontaneous symmetry breakdown at the TeV scale, where e/√ξ ≈ 10^−15, so that gravity is in the strong coupling phase. The other scenario is to trigger the spontaneous symmetry breakdown of both scale and gauge symmetry at the Planck scale, for which we take e/√ξ ≈ 1 and gravity is in the weak coupling phase. Finally, let us comment on the physical meaning of the dilaton σ. The dilaton is a massless particle and interacts with the other fields only through the covariant derivative D̃_μ = D_μ + ζ(∂_μσ); owing to the derivative nature of this coupling, at low energy the coupling is so small that it is difficult to detect the dilaton experimentally. To understand the physical meaning of the dilaton more clearly, it is useful to evaluate the dilatation current J^μ in (4) in the Einstein frame. The result is exactly the form of the current expected in the case of conventional spontaneous symmetry breakdown, with 1/ζ playing the role of the vacuum value of the order parameter, and the dilaton σ playing that of the Nambu-Goldstone boson associated with the spontaneous symmetry breakdown of the scale invariance. This result can also be reached by constructing the corresponding charge, defined as Q_D = ∫ d³x J⁰. Note that this charge does not annihilate the vacuum because of the linear term in σ. Of course, it is also possible to show ∂_μ J^μ = 0 in terms of the equations of motion in the Einstein frame, as proved in the Jordan frame before. It therefore turns out that the dilaton σ is indeed the Nambu-Goldstone boson associated with the spontaneous symmetry breakdown of the scale invariance. We will see later that although the dilaton is massless at the classical level, the trace anomaly makes the dilaton massive at the quantum level. The generalization to non-Abelian groups In this section, we wish to extend the present formalism to arbitrary non-Abelian gauge groups. For clarity, we shall consider only the SU(2) gauge group with a complex SU(2)-doublet scalar field Φ^T = (Φ₁, Φ₂), since the generalization to a general non-Abelian gauge group is straightforward. Let us start with the SU(2) generalization of the Lagrangian (1), where a is an SU(2) index running over 1, 2, 3, and the covariant derivative and field strength are defined in the standard way. Here g is an SU(2) coupling constant (not to be confused with the determinant of the metric tensor, for which we use the same letter of the alphabet). Furthermore, the matrices τ^a are defined as half of the Pauli matrices, i.e., τ^a = σ^a/2, so the usual SU(2) algebra relations are satisfied. In order to exhibit the Higgs mechanism discussed in the previous section explicitly, it is convenient to go to the unitary gauge. To do that, we first parametrize the scalar doublet in polar form, where the unitary matrix U(x) is defined as U(x) = e^{−iατ^aθ^a(x)}, with α being a real number. Then, we define new fields in the unitary gauge. Using these new fields, after an easy calculation, we find the relations in which D_μΦ_u and F^a_μν(B) appear. To reach the desired Lagrangian, we can follow a perfectly similar path of argument to the case of the Abelian gauge group in the previous section. In other words, we take the critical choice and make use of the local scale transformation by the local parameter Ω(x) to move from the Jordan frame to the Einstein frame.
After performing this procedure, the final Lagrangian is obtained, where $\tilde{F}^a_{\mu\nu} \equiv F^a_{\mu\nu}(B)$. The mass of the massive gauge field $B^a_\mu$ is easily read off. As in the Abelian case, we can see that the massless dilaton σ is the Nambu-Goldstone boson of the spontaneous symmetry breakdown of scale symmetry by constructing the conserved dilatation current and its charge.

The generalization to a scalar field with many components

Let us recall that the quantum field theory of a scalar field with many components goes through in much the same way as that of a single component, except that a new and interesting internal symmetry arises. This general fact remains valid in the present formalism if we take a common "radial" field for all the components. When we consider a general "radial" field, we face the awkward issue of obtaining a canonical kinetic term for the dilaton. Deriving the canonical kinetic term is just a matrix-diagonalization problem and poses no difficulty in principle, but the general treatment makes our formalism very complicated. We will therefore focus on the case of a common "radial" field in this section. In the next section we will meet the same situation, since we consider two scalar fields coupling to the curvature scalar in a non-minimal manner, but a reasonable approximation will serve to avoid this awkward issue. The starting Lagrangian is just a generalization of (1) to n complex scalar fields, or equivalently a complex scalar field with n independent components $\Phi_i$ ($i = 1, 2, \cdots, n$), where the $\xi_i$ are positive, dimensionless constants. The covariant derivatives and field strengths are defined in the obvious way. Since the Lagrangian (36) includes only dimensionless coupling constants, it is manifestly invariant under a global scale transformation. Following the Noether theorem, the current for the scale transformation can be written down, and in the same way as for both the Abelian and non-Abelian gauge groups, we can show that this current is conserved on-shell. As mentioned above, the point is to take a common "radial" (real) field Ω(x). This is a great simplification in the sense that the n real component fields in $\Phi_i(x)$ are reduced to a single one, but this restriction is needed to obtain the canonical kinetic term for the dilaton in a reasonably simple way. Now we would like to show that the Lagrangian (36) exhibits spontaneous symmetry breakdown of gauge symmetry when scale symmetry is spontaneously broken. To do that, we follow steps similar to those for the single scalar field with the Abelian gauge group in Section 2. With the critical choice, the non-minimal term in (36) yields the Einstein-Hilbert term (plus part of the kinetic term of the scalar field σ) up to a surface term. Moreover, the second term in (36) reduces to a form in which we have selected $\alpha_i = e_i$ and defined new massive gauge fields $B^{(i)}_\mu$. In terms of the new gauge fields $B^{(i)}_\mu$, the Maxwell Lagrangian in (36) is recast accordingly. To summarize, the Lagrangian (36) is obtained with the definition $\zeta^{-2} = 6 + \frac{1}{n}\sum_{i=1}^{n}\frac{1}{\xi_i}$. It is then obvious that this system also exhibits the new Higgs mechanism triggered by the spontaneous symmetry breakdown of scale symmetry.

Radiative corrections and dilaton mass

In this section, as an example, we wish to apply the ideas discussed so far to the standard model and evaluate the quantum effects.
Since the standard model is known not to be classically scale-invariant because of the presence of the (negative) mass term of the Higgs field, we must replace the mass term with a new scalar field. It is then natural to identify this new scalar with the Φ field which couples to the scalar curvature in a non-minimal manner. Let us first recall that in our previous work [11] we have already considered one-loop effects of a classically scale-invariant B-L model [8]. However, our finding of gravitational spontaneous symmetry breakdown of gauge symmetry as a result of spontaneous symmetry breakdown of scale symmetry is very universal, in the sense that our ideas can be generalized not only to local scale symmetry, as clarified in [12], but also to non-Abelian gauge groups and even to a scalar field with many components, as discussed in this article. In fact, it is obvious that our ideas can be applied to any model which is scale-invariant and involves non-minimal coupling terms between the curvature scalar and charged scalars associated with local gauge symmetries. In our calculation we do not attempt to quantize the metric tensor field, and we take a fixed Minkowski background $g_{\mu\nu} = \eta_{\mu\nu}$. Moreover, we restrict ourselves to the calculation of radiative corrections between the dilaton and matter fields in the weak-field approximation. One of the motivations behind this study is to calculate the size of the dilaton mass. As shown above, the dilaton is exactly massless at the classical level owing to scale symmetry, but it is well known that radiative corrections violate the scale invariance, thereby leading to the trace anomaly. Consequently, the dilaton becomes massive in the quantum regime. Since the dilaton is a scalar field like the Higgs particle, one might expect a quadratic divergence in its self-energy diagram. On the other hand, since the dilaton is the Nambu-Goldstone boson of scale symmetry, its mass could be much lower, in the same way that the pion masses are very small because the pions can be understood as the Nambu-Goldstone bosons of SU(2)_L × SU(2)_R → SU(2)_V flavor symmetry breaking. As far as we know, nobody has calculated the dilaton mass in a reliable manner, so we wish to compute it within the framework of the present formalism and determine which scenario is realized: a quadratic divergence with huge radiative corrections, as for the Higgs particle, or a much lower mass, as for the pions. Remarkably enough, it will be shown that although the dilaton mass is quadratically divergent, the cutoff scale, which is the Planck mass in the formalism at hand, is exactly cancelled by the induced coupling constant, so that the dilaton mass remains around the GeV scale. Whenever we evaluate anomalies, the key point is to adopt a suitable regularization method respecting, as far as possible, the classical symmetries existing in the action. In this article, as a regularization method, we make use of the method of continuous space-time dimensions, for which we rewrite previous results in arbitrary D dimensions [17]. As in dimensional regularization, the divergences appear as poles $\frac{1}{D-4}$, which are cancelled by the factor $D-4$ that multiplies the dilaton coupling, thereby producing a finite result that leads to an effective interaction term.
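Schematically, in our own paraphrase (not an equation taken from the paper), with $g_{\rm dil}$ a generic dilaton coupling and $C_{-1}$, $C_0$ loop coefficients, the cancellation works as
$$\lim_{D\to 4}\,\big[(D-4)\,g_{\rm dil}\big]\left[\frac{C_{-1}}{D-4} + C_0 + \mathcal{O}(D-4)\right] = C_{-1}\,g_{\rm dil},$$
so only the residue of the pole survives the limit and a finite effective interaction remains.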
Basic formalism

Our starting Lagrangian, a scale-invariant extension of the standard model coupled to the non-minimal terms, is of the form given below, where $\mathcal{L}_m$ denotes the remaining Lagrangian of the standard-model sector, such as the Yukawa couplings, and the various definitions are given by the accompanying expressions, with $e_i$ ($i = 1, 2$) being U(1) coupling constants and g being the SU(2) coupling constant. As in Appendix A, in this model we can also calculate the Noether current for the scale transformation, and it turns out that this dilatation current is conserved on-shell as well. For simplicity of presentation we set the SU(2) gauge field to zero, $A^a_\mu = 0$, since this assumption does not change the essential conclusion for our purpose. In general D space-time dimensions, as a generalization of Eq. (16), the local scale transformation is defined as in Eq. (49). Under this local scale transformation, with the definition $f = \log\Omega$, the scalar curvature transforms accordingly; we set $D = 4$ for the curvature in what follows, since we do not quantize the metric tensor and therefore do not pick up poles from the curvature. In a physically more realistic situation, the scale symmetry must be broken spontaneously at a higher energy scale, before the spontaneous breaking of the electroweak symmetry, since all quantum field theories must in principle contain gravity from the beginning, even though gravitational contributions can usually be ignored when dealing with particle-physics processes. Therefore, let us first break the scale invariance by taking a definite value for the charged scalar field Φ, where we have defined $\Omega(x) = e^{\frac{2}{D-2}\zeta\sigma}$ and $\zeta^{-2} \equiv 4\frac{D-1}{D-2} + \frac{1}{\xi_1} = 6 + \frac{1}{\xi_1}$, the last equality holding at D = 4. Then the first term in (46) takes a new form. Similarly, the third term in (46) is changed to a form in which we have defined a new coupling constant and a massive gauge field, and chosen $\alpha = \hat{e}_1$ for convenience. Adding (52) and (53) together yields the combined expression. On the other hand, the Lagrangian of the matter fields turns out to depend on the dilaton field σ in a non-trivial manner in general D space-time dimensions. First, the non-minimal term for the H field is transformed. Second, the kinetic term for H is recast with a new covariant derivative, in which $\hat{e}_2 = \Omega^{\frac{D-4}{2}} e_2$. Third, the electromagnetic terms are reduced to a form with new field strengths. (The presence of the Nambu-Goldstone mode θ in $\tilde{F}^{(1)}_{\mu\nu}$ merely shows that the scale invariance of the theory under consideration is violated in any space-time dimension except four.) Finally, the potential term can be rewritten, so that, to summarize, the starting Lagrangian takes its new form. Next, we are ready to deal with the spontaneous symmetry breakdown of the electroweak symmetry, which is assumed to occur at a lower energy, the GeV scale, than the breaking of scale symmetry. To realize the spontaneous symmetry breakdown of the electroweak symmetry, we assume the conventional ansatz. With the parametrization $\hat{H}^T = (0, \frac{v+h}{\sqrt{2}})e^{i\varphi}$, after the spontaneous symmetry breakdown of the electroweak symmetry, the potential term can be written in a form in which a constant in the square bracket is discarded, and the vacuum expectation value v and the Higgs mass $m_h$ are defined with the Planck mass $M_p$ explicitly written. Now we wish to consider couplings between the dilaton field σ and the matter fields which vanish at the classical level (D = 4) but provide a finite contribution at the quantum level, interpreted as the trace anomaly.
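As a quick consistency check of the definition above (purely illustrative, using sympy), one can confirm that the D-dependent coefficient in $\zeta^{-2}$ reduces to 6 at D = 4:

```python
import sympy as sp

D, xi1 = sp.symbols('D xi_1', positive=True)
zeta_inv_sq = 4*(D - 1)/(D - 2) + 1/xi1   # zeta^{-2} as defined in the text
print(sp.simplify(zeta_inv_sq.subs(D, 4)))  # -> 6 + 1/xi_1
```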
In the weak-field approximation, let us extract the terms linear in the dilaton σ in $V(\hat{H})$. The potential $V(\hat{H})$ is then divided into two parts, $V^{(0)}(\hat{H})$ and $V^{(1)}(\hat{H})$. Using the parametrization $\hat{H}^T = (0, \frac{v+h}{\sqrt{2}})e^{i\varphi}$, the remaining part involving the field H, apart from the Higgs potential, is also rewritten, and consequently the whole Lagrangian (62) takes a somewhat longer form in which we have restored the Planck mass scale $M_p$. As mentioned in Section 4, given two non-minimal terms, we need to diagonalize the kinetic terms for the dilaton σ and the Higgs field h to obtain the canonical form. However, in the present context the energy scale v of electroweak symmetry breaking is much lower than that of scale symmetry breaking, so it is reasonable to take an approximation under which the Lagrangian simplifies considerably. Based on this Lagrangian, we calculate quantum effects below, in particular for the dilaton coupling. Since we are interested in the low-energy region, the derivative couplings of the dilaton appearing in $\tilde{F}^{(i)}_{\mu\nu}$ and $\tilde{D}_\mu\hat{H}$ will be ignored in the calculation.

The coupling between dilaton and Higgs field

In this subsection, we first switch off the U(1) fields, calculate the coupling between the dilaton σ and the Higgs particle h, and derive an effective Lagrangian at the one-loop level. The contribution from the U(1) fields will be discussed in a later subsection. We will see that the $\sigma h^n$ ($2 \le n \le 4$) (n+1)-point diagrams are non-vanishing, whereas the $\sigma h^n$ ($n \ge 5$) diagrams vanish. First, let us consider three-point (with two Higgs h and one dilaton σ as the external particles), one-loop diagrams. Inspection of the vertices reveals that we have three types of one-loop divergent diagrams in which the Higgs field circulates in the loop and one dilaton field, whose momentum is assumed to vanish, couples. Note that the divergences stemming from the Higgs one-loop diagrams provide poles $\frac{1}{D-4}$, which cancel the factor $D-4$ multiplying the dilaton coupling in $V^{(1)}(\hat{H})$, thereby yielding a finite contribution. One type of one-loop divergent diagram, which we call diagram (A1), is of tadpole type and is given by the Higgs loop to which the dilaton couples via the vertex $-(D-4)\,3!\,\zeta\lambda_H$ in $V^{(1)}(\hat{H})$. The corresponding amplitude $T_{A1}$ follows from the familiar dimensional-regularization formula, which corresponds to a special case of the general formula in Appendix B, together with the gamma-function property $\Gamma(m+1) = m\Gamma(m)$. The second type of one-loop divergent diagram, which we call diagram (A2), is given by the Higgs loop to which the dilaton couples via the vertex $-(D-4)\zeta m_h^2$ in $V^{(1)}(\hat{H})$, together with the Higgs self-coupling vertex $-3!\lambda_H$ in $V^{(0)}(\hat{H})$; the amplitude $T_{A2}$ is calculated along the same lines. The final type of one-loop diagram, which we call diagram (A3), is a little more involved and is given by the Higgs loop to which the dilaton couples via the vertex $-(D-4)\,3!\,\zeta\sqrt{\frac{\lambda_H}{2}}\,m_h$ in $V^{(1)}(\hat{H})$, together with the Higgs self-coupling vertex $-3!\sqrt{\frac{\lambda_H}{2}}\,m_h$ in $V^{(0)}(\hat{H})$. The amplitude $T_{A3}$ involves the external momentum q of the Higgs field. In order to reach the final result in Eq.
(76), we have evaluated the integral as follows: at the second equality we used the Feynman parameter formula (145); at the fourth equality we shifted the momentum $k + xq \to k$, which is allowed since the integral is now finite owing to the regularization; and at the fifth equality we used the on-mass-shell condition $q^2 = -m_h^2$ and Eq. (75). Thus, adding the three types of contributions together, we can construct an effective Lagrangian at the one-loop level, in which we have explicitly written out the Planck-mass dependence so that the dimensions can be recognized clearly. Next, let us take account of four-point (with three Higgs and one dilaton as the external particles), one-loop diagrams. In this case, inspection of the vertices reveals again that there are two types of one-loop divergent diagrams in which the Higgs field circulates in the loop. One type, which we call diagram (B1), is given by the Higgs loop to which the dilaton couples via the vertex $-(D-4)\,3!\,\zeta\lambda_H$ in $V^{(1)}(\hat{H})$, together with the Higgs self-coupling $-3!\sqrt{\frac{\lambda_H}{2}}\,m_h$ in $V^{(0)}(\hat{H})$; the corresponding amplitude $T_{B1}$ follows using Eq. (77). The other type, called diagram (B2), is given by the Higgs loop to which the dilaton couples via the vertex $-(D-4)\,3!\,\zeta\sqrt{\frac{\lambda_H}{2}}\,m_h$ in $V^{(1)}(\hat{H})$, together with the Higgs self-coupling $-3!\lambda_H$ in $V^{(0)}(\hat{H})$, giving the amplitude $T_{B2}$. Putting the two types of contributions together, we obtain a result that gives rise to an effective Lagrangian in which we have again restored the Planck mass. Now we turn our attention to five-point (with four Higgs and one dilaton as the external particles), one-loop diagrams. In this case we find that there is only one type of one-loop divergent diagram in which the Higgs field circulates in the loop. This type, which we call diagram (C), is given by the Higgs loop to which the dilaton couples via the vertex $-(D-4)\,3!\,\zeta\lambda_H$ in $V^{(1)}(\hat{H})$, together with the Higgs self-coupling $-3!\lambda_H$ in $V^{(0)}(\hat{H})$. The corresponding amplitude $T_C$ involves the external momenta p and q of two of the Higgs fields, and this quantum effect yields a further effective Lagrangian. Finally, it is straightforward to evaluate (n+1)-point (with $n \ge 5$ Higgs and one dilaton as external particles), one-loop diagrams in a similar manner. It turns out that these higher-point one-loop diagrams do not yield any divergence, and hence the corresponding effective Lagrangian vanishes. Moreover, we find that there are no divergences for the $\sigma^n h^m$ ($n \ge 2$) type of amplitudes at the one-loop level. In the end, we obtain a total effective Lagrangian at the one-loop level. Note that this effective Lagrangian has a form similar to the potential $V^{(1)}(\hat{H})$, but each coefficient is suppressed by the Planck mass, which means that the effects of the radiative corrections are very small in the low-energy region.

Yukawa coupling

In the standard model, the fermion masses arise from the Yukawa coupling between the fermions and the Higgs field. It is therefore of interest to evaluate radiative corrections to the Yukawa coupling in the present model. It is easy to see that there are no radiative corrections to the coupling between the dilaton and the fermions at the one-loop level, but it turns out that the one-loop induced vertex produces radiative corrections to this coupling, so we calculate this quantum effect in this subsection.
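As an aside, the two regularization identities used repeatedly above can be checked symbolically. This is a purely illustrative sketch (sympy), verifying the two-denominator Feynman parameter formula at sample values and exposing the $\frac{1}{D-4}$ pole with $D = 4 - \epsilon$:

```python
import sympy as sp

# Two-denominator Feynman parameter formula, Eq. (145):
#   1/(A*B) = Integral_0^1 dx [x*A + (1-x)*B]^(-2)
A, B, x = sp.symbols('A B x', positive=True)
F = sp.integrate(1/(x*A + (1 - x)*B)**2, (x, 0, 1))
print(sp.simplify(F.subs({A: 3, B: 7})))   # 1/21, i.e. 1/(A*B)

# The 1/(D-4) pole: with D = 4 - epsilon, Gamma(1 - D/2) = Gamma(-1 + eps/2),
# rewritten via Gamma(-1 + y) = Gamma(1 + y) / (y*(y - 1)) to expose the pole.
eps = sp.symbols('epsilon')
y = eps / 2
gamma_m1 = sp.gamma(1 + y) / (y * (y - 1))
print(sp.series(gamma_m1, eps, 0, 1))      # leading term -2/epsilon
```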
Before delving into the calculation, let us go back to the basics of the Yukawa coupling. The Yukawa coupling between the fermions and the Higgs field is generically given by a Lagrangian in which $g_Y$ is the Yukawa coupling constant, and $\psi_L$ and $\psi_R$ are, respectively, a left-handed SU(2)-doublet spinor and a right-handed singlet spinor. To move from the Jordan frame to the Einstein frame, we use the local scale transformation (49) together with its fermionic counterpart. Under this local scale transformation, the Lagrangian (87) retains the same form. With suitable definitions of the spinors and the unitary gauge for the Higgs field $\hat{H}$, the Lagrangian is reduced to a form in which we have defined $M_\psi = \frac{g_Y}{\sqrt{2}}\,v$. We are now in a position to calculate the one-loop amplitude in which two fermions and one dilaton appear as the external particles. In this case there is no divergent diagram, but we have a finite diagram in which the fermion and the Higgs field propagate in the loop, which we call diagram (D). In this diagram, the dilaton couples to the Higgs via the vertex $-\frac{6}{M_p}$ in (86), which is itself a one-loop effect, and the two fermions couple to the Higgs via the vertex $-\frac{g_Y}{\sqrt{2}}$ in (91). (As will be seen later, there is a similar contribution from the U(1) gauge sector at the one-loop level, but we neglect it here since the contribution from the gauge sector is smaller than that from (86).) Thus, this diagram is essentially a two-loop effect. The corresponding amplitude involves the external momentum q of the fermion field, which satisfies the on-mass-shell condition $q^2 = -M_\psi^2$, and a function f(x). To obtain the result in Eq. (92), we have calculated the integral as follows: at the second equality we used the Feynman parameter formula (146); at the fourth equality we shifted the momentum $k - xq \to k$ and used the fact that $\int d^Dk\, k_\mu F(k^2) = 0$ for a general function F, in addition to $q_\mu \approx 0$ at low energy; and at the final equality we made use of an integral formula which is a special case of the general formula (141). From the above result, an effective Lagrangian for the interaction between the dilaton and the fermions can be derived, in which the effective coupling $g_\sigma$ is defined as the absolute value of $T_D$, i.e., $g_\sigma = |T_D|$.

Dilaton mass

As seen in the Lagrangian (71), the dilaton is exactly massless at the classical level, since it is the Nambu-Goldstone boson stemming from the spontaneous symmetry breakdown of scale symmetry. However, it is well known that the scale symmetry is violated by the trace anomaly at the quantum-mechanical level, and as a result the dilaton becomes massive. It is very interesting to evaluate the size of the dilaton mass within the present formalism. It is in general expected that, if it acquires a mass at all, the Nambu-Goldstone boson should not be very heavy, as with the pions. Of course, the size of the dilaton mass should be closely related to the energy scale at which the scale symmetry is spontaneously broken. On the other hand, because the dilaton, like the Higgs particle, is a representative example of a scalar particle, it is of interest to investigate whether or not the dilaton receives a quadratic divergence like the Higgs particle. It turns out that at one loop there is no quantum correction to the self-energy of the dilaton, and that it is at the two-loop level that radiative corrections to it appear in the formalism at hand.
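The following order-of-magnitude sketch previews the outcome derived next. Every input here is an assumption made for illustration only: a dilaton coupling of the schematic form $g_\sigma \sim c/M_p$ with c of order the electroweak scale, a naive quadratically divergent self-energy estimate $m_\sigma \sim g_\sigma\Lambda/4\pi$, and the cutoff set to $\Lambda = M_p$:

```python
import math

# Assumed, illustrative inputs: g_sigma ~ c / M_P with c of order the
# electroweak scale (we take c ~ m_h), and the naive quadratically
# divergent estimate m_sigma ~ g_sigma * Lambda / (4*pi) with Lambda = M_P.
M_P = 2.4e18          # reduced Planck mass, GeV
c = 125.0             # O(m_h) numerator of g_sigma, GeV (assumption)
g_sigma = c / M_P
Lambda = M_P          # cutoff chosen at the Planck mass, as in the text

m_sigma = g_sigma * Lambda / (4.0 * math.pi)
print(f"m_sigma ~ {m_sigma:.0f} GeV")   # ~10 GeV: the 1/M_P cancels the cutoff
```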
Actually, we have a one-loop divergent diagram for the self-energy of the dilaton, which we call diagram (E), in which two external dilatons couple to the Higgs loop via the vertex $-(D-4)\zeta m_h^2$ in $V^{(1)}(\hat{H})$ and the vertex $-\frac{6}{M_p}$ in (86), the latter being already a one-loop effect. Therefore, this one-loop diagram is essentially a two-loop contribution. The amplitude directly gives rise to an effective action for the mass term of the dilaton, in which we define the induced dilaton mass $m_\sigma$. As expected, it turns out that the dilaton, which is massless classically, becomes massive because of radiative corrections. From the result (99), one might be tempted to conclude that the dilaton mass induced by radiative corrections is very small, since its size is suppressed by the Planck mass and $\zeta \approx \lambda_H \approx O(1)$. But the story does not end there, because we have to take into consideration the quadratic divergence, which is the root of the hierarchy problem in the case of the Higgs particle. Since there is no interaction vertex $\sigma^2 h^2$ in the present formalism, the most severe quadratic divergence appears when the fermion circulates in the loop via the vertex in the Lagrangian (96). The amplitude $T_F$, which is essentially a five-loop effect, is certainly quadratically divergent by power counting, with Λ the ultraviolet cutoff. Then, with the reasonable choice $\Lambda = M_p$, the mass of the dilaton comes out around the GeV scale, a result that holds approximately for ψ = top quark. (Here it is reasonable to take $\zeta \approx \lambda_H \approx g_Y \approx O(1)$ at low energy.) Note that the factor $\frac{1}{M_p}$ in $g_\sigma$ is cancelled by the cutoff $M_p$. It is remarkable that the quadratic divergence, which leads to the burdensome hierarchy problem in the case of the Higgs particle, gives the dilaton a GeV-scale mass! At first sight, a GeV-scale dilaton mass appears to conflict with the results of the LHC, given the null results of searches for new scalar particles other than the Higgs below the few-TeV scale. However, as seen in the relation $g_\sigma = |T_D|$ and Eq. (92), the coupling between the dilaton and the Higgs particle is so tiny that it is extremely difficult to detect the dilaton at the LHC.

Contributions from gauge fields

In the previous subsections we switched off the U(1) gauge fields. In this final subsection we switch them on and calculate the coupling between the dilaton and the Higgs particle using propagators and vertices from the gauge-field sector of the Lagrangian (71). The result is very simple and illuminating, in the sense that we obtain an effective Lagrangian of a form similar to (86) at the one-loop level, but with the coefficient of each term multiplied by the square of the "fine-structure constant". For convenience, let us pick out the part of the Lagrangian (71) which contains the gauge fields. With suitable definitions of the gauge-field masses, the Lagrangian (102) can be rewritten accordingly.

Conclusion

In this article, we have investigated a Higgs mechanism in scale-invariant theories of gravitation in detail. After reviewing this new Higgs mechanism, found in our previous articles [11,12], in terms of the simplest model, we have extended it to non-Abelian gauge groups and to a scalar field with many components.
Since we have already considered the Higgs mechanism in a locally scale-invariant theory of gravitation, i.e., conformal gravity, the validity of this mechanism in scale-invariant gravitational theories is very universal and should therefore have phenomenological applications in the future. Moreover, we have spelled out the quantum effects of a scale-invariant extension of the standard model in a flat Minkowski background, and examined the coupling between the dilaton and the Higgs particle. An intriguing observation made in our analysis is that although the mass of the dilaton is exactly zero at the classical level owing to the Nambu-Goldstone theorem, it becomes non-zero and of finite size, around the GeV scale, because of radiative corrections. It is worthwhile to mention that we have succeeded in deriving the size of the dilaton mass deductively, by starting with a fundamental theory and without any specific assumption. As far as we know, the dilaton mass has not thus far been obtained in such an a priori manner, so we consider our derivation of the dilaton mass to be very interesting. As mentioned in the article, the GeV-scale mass of the dilaton is consistent with the recent null results for new scalar particles other than the Higgs at the LHC, since the coupling constant of the dilaton is too small for it to be detected there. However, a dilaton with a GeV-scale mass could have implications for cosmology; e.g., the dilaton could become a dark-matter candidate if it is somehow rendered stable by an as yet unknown mechanism. Our considerations in this article are confined to the quantum analysis in a fixed Minkowski background. In other words, quantum effects coming from gravity are completely ignored because of the non-renormalizability of quantum gravity. Since quantum-gravity effects are of course subdominant to quantum effects from matter fields in the low-energy region, it is physically reasonable to neglect them as a first approximation. Nevertheless, it would be of interest to take the quantum effects from gravity into consideration, and in the future we wish to study the quantum effects from the gravitational sector within the present formalism. Another interesting application of our finding is Higgs inflation [18]. We wish to return to this problem as well in the near future.

A Derivation of current for scale transformation

In Appendix A, we present a derivation of the dilatation current (4) via the Noether theorem. It is easy to show that the Lagrangian (1) is invariant under the scale transformation (3) without surface terms, so the Noether current takes the standard form. Under the scale transformation (3) with a global parameter $\Omega = e^\Lambda \approx 1 + \Lambda$ ($|\Lambda| \ll 1$), the current is expressed in terms of the three objects $\frac{\partial\mathcal{L}}{\partial(\partial_\mu g_{\rho\sigma})}$, $\frac{\partial\mathcal{L}}{\partial(\partial_\mu\Phi)}$, and $\frac{\partial\mathcal{L}}{\partial(\partial_\mu\Phi^\dagger)}$, which must be calculated to obtain the expression for the dilatation current $J^\mu$. In particular, the calculation of the first object, $\frac{\partial\mathcal{L}}{\partial(\partial_\mu g_{\rho\sigma})}$, is so involved that we present its derivation in detail. The piece $\mathcal{L}_1$ in Eq. (121) includes terms with second derivatives of the metric, i.e., $\partial^2 g$, so we need to integrate by parts to transform them into terms with first derivatives, i.e., $\partial g$. After the integration by parts, $\mathcal{L}_1$ is divided into two parts, one of which contains terms proportional to $\partial\varphi$ while the other contains terms proportional to $\varphi$ itself. Now let us focus on the second part, which we call A, and show that A is equal to $-2\mathcal{L}_2$. Inserting Eq. (124) into Eq.
(123), we reach the result that A is indeed equal to $-2\mathcal{L}_2$. Next, plugging this result into Eq. (122), and then substituting Eq. (126) into Eq. (119), we obtain an expression in which a quantity $K^\alpha$ is defined, where at the second equality we have used Eqs. (121) and (124). With the expression (129), it is straightforward to take the variation of $\mathcal{L}_K$ with respect to $\partial_\mu g_{\rho\sigma}$, with the result
$$\frac{\partial \mathcal{L}_K}{\partial(\partial_\mu g_{\rho\sigma})} = -\sqrt{-g}\;\partial_\alpha\varphi\left(g^{\alpha(\rho}g^{\sigma)\mu} - g^{\alpha\mu}g^{\rho\sigma}\right).$$
Accordingly, Eqs. (131) and (134) give us the desired variation. Since $\partial_\mu g_{\rho\sigma}$ is contained only in $\mathcal{L}_{NM}$, we can use the definition $\varphi = \xi\Phi^\dagger\Phi$. Furthermore, it is easy to calculate the variations of the Lagrangian with respect to $\partial_\mu\Phi$ and $\partial_\mu\Phi^\dagger$. Putting together Eqs. (136) and (137), the dilatation current (118) is finally obtained.

B Useful formulae in the loop calculation

In Appendix B, we summarize useful formulae for calculating the radiative corrections in Section 5. Following Ref. [15], as a regularization method we adopt the method of continuous space-time dimensions D in a flat Minkowski space-time. In this regularization method, all quantities are extended from four dimensions to D dimensions. Let us therefore focus on the loop integral (139), where m, n are integers and ∆ is a constant. By power counting, this integral is convergent as long as $D < 2n - 2m + 4$. With a Wick rotation $k^0 = ik^D$ and the assumption of spherical symmetry, the integral (139) can be rewritten in terms of $V(D) = \frac{2\pi^{D/2}}{\Gamma(\frac{D}{2})}$, the D-dimensional volume factor, e.g., $V(4) = 2\pi^2$. Via the change of variables from k to $t = \frac{k^2}{\Delta}$, the integral is reduced to a form involving the beta function, whose definition is
$$B(x, y) = \int_0^\infty dt\, \frac{t^{x-1}}{(1+t)^{x+y}} = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}.$$
Since $\Gamma(z)$ has isolated poles at $z = 0, -1, -2, \cdots$, the integral (141) has isolated poles at $D = 2(n - m + 2), 2(n - m + 3), \cdots$. We often make use of the gamma-function relation $\Gamma(x+1) = x\Gamma(x)$, which holds for positive real numbers $x > 0$, together with $\Gamma(1) = 1$. In Section 5, to combine propagator denominators we utilize the Feynman parameter formula
$$\frac{1}{A_1 A_2 \cdots A_n} = \int_0^1 dx_1 \cdots dx_n\, \delta\Big(\sum_i x_i - 1\Big)\, \frac{(n-1)!}{(x_1 A_1 + x_2 A_2 + \cdots + x_n A_n)^n}.$$
In the case of only two denominator factors, this formula reduces to
$$\frac{1}{AB} = \int_0^1 \frac{dx}{\left[x A + (1-x) B\right]^2},$$
and, differentiating Eq. (145) with respect to B, we can derive yet another useful formula.
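Two of the appendix identities lend themselves to a quick symbolic verification; the following sketch (sympy, illustrative only) checks the quoted value $V(4) = 2\pi^2$ and the beta-function representation at sample arguments:

```python
import sympy as sp

# V(D) = 2*pi^(D/2)/Gamma(D/2): check the quoted example V(4) = 2*pi^2
D = sp.symbols('D', positive=True)
V = 2 * sp.pi**(D/2) / sp.gamma(D/2)
print(V.subs(D, 4))   # 2*pi**2

# Beta-function representation reached after t = k^2/Delta, at sample arguments:
t = sp.symbols('t', positive=True)
a, b = sp.Rational(3, 2), sp.Rational(5, 2)
I = sp.integrate(t**(a - 1) / (1 + t)**(a + b), (t, 0, sp.oo))
print(sp.simplify(I - sp.beta(a, b)))   # -> 0
```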
Selected Properties of Bio-Based Layered Hybrid Composites with Biopolymer Blends for Structural Applications

In this study, layered composites were produced with different biopolymer adhesive layers, including the biopolymer polylactic acid (PLA), polycaprolactone (PCL), and biopolymer blends of PLA + polyhydroxybutyrate (PHB) (75:25 w/w ratio) with the addition of 25% or 50% microcrystalline cellulose (MCC) and 3% triethyl citrate (TEC); these acted as binders and co-created the five layers in the elaborated composites. Modulus of rupture (MOR), modulus of elasticity (MOE), internal bonding strength (IB), density profiles, differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), and scanning electron microscopy (SEM) analyses were obtained. The results showed that among the composites in which two pure biopolymers were used, PLA obtained the best results, while among the produced blends, PLA + PHB, PLA + PHB + 25MCC, and PLA + PHB + 25MCC + 3TEC performed best. The mechanical properties of the composites decreased with increases in the MCC content in the blends. Therefore, adding 3% TEC improved the properties of composites made of PLA + PHB + MCC blends.

Introduction

The continuous development of science and technology increases the demand for environmentally friendly products of natural origin and for the increased reuse of forestry and agricultural byproducts, which are mostly treated as waste [1,2]. Due to concerns related to the depletion of petroleum and greenhouse gas emissions resulting from the production of petroleum products, the use of renewable, recyclable, and compostable raw materials is becoming increasingly desirable [3]. In recent years, environmental regulations have forced producers of wood-based composites to think about using sustainable materials in producing new products. This is one of the reasons why the use of alternative raw materials in wood-based composites is of growing interest [4,5]. New political strategies aim to improve efficiency and reduce impacts on health and the environment, and are also an essential element in promoting competitiveness and the idea of sustainable development [6]. Trends in wood-based products for commodities show an increase in demand, which corresponds to a combination of different factors such as aesthetic aspects, awareness of the origin of products, service life, and performance. However, some of these factors can be contradictory, as products of natural origin have a shorter service life than petroleum products, and products with enhanced performance usually imply a more complex disposal of the residues. On the other hand, there is pressure for new products to perform perfectly during use without harming health and the environment, and, at the end of their life cycle, to be easily reduced, reused, recycled, composted, or recovered as energy. The elaborate production of biocomposites, defined as materials composed of [...] at 270 °C, yielding a small processing range for melt extrusion [30]. Blending two or more polymers with different properties to produce composite materials is a well-known strategy to obtain specific physical properties without the need for complex polymeric systems [31]. PLA and PHB blends have been intensively studied because of the good synergy they can form, with PHB increasing the crystallinity of the blend [32-34] and PLA increasing the stiffness [35].
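As a small illustration of the formulations described above, the sketch below computes component masses for a given batch. It assumes (our reading, since Table 1 is not reproduced in this excerpt) that the MCC and TEC percentages are weight fractions of the final blend, with the 75:25 PLA:PHB masterbatch making up the remainder:

```python
def blend_masses(total_g, mcc_frac=0.0, tec_frac=0.0):
    """Component masses (g) for a PLA+PHB (75:25 w/w) masterbatch blended
    with microcrystalline cellulose (MCC) and triethyl citrate (TEC).
    Assumption: mcc_frac and tec_frac are weight fractions of the final blend."""
    mb = total_g * (1.0 - mcc_frac - tec_frac)  # masterbatch share
    return {"PLA": 0.75 * mb, "PHB": 0.25 * mb,
            "MCC": total_g * mcc_frac, "TEC": total_g * tec_frac}

# e.g. the PLA + PHB + 25MCC + 3TEC variant, for a 1 kg batch:
print(blend_masses(1000.0, mcc_frac=0.25, tec_frac=0.03))
# {'PLA': 540.0, 'PHB': 180.0, 'MCC': 250.0, 'TEC': 30.0}
```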
The mechanical blending of these two polymers can be achieved in a melt state because of their similar melting temperatures, which allows for better blending. Many studies have concluded that the best blend is achieved by blending 75 wt% PLA with 25 wt% PHB [32,35-37]. Polymers reinforced with natural fibers are replacing synthetic fiber-reinforced plastics in different industrial sectors, including the automotive industry, packaging, and furniture production, providing lighter materials with better thermal properties [38]. This investigation aimed to assess the impact of natural biopolymer binders on selected mechanical and physical properties of a five-layer lignocellulosic composite produced with different biopolymer layers (including, for example, PLA and PCL), using biopolymer blends as adhesives that co-create the five layers in the composites. In light of the state of the art outlined above, this study fills the gap between the development of formaldehyde-free binders for wood, made of renewable resources, and that of wood-based layered composites modified by additional layers made of biopolymers and their blends.

Materials

This study produced layered composites from beech (Fagus sylvatica L.) veneers. The nominal dimensions of the commercial veneers were 2500 mm × 200 mm × 0.60 mm (length × width × thickness). The veneers were cut into 200 mm × 200 mm sheets. The moisture content of every veneer, ca. 5%, was measured using an ultrasonic moisture control device. Pure, laboratory-grade polylactide (PLA, Sigma-Aldrich, product no. 38534, Burlington, MA, USA), polycaprolactone (PCL, Sigma-Aldrich, product no. 704105) in drops with a diameter of 3 mm, and five variants of biopolymer blends, obtained under laboratory conditions, were used as binders. The following components were used to produce the biopolymer blends: PLA was provided by Futerro (Belgium) at extrusion grade; polyhydroxybutyrate (PHB, P309E) was provided by Biomer (Germany); Sigma-Aldrich provided the microcrystalline cellulose (MCC); and the triethyl citrate (TEC) was provided by Acros Organics.

Biopolymer Blend Elaboration

The PLA-PHB masterbatch (MB) was blended in a 75:25 w/w ratio according to recommendations from the literature [29,33], and composites were elaborated by mixing MB with different contents of MCC and TEC, as shown in Table 1. Blends were manufactured using a twin-screw extruder (M250, LabTech Engineering, Thailand) with a screw speed of 30-200-100-100 rpm (feed-mix-extrusion) and a temperature profile of 180-185-190-195 °C. Composites were extruded twice to guarantee dispersion and then granulated into pellets.

Manufacturing of Biopolymer Adhesive Layers

The biopolymer adhesive layers were manufactured with an intended thickness of 1 mm. The granules were manually spread onto a frame mold with an average total of 62 g (1550 g m−2) over PTFE mats and placed on pressing steel plates. The granules were evenly distributed over the entire surface, which was limited by the frame (Figure 1a). The first stage involved heating slightly above the melting point of the binder; therefore, only the bottom steel plate, with the biopolymers/blends spread on the PTFE mat, was placed in the press. Adequately thick spacer bars allowed the press shelves to be closed without the upper shelf contacting the granules, while still providing heat from both sides. This treatment with the bottom steel plate and spacer bars made it possible to control the melting of these materials.
The heating time was 10 min, enough to reach the melting point of the blends (Figure 1b). When the biopolymers and blends achieved a molten consistency, they were covered from above with a second PTFE mat and a steel plate and re-inserted between the press shelves to be pressed to a thickness of 1 mm. The temperature of the press was 185 °C. A water bath was used to cool the layers after removal from the press, and then the obtained adhesive layers were cut along the sides of the frame to obtain an adhesive sheet (Figure 1c).

Layered Composite Manufacturing

Five-layer composites (alternating layers of veneer with biopolymer layers) were manufactured with dimensions of 200 mm × 200 mm and an average thickness of 3 mm. The middle layer grain directions were oriented 90° relative to the surface veneers. As a result, seven types of composites were produced with different adhesive layers (subsequently denoted by the shortcodes listed in Table 2), with no fewer than four layered panels of each binder type. The composites were pressed for 5 min without pressure, during which they were heated through and the adhesive layers were allowed to melt between the veneers; then 0.6 MPa of pressure was applied for 1 min, followed by 1.5 MPa for 1 min. The total pressing time was 7 min. The press temperature was 185 °C. Following the research plan, the produced composites were conditioned in ambient conditions (20 °C; 65% R.H.) to a constant weight for seven days before being cut.

Mechanical and Physical Properties

The physical and mechanical properties were tested according to European standards. The moduli of rupture (MOR) and moduli of elasticity (MOE) were determined according to EN 310 [39] and were reported as the average of ten measurements.
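The hot-pressing schedule described above can be summarized compactly; this is a bookkeeping sketch only, with stage names of our own choosing:

```python
PRESS_TEMPERATURE_C = 185  # platen temperature for all stages

press_cycle = [
    # (stage, pressure in MPa, duration in minutes)
    ("contact heating, adhesive layers melt", 0.0, 5),
    ("low-pressure consolidation",            0.6, 1),
    ("high-pressure consolidation",           1.5, 1),
]

total_min = sum(minutes for _, _, minutes in press_cycle)
assert total_min == 7  # matches the stated total pressing time of 7 min
```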
Internal bond (IB) was measured according to EN 319 [40]. Five samples of the layered lignocellulosic composite for each binder variant were used for bond quality. All mechanical properties were examined with an INSTRON 3369 (Instron, Norwood, MA, USA) standard laboratory testing machine. The density profile (DP) of the samples was analyzed using a DA-X measuring instrument (GreCon, Alfeld, Germany). Measurement based on direct scanning X-ray densitometry was carried out at a speed of 0.05 mm s−1 across the panel thickness with a sampling step of 0.02 mm. The nominal dimensions of all samples were 50 mm × 50 mm. Graphs of the density distribution for each composite type were obtained based on three replicates. Differential scanning calorimetry (DSC) tests were performed using a DSC Q20 instrument (TA Instruments, New Castle, DE, USA). The measurements were carried out at a heating rate of 10 °C min−1 under an inert gas (nitrogen) atmosphere with a flow rate of 50 mL min−1. Samples of 5 mg were tested, with two repetitions. Thermogravimetric analysis (TGA) was performed on a Q500 (TA Instruments, New Castle, DE, USA) apparatus in air (flow rate 40 mL min−1) over a temperature range of 50-600 °C at a heating rate of 10 °C min−1. Samples of 5-10 mg were tested, with two repetitions. Scanning electron microscopy (SEM) images were obtained with a Quanta 200 (FEI, Hillsboro, OH, USA) scanning electron microscope. Pictures of the cross-sections of the manufactured five-layer composites were obtained with a NIKON SMZ 1500 (Nikon, Minato, Tokyo, Japan) optical microscope.

Statistical Analysis

Analysis of variance (ANOVA) and t-test calculations were used to test (α = 0.05) for significant differences between factors and levels, where appropriate, using IBM SPSS statistical software (IBM, SPSS 20, Armonk, NY, USA). In addition, a comparison of the means was performed by employing the Duncan test when the ANOVA indicated a significant difference.

Results and Discussion

TGA and DSC analyses were carried out for a more exhaustive characterization of all produced biopolymer adhesive layers (Figures 2 and 3). The obtained TGA results give information on the thermal resistance of the tested materials. The results for samples of PLA and PCL were analyzed in detail and compared to the findings in Gumowska et al. (2021) [41]. TGA analysis recorded the degradation temperature, which should not be exceeded during further tests with these materials. TGA analysis of the biopolymers and blends showed one main thermal degradation stage at temperatures of 280-430 °C, with a mass loss of approximately 90%. Figure 2 presents two characteristic curve profiles. The first is a smooth transition (PLA and PCL), and the second shows a deflection characteristic of materials consisting of two or more components (biopolymer blends). The data in Table 3 display the thermal stability established for 50% and 80% weight loss of the tested samples. A higher temperature indicates higher thermal resistance. The highest thermal stability was noted for PCL. The rest of the samples showed similar thermal stability among themselves. The endothermal melting behaviour of PLA and PCL was similar to that found in published data. In Figure 3, multiple melting peaks can be seen, which were previously reported for PHB and copolymers.
These multiple peaks could be caused by melting, recrystallization, and remelting during heating; polymorphism; different molecular weight species; different lamellar thickness, perfection, or stability; and physical aging or relaxation of the rigid amorphous fraction, among others [42]. For example, melting-recrystallization-remelting was considered the cause of complex double melting in PHBV [43]. A similar mechanism may be supposed for the PHB reference, but the occurrence of different molecular weight species due to chain scission during melt processing, or the presence of β crystals in addition to the common α form of PHB, should not be excluded [44]. The results of the density profiles are summarized in Figure 4. The average densities of all composites ranged from 980 to 1080 kg m−3. The density profiles for individual samples were symmetrical about the middle of the thickness of the composites; therefore, the graph presents the density profiles up to their axis of symmetry to facilitate analysis. The graph shows the estimated boundaries of the layers in the composites. The tested composites consisted of five layers, alternating between veneer layers and biopolymer layers. Regardless of the biopolymer or blend layer used, the shapes of the profiles did not differ significantly. The most remarkable information corresponds to PCL, which is usually less dense than PLA and its derived blends. Every determined profile shows an increase in density characteristic of the layer materials, precisely in the bonding zones, which is related to compaction of the resin due to the resistance of the wood to impregnation [45,46]. This image justifies calling the produced composites multi-layered composites, as they could be perceived as five-layer panels in which the biopolymer layer acted as a binder, and separate layers were visible even to the naked eye. The bonding lines are accurately shown on the graphs; they were flat over the entire section of each sample's width. The highest average values of bonding-line density were recorded for biopolymer blends with 50% MCC, due to the density of cellulose (around 1500 kg m−3). Another remarkable fact is the uneven distribution of densities at the 90° veneer (right side of the plots), which corresponds to an inverted section in comparison to the face veneer, thus presenting a different structure along the section, with a lower density in the zones with tracheids.
The average values of modulus of rupture (MOR) and modulus of elasticity (MOE) under three-point bending stress for the tested composites are presented in Figure 5. The highest average value of MOR (153 N mm−2) was found for PLA + PHB samples, while the lowest (93 N mm−2) was found for PLA + PHB + 50MCC. In the case of MOE, the highest average value was also achieved for PLA + PHB (13,718 N mm−2), as for MOR; the lowest was for PCL (11,437 N mm−2). Adding MCC to PLA + PHB reduced MOR and MOE, while adding 3% TEC increased MOR and MOE in composites with 25% and 50% MCC in the biopolymer blends. The tasks of plasticizers are, among others, to enhance polymer chain mobility [47] and to reduce intermolecular interactions, thus giving the material greater flexibility and a plasticization effect [48]. The addition of 3% TEC to blends with 25% and 50% MCC caused the bonding to anchor deeper in the wood structure than in blends without added TEC, which was confirmed by the density profile graphs for these samples and potentially also affected the mechanical properties [16]. Based on statistical analysis, there were no statistically significant differences between the average MOR values for PLA, PLA + PHB, PLA + PHB + 25MCC, PLA + PHB + 25MCC + 3TEC, and PLA + PHB + 50MCC + 3TEC, or between the PCL and PLA + PHB + 50MCC binders. The results of internal bonding (IB) tests are presented in Figure 6. The outcomes show that the highest average value of IB was that of PLA (8.67 N mm−2), and the lowest value was found for PLA + PHB + 50MCC (3.29 N mm−2). When analyzing the biopolymer blends, it was noticed that composites with 25% MCC fared better than those with 50% MCC. Increasing to 25% and 50% MCC in the blends resulted in decreases in internal bonding of more than 4% and 37%, respectively.
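As a quick arithmetic cross-check of the reported reductions, using the mean values read off above:

```python
# Mean MOR values from Figure 5, in N mm^-2
mor_pla_phb = 153.0
mor_50mcc = 93.0

drop = 1.0 - mor_50mcc / mor_pla_phb
print(f"MOR reduction with 50% MCC: {drop:.0%}")  # ~39%, as quoted in the Conclusions
```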
This occurred because the mixture of PLA, PHB, and cellulose resulted in packed phases rather than a new polymer or copolymer, potentially generating contact zones prone to failure due to bad mixing. Even if the blends with PLA and PHB showed good properties overall, bad blending at the extruder or a short pre-melting time during the first steps of pressing might generate interstices in which fracture could occur. This was further noted in samples containing MCC, as the powder also generated surface-surface defects, as shown before [18]. On the other hand, adding 3% TEC to the blends with 25% MCC increased IB by 14%, whereas adding 3% TEC to the 50% cellulose blends increased IB by 28%, thus proving the positive contribution of TEC to the composite blends. Based on statistical analysis, statistically significant differences existed between the average values of IB for PLA and the rest of the samples. There were no statistically significant differences among PCL, PLA + PHB + 50MCC, and PLA + PHB + 50MCC + 3TEC, or among PLA + PHB, PLA + PHB + 25MCC, and PLA + PHB + 25MCC + 3TEC. The two major forms of damage to the samples resulting from the IB test are presented in Figure 7. The first type of damage occurred in the near-surface zone, along with partial destruction in the wood structure penetrated by the biopolymer adhesive. The second group includes samples in which the damage took place in the near-surface zone along with destruction at the surface of the adhesive layers.
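The significance testing described above (one-way ANOVA at α = 0.05 followed by a post hoc comparison) can be sketched as follows. The sample values are synthetic, centred on the reported IB means, not the paper's raw data; note also that SciPy does not ship Duncan's test, so Tukey's HSD is shown as a stand-in:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Five IB specimens per variant, as in the paper; synthetic values (N mm^-2)
pla     = rng.normal(8.67, 0.5, size=5)
mcc_50  = rng.normal(3.29, 0.5, size=5)
mcc_50t = rng.normal(3.29 * 1.28, 0.5, size=5)  # +28% with 3% TEC

f_stat, p_value = stats.f_oneway(pla, mcc_50, mcc_50t)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.2g}")
if p_value < 0.05:
    print(stats.tukey_hsd(pla, mcc_50, mcc_50t))
```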
SEM analysis and optical microscopy observations showed the penetration of the biopolymers and biopolymer blends into the wood structure (Figure 8). There was a recognizable difference in the adhesion zone between the adhesive layers and the wood, where partial penetration of the binder into the pores of the wood could be seen. Penetration into the wood structure is visible in the optical microscope photographs; for an example, see the images of PLA + PHB + 25MCC and the same binder with the addition of 3% TEC (Figure 8e,f). It can be seen that the veneers were not 100% impregnated with the binder, which was also confirmed by the density profiles of the manufactured five-layer composites. This allows for an exposed face of the veneer with no traces of any polymer resin, which in turn permits either a raw or a finished appearance on the surface face of the elaborated composites. In addition, the addition of TEC affected the penetration depth into the pores of the wood, which could be the origin of the better mechanical properties shown by these composites, in which the wood structure acted as the skeleton of the solidified polymer resin, resulting in a tougher composite.
Conclusions In the above study, five-layer composites were produced in which biopolymer adhesive layers (PLA, PCL, blends of PLA + PHB (75:25 w/w ratio) with or without the addition of 25% or 50% MCC, and 3% TEC for these blends) were used as binders. The density profiles of the composites manufactured with biopolymers and biopolymer blends as binders did not deviate from the characteristic profiles of plywood, with increased densities at the bonding line. The produced layers acted as binders and co-created the five layers in the composites. The mechanical properties of the composites decreased with increases in the amount of MCC in the blends: MOR, MOE, and IB values for PLA + PHB + 25MCC and PLA + PHB + 50MCC decreased by 11% and 39%, 6% and 15%, and 4% and 40%, respectively. Adding TEC improved the properties of composites made of PLA + PHB + MCC blends. Among the composites in which the two pure biopolymers were used, PLA obtained the best results, while among the produced blends, PLA + PHB, PLA + PHB + 25MCC, and PLA + PHB + 25MCC + 3TEC performed best. The results achieved herein, regarding an attempt to produce layered wood-based composites with different biopolymers and their blends as special-property layers and binders, allow for the conclusion that it is possible to create a formaldehyde-free wood-based layered composite that enhances the properties of both materials, wood and biopolymer. However, additional work should be performed in the field of biopolymer blend composition to improve adhesion to wood. Institutional Review Board Statement: Not applicable. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
6,756.4
2022-10-01T00:00:00.000
[ "Materials Science" ]
Exploring the impact of atmospheric forcing and basal boundary conditions on the simulation of the Antarctic ice sheet at the Last Glacial Maximum Little is known about the distribution of ice in the Antarctic ice sheet (AIS) during the Last Glacial Maximum (LGM). Whereas marine and terrestrial geological data indicate that the grounded ice advanced to a position close to the continental-shelf break, the total ice volume is unclear. Glacial boundary conditions are potentially important sources of uncertainty, in particular basal friction and climatic boundary conditions. Basal friction exerts a strong control on the large-scale dynamics of the ice sheet and thus affects its size, and is not well constrained. Glacial climatic boundary conditions determine the net accumulation and ice temperature, and are also poorly known. Here we explore the effect of the uncertainty in both features on the total simulated ice storage of the AIS at the LGM. For this purpose we use a hybrid ice-sheet-shelf model that is forced with different basal-drag choices and glacial background climatic conditions obtained from the LGM ensemble climate simulations of the third phase of the Paleoclimate Modelling Intercomparison Project (PMIP3). For a wide range of plausible basal friction configurations, the simulated ice dynamics vary widely, but all simulations produce ice sheets fully extended towards the continental-shelf break. More dynamically active ice sheets correspond to lower ice volumes, while remaining consistent with the available constraints on ice extent. Thus, this work points to the possibility of an AIS with very active ice streams during the LGM. In addition, we find that the surface boundary temperature field plays a crucial role in determining the ice extent through its effect on viscosity. For ice sheets of a similar extent and comparable dynamics, we find that the precipitation field determines the total AIS volume. However, precipitation is highly uncertain. Climatic fields simulated by climate models show more precipitation in coastal regions than a spatially uniform anomaly, which can lead to larger ice volumes. We strongly support using these paleoclimatic fields to simulate and study the LGM and potentially other time periods like the Last Interglacial. However, their accuracy must be assessed as well, as differences between climate-model forcings lead to a range in the simulated ice volume and extent of about 6 m sea-level equivalent and one million km². Introduction Sea-level variations on long timescales are driven by the waxing and waning of large continental ice sheets. The characterisation of the sensitivity of ice sheets to past climate changes is fundamental to gaining insight into their underlying dynamics as well as their response to future climate change. In addition, understanding past sea-level changes is important for quantifying sea-level rise (Nicholls and Cazenave, 2010; Defrance et al., 2017; King and Harrington, 2018; Golledge et al., 2019; Robel et al., 2019) and for assessing the risk of crossing tipping points within the Earth System, such as the collapse of the West Antarctic Ice Sheet (Kopp et al., 2009; Sutter et al., 2016; Pattyn et al., 2018). The Antarctic Ice Sheet (AIS), in particular, plays a fundamental role as it is the largest ice sheet on Earth and stores ca. 58 metres of sea-level equivalent (msle; Fretwell et al., 2013).
Due to its size it is potentially the largest contributor to future sea-level projections, but it is also the most uncertain (Collins et al., 2013). Assessing the AIS contribution to the total sea-level budget at different time periods has proven to be challenging. The Last Glacial Maximum (LGM, 21 ka BP) represents an ideal benchmark period since there is a large availability and variety of proxy data that, furthermore, indicate important AIS changes relative to present day (PD). Both marine and terrestrial geological data indicate that at the LGM the AIS extended up to the continental-shelf break (Anderson et al., 2002, 2014; Hillenbrand et al., 2012, 2014; The RAISED Consortium, 2014; Mackintosh et al., 2014). However, its exact extent is not well constrained everywhere. Whereas its advance in the Amundsen region, the Bellingshausen Sea and the Antarctic Peninsula is well established, in the Ross Sea and the East Antarctic region it remains controversial (Stolldorf et al., 2012; The RAISED Consortium, 2014). Furthermore, the total AIS ice volume is even less well constrained (Simms et al. (2019) and references therein). Geological data furthermore do not provide direct information on the past thickness and volume of ice sheets, which must hence be inferred. There have been several approaches to infer the past ice-volume change of an individual ice sheet such as the AIS. One approach is to use direct ice-sheet modelling to simulate the volume of the AIS at the LGM (e.g. Huybrechts (2002); Whitehouse et al. (2012a); Golledge et al. (2012); Gomez et al. (2013); Maris et al. (2014); Briggs et al. (2014); Quiquet et al. (2018)). An alternative is to use Glacial Isostatic Adjustment (GIA) modelling, which describes the viscous response of the solid Earth to past changes in surface loading by ice and water (e.g. Ivins and James (2005); Bassett et al. (2007)). This approach has also been used in combination with direct ice-sheet modelling (e.g. Whitehouse et al. (2012b)) and/or by making use of constraints on ice thickness from reconstructions based on exposure-age dating, as well as satellite observations of current uplift (Whitehouse et al., 2012b; Ivins et al., 2013; Argus et al., 2014b). Whereas older studies estimated large sea-level contributions, generally above 15 m (e.g. Nakada et al. (2000); Huybrechts (2002); Peltier and Fairbanks (2006); Philippon et al. (2006); Bassett et al. (2007)), more recent modelling studies and reconstructions have lowered these estimates to 7.5-13.5 m (Mackintosh et al., 2011; Whitehouse et al., 2012a; Golledge et al., 2012, 2014; Gomez et al., 2013; Argus et al., 2014b; Briggs et al., 2014; Maris et al., 2014; Sutter et al., 2019). Several factors have contributed to a decrease in the estimate of the LGM AIS volume. On one hand, the state of the art of ice-sheet modelling has considerably advanced in recent years, for example through the inclusion of more complex physics, increased spatial resolution and sub-grid-scale grounding-line treatment (e.g. Goelzer et al. (2017); Pattyn (2018)). On the other hand, external processes, like the ice-ocean interaction or the GIA, are now treated with more accurate parameterisations and models.
Given that ablation and basal melting were probably negligible at the LGM in the AIS, ice-sheet dynamics and accumulation must have been the two main factors controlling ice-mass gain during this period. The representation of ice dynamics in ice-sheet models is a key feature that can potentially lead to important discrepancies. Most ice-sheet models simulating the past long-term evolution of large-scale ice sheets are hybrid models that rely on the Shallow Ice Approximation (SIA) and the Shallow Shelf Approximation (SSA). Moreover, there is no universally accepted friction law, and basal friction is treated in different manners in ice-sheet models. Ritz et al. (2015) emphasise the importance of basal friction, as it can favour the occurrence of the marine instability in future AIS projections. Generally, basal stress follows either a power-law formulation in the basal ice velocity (a special case being the Weertman (1957) friction law) or a Coulomb friction law (Schoof, 2005), with different power-law coefficients, a friction coefficient and potentially a regularisation term. Ice-sheet models thus use friction formulations that can range from linear viscous and regularised Coulomb friction laws, typical of hard-bedrock sliding (Larour et al., 2012; Pattyn et al., 2013; Joughin et al., 2019), to Coulomb-plastic deformation, characteristic of ice flow over a soft bedrock with filled cavities (Schoof, 2005, 2006; Nowicki et al., 2013). In the simplest cases a constant friction coefficient is prescribed over the whole domain (Golledge et al., 2012), but generally this parameter incorporates the dependency of basal friction on the effective pressure exerted by the ice, as well as on bedrock characteristics, by making use of assumed till properties (Albrecht et al., 2019; Sutter et al., 2019) or basal temperature conditions (Pattyn, 2017; Quiquet et al., 2018). The sensitivity of the simulated ice volume to these features is substantial. For instance, Briggs et al. (2013) obtained differences of more than 5 msle for an Antarctic LGM state depending only on the friction coefficients used for hard and soft beds. Some studies have attempted to overcome the uncertainty in basal friction by optimising the friction coefficient through inversion methods in order to obtain an accurate PD ice-sheet state (e.g. Le clec'h et al., 2019). However, these optimisations are based on a particular configuration of the PD state, and it is unclear whether they remain valid for glacial conditions. All in all, basal friction is poorly characterised, and the potential consequences of the associated uncertainty should be considered in ice-sheet modelling. Glacial atmospheric boundary conditions over Antarctica are also far from being well constrained. It is clear from ice-core records and marine deep-sea sediment data that, at the continental scale, temperatures were lower than today and that the climate was drier (Frieler et al., 2015; Fudge et al., 2016). Typically, ice-sheet models use two approaches for simulating the atmospheric conditions at the LGM. On one hand, some studies prescribe a spatially uniform temperature anomaly (generally between 8 K and 10 K below PD) and a uniform reduction in precipitation (generally by 40-50% compared to PD), as inferred from individual ice-core records (Huybrechts, 2002; Golledge et al., 2012; Whitehouse et al., 2012a; Gomez et al., 2013; Quiquet et al., 2018). However, this approach provides only a crude representation of glacial climate anomalies.
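To make the contrast between these families of friction laws concrete, the sketch below evaluates three commonly used basal-stress formulations for a range of sliding speeds. This is an illustrative sketch, not the formulation of any particular model; all coefficient values are arbitrary placeholders.

```python
import numpy as np

def tau_power_law(u_b, C=1.0e4, m=3):
    """Weertman-type power law: tau_b = C * |u_b|^(1/m).
    u_b in m/yr; C is a hard-bed sliding coefficient (placeholder units)."""
    return C * np.abs(u_b) ** (1.0 / m)

def tau_linear_viscous(u_b, c_b=1.0e-3, N_eff=1.0e3):
    """Linear viscous law (m = 1): stress proportional to velocity and to
    the effective pressure N_eff."""
    return c_b * N_eff * np.abs(u_b)

def tau_coulomb_regularized(u_b, mu=0.5, N_eff=1.0e3, u_0=100.0):
    """Regularized Coulomb law: stress saturates at mu * N_eff for fast
    sliding, approximating plastic behaviour over soft beds."""
    return mu * N_eff * np.abs(u_b) / (np.abs(u_b) + u_0)

u = np.array([1.0, 10.0, 100.0, 1000.0])  # sliding speeds, m/yr
for law in (tau_power_law, tau_linear_viscous, tau_coulomb_regularized):
    print(law.__name__, np.round(law(u), 1))
```

The qualitative difference the text points to is visible in the output: the linear viscous stress grows without bound with velocity, the power law grows sub-linearly, and the Coulomb form saturates, which is why the choice of law matters for fast-flowing marine sectors.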
In reality, even if ice cores show a similar temperature decrease, estimated precipitation changes are less homogeneous. Thus, imposing a constant change over the whole domain will potentially misrepresent climatologies in localised areas (Frieler et al., 2015; Fudge et al., 2016). In addition, ice cores are extracted from domes, and the recorded changes are not necessarily representative of coastal regions. Because the LGM is a cold state, with presumably no (or negligible) ablation and oceanic basal melt, the reduction of precipitation with respect to the PD should have an important impact on the size of the simulated ice sheet. In addition, because the temperature and/or precipitation anomalies are uniform, the PD pattern is imprinted on the LGM atmospheric forcing fields, and changes in atmospheric patterns are thus neglected. Another commonly used method is to prescribe the LGM temperature and precipitation fields for the whole Antarctic domain from climate simulations (Briggs et al., 2013; Maris et al., 2014; Sutter et al., 2019). Output from simulations using a hierarchy of climate models has been used in the literature, from global general circulation models (GCMs) (Sutter et al., 2019), sometimes downscaled with regional models (Maris et al., 2014), to Earth System Models of Intermediate Complexity (EMICs) (Blasco et al., 2019). Briggs et al. (2013) went a step further to investigate the effect of uncertainty in the climate forcing fields by assessing the effect of the inter-model variance through an empirical orthogonal function (EOF) analysis. However, some model outputs do not simulate the temperature anomalies correctly at specific sites where proxies are available, such as Vostok or Dome C. This may lead to an unrealistic configuration, and thus it is necessary to evaluate the accuracy of model outputs (Cauquoin et al., 2015). In this work we aim to assess the effects of the uncertainty in basal friction and climatic (in particular atmospheric) boundary conditions on the simulated LGM AIS. We focus on basal-drag choices which can lead to realistic LGM states. For these we then investigate the effect of different temperature and precipitation fields. Methods and experimental setup For this study we use the three-dimensional, hybrid, thermomechanical ice-sheet-shelf model Yelmo, in which the computed basal velocity is corrected with the corresponding basal friction. Ice shelves are solved within the SSA solution without basal drag. The initial topographic conditions (ice thickness, surface and bedrock elevation) are obtained from the RTopo-2 dataset (Schaffer et al., 2016). The internal ice temperature is calculated via the advection-diffusion equation. Yelmo computes the total mass balance (MB) as a sum of the surface mass balance (SMB), the basal mass balance at the ice base and calving at the ice front. The SMB is obtained as the difference between the ice accumulation through precipitation and the surface melting computed with the positive degree-day method (PDD; Reeh (1989)). Although there are more comprehensive methods that account for short-wave radiation, for instance (Robinson et al., 2011), the PDD scheme is commonly used in ice models in the Antarctic domain, because ablation at these latitudes is limited (Pollard and DeConto, 2012; Pattyn, 2017).
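As a rough illustration of the positive-degree-day idea: melt is taken proportional to the time-integrated temperature above the freezing point. The sketch below uses the expectation of max(T, 0) for normally distributed daily temperatures, one common way of evaluating the Reeh (1989)-style scheme; the standard deviation and degree-day factor are placeholder values, not the ones used in Yelmo.

```python
import math

def expected_pdd(monthly_T, sigma=5.0):
    """Expected positive degree-days (deg C * day) over a year, given 12
    monthly mean temperatures (deg C) and assuming daily temperatures are
    normally distributed around each monthly mean with std sigma."""
    pdd = 0.0
    for T in monthly_T:
        # E[max(X, 0)] for X ~ N(T, sigma^2)
        e = (sigma / math.sqrt(2.0 * math.pi) * math.exp(-T**2 / (2.0 * sigma**2))
             + 0.5 * T * math.erfc(-T / (math.sqrt(2.0) * sigma)))
        pdd += 30.0 * e  # ~30 days per month
    return pdd

def pdd_melt(monthly_T, ddf=0.003):
    """Annual surface melt (m ice eq.) from a degree-day factor ddf
    (m ice per deg C per day; placeholder magnitude)."""
    return ddf * expected_pdd(monthly_T)

# A cold Antarctic-like annual cycle (-45 to -15 deg C): melt ~ 0,
# consistent with the text's point that ablation is limited there.
cycle = [-30 + 15 * math.cos(2 * math.pi * (m - 0.5) / 12) for m in range(12)]
print(pdd_melt(cycle))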
Furthermore, in this particular study the transient character of the AIS evolution is not simulated, as we focus on the LGM period; there is thus no need to explicitly account for the effects of changes in insolation on melting. Calving occurs when the ice-front thickness decreases below an imposed threshold (200 m in this study) and the upstream ice flux is not large enough to provide the necessary ice for maintaining the previous thickness (Peyaud et al., 2007). Present-day basal melting rates at the ice-shelf base and at the grounding line are obtained from Rignot et al. (2013) and extrapolated over all 27 basins identified by Zwally et al. (2012). Below grounded ice, the basal mass balance is determined through the heat equation as in Greve and Blatter (2009), where the geothermal heat flux field is obtained from Shapiro and Ritzwoller (2004). The glacial isostatic adjustment (GIA) is computed with the elastic lithosphere-relaxed asthenosphere (ELRA) method (Le Meur and Huybrechts, 1996), where the relaxation time of the asthenosphere is set to 3000 years. Yelmo does not explicitly model the impact of ice anisotropy on the ice flow, so classical "enhancement factors" are used as tuning parameters (Ma et al., 2010; Pollard and DeConto, 2012; Maris et al., 2014; Albrecht et al., 2019). For this study we found realistic PD states for E_grounded = 1.0 for grounded ice and E_floating = 0.7 for ice shelves. Basal-drag law As mentioned above, basal sliding is calculated within the SSA solution, which is a function of the basal stress. Yelmo computes the basal stress at the ice base (τ_b) through a linear viscous friction law, which depends on the basal ice velocity (u_b), the effective ice pressure (N_eff, given in kPa) and a tunable friction coefficient (c_b) that reflects the bedrock characteristics: τ_b = c_b N_eff u_b. Here we have parameterised c_b as a function of the bedrock elevation z_b (positive above sea level), analogous to previous work (e.g. Martin et al. (2011)): c_b = c_min + (c_max − c_min) exp(min(z_b, 0)/|z_0|) (Eq. 3). Here, z_0 is an internal parameter that determines the bedrock e-folding depth over which the friction coefficient c_b decreases from a maximum value c_max, reached for bedrock elevations above sea level (z_b ≥ 0), towards a minimum threshold value c_min. For higher values of z_0 (i.e., lower absolute values of z_0), c_b falls more rapidly with depth. This parameterisation captures the phenomenon by which the occurrence of sliding (and its intensity) is favoured at low bedrock elevations, and specifically within the marine sectors of ice sheets. It follows a similar approach to Albrecht et al. (2019) and Martin et al. (2011), where the bedrock friction (in their case the "till friction angle") depends on the bedrock elevation. The effective pressure is represented by the Leguy et al. (2014) formulation, under the assumption that the subglacial drainage system is hydrologically well connected to the ocean, so that there is full support from the ocean wherever the ice-sheet base is below sea level. We thus assume that the exerted basal pressure at the land-ice interface depends on the difference between the overburden pressure and the basal water pressure (i.e. the distance from flotation as measured in ice thickness), hence N_eff = ρ_i g (H − H_f), where ρ_i is the density of ice, g is gravity, H is the ice thickness and H_f is the flotation thickness, given by H_f = max(0, −(ρ_w/ρ_i) z_b), where ρ_w is the seawater density and z_b is the bedrock elevation (positive above sea level).
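The following sketch implements the elevation-dependent friction coefficient and the effective-pressure expression as reconstructed above. The exponential e-folding form of c_b(z_b) is our reading of the verbal description; c_min and z_0 are the reference values quoted later in the text, while c_max is a placeholder.

```python
import numpy as np

RHO_ICE = 910.0   # kg/m^3
RHO_SW = 1028.0   # kg/m^3
G = 9.81          # m/s^2

def friction_coefficient(z_b, c_min=1e-5, c_max=1e-3, z0=-175.0):
    """Elevation-dependent friction coefficient c_b [yr/m].
    c_b = c_max above sea level, decaying with an e-folding depth |z0|
    towards c_min below sea level (our reconstruction of Eq. 3)."""
    z = np.minimum(z_b, 0.0)
    return c_min + (c_max - c_min) * np.exp(z / abs(z0))

def effective_pressure(H, z_b):
    """N_eff = rho_i * g * (H - H_f) [Pa here, for simplicity], with the
    flotation thickness H_f = max(0, -(rho_w/rho_i) * z_b)."""
    H_f = np.maximum(0.0, -(RHO_SW / RHO_ICE) * np.asarray(z_b))
    return RHO_ICE * G * np.maximum(0.0, H - H_f)

def basal_stress(u_b, H, z_b):
    """Linear viscous law: tau_b = c_b * N_eff * u_b (opposing sliding)."""
    return friction_coefficient(z_b) * effective_pressure(H, z_b) * u_b

# Deep marine bed near flotation: low c_b and small N_eff -> weak drag.
print(basal_stress(u_b=100.0, H=1000.0, z_b=-800.0))
# High bedrock, thick grounded ice: strong drag.
print(basal_stress(u_b=100.0, H=2500.0, z_b=200.0))
```

The two printed cases show the intended behaviour: drag collapses in deep marine sectors (both factors shrink) and is strong over elevated bedrock, which is what allows active ice streams in the WAIS while keeping inland ice sluggish.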
In this way, far from the grounding line, H_f = 0 and N_eff = ρ_i g H, while at the grounding line, where H = H_f, N_eff = 0. This ensures continuity of τ_b at the grounding line. Climate forcing To simulate the AIS at the LGM, Yelmo is run over 80 kyr with constant LGM conditions. The atmospheric forcing field is given by T_LGM = T_0^atm + ΔT_LGM−PD^atm (Eq. 5), where T_0^atm is the PD temperature field at sea level obtained from RACMO2.3 forced by the ERA-Interim reanalysis data (Van Wessem et al., 2014) and ΔT_LGM−PD^atm is the LGM surface temperature anomaly relative to the PD. The monthly-mean temperature fields are obtained from each of the eleven PMIP3 models, as well as from the ensemble mean (Fig. 1a). We apply a lapse-rate correction that accounts for changes in elevation (0.008 K m−1 for annual temperatures and 0.0065 K m−1 for summer temperatures). The LGM precipitation is calculated as P_LGM = P_0 · δP_LGM/PD (Eq. 6), where P_0 is the PD monthly-mean precipitation obtained in the same way as the PD temperature and δP_LGM/PD is the relative anomaly between the LGM and PD obtained from the PMIP3 ensemble. Figure 1b shows the resulting precipitation field, P_LGM, for the PMIP3 ensemble mean. Precipitation is corrected with local temperature anomalies through Clausius-Clapeyron scaling, which assumes more accumulation for warmer temperatures and therefore lower elevations. Note that precipitation is given in water equivalent and transformed into accumulation via changes in density (i.e. 1 m yr−1 water equivalent is ca. 1.09 m yr−1 ice). Basal melting rates for floating ice shelves are set to zero in the LGM state. Basal friction To investigate the impact of changes in basal friction on the LGM AIS, we assess the sensitivity to the friction in marine zones via the minimum friction allowed (c_min) and the elevation parameter (z_0) in Eq. 3 that controls how quickly friction decreases with depth. For this purpose we force Yelmo with a single reference climatic state, obtained from the average anomaly of the PMIP3 ensemble for the LGM climate (Fig. 1), and a range of friction parameters. This range was determined in two steps (see SI, Figure S1). Climatic fields To understand the impact of changes in climatic forcing on the ice sheet, we fix the friction parameter values to a single, reference set of values (z_0 = -175 m and c_min = 1·10−5 yr m−1) and analyse the AIS simulated at the LGM for the climatic forcing derived from each of the 11 models in the PMIP3 ensemble, using the aforementioned forcings for temperature (Eq. 5) and precipitation (Eq. 6). We focus on how the temperature and precipitation fields control the size and extent of the ice sheet. In all experiments the sea-level change estimates are computed with respect to the simulated PD state for the reference friction parameter values. Impact of basal friction Here we present our LGM simulated AIS for different basal friction parameters. Ice volume is converted into a sea-level contribution by subtracting the floating portion and taking the isostatic depression of the bedrock into account (Lambeck and Johnston, 1998; Lambeck and Chappell, 2001; Lambeck et al., 2002, 2003). Note that, in order to avoid biases due to Yelmo's coarse spatial resolution, these extents were computed using the ice-sheet margins of each of the reconstructions at Yelmo's spatial resolution. The three lowest-bound simulations correspond to cases for which the corresponding PD AIS ice volume deviates from PD observations by more than 3.5 msle (see SI, Fig. S1, S2).
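A minimal sketch of this anomaly forcing, following Eqs. (5)-(6) as reconstructed above: the PD fields are shifted by the PMIP3 temperature anomaly with a lapse-rate correction, and precipitation is scaled by the relative anomaly plus a Clausius-Clapeyron-style adjustment. The ~7% per K scaling constant is an assumption for illustration; the paper does not quote its value.

```python
import math

LAPSE_ANNUAL = 0.008  # K/m, annual lapse rate from the text

def lgm_temperature(T_pd_sealevel, dT_lgm_pd, z_surface):
    """Eq. (5): PD sea-level temperature + PMIP3 LGM-PD anomaly,
    minus a lapse-rate correction for the surface elevation."""
    return T_pd_sealevel + dT_lgm_pd - LAPSE_ANNUAL * z_surface

def lgm_precipitation(P_pd, dP_rel, dT_local, cc_rate=0.07):
    """Eq. (6) plus Clausius-Clapeyron scaling: the PD precipitation is
    multiplied by the PMIP3 relative anomaly, then adjusted by roughly
    cc_rate per K of local temperature change (assumed constant)."""
    return P_pd * dP_rel * math.exp(cc_rate * dT_local)

def accumulation_ice_equivalent(P_water):
    """1 m/yr water equivalent ~ 1.09 m/yr ice, as quoted in the text."""
    return 1.09 * P_water

T = lgm_temperature(T_pd_sealevel=-20.0, dT_lgm_pd=-8.0, z_surface=2500.0)
P = accumulation_ice_equivalent(lgm_precipitation(0.15, 0.6, dT_local=-8.0))
print(T, P)  # colder and drier than PD, as expected for the LGM
```

The multiplicative precipitation anomaly is what preserves the GCM's spatial pattern (e.g. wetter coasts); a spatially homogeneous method would instead apply one scalar factor everywhere, which is the contrast examined later in the paper.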
The simulated surface velocity pattern shows a distribution with low values near the summit and increasing values towards the margins (Fig. 3). Our friction parameterisation reproduces the fact that ice streams become faster over topographic lows, with the Amery, Wilkes and Victoria Land sectors showing active ice streams of more than 50 m yr−1 (Fig. 3a,b). The WAIS, due to its marine character, is also a very active sector. Ice-volume differences between a slowly decreasing friction (z_0 = -200 m) and a more rapidly decreasing friction (z_0 = -150 m) primarily originate in the WAIS and the coastal marine regions of the EAIS and its surroundings (Fig. 3c), and are the result of higher basal velocities with lower friction values (Fig. 3d) leading to thinner ice. Subtle differences are found when comparing the extent of grounded ice in our simulated AIS with previous reconstructions (Fig. 4). Our simulated extent stands between the ICE-6G model (green line in Fig. 4) and the RAISED Consortium (red line) and ANU (blue line) models. The largest discrepancies between models occur on the Ross shelf. Whereas ANU and RAISED estimate an advance close to the continental-shelf break, ICE-6G is more retreated, while our results support a nearly complete advance. Impact of climatic forcing Here we present the simulated LGM AIS of each individual PMIP3 model for the reference friction parameters (Fig. 5). The simulated ice-volume anomaly ranges from 7.8 msle to 14.0 msle (Fig. 6), a difference of 6.2 msle. The total ice extent ranges from 14.6 million km² to 15.8 million km², a difference of 1.2 million km². Thus, while the spread in ice volume is somewhat smaller than that found when investigating the sensitivity to friction, the spread in extent is significantly larger. Because the underlying dynamics in Yelmo are the same in all cases, the differences in size and extent can only be explained by differences in the climatic fields. To determine the causes underlying these differences, we investigate the sensitivity of the ice thickness and extent to the climatic fields used to force the ice-sheet model (Fig. 7). We find that higher accumulation results in a thicker ice sheet (Fig. 7a), but has no appreciable effect on the ice extent (Fig. 7b). For model climatologies for which the LGM ice sheet extends close to the continental-shelf break (an extent of around 15.5 million km², see Fig. 7d), the AIS ice volume increases with increasing accumulation (Fig. 7c). However, there are four climate models (CNRM-CM5, GISS-E2-R-150, GISS-E2-R-151, FGOALS-g2) that, despite having higher accumulation on average than the ensemble mean, do not allow the ice sheet to advance as much as the other models, leading in all cases to extents below 15 million km² (Fig. 7b). Therefore, the simulated AIS volume is smaller for these less advanced ice sheets, despite the relatively high accumulation rates imposed. For all the others, for which the extent is around 15.5 million km², the AIS ice volume clearly increases with increasing accumulation (Fig. 7c). Further inspection allows us to identify the atmospheric temperature close to the grounding line (Fig. 7d) as a critical factor in determining how far the AIS advances. Whereas low temperatures yield a similar ice extent, as it becomes warmer the ice sheet is more retreated.
Given the low temperature values, ablation can generally be discarded as the source of this behaviour (SI Fig. S3; there is, however, one exception, as discussed below), so we turn our attention to ice viscosity. A necessary condition for marine-based ice sheets to advance is that the ice thickness at the grounding line overcomes the flotation criterion, as sustained through accumulation and/or by inland ice flow. This condition is fulfilled when the ocean depth (z_b) is shallower than ∼90% of the ice thickness. Warmer ice temperatures lower the ice viscosity (Fig. 7e) and, as a consequence of enhanced ice flow, prevent the grounding line from thickening and advancing towards more depressed bedrock zones. Therefore, simulations with lower ice viscosity, such as GISS-E2-R-150, GISS-E2-R-151 and FGOALS-g2, do not fully advance in the Ross shelf, Pine Island or the Amery sector (Figs. 5, 6). Finally, CNRM-CM5 is a particular case which does not fulfil any of our proposed hypotheses. Viscosity describes the resistance of a material to flow, so warmer ice temperatures lower the viscosity and enhance ice flow. Thus, following the same reasoning as before, one would expect a low viscosity as a consequence of a warmer ice column for CNRM-CM5, which is not the case (Fig. 7e). This model expands fully at the Ross shelf and the Antarctic Peninsula zone, but the Ronne shelf is far from the grounding line and the Amery shelf is even more retreated than at PD (Fig. 5). The ice sheet does not advance in these regions due to the presence of abnormal ablation, which impedes the ice expansion (see SI, Fig. S3). We argue that the unexpectedly large viscosity is a consequence of two competing effects. The fully advanced regions, such as the Ross basin, contribute a rather low ice temperature and hence a high viscosity. On the other hand, the ablation zones, such as the Ronne and Amery basins, have warmer ice temperatures which result in low viscosity. Therefore Fig. 7e shows that CNRM-CM5 has, on average, a warm ice column and a high viscosity. A similar reasoning can be applied to Figure 7a, where the mean ice thickness is low despite its high accumulation. In summary, we find that the choice of the boundary climate is crucial for the simulated LGM ice sheet. First, the atmospheric temperatures near the coastal regions control the ice extent through viscosity: if the viscosity is too low, the ice flows too fast, preventing the necessary thickening; in particular, if the bedrock is too deep, the ice sheet's expansion will be hampered. Second, if the ice sheet extends close to the continental-shelf break, then the accumulation pattern determines the total amount of ice volume. We find that for similarly extended ice sheets (IPSL-CM5A-LR and MRI-CGCM3), the sea-level difference due to accumulation differences is about 3.5 msle. Spatially homogeneous approach Applying a simple scheme that lowers the ice accumulation and surface temperature homogeneously over the whole domain is a common and, to first order, valid approach, because during the LGM a colder and drier climate is expected at the continental scale (Huybrechts, 2002; Golledge et al., 2012; Whitehouse et al., 2012a; Gomez et al., 2013; Quiquet et al., 2018). We thus tested a spatially homogeneous scaling (hereafter, the homogeneous method) for comparison. All simulations produced realistic sea-level equivalents and ice extents during the LGM for the same friction coefficients.
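The flotation criterion invoked above can be made explicit: ice is grounded where its thickness exceeds the flotation thickness, i.e. where the ocean depth is shallower than roughly ρ_i/ρ_w ≈ 90% of the ice thickness. A minimal check (density values are standard, not taken from the paper):

```python
def is_grounded(H, z_b, rho_i=910.0, rho_w=1028.0):
    """Grounded if the ice thickness H exceeds the flotation thickness,
    i.e. the ocean depth |z_b| is shallower than (rho_i/rho_w)*H (~90%)."""
    return z_b >= 0 or H > (rho_w / rho_i) * (-z_b)

print(is_grounded(H=1000.0, z_b=-800.0))  # True: flotation thickness ~904 m
print(is_grounded(H=1000.0, z_b=-950.0))  # False: bed too deep to ground
```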
Overall, consistently lower ice volumes as well as reduced ice extents are simulated with the homogeneous method (Fig. S4). Again, because the ice dynamics are the same, this difference can only be explained by the climatic forcing. Moreover, because temperatures are not sufficiently high to produce ablation, it points to differences in ice accumulation. Fig. 8c illustrates the ice-thickness difference between the two methods for a similar ice extent (Fig. 8a,b). The anomaly shows that the main source of this difference in ice volume and extent comes from the WAIS. The Antarctic Peninsula in particular shows a high positive thickness anomaly for the average PMIP3 climatic fields relative to the homogeneous case, because the grounding line does not advance there in the latter case. In the EAIS the anomalies are not so pronounced; however, inland ice is slightly thinner, whereas closer to the coast it is thicker. This anomaly pattern can be explained by the difference between the accumulation fields (Fig. 8d). The spatially homogeneous method accumulates more ice inland and leads to a reduced accumulation towards the continental-shelf break, especially at the Ross shelf, Pine Island and the Antarctic Peninsula. Because ice cores are generally extracted from dome regions with colder conditions, it is expected that precipitation and air temperatures near the coast are underestimated by the homogeneous approach. Basal dragging law Even at present day it is difficult to estimate bed properties like basal temperature or ice velocities, which could improve our understanding of basal friction. Estimating bed properties at the LGM, where the total ice volume and extent are not fully constrained, therefore adds a degree of difficulty. We covered a range of friction values which lead to realistic LGM and PD configurations. The simulated sea-level differences were about 7 msle between the extreme cases (Fig. 2). We found that the choice of different bedrock frictions has an impact on ice-stream activity in marine-based regions. For example, an AIS that extends up to the continental-shelf break, but with a relatively low volume increase, can be achieved through a very dynamically active ice sheet. In that case, marine regions, and more specifically the WAIS, have the potential to maintain fast ice streams at the LGM and still agree with PD observations. The choice of a given and unique friction law for the whole AIS is still somewhat arbitrary and unconstrained. We focused on a linear viscous friction law commonly used in other studies (e.g. Quiquet et al., 2018; Alvarez-Solas et al., 2019). We are aware that other types of friction laws could have been tested, such as a regularised Coulomb law (Joughin et al., 2019) or a Coulomb-plastic behaviour (Nowicki et al., 2013), typically for ice flowing over a bedrock filled with cavities. However, the importance of saturated tills is especially determinant for transient simulations with a retreating grounding line. Given the large uncertainty we quantified for only one friction formulation, we expect that this range would increase further when considering additional formulations. Sea-level and ice extent uncertainty For our reference friction parameters we used the individual climate simulations of the participating PMIP3 groups as surface boundary forcing. The sea-level difference between the models was about 6.2 msle. The lowest sea-level contribution was 7.8 msle (CNRM-CM5) and the largest 14.0 msle (IPSL-CM5A-LR).
These sea-level estimates were inside the range of other studies and reconstructions. From this point of view, we were not able to discard any specific model field. Nonetheless, it seems unrealistic that air temperatures were high enough to produce ablation during the LGM, as seen in CNRM-CM5. The simulated ice extent is determined through the air temperatures. Warmer temperatures lower the ice viscosity. Due to the marine character of the AIS, a lower viscosity enhances ice flow, leading to thin ice in regions where the bedrock is too deep, which prevents a complete advance towards the continental-shelf break. Forcings from the models GISS-E2-R-150 and GISS-E2-R-151, for instance, do not allow a full advance in the Ross shelf, resembling the ICE-6G reconstruction (Fig. 4). Similarly, with FGOALS-g2 the advance into the Pine Island region or the Amery shelf is impeded (Fig. 5). On the other hand, if temperatures are sufficiently cold, less than about −20 °C, then the ice fully advances as in the ANU reconstruction (Fig. 4). The RAISED Consortium reconstruction has a similar extent, but presents two large ice shelves at the margins of the Ronne shelf, which we are not able to simulate. Again, the simulated ice extents were inside the range of the reconstructions, and we could not exclude any case. But we found that, in addition to the precipitation field, the temperature fields play a crucial role, as they have the potential to accelerate the ice by lowering the viscosity and to determine the total grounded ice area, which in turn affects the grounded ice volume. Of course there are several sources which could impact AIS volume estimates aside from the climatology and basal friction. A change in bedrock depth, for instance, has profound implications for the simulated AIS, as it does not only change the local sea level, but can also facilitate (or impede) the ice advance and retreat (Philippon et al., 2006). Here we used a simple parameterisation that accounts for the elasticity of the lithosphere and a non-local response caused by lateral shift (Le Meur and Huybrechts, 1996). This formulation does not capture differences in the mantle viscosity, as it applies the same spatially homogeneous time response everywhere. Nonetheless, the Antarctic bedrock is a complex component with different rheological properties. The WAIS, for instance, is a low-viscosity region where the bedrock deformation happens on a shorter timescale (Whitehouse, 2018; Whitehouse et al., 2019). The next generation of ice-sheet models coupled to GIA models may produce more realistic bedrock responses and hence help to improve the sea-level budget at the LGM. This can be helpful, for instance, to constrain the phase space of friction parameters. Forcing methods Overall, a homogeneous anomaly relative to present day simulates a lower ice volume as a consequence of low accumulation near the ice-sheet margins (Fig. 8b). This indicates that the AIS could have stored more ice at the LGM than estimated by studies applying such a scheme. As opposed to a spatially homogeneous method, GCM outputs are capable of representing local atmospheric effects, such as atmospheric-circulation changes or localised precipitation structures. Thus, the latest ice-sheet models have begun to be forced by more detailed and arguably more realistic climatic fields (Briggs et al., 2013; Maris et al., 2014; Sutter et al., 2019).
Nevertheless, we have shown here that the spread of the simulated ice volume and ice extent for different climatic outputs can be equal to or larger than that resulting from different basal-dragging choices. The PMIP3 LGM climatologies are built with a prescribed ice extent and surface elevation (Abe-Ouchi et al., 2015). It is clear, then, that by construction ice models should be driven towards these particular configurations. Nonetheless, GCMs may exhibit biases in the temperatures and precipitation in localised regions. A way to potentially test the plausibility of the employed climatic fields is to compare with ice proxies. We strongly recommend that paleo ice-sheet simulations be performed with GCM outputs, as they capture more complex processes than a spatially homogeneous method, but the choice of the climatic fields has to be consistent with reconstructions. In the future, with PMIP4 results, more accurate climatic fields are expected. The ice dynamics and the boundary climatology are two essential building blocks for the simulation of an Antarctic LGM state. Here we studied the uncertainty in LGM ice volume associated with these two factors, by investigating the effect of the representation of basal friction and of the atmospheric forcing, respectively, in simulations. First, we tested a range of potential basal friction values of marine zones which simulated plausible LGM states. We found that, for a simple linear friction law, lower (larger) friction values enhance (diminish) the ice dynamics of marine zones and result in ice-sheet configurations with less (more) ice volume, but still a similar grounded ice extent. This led to several potential configurations of the AIS, with a sea-level difference with respect to today in the range of 11.2 to 17.5 msle and a total ice extent in the range of 15 to 16 million km². Then, for a particular friction configuration within the estimates of ice volume and extent, we studied the individual sea-level contribution from simulations driven by the LGM climates provided by the eleven PMIP3 participating groups. We found ice-volume anomalies ranging from 7.8 to 14.0 msle and extents of 14.6 to 15.8 million km². Imposing the PMIP3 fields, whose climate simulations include dynamic adjustment to the LGM boundary conditions, translates into higher precipitation rates along the Antarctic coast, hence leading to a larger simulated ice volume compared to using a homogeneous anomaly method. The grounding-line advance is strongly determined by the atmospheric temperatures as well: higher temperatures reduce the ice viscosity and thereby enhance ice flow. Because of the marine character of the AIS, relatively high temperatures near the coast can prevent ice expansion. Thus, along with improved knowledge of basal conditions, constraining the possible climatic changes during the LGM more broadly is imperative in order to reduce the uncertainty in AIS volume estimates for this time period. Code and data availability Yelmo is maintained as a git repository hosted at https://github.com/palma-ice/yelmo under the licence GPL-3.0. Model documentation can be found at https://palma-ice.github.io/yelmo-docs/. The results used in this paper are archived on Zenodo
8,513.4
2020-03-10T00:00:00.000
[ "Environmental Science", "Geology" ]
Precision requirement of the photofission cross section for the nondestructive assay The principle of a new NDA technique based on the photofission reaction rate ratio (PFRR) has been developed by Kimura et al. for the measurement of uranium enrichment, using only the relative measured counts of neutrons produced by photofission reactions of 235U and 238U at different specific incident photon energies. In past analyses, little attention has been paid to the relatively large uncertainty of the photonuclear cross sections of special nuclear materials, which is around 10%. In the present paper, a quantitative analysis was performed to reveal the impact of the photonuclear cross-section uncertainty on the value of the uranium enrichment predicted by the PFRR methodology. In addition, the required precision of the photofission cross section was evaluated to be less than 3% in order to keep the uncertainty of the PFRR methodology within 5%. Introduction Nondestructive assay (NDA) techniques for quantifying special nuclear materials (SNMs) have been developed by many organizations, and some of them have been successfully applied to uranium-enrichment measurement [1-9]. One of the recent projects is the Next Generation Safeguards Initiative in the United States, which has examined a spent-fuel NDA technique [2]. Another challenge for NDA techniques is the quantification, or even detection, of SNMs in unknown forms, such as unknown waste, debris, or concealed and shielded highly enriched uranium in containers; these pose the following technical difficulties [10]: (1) few self-generated neutron or photon emissions because of shielding; (2) difficulty of measurement because of intense gamma-ray backgrounds; (3) low measurement reliability due to impurities and unknown information. Recently, the development of compact, quasi-monochromatic photon (X-ray) source generators has progressed, and they are expected to be realized as portable photon-generator devices with energies higher than the photonuclear threshold energy [11-14]. Their application is expected to become one of the NDA techniques. A new NDA technique is aimed at uranium-enrichment measurement, characterized by a mathematical process which represents the correlation between the target enrichment and the relative measured counts of neutrons produced by the photofission reactions of 235U and 238U at different specific incident photon energies of 6 MeV and 11 MeV. The principle of the nuclear-material isotopic-composition measurement method based on the photofission reaction rate ratio (PFRR) was validated by a small-scale numerical simulation, with good reproducibility (within 2% difference) of the predicted uranium enrichment, as reported by Kimura et al.
[10]. However, the cross sections of the photonuclear reactions of the nuclides relevant to the PFRR generally have around 10% uncertainty, which may have a large impact on the accuracy of the uranium-enrichment measurement by the PFRR methodology. In the present paper, a quantitative analysis was performed to reveal the impact of the photonuclear cross-section uncertainty on the value of the uranium enrichment predicted by the PFRR methodology, and the required photonuclear cross-section precision was evaluated. Principle of the NDA technique based on the photofission reaction rate ratio The PFRR methodology is based on the differences between the photonuclear cross sections of different nuclides at different incident photon energies; these cross sections, as functions of the incident photon energy for typical fertile and fissile nuclides from ENDF/B-VII.1, are shown in Fig. 1 [14]. These differences in cross sections produce differences in the neutron production rate in the SNM target, as shown, for example, in Fig. 2 [10]. The neutron production rates shown in Fig. 2 include the (γ, n), (γ, 2n), (γ, fission), and other neutron-production reactions. If the maximum incident photon energy is below 11.27 MeV, the threshold energy of the (γ, 2n) reaction for 238U and 235U targets, the (γ, fission) counts can be extracted from the neutron counts by coincidence counting. In the PFRR methodology, the information on the photofission reactions is utilized to improve the precision through a simplified mathematical process in which the other reactions are removed from the equation. The photofission reaction rate R_i (where i denotes the specific incident photon energy spectrum) is described by Eq. (1): R_i = Σ_nuc N_nuc ∫ φ_i(E) σ_f,nuc(E) dE, where E is the photon energy, φ_i(E) is the photon flux, and N_nuc and σ_f,nuc(E) are the number density and the microscopic photofission cross section of nuclide nuc. The indices i and nuc run over 1, 2, 3, ..., n and I, II, III, ..., N, respectively. Eq. (1) for each i and nuc can be transformed into Eq. (2): R_i = Σ_nuc A_i,nuc N_nuc, where A_i,nuc = ∫ φ_i(E) σ_f,nuc(E) dE is known. The PFRR methodology requires the measured value of the photofission reaction rate ratio R_i/R_n in order to calculate N_nuc/N_n. The isotopic composition IC of nuclide nuc is then calculated from N_nuc/N_n through Eq. (3): IC_nuc = (N_nuc/N_n) / Σ_nuc' (N_nuc'/N_n). Hence, the PFRR methodology deduces the isotopic composition by measuring only relative values of the photofission reactions [10]. Calculation model and methodology MCNP6 as the Monte Carlo code and ENDF/B-VII.1 as the evaluated nuclear data library were used for simulating the photonuclear reactions in the target [14,15]. Figure 3 shows the calculation model of the present study. In this model, the photon beam is assumed to be injected at the center of a thin target. The target consists of metallic uranium (235U and 238U; 235U enrichment of 5-90%) with a density of 19.1 g/cm³. Incident photons from the pencil beam (10⁸ histories in this study) cause photofission reactions in the target. The fission reactions occurring in the target are tallied as R_i of Eq. (2). These fission reactions include both (γ, fission) and (n, fission), because the signals of (γ, fission) and (n, fission) cannot be separated in an actual measurement by coincidence counting. The error-propagation formula for the predicted 235U enrichment in the 235U-238U system was derived as Eq. (4), where N_U235/N_U238 and R_ratio correspond to N_nuc/N_n and R_i/R_n of Eq. (2), and ε_0,238U and ε_0,235U are the relative errors of the photofission cross sections of 238U and 235U; the other parameters in Eq. (4) follow from these quantities.
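To illustrate the algebra of Eqs. (2)-(3) in the two-nuclide 235U-238U case, and how a cross-section error feeds through to the enrichment, the sketch below inverts the reaction-rate ratio for the number-density ratio. The A_i,nuc coefficients here are arbitrary placeholders, not values derived from the actual photon spectra and ENDF/B-VII.1 cross sections, and the perturbation loop is a crude stand-in for the paper's Eq. (4) error propagation.

```python
import numpy as np

def n235_over_n238(r, A):
    """Invert Eq. (2) for two nuclides. r = R1/R2 is the measured
    photofission reaction-rate ratio at the two incident energies and
    A[i] = (A_i,235, A_i,238), so R_i = A[i][0]*N235 + A[i][1]*N238 and
    N235/N238 = (A[0][1] - r*A[1][1]) / (r*A[1][0] - A[0][0])."""
    return (A[0][1] - r * A[1][1]) / (r * A[1][0] - A[0][0])

def enrichment(r, A):
    """Eq. (3) for two nuclides: 235U fraction from the density ratio."""
    x = n235_over_n238(r, A)
    return x / (1.0 + x)

# Placeholder coefficients: near 6 MeV, 235U fissions far more readily
# than 238U; at 11 MeV both nuclides contribute strongly.
A = np.array([[5.0, 0.5],     # 6 MeV:  (235U, 238U)
              [10.0, 9.0]])   # 11 MeV: (235U, 238U)

# Forward-compute the ratio for a 20%-enriched target, then invert it.
x_true = 0.20 / 0.80
r = (A[0] @ [x_true, 1.0]) / (A[1] @ [x_true, 1.0])
print(enrichment(r, A))  # recovers ~0.20

# Crude propagation of a 10% cross-section uncertainty: perturb A and
# re-invert the same "measured" ratio.
rng = np.random.default_rng(0)
samples = [enrichment(r, A * (1 + 0.10 * rng.standard_normal(A.shape)))
           for _ in range(2000)]
print(np.std(samples) / 0.20)  # relative uncertainty of the enrichment
```

The point the paper quantifies analytically is visible here too: because the enrichment is obtained by inverting a ratio of cross-section-weighted rates, a relative error on the cross sections maps into a comparable (or amplified) relative error on the predicted enrichment.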
Estimation of the 235U enrichment based on the PFRR method The results of the 235U enrichment prediction by the PFRR method are shown in Fig. 4. The incident photon energies are 11 MeV and 6 MeV, each with a Gaussian-shaped energy distribution (σ = 0.5 MeV) [10]. As shown in this figure, the present method showed good reproducibility of the 235U enrichment, and the principle of the PFRR methodology was shown to be applicable to the prediction of SNM isotopic composition. Implication of the photofission cross section uncertainty Assuming a 10% uncertainty in the photofission cross sections of 235U and 238U, the predicted value of the 235U enrichment had a 13% uncertainty at 20% enrichment, as shown in Fig. 5. This uncertainty was reduced by decreasing the cross-section uncertainty. In addition, as shown in Fig. 6, a cross-section uncertainty of 3% or less was required to reduce the uncertainty of the predicted 235U enrichment to less than 5%. Conclusion The effect of the photofission cross-section uncertainty on the predicted value of the 235U enrichment in the PFRR methodology was evaluated. This uncertainty was required to be 3% or less to keep the uncertainty of the predicted 235U enrichment below 5%. However, the current photonuclear cross-section data of nuclear materials, namely uranium and plutonium nuclides, generally have 10% or more cross-section uncertainty. Therefore, improved precision of the photonuclear cross sections of these nuclides, especially the photofission cross sections of uranium and plutonium, is strongly desired for uncertainty reduction in the PFRR methodology.
Figure 1. Photonuclear reaction cross sections versus the incident photon energy; the cross section of each nuclide and reaction is written as "nuclide(reaction)" [10,14].
Figure 2. Difference in the neutron production for different photon energies and nuclides [10].
Figure 4. The predicted value of the 235U enrichment based on the PFRR for 11 MeV/6 MeV incident photons with a Gaussian-shaped energy distribution [10].
Figure 5. The predicted value of the 235U enrichment and its uncertainty with 10% cross-section uncertainty.
Figure 6. The predicted value of the 235U enrichment and its uncertainty with 3% cross-section uncertainty.
1,901.2
2017-01-01T00:00:00.000
[ "Physics" ]
Ionic liquid crystals based on viologen dimers: tuning the mesomorphism by varying the conformational freedom of the ionic layer ABSTRACT We investigated the liquid crystal behaviour of newly synthesised bistriflimide salts of symmetric viologen dimers. A smectic A phase was observed for intermediate spacer lengths and for relatively long lateral alkyl chains. The systems were characterised by thermal analysis, polarised optical microscopy, X-ray scattering and solid-state NMR. An intermediate ordered smectic phase was also exhibited by the compounds (except for systems with very short lateral chains), consisting of molten layers of alkyl chains and partially ordered ionic layers. These results, relating to the mesomorphic behaviour of viologen salts, are qualitatively compared to those of the more common imidazolium salts, highlighting the importance of the conformational degrees of freedom of the anions and of the cationic core. It appears that fine tuning of the conformational degrees of freedom of the ionic layer is an important component of mesophase stabilisation. Graphical Abstract Introduction Ionic liquid crystals (ILCs) have recently attracted the attention of the chemistry and materials science communities as systems with the potential to combine the many applications and features of liquid crystals (LCs) and ionic liquids (ILs); see Ref. [1] for a comprehensive review and Refs. [2,3] for more recent updates on the subject. ILCs are usually obtained from organic salts of quaternised nitrogen such as imidazolium, [4-8] piperidinium, [9] pyridinium, [10,11] bipyridinium (also known as viologens), [12-14] pyrrolidinium, [15] phenanthrolinium, [16] and guanidinium [17-19] with common inorganic anions, such as halides, bistriflimide, tetrafluoroborate and hexafluorophosphate. These salts usually form ILs, but when one or more alkyl chains are sufficiently long they also exhibit mesomorphism. Since the driving force is micro-segregation between the hydrophobic chains and the ionic layers, a smectic phase (for calamitic systems) is frequently obtained, though rare cases of ionic nematic phases have been reported. [20-23] Besides the effect of these chains, the role of head groups has received some attention recently. [24] Computer simulations, either coarse-grained [25-29] or fully atomistic, [30,31] and theoretical models [32] have tried to shed light on the relation between molecular structure (size, shape and charge of both cations and anions) and the type and thermal range of stability of the mesophase formed. This is, however, a formidable challenge: prediction of the type of mesophases formed and their transition temperatures is still beyond our reach. For example, in a review concerning the various empirical methods and protocols for the prediction of the melting points of ILs, [33] those exhibiting LC phases, that is ILCs, were purposely left out of the set of compounds investigated because they were deemed too difficult to treat, due to the limited understanding and modelling currently available. On the other hand, the huge amount of literature and the vast knowledge base concerning the structure-property relationships of non-ionic LCs, though essential for a comparison, might be hard to apply directly in the case of ionic compounds, because the electrostatic interactions (not the key factor in LC science) are instead dominant for ILCs.
Excluded-volume effects, already important for the stabilisation of LC smectic phases, [34] might be even more important in regard to ILCs, due to the presence of two kinds of particle (cation and anion) of very different shape and size. A deeper understanding of the relationship between structure and phase properties of ILCs would be highly desirable: ILCs have been used recently for applications in the field of solar cells, [35] membranes for water desalination, [36] battery materials, [37] electrochemical sensors [38,39] and electrofluorescence switches. [40] In these applications, the microscopic structure of the ILC molecules and the conductive properties of the ionic mesophase were found to have a significant impact on the performance of ILC-based devices compared with analogous devices based on isotropic ILs. The only way to improve the performance is thus to learn how to design a particular cation-anion combination in order to obtain the relevant mesophase with tailored properties. Our interest has been focused more recently on viologen-based ILCs, which have added value due to the interesting redox chemistry of the viologen unit. [41,42] Non-symmetric viologen monomers, [12] symmetric tetramethylviologens [43] and symmetric viologen dimers with a short lateral ethyl chain [44] have been synthesised and characterised by a range of experimental techniques. Interestingly, a SmA phase has been found only recently [45] by dimerisation of viologen salts which, in their monomeric form, exhibit either a crystal-to-isotropic transition or an intermediate ordered mesophase, labelled as SmX, the detailed nature of which has not been completely elucidated to date. In this work, we extend our investigation by presenting a series of symmetric viologen dimers n.m.n, where '.' represents the bipyridinium core while n and m indicate the number of carbon atoms in the lateral and middle (spacer) alkyl chains, respectively. These compounds exhibit a rich polymorphism, including the ionic SmA and SmX phases. Dimeric systems are also interesting for several other reasons: non-ionic LC dimers have been extensively investigated in the literature for about 30 years, since they were proposed as model systems for the mesomorphic behaviour of LC polymers, [46] and research in this field is still very active. [47] A critical review covering this field was presented by Imrie and Henderson. [48] Recent examples encompass hybrid systems of rod-like and bent-core units exhibiting a biaxial SmA phase, [49] chiral dimers, [50] systems based on the isoflavone moiety [51] and dimers with a sulfur-sulfur link in the spacer. [52] The novel twist-bend nematic phase observed in some cyanobiphenyl dimers has very recently contributed to boosting and renewing interest in the intriguing properties of LC dimers. [53-59] One of the key properties of LC dimers is a pronounced odd-even effect in thermodynamic transitional properties, that is, a dependence on the length, and especially the parity, of the spacer: higher clearing points and larger enthalpy and entropy changes are observed for the even members. [48] Significant theoretical [60-62] and computational [63] work has been devoted to a rationalisation of this behaviour, in order to overcome the oversimplified view where only the all-trans arrangement of the spacer is considered. The second interesting feature of LC dimers is the somewhat unexpected dependence of the stability of the smectic phase on the length of the spacer.
Since microphase segregation is also the driving force for the formation of smectic phases in non-ionic LCs, it comes as no surprise that smectic-phase stability generally increases with increased length of the terminal chains of monomeric LCs. Similarly, for main-chain polymeric LCs the stability of the smectic phase also increases with increased length of the chains connecting the mesogenic units. [48] For symmetric dimers the stability of the smectic phase increases with increased length of the terminal chains but, at variance with the other cases, it is diminished, and the smectic phase suppressed, when the spacer is too long. Luckhurst and co-workers found that, in order to have a smectic phase, the number of carbon atoms in the terminal chains, n, should be greater than m/2, where m is the number of carbon atoms in the spacer of the symmetric dimers. [64] In contrast to non-ionic LC dimers, ILC dimers have been less investigated. ILCs based on imidazolium dimers were described by Bara et al., [65] where a significant effect of the length and type of spacer (alkyl vs. oligo(ethylene glycol)) was observed. A marked dependence of the stability of the smectic phase on the spacer is indeed expected, since for ionic systems micro-phase segregation is significantly stronger than for the analogous non-ionic LCs. Gin and co-workers studied imidazolium trimers exhibiting rich mesomorphism. [66,67] Results and discussion The compounds investigated are shown in Figure 1. Synthetic protocols are described in the Supplemental Information. One option is to first prepare the dimeric core, the 1,1ʹ-(alkane-1,3-diyl)bis(4-(pyridine-4-yl)pyridinium) dihalide, by refluxing an excess of bipyridine with X-CmH2m-X, followed by quaternisation with CnH2n+1-X. The second option is a multi-step quaternisation of bipyridine, first with CnH2n+1-X to obtain the 1-alkyl-4-(pyridine-4-yl)pyridinium halide (X = Br, I), followed by reflux with an excess of X-CmH2m-X to yield the dimeric viologen halide. In both cases a final metathesis with LiTf2N yielded the desired salt. The compounds were structurally characterised by 1H and 13C NMR, ESI-MS and elemental analysis. Details are given in the Supplemental Information. Material characterisation Previous TGA studies on the thermal stability of monomeric [12] and dimeric [44] bistriflimide viologen salts revealed the decomposition temperature always to be above 350°C. Such an investigation has not been repeated here, since the salts are similar in terms of molecular properties. In Table 1 we report the thermodynamic properties of the dimers investigated. In addition we also include, for the sake of comparison, some dimers with a short end-chain (ethyl) studied previously. [44] The enthalpies of transition at the clearing or melting point into the isotropic phase are also displayed in Figure 2, above the histograms representing the temperature range of each phase of each compound. We identified four different phases: a crystal phase (Cr), which is observed in all compounds at sufficiently low temperature; an isotropic liquid phase (Iso), which is also observed in all compounds at sufficiently high temperature; and two intermediate situations: at least one mesophase for dimers with long end-chains, which appears to be the same as that found in monomeric viologens, here called SmX since its full characterisation turned out to be rather difficult; [12] and two mesophases in some dimers, the higher-temperature one being an ionic smectic A phase, SmA.
This behaviour was previously observed in a few selected examples reported in Ref. [45], namely 12.4.12, 14.4.14 and 16.4.16. We note that Cr-to-Cr transitions were frequently observed by differential scanning calorimetry (DSC), but these will not be discussed here. The enthalpy of melting into the isotropic phase shows a clear pattern: melting from the crystal phase (see the 2.n.2 series) has a relatively high value (in the range of several tens of kJ/mol) and a marked odd-even effect (see also the more detailed discussion in Ref. [44]). Melting from the intermediate SmX phase was accompanied by a much lower enthalpy change, in the range 20-30 kJ/mol. This is, however, still much higher than the typical enthalpy of melting of SmA phases. Finally, melting from the SmA phase was characterised by a somewhat low enthalpy of transition, in the range of a few kJ/mol to even below 1 kJ/mol, typical of SmA-Iso transitions. Entropies of transition at the clearing point, ΔSm/R, are also in the expected ranges (i.e. 0.6, 2.4 and 0.6 for 14.3.14, 14.4.14 and 14.5.14, respectively). The higher entropy (and enthalpy) change for dimers with an even spacer is well documented for non-ionic dimers. [48,64] It is noteworthy that the enthalpy of transition from Cr to SmX is not too dissimilar to typical values of the enthalpy of melting of alkanes: these range, for example, from ΔHmelt(C10H22) = 28.7 kJ/mol at 243.5 K to ΔHmelt(C16H34) = 51-53 kJ/mol at 291 K. [68] This observation is consistent with the mechanism we proposed previously, [45] where the Cr-to-SmX phase transition was ascribed essentially to melting of the hydrophobic layers. Polarised optical micrographs (POMs) of some of the samples investigated are shown in Figure 3. Focal conic and fan-shaped textures support the identification of the high-temperature mesophase observed in some samples as a SmA. In contrast, samples exhibiting only one mesophase between the crystal and the isotropic liquid showed the mosaic or spherulitic texture (see Figure S31 in the SI) that can be observed in ordered smectic phases, similar to what has been reported for non-symmetric monomers. [12] X-ray diffraction (XRD) was performed to investigate the nature of the phases. In Figure 4 we show the powder XRD profiles of sample 16.3.16 as an example; the complete series 16.n.16 for n = 3, 4, 5 and 6 is shown in Figure S32 in the SI. In all instances, except in the case of sample 16.6.16, the most intense peak was the (100) reflection, which appeared between 2° and 3° in 2θ. The higher-order reflections appear at positions indicative of a lamellar morphology in the solid state. The position of the (100) peak allows the calculation, according to the Bragg law, of the distance d100 between the lamellae formed by the molecules. This peak shifted to lower angles as the temperature was increased (see Table 2 and Figure S32 in the SI). In Table 2 we list the observed inter-lamellar distance, d100, for each of the phases of the materials (i.e. the layer thickness of the smectic phases) and compare these to the calculated distance between the two methyl ends of the lateral chains of the molecules (i.e. to the full length of the dimers). Even within the same phase, the layer thickness depends on the temperature, so comparison can be made only at a qualitative level. However, it is clear that the inter-layer distance obtained by XRD is consistently much lower than the length of the dimer with fully extended lateral alkyl chains. This is indicative of a relevant degree of interdigitation of the alkyl side chains.
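The conversion from the (100) peak position to the inter-lamellar distance d100 follows directly from the Bragg law quoted above; the minimal sketch below assumes Cu Kα radiation (λ = 1.5406 Å), consistent with the diffractometer described in the Experimental section, and the 2θ values in the example are illustrative rather than values taken from Table 2.

```python
import math

CU_KALPHA_A = 1.5406  # Cu Kα1 wavelength in ångström

def d_spacing(two_theta_deg: float, wavelength: float = CU_KALPHA_A,
              order: int = 1) -> float:
    """Inter-planar distance from the Bragg law, n·λ = 2·d·sin(θ)."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength / (2.0 * math.sin(theta))

# A (100) reflection appearing between 2° and 3° in 2θ corresponds to
# lamellar spacings of roughly 29-44 Å:
for two_theta in (2.0, 2.5, 3.0):
    print(f"2θ = {two_theta}°  ->  d100 ≈ {d_spacing(two_theta):.1f} Å")
```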
Table 1. Thermodynamic properties of the samples investigated: transition temperatures Tn/°C (ΔHn/kJ/mol); h: 1st heating; c: 1st cooling. In some cases the cooling transition(s) were not observed due to large hysteresis. Data for the n.m.n phases are taken from Ref. [45], reproduced by permission of the PCCP owner societies. Figure 4 shows that the complexity of the XRD patterns sharply decreases with increasing temperature. Such data confirm the occurrence of the phase sequences identified by thermal analysis and POM (Figures 2 and 3). The Cr phase is characterised by very rich room-temperature XRD patterns, especially in the wide-angle range, indicative of a significant degree of order in the short-distance range. As the temperature was increased, modifications were observed in the XRD pattern: the halo located at about 20° in 2θ became more intense and concurrently the sharp crystalline peaks in the same angular region became weaker. This behaviour is indicative of the formation of a smectic LC phase that retains a significant degree of order (i.e. the SmX phase). [12,67] The XRD patterns of materials 16.3.16, 16.4.16 and 16.5.16 at 175°C are typical of a smectic A mesophase (see also Figure S32 in the SI). These show the presence of long-range order, reflected by a first-order peak and possibly a weak second-order one, with only a broad halo at high angles consistent with the lack of short-range order. [45] It is worth noting that each of these phase transitions resulted in a decrease in the degree of order of the solid-phase framework. This is evidenced, in the XRD patterns, by both the disappearance of high-order peaks and the looser packing of the molecules, reflected by an increase in the d100 inter-lamellar distance. To investigate the Cr-to-SmX transition, 13C CP-MAS NMR spectra were also recorded for the series 16.m.16 with m = 3-6, and for 14.m.14 with m = 5, 6, from room temperature up to 80°C. For the longer spacers (m = 5, 6), an interesting feature emerging from these spectra, and already noted in the solid-state NMR spectrum of the monomer 14.14, [45] is the collapse of cross-polarisation on the methyl signals (the most shielded) upon transition from Cr to SmX (see Figure 5). The disappearance of the methyl resonance can be explained by an efficient zero-averaging of C-H dipolar interactions as a result of liquid-like dynamics, a feature consistent with an almost complete melting of the C16 alkyl chains to form a disordered layer. As a counterproof, the N-13CH2- spins resonating around 60 ppm, located at the opposite end of the C16 chains from the methyls, always display an intense cross-polarised signal, since their mobility is hampered both before and after the phase transition. The sharpening of the pyridine ring signals (120-155 ppm) after the phase transition can also be explained by the increased mobility of the bipyridyl moiety. In this case, however, the appearance of intense spinning sidebands (see asterisks in Figure 5) indicates that the motion is not completely isotropic, and is likely to be a rotation about the N-N axis, roughly aligned along the director of the smectic phase. As a result, the strong dipolar interactions of the Cr phase are also partially reduced in the SmX phase, and the heteronuclear decoupling during acquisition becomes much more efficient. Similar features are exhibited by the other compounds (see the solid-state NMR spectra in the SI).
Therefore, the SmX phase of the dimers, whether or not these also exhibit a higher-temperature SmA phase, shares some common features with the SmX phase observed for the viologen monomers discussed in previous works. [12,44,45] This appears to correspond to a melting of the hydrophobic layers while the in-plane ordering of the ionic layers does not disappear, since clear Bragg reflections are observed in the XRD traces. Variable-temperature 19F MAS NMR spectra (368.7 MHz, 10 kHz MAS, see the SI) run on the same sample in the interval 25-80°C also reveal a phase transition between 50 and 55°C (note that any discrepancy between the 13C NMR and DSC data can be attributed to additional sample heating induced by the fast MAS). In this case, assessing the phase transition from the linewidth of the isotropic chemical shift (-79 ppm) is more difficult, as the Tf2N− anion is intrinsically less ordered than its counter-ion. To summarise, these new data confirm the hypothesis presented in our previous work. [45] The layered phases differ in the degree of two-dimensional order within the alternating hydrophobic and ionic layers. This 2-D order is in addition to that along the director, which is common to all three phases. Based on a comparison of the XRD and SSNMR data we can say that in the Cr phase both layers (the ionic and the hydrophobic) are ordered, as obviously expected for a crystal; in the SmA phase both layers are disordered and fluid; in the intermediate SmX phase only the hydrophobic layer is liquid-like while the ionic layer retains some degree of in-plane ordering. We note that a very similar behaviour, featuring the sequence of an ordered low-temperature SmX phase and a fluid high-temperature SmA phase, was recently reported by Gin and co-workers for polycationic salts based on imidazolium trimers. [66,67] Therefore, such behaviour may be related to the presence of an ionic layer of significant thickness compared to the hydrophobic layer, while simpler ILCs, such as monocationic imidazolium salts, only exhibit a SmA mesophase. However, at present it is not possible to propose a better-defined description of the ordered SmX mesophase. Solution and aggregation behaviour In order to unambiguously assign the 13C NMR resonances of 16.5.16, the following protocol was adopted. Selective TOCSY spectra were first run on a sample of 16.5.16 dissolved in CD3OD. Both inner and outer N-CH2-CH2- resonances were separately excited, and the magnetisation was propagated to the respective spin systems by means of a 100 ms DIPSI-2 isotropic mixing scheme. These preliminary experiments allowed a clear separation of the 1H signals stemming from the inner C5 and outer C16 alkyl chains. A 1H-13C HSQC spectrum was then acquired to extract the corresponding 13C resonances, which were finally compared to those of the solid-state spectra (see SI for details). We also studied the aggregation behaviour of a representative sample, 14.6.14, in methanol and dichloromethane by means of DOSY spectroscopy, using stimulated echoes with bipolar gradient pulse pairs and a longitudinal eddy current delay (STE-LED-BPP). The results are reported in Table 3. The analysis of the apparent diffusion coefficients was complicated by the fact that signal broadening was observed upon increasing the concentration of 14.6.14, in both methanol and dichloromethane.
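The apparent diffusion coefficients collected in Table 3 can be turned into rough hydrodynamic radii through the Stokes-Einstein relation; the sketch below is a minimal illustration of that conversion, assuming spherical diffusants and a literature viscosity for methanol at 298 K. The diffusion coefficient used in the example is a placeholder, not a value from Table 3.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(D: float, eta: float, T: float = 298.0) -> float:
    """Stokes-Einstein relation r_H = k_B·T / (6·π·η·D) for a spherical solute."""
    return K_B * T / (6.0 * math.pi * eta * D)

# Placeholder value: a solute with D = 5e-10 m²/s in methanol
# (η ≈ 0.54 mPa·s at 298 K) corresponds to r_H ≈ 0.8 nm.
r_h = hydrodynamic_radius(D=5e-10, eta=0.54e-3)
print(f"r_H ≈ {r_h * 1e9:.2f} nm")
```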
Table 3. Apparent diffusion coefficients (m²/s) of 14.6.14 in MeOD and CD2Cl2 at 298 K (first entry of each pair); solvent diffusion coefficients from the same spectra (second entry). Interestingly, the self-diffusion coefficients of the residual protonated solvent lie close to the values found for CH3OH and CH2Cl2 at the same temperature (2.36 × 10⁻⁹ and ~3.0 × 10⁻⁹ m²/s, respectively). [69,70] This evidence suggests that the microscopic solvent viscosity is preserved in the different samples, and that the signal broadening likely results from sample inhomogeneity due to micro-segregation of 14.6.14. Comparison to imidazolium salts A rich mesomorphism is exhibited by viologen salts, which has been investigated by both us and others (see the Introduction for references). Precursor salts (bromides and iodides) usually display crystal phases stable up to the decomposition temperature, [3,12,44,45] while metathesis with the flexible and conformationally disordered bistriflimide ion (Tf2N−) is responsible for the appearance of smectic phases after destabilisation of the crystal phase. This behaviour is the converse of that observed for imidazolium salts: while their bromides and iodides, as well as BF4− and PF6− salts, usually show smectic mesophases (for sufficiently long alkyl chains [71,72]), their bistriflimide analogues behave as ILs and the smectic phase is suppressed in favour of the isotropic phase. For example, we cite here the case of the 1,1ʹ-diheptyl- and 1,1ʹ-dioctylviologen dibromides (HV and OV, respectively, in Figure 6) studied in Ref. [73]. These compounds remain in the crystalline phase up to 277 and 275°C, respectively, above which they decompose; therefore no liquid or liquid crystal phase is observed for them. In contrast, replacing the anion with Tf2N− results in the appearance of a mesophase between 41 and 112°C for the former salt and between 37 and 136°C for the latter. Above the clearing point, the systems exhibit an isotropic liquid phase stable up to the decomposition temperature of 360 and 364°C for the heptyl and octyl derivatives, respectively. These results should be paired with those of common imidazolium salts (e.g. 1-dodecyl- and 1-tetradecyl-3-methylimidazolium, C12 and C14, respectively, in Figure 6) studied in Ref. [74]. The bromide salts show a liquid crystal phase in the ranges 33-115 and 46-176°C, respectively. By replacing the Br− anion with Tf2N−, the liquid crystal phase is removed and the crystal directly melts into an isotropic phase at 17 and 35°C, respectively. The thermal stability ranges of the systems discussed above are also shown graphically in Figure 6. These results are consistent with the well-known behaviour of the Tf2N− anion, which usually lowers the melting point of ILs because of its disordered structure in the solid state. [75] However, when LC phases are involved, the bistriflimide anion can either uncover a LC phase by destabilising the crystal or extend the thermal range of the isotropic liquid, thus removing the LC phase, depending on the counter-cation. It seems, therefore, that the pairing of spherical and rigid anions with cations lacking a truly rigid and anisotropic core (such as the imidazolium cation) is the correct combination, with the appropriate amount of conformational freedom, to exhibit mesomorphism, while increasing the overall degree of conformational freedom of the system by replacing the anions with Tf2N− leads to a stable isotropic phase.
In contrast, pairing of spherical and rigid anions with the viologen moiety (also rigid and highly symmetric) favours crystal phases; these can be destabilised in favour of a less ordered mesophase by replacement of the anions with Tf2N−. It appears, therefore, that the total degree of conformational freedom can be modulated independently in the cations and anions in order to meet the requirements for a smectic phase, as schematically illustrated in Figure 7. Besides this effect of the cation-anion combination, in this study we explored the effect of modulating the conformational freedom of the tethered bis(viologen) cationic core. Increased flexibility of the cationic core is obtained by dimerisation; this, on the one hand, doubles the size and charge but, on the other hand, makes the whole cationic moiety more flexible. That is, the amount of conformational freedom of the ionic layer is increased in the dimers compared with the monomers. This increased flexibility accounts for the appearance of a fluid smectic A phase in the dimers compared to the monomers, where only an ordered smectic X phase was observed. [12,45] Not surprisingly, if the spacer of the dimers is sufficiently long, the two tethered viologen moieties behave again as independent units and multiple layers (hydrophobic and ionic) are present in the smectic phase. Such a phase, however, is again an ordered SmX phase since the ionic layers do not possess the required degree of freedom to allow for a smectic A phase. Experimental section Variable-temperature 1H-13C CP-MAS spectra (100.7 MHz 13C, 1 ms contact time, 5 kHz MAS) were run in the interval 25-80°C on a Varian 400 spectrometer equipped with a 4 mm MAS probe. Solution-state NMR spectra were collected at 25°C on a Bruker AVANCE III 500 (500.13 MHz 1H) equipped with a 5 mm z-gradient BBI probe. For diffusion-ordered spectroscopy, 32 transient spectra were collected by means of a standard STE-LED-BPP pulse scheme (Δ = 100 ms, δ = 1.5 ms). Data inversion (fitting and ILT) was performed with the Bruker Dynamics Center 2.2.3 software. Polarised optical microscopy: the textures of the samples were studied with a Leica DM4000 M polarised light microscope. The samples were placed between a glass slide and a cover slip. A Mettler FP82HT hot stage was used to control the temperature. The samples were heated at 10°C/min beyond the melting temperature determined by the DSC experiments, and subsequently cooled at 10°C/min to room temperature. The photomicrographs were taken between crossed polarisers with a Leica DFC280 digital camera. DSC: all measurements were carried out with a TA Instruments model 2920 calorimeter operating under an N2 atmosphere. Samples weighing about 5 mg enclosed in aluminium pans were used throughout the experiments. Indium of high purity was used for calibrating the DSC temperature and enthalpy scales. Four ramps were included in the temperature programme: one heating from room temperature to 160°C at 10°C/min, followed by a cooling step to room temperature at 10°C/min, and then another analogous heating/cooling cycle. The repetition of two similar heating/cooling ramps was done to assess the repeatability of the phase transitions. The XRD patterns were recorded in the angular range 2-60° in 2θ with a Philips X'Pert PRO diffractometer, working in reflection geometry and equipped with a graphite monochromator on the diffracted beam (Cu Kα radiation).
When gathering temperature-dependent XRD patterns, an Anton Paar TTK450 temperature-control cell was used and the angular range was limited to 2-30° in 2θ. Conclusions We present the characterisation of a series of viologen dimers with bistriflimide counter-ions; the systems exhibit a rich mesomorphism. We analysed the dependence of the mesomorphism on the length of both the spacer and the lateral chains. Short lateral chains resulted in the suppression of any LC phase; for intermediate lateral chains we observed only an ordered smectic phase; for lateral chains longer than C12 a fluid smectic A phase was found when the length of the spacer was below six carbon atoms, otherwise only an ordered smectic phase was observed. Comparison of the phase behaviour of alkyl-tethered viologen cations to that of similarly tethered imidazolium cations, when paired with two different kinds of anions (rigid and spherical anions, such as halides, and the flexible and disordered bistriflimide), revealed interesting insights. A qualitative rationalisation of the variation in behaviour is presented, based on the total amount of conformational freedom introduced in the ionic layers of the smectic phases.
Sensitivity analysis of AI-based algorithms for autonomous driving on optical wavefront aberrations induced by the windshield Autonomous driving perception techniques are typically based on supervised machine learning models that are trained on real-world street data. A typical training process involves capturing images with a single car model and windshield configuration. However, deploying these trained models on different car types can lead to a domain shift, which can potentially hurt the neural network's performance and violate working ADAS requirements. To address this issue, this paper investigates the domain shift problem further by evaluating the sensitivity of two perception models to different windshield configurations. This is done by evaluating the dependencies between neural network benchmark metrics and optical merit functions by applying a Fourier-optics-based threat model. Our results show that there is a performance gap introduced by windshields and that existing optical metrics used for posing requirements might not be sufficient. Introduction The aspiration to launch level-4-ready autonomous vehicles within this decade drives new challenges in the automotive world. In order to increase the perception performance w.r.t. the frontal far field, the car will be equipped with high-spatial-resolution cameras. Advanced Driver Assistance Systems (ADAS) cameras with telephoto lenses and high-resolution sensors provide a high pixel resolution per field angle, wherefore they are more sensitive to optical aberrations within the optical path. Since car windshields are typically curved, they will act as an additional lens in the optical path. Unfortunately, the curvature and thickness characteristics of the windshield are not sufficiently controllable on small domains [15]. This indicates inherent optical aberrations in the optical path of ADAS cameras and impacts the recoverable information content of camera-based ADAS systems.
From the physics point of view, the imaging process of an optical system is entirely determined by the convolution of the raw image with the Point Spread Function (PSF) because of the superposition principle, which arises from the linear nature of the Helmholtz equation (1). The PSF of an aberrated optical system can be parameterized by the wavefront error in terms of Zernike coefficients [12]. This paper presents a methodology to define an optical threat model based on Fourier optics to reflect the perturbations induced by windshields. This difficulty becomes important if the training dataset is taken by a camera mounted on the vehicle roof but the network inference is performed on images from a camera behind the windshield. Hence, the optical aberrations can induce a significant dataset domain shift and might affect the model performance. This paper focuses on two primary research questions. First of all, how sensitive is a neural network to optical perturbations, and are those sensitivities reflected sufficiently by optical merit functions, such as the refractive power of the windshield? In order to tackle this question, we utilize a common metric in explainable AI, namely the Shapley values [29], which quantify the contribution or impact of a particular feature on the merit function of interest. The analysis of different windshield configurations will lead to a Shapley distribution for every merit function and Zernike coefficient. In order to synchronize the development efforts regarding the optical quality of windshields in the light of neural network performance, we are aiming for an optical merit function that reflects a Shapley distribution congruent with the distribution imposed by the AI benchmark metric. Secondly, we are investigating the correlation between neural network and optical system benchmark metrics. From a quality assurance perspective, we would like to determine a bijective function between neural network and optical Key Performance Indicators (KPIs). This would allow us to derive optical system requirements for the level-4 functionality. We are addressing this issue by generating different threat model attacks on the neural network architecture by Monte-Carlo sampling from uniformly distributed Zernike coefficients of second order.
The intertwining between optical characteristics and the neural network predictive power has manifested itself as a new scientific branch denoted as deep optics [5]. The essential idea is to trim the PSF during training by minimizing the loss function. This results in the most optically informative PSF [31], which might differ from the PSF with the highest imaging fidelity. For example, if the task consists of performing a depth estimation of objects from a single 2D image, it might be beneficial to code the PSF with an artificial defocusing blur [5]. The blurring will then affect objects differently depending on their depth position. Hence, optical aberrations can be utilized as a feature for improving the information-decoding capabilities of neural networks. As a downside, this methodology requires task-specific end-to-end optimization, which is not compatible with the multi-task architectures typically used in the autonomous driving industry [1,19,28]. Commonly used multi-task architectures consist of a pre-trained backbone model, which is trained on a joint dataset and is based on unified learning across multiple tasks in the encoder step. This is sequentially followed by different adaption models, or simply heads, that are trained on downstream, task-specific datasets, e.g., classification, segmentation and detection [6,34]. This hybrid architecture increases the run-time performance through the joint encoder utilization and enhances the generalization capability by incorporating data heterogeneity, which ultimately strengthens the model's robustness in inference [25]. As a downside, the jointly used backbone model might induce a lack of information capacity, which would result in lower task-specific KPIs [28]. If we would like to make use of the deep optics approach for multi-task networks, we would need to train the heads individually. As a result, the most optically informative PSF would be task-dependent, which cannot be satisfied by a single optical element. Even in the case of a single-task network, the deep optics approach is economically unfeasible in the context of car windshields because of the manufacturing process, which focuses on industrial macro parameters instead of the physical micro parameters that drive optical aberrations. The results of this paper indicate that optical aberrations of the windshield can significantly deteriorate the model's performance through a domain shift, and evidence of the insufficiency of existing optical working requirements for ADAS systems was found. Scope and research motivation For the homologation of autonomous driving vehicles, it will be necessary to perform a holistic analysis of the entire functional chain. The sensitivity analysis of image-based deep neural networks on optical aberrations induced by the windshield is only one aspect of the entire challenge. Other impact factors might also be critical, like weather conditions, out-of-distribution events or lighting conditions. Those effects are not discussed in detail in this paper, which does not imply a judgment on their relative severity. The main motivation for focusing on wavefront aberrations of the windshield in this paper is based on the question: "What makes a windshield smart and level-4 ready?". This question can only be answered if a most informative optical metric is found, which allows for deriving component requirements for safeguarding level-4 functionalities. Optical merit functions The foundation of Fourier optics is based on the Helmholtz equation [12].
An electromagnetic field wave ρ(⃗x) has to satisfy the wave equation, which results in the time-independent Helmholtz equation:

(∇² + k²) ρ(⃗x) = 0. (1)

A unit-amplitude spherical wave satisfies the Helmholtz equation (1) and is commonly known as the free-space Green's function [12]. Generally, Green's functions are the physical version of the impulse response function in control systems engineering and are applicable to linear differential operators. If the system can be characterized by a Green's function, then the system output is given by the convolution of the driving term or input signal with the Green's function. This theoretical mechanism is the causal reason for the validity of the superposition principle in optics. Therefore, the imaging process of an optical system is determined by:

ρo(⃗xo) = ∫ |h(⃗xo − ⃗xi)|² ρi(⃗xi) d²xi. (2)

The Green's function of an optical system, |h(⃗xo)|², is commonly denoted as the PSF. It describes the image of an infinitesimal light pulse given by a Dirac delta distribution. Since we are considering an imaging system under incoherent light incidence, only the squared magnitude of the electrical field, i.e. the intensity of the light pulse, matters. Essentially, with incoherent light there are no interference effects. If the Fresnel approximation is valid [12], then the PSF |h(⃗xo)|² of an optical system is determined by:

h(⃗xo) ∝ ∫ P(⃗xa) exp(−i 2π ⃗xo · ⃗xa / (λ dz)) d²xa. (3)

Here, P(⃗xa) denotes the aperture function of the optical system. In the case of an ADAS camera, the aperture stop of the objective lens is considered. Furthermore, dz quantifies the distance between the observation plane at zo and the position of the aperture stop at za. If there are no inherent optical aberrations, then the system is diffraction limited and the incoherent impulse response function (PSF) of a one-dimensional rectangular aperture is given by the squared sinc function [12]. Unfortunately, optical systems in the automotive industry are not diffraction limited, especially if the windshield is included. Therefore, the concept of the aperture function P(⃗xa) has to be extended to the generalized aperture function P̸(⃗xa), given by:

P̸(⃗xa) = P(⃗xa) exp(i 2π W(⃗xa) / λ). (4)

Here, W(⃗xa) denotes the wavefront aberration map on the aperture surface. Physically, the wavefront aberration map is given by the optical path difference between the expected wavefront and the observed wavefront. In the case of a windshield, the expected wavefront is given by a plane wave. Even in the case of non-diffraction-limited systems, the superposition principle is applicable since the differential operator remains linear, wherefore aberrated optical systems are characterized by:

h̸(⃗xo) ∝ ∫ P̸(⃗xa) exp(−i 2π ⃗xo · ⃗xa / (λ dz)) d²xa. (5)

So far, the influence of optical aberrations on the imaging process has been discussed in detail and the governing physical equations have been presented. For quality assurance purposes in the light of reliable autonomous driving, this physical process has to be mapped to measurable physical quantities on which quality requirements can be imposed. The following subsections elaborate on different optical metrics which serve this objective. Refractive power Historically, refractive power measurements have been utilized as the primary quality criterion for windshields [4]. The refractive power Dxi quantifies the curvature of the wavefront aberration map W(⃗xa) along the axis of interest xi if a plane wave is expected, as is the case for windshields [33]. Consequently, Dxi is given by:

Dxi = ∂²W(⃗xa)/∂xi². (6)

Current optical requirements in terms of the refractive power are typically expressed as the maximum absolute value over both transversal axes.
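As an illustration of Eqs. (4) and (5), the sketch below computes an incoherent PSF from a generalized pupil by a discrete Fourier transform. It is a minimal sketch under the stated Fraunhofer-type assumptions: the physical scale factors (λ, dz) are dropped, so the PSF is obtained in normalized coordinates, and the grid size and padding factor are arbitrary choices.

```python
import numpy as np

def aberrated_psf(wavefront, aperture, pad: int = 4):
    """Incoherent PSF from a generalized aperture P̸ = P·exp(i·2π·W/λ),
    cf. Eqs. (4)-(5). `wavefront` holds W/λ, the wavefront error in units
    of the wavelength; the PSF is the squared magnitude of the Fourier
    transform of the generalized pupil."""
    pupil = aperture * np.exp(2j * np.pi * wavefront)
    n = pad * pupil.shape[0]                      # zero-padding refines the PSF sampling
    field = np.fft.fftshift(np.fft.fft2(pupil, s=(n, n)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()                        # normalize to unit energy

# Circular aperture on a normalized grid
x = np.linspace(-1.0, 1.0, 256)
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 <= 1.0).astype(float)
psf_ideal = aberrated_psf(np.zeros_like(aperture), aperture)  # diffraction limited
```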
Modulation Transfer Function (MTF) Generally, the Green's function entirely determines the behaviour of a Linear and Time-Invariant (LTI) system [18]. As a consequence, it is insightful to further analyse the PSF. (We adopt the Feynman slash notation whenever optical field quantities are assumed to be non-diffraction limited.) The MTF is defined as the real part of the Fourier transform of the PSF, normalized to one at ⃗k = ⃗0, as stated by:

MTF(⃗k) = Re[F{PSF}(⃗k)] / F{PSF}(⃗0). (7)

Hence, if the real-valued intensity PSF is considered as a light distribution in the observer plane, then the MTF corresponds to the characteristic function in statistics [3], which determines the moments of the light distribution, e.g., the grey-value centroid, the intensity variance, etc. Tier-1 ADAS suppliers have recently been defining functional requirements in terms of the MTF at a spatial frequency of half-Nyquist. Strehl Ratio (SR) Instead of specifying only a single spatial-frequency requirement for the MTF, it might be advisable to consider the entire spectrum. In order to do so, the area under the MTF curve can be evaluated. The spectral integral of the MTF in relation to the diffraction-limited MTF area is defined as the Strehl ratio [12] and is given by:

SR = ∫ MTF̸(⃗k) d²k / ∫ MTF(⃗k) d²k, (8)

where the slashed MTF is computed from the aberrated PSF and the unslashed one from the diffraction-limited PSF. An equivalent definition of the Strehl ratio is given by the quotient of the aberrated PSF over the diffraction-limited PSF, evaluated on the optical axis (⃗xo = ⃗0). Optical Informative Gain (OIG) Unfortunately, there is still a drawback in the definition of the Strehl ratio because it does not incorporate knowledge about the shape of the PSF, which entirely characterizes the optical system. Therefore, an optical merit function would be desirable that shows a dependency on higher-order moments of the PSF as well. One possible metric that considers this constraint is introduced in this paper as the Optical Informative Gain (OIG):

OIG = ∫ MTF̸²(⃗k) d²k / ∫ MTF²(⃗k) d²k. (9)

Equation (9) takes advantage of the Plancherel theorem [8]. If the OIG is evaluated from measurement data, then the domain of the MTF is restricted by the Nyquist frequency. Hence, the OIG incorporates the resolution limitation given by the image sensor and relates to the amount of photonic energy which can be spatially discriminated in relation to the diffraction-limited case. Neural network merit functions Previous studies on the effect of dataset shifts [26] and noise corruptions [14] on image classification underpin the importance of optical robustness analyses for autonomous driving algorithms. The impact of dataset shifts induced by optical aberrations of the windshield on traffic sign classification has already been quantified as an accuracy drop of up to ten percent [7]. In contrast, our paper focuses on the performance and the network calibration reliability for semantic segmentation. Due to the pixel-wise class prediction, it can be hypothesized that the sensitivities to optical aberrations are amplified in relation to macro-level predictions in image classification. Intersection over Union (IoU) The governing benchmark metric for semantic segmentation is given by the Jaccard similarity coefficient, also commonly known as the Intersection over Union (IoU) [22]. In this paper we make use of multi-class segmentation datasets, wherefore the mean of the IoU is computed over all classes (Nc), denoted as mIoU.
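Looking back at Eqs. (7)-(9), the optical merit functions translate directly into a few lines of code. The sketch below assumes a centred, energy-normalized PSF array such as the one produced by the pupil sketch above, and it sums over the full discrete frequency grid rather than restricting the domain to the Nyquist band as one would for measured data.

```python
import numpy as np

def mtf(psf):
    """MTF, Eq. (7): real part of the Fourier transform of the PSF,
    normalized to one at zero spatial frequency."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(otf) / np.real(otf[0, 0])

def strehl_ratio(psf_aberrated, psf_ideal):
    """Strehl ratio, Eq. (8): ratio of the integrated aberrated MTF to the
    integrated diffraction-limited MTF."""
    return mtf(psf_aberrated).sum() / mtf(psf_ideal).sum()

def oig(psf_aberrated, psf_ideal):
    """Optical Informative Gain, Eq. (9): ratio of the integrated squared
    MTFs, which by the Plancherel theorem also weights the higher moments
    of the PSF."""
    return (mtf(psf_aberrated) ** 2).sum() / (mtf(psf_ideal) ** 2).sum()
```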
Expected Calibration Error (ECE) Standard neural networks typically yield non-calibrated predictions, which can be transformed into calibrated confidence scores using post-hoc calibration methods [30]. Nevertheless, modern neural networks tend to yield systematically overconfident predictions [13]. A metric that assesses the calibration quality of neural networks is given by the Expected Calibration Error (ECE) [24]. For non-binary datasets, the metric is generalized as the mean over all classes (mECE). Shapley values Deep convolutional neural networks are inherently highly non-linear, wherefore it is generally difficult to assess the global sensitivity of the model predictions on single input features. One way to tackle this problem is by considering the outcome of the model with and without a particular feature. If all input feature subsets S are considered regarding the marginal contribution of feature i to the sub-coalition performance, then the correlations between different features are inherently incorporated. Averaging the weighted marginal contribution of feature i over all possible input feature coalitions of different cardinality results in a sensitivity metric which fulfills all fairness properties in game theory, namely the efficiency, symmetry, linearity and null-player conditions [27]. This sensitivity metric was initially introduced by Shapley [29] in the field of economics and has been widely adopted in the explainable Artificial Intelligence (AI) world since an approximative evaluation method was found by Lundberg & Lee [20]. In general, the Shapley value φ for feature i and objective function Ξ is determined under a particular feature set Mf. Hence, the Shapley value is a local explanation method [23], which describes the feature effect by quantifying the direction and magnitude of the local gradient in the feature space. As a consequence, if the entire feature space is sampled equidistantly, the Shapley values will generate a distribution. The shape of the Shapley distribution for feature i in contrast to feature j might indicate differences in the feature importance for the neural network inference. The Shapley values

φi(Ξ) = (1/|Mf|) Σ_{S ⊆ Mf\{i}} C(|Mf|−1, |S|)⁻¹ [Ξ(S ∪ {i}) − Ξ(S)] (10)

are determined by weighting the individual coalition merits with the inverse of the binomial coefficient, which quantifies the number of sub-coalitions with cardinality |S|.
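For the three second-order Zernike features used in this paper, the feature set is small enough that Eq. (10) can be evaluated exactly; the sketch below does so for a toy merit function in which defocus is made twice as damaging as either astigmatism. The weights are illustrative, not measured sensitivities.

```python
from itertools import combinations
from math import comb

def shapley_values(features, value):
    """Exact Shapley values, Eq. (10): marginal contributions of feature i,
    weighted by the inverse binomial coefficient counting sub-coalitions of
    equal cardinality, averaged over the feature set."""
    n = len(features)
    phi = {}
    for i in features:
        rest = [f for f in features if f != i]
        total = 0.0
        for size in range(n):
            for S in combinations(rest, size):
                total += (value(set(S) | {i}) - value(set(S))) / comb(n - 1, size)
        phi[i] = total / n
    return phi

# Toy merit function over the second-order Zernike coefficients: defocus
# (w4) twice as damaging as the astigmatisms (w3, w5); additive by design.
damage = {"w3": 1.0, "w4": 2.0, "w5": 1.0}
merit = lambda S: -sum(damage[f] for f in S)
print(shapley_values(["w3", "w4", "w5"], merit))  # {'w3': -1.0, 'w4': -2.0, 'w5': -1.0}
```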
Experimental setup In order to examine the sensitivities and dependencies of semantic segmentation predictions on optical wavefront aberrations induced by the windshield, a proper testing environment has to be established. First of all, we will elaborate on the physical imaging model used in this paper, which utilizes Fourier optical principles to translate the wavefront aberrations induced by the windshield into image degradations. Secondly, the network architectures are introduced for conducting the evaluation experiments. Optical threat model The optical threat model, which simulates the optical aberrations of the windshield, is based on Fourier optics [12]. Inspired by the work of Chang et al. [5], we extend the proposed optical threat model to 4k ADAS cameras with telephoto objective lenses. Generally, the optical threat model assumes that the wavefront aberration map W(⃗xa) is known in advance, either from measurement or from optical simulation. The wavefront aberrations are parameterized by a set of Zernike coefficients {ωn}, which decompose the wavefront aberration map W on the unit circle:

W(ρ, ϕ) = Σn ωn Zn(ρ, ϕ). (11)

In this paper, the aperture stop of the objective lens is circular, wherefore the orthonormal Zernike polynomials Zn are selected as a basis obeying the orthogonality relation ⟨Zn, Zm⟩ = δnm. Eq. (11) is parameterized by the normalized radius ρ and the polar angle ϕ of the circular aperture. In general, Zernike polynomials of zeroth and first order only induce a phase modulation, which does not impact the measured intensity distribution on the image sensor [33]. As a result, the incoherent MTF is not affected by the Zernike coefficients ω0 to ω2. This is physically sound because the zeroth-order term describes a longitudinal offset of the wavefront, which does not influence the image. Secondly, the first-order terms physically describe a deflection of the light beam, wherefore the image is displaced but not structurally perturbed, since the wavefront curvature is not affected. As a consequence, the studies of this paper are restricted to second-order Zernike coefficients. Higher-order terms are neglected because the magnitude of the coefficients decays with increasing order, which reflects the convergence of the series expansion in Equation (11). Future studies might also investigate terms of the truncation order, e.g., coma and trefoil. With the knowledge of the wavefront aberration map of the windshield and the aperture stop of the camera under consideration, the generalized aperture function P̸ can be constructed by applying Equation (4). Based on P̸, the incoherent, non-diffraction-limited PSF |h̸|² is computed based on Equation (5), which entirely characterizes the optical system. The perturbed image ρ̸ is finally given by convolving the clean image ρ with the perturbed PSF |h̸|². From the measured wavefront aberration map and the deduced PSF, the entire ensemble of optical merit functions introduced in Section 3 can be derived. Figure 1 demonstrates the effect of the optical threat model. The Zernike coefficients for the parameterization of the wavefront aberration map were determined by a Shack-Hartmann wavefront measurement of a test-sample windshield. The black square target within the image has been utilized for a slanted-edge analysis according to ISO 12233 [10]. The MTF of the perturbed image is normalized by the MTF of the undistorted image to retrieve the net effect of the induced optical aberrations. The resulting MTF curves for the horizontal and vertical directions are compared to the MTF parameterized by the optical threat model in Figure 2. In addition, the refractive power triggered by the curvature modulation of the wavefront can be evaluated by Equation (6), which has been benchmarked against a reference refractive power measurement using the Moiré pattern technique [32]. In conclusion, the measurement results for the physical test sample are sufficiently reflected by the optical threat model, which underpins the validity of the implemented Fourier optics approach.
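A minimal sketch of the threat model described above is given below: the second-order wavefront of Eq. (11) is built on the pupil grid (OSA/ANSI single-index convention, matching ω3, ω4 and ω5), converted to a PSF with the aberrated_psf sketch from the optical merit functions section, and convolved with a clean image. SciPy's FFT convolution stands in for the convolution of Eq. (2), and the coefficient values are placeholders.

```python
import numpy as np
from scipy.signal import fftconvolve

def zernike_second_order(rho, phi, w3, w4, w5):
    """Second-order terms of Eq. (11) in the OSA/ANSI single-index
    convention: oblique astigmatism (Z3), defocus (Z4) and vertical
    astigmatism (Z5); coefficients in units of the wavelength."""
    return (w3 * np.sqrt(6.0) * rho**2 * np.sin(2.0 * phi)
            + w4 * np.sqrt(3.0) * (2.0 * rho**2 - 1.0)
            + w5 * np.sqrt(6.0) * rho**2 * np.cos(2.0 * phi))

def perturb_image(image, psf):
    """Optical threat model: convolve the clean image with the aberrated
    PSF and crop the result back to the input size."""
    return fftconvolve(image, psf / psf.sum(), mode="same")

# Wavefront on the circular aperture grid used earlier
x = np.linspace(-1.0, 1.0, 256)
X, Y = np.meshgrid(x, x)
rho, phi = np.hypot(X, Y), np.arctan2(Y, X)
aperture = (rho <= 1.0).astype(float)
W = zernike_second_order(rho, phi, w3=0.0, w4=0.5, w5=0.0) * aperture
# psf = aberrated_psf(W, aperture)        # pupil sketch from above
# degraded = perturb_image(clean_image, psf)
```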
Architecture of the evaluation networks In this paper, we make use of a publicly available deep convolutional neural network trained on the KITTI dataset [11] from TensorFlow Hub, and we study a state-of-the-art multi-task network from CARIAD. High-Resolution Network (HRNet) The High-Resolution Network (HRNet) architecture was invented by Microsoft [16], and the TensorFlow Hub model was adapted and trained by Google [2,21]. The selection of the HRNet architecture as an evaluation model is based on the fact that future ADAS functionalities will most likely rely on 4k high-resolution cameras. In general, a model architecture can be tuned in three dimensions: depth (e.g., more layers), width (e.g., more channels) or finesse (e.g., higher-resolution images). The standard sequential encoder-decoder architecture in deep convolutional neural networks lacks information capacity in the condensed low-resolution feature map. Hence, the standard encoder-decoder architecture is typically extended for highly spatially sensitive applications like autonomous driving. The HRNet tackles this challenge by switching the information propagation from serial to parallel [16]. In detail, convolutions are performed in parallel on multiple resolutions to improve the information capacity of the model architecture. Therefore, the high-resolution representation of the input information is maintained throughout the whole process. Repeated fusion steps between parallel streams of different resolutions ensure an information flow across the levels. Multi-Task Learning (MTL) model The in-house developed Multi-Task Learning (MTL) model consists of a large shared encoder with several feature-extraction layers followed by five decoder heads, each for a specific task, referred to as task heads. These task heads are mainly of two types: segmentation heads and object detection heads. In detail, the parallelized decoders address the following tasks: • Semantic segmentation head: Provides a pixel-wise classification across the image for several classes. The head's performance is quantified by the mIoU. • Blockage detection head: Provides a binary segmentation mask that detects whether a certain region of the image is blocked or not. The evaluation metric is given by the IoU. • Traffic Light Detection and Classification: At first, 2D bounding boxes for traffic lights in the image are predicted. Subsequently, the pixels within a single 2D bounding box are segmented to belong either to the class "traffic light bulb" or "housing". Finally, the pixels belonging to the class "traffic light bulb" are used to classify the signal color of the corresponding traffic light. For quantifying the performance of this multi-step classification task, a head-specific combined metric is evaluated, which relies, among others, on the average accuracy and the area under the precision-recall curve. • Traffic Sign Detection and Classification: Predicts 2D bounding boxes for traffic signs within the image. Afterwards, the sub-images are used to classify the corresponding traffic sign type. Similar to the traffic light classification head, a combined metric is assessed for the performance of the multi-step traffic sign classification task. • Vehicle Detection: Provides a categorized 3D bounding box across two types of vehicles: large vehicles (e.g., trucks, buses, etc.) and passenger cars. The head's performance is evaluated by the average precision metric.
For a consolidated evaluation, we first determine the task-specific metrics (i.e., average precision for object detection and mIoU for semantic segmentation). However, to convey a holistic model performance, the head-specific loss functions are integrated using weighted averaging after normalization, culminating in an overall combined multi-task loss ranging from 0 (worst performance) to 1 (perfect performance). This aggregated score reflects the model's collective efficiency across all tasks. In order to prevent a single task from being dominant in the learning process, the individual, task-specific head losses can be integrated by an uncertainty-based weighting scheme to obtain a more robust combined metric [17]. Evaluation datasets Typically, datasets for autonomous driving are taken with cameras mounted behind the windshield. As a consequence, the images are inherently perturbed by optical aberrations, which leads to an unknown dataset domain shift that makes it impossible to quantitatively assess the impact of different windshield configurations without prior knowledge about the inherent aberrations. Hence, it is eminently beneficial that the HRNet was trained on the KITTI dataset, where the camera had been mounted on the car roof [11]. The evaluation images from the KITTI dataset are characterized by a resolution of 370 × 1224 px. The MTL model is trained on a joint dataset, i.e., each head is trained on a task-specific dataset with corresponding labels. The multi-task dataset from CARIAD features images of dimensions 1024 × 2048 px. Evaluation results The results obtained from employing the optical threat model on two distinct neural network architectures are summarized in this section. In general, the results for the HRNet and the MTL model are primarily coherent, e.g., the dependency of the model performance on the optical merit functions introduced in Section 3 and the network performance sensitivity to different Zernike coefficients. Sensitivity analysis The Shapley studies shown in Figure 6 on different optical merit functions and neural network benchmark measures indicate a non-linear mapping of the sensitivities between the AI world and the optical world. For comparability reasons, the Shapley values have been normalized to the effect of ω4, which physically represents defocus. The behaviour of the mIoU and the mECE with increasing perturbation magnitude is physically sound and predicted, but the symmetry is remarkable. The mirror symmetry w.r.t. the abscissa originates from the observation that the mECE is dominated by the accuracy degradation, as illustrated by Figure 3. In addition, the refractive power shows no sensitivity regarding ω3, as expected, which reflects the fact that the merit function has been explicitly restricted to the x- and y-axis. In general, it can be concluded that ω4 aberrations have the biggest impact on the performance for all of the studied merit functions. Furthermore, the Shapley distributions regarding the MTF, the Strehl ratio and the OIG are very similar in terms of their codomains, but they reveal slightly different probability allocations, which indicates different statistical moments.
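Since the mECE carries much of the discussion above, a minimal sketch of the underlying binned calibration error is given below for a single class; the bin count and the toy confidence data are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    """ECE: bin predictions by confidence and accumulate the occupancy-weighted
    |accuracy - confidence| gap; averaging over classes yields the mECE."""
    conf = np.asarray(confidences, dtype=float)
    hit = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(hit[in_bin].mean() - conf[in_bin].mean())
    return ece

# Toy example: systematically overconfident predictions give ECE ≈ 0.1
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 100_000)
correct = rng.uniform(size=conf.size) < conf - 0.1  # accuracy trails confidence
print(f"ECE ≈ {expected_calibration_error(conf, correct):.3f}")
```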
Correlation analysis The dependencies between optical merit functions and neural network benchmark measures are directly contrasted in Figure 7 for the HRNet. It is clearly noticeable that from the refractive power and the MTF it is not possible to infer the performance of the neural network unambiguously. On the contrary, the Strehl ratio and the OIG indicate a functional relationship to the mIoU as well as to the mECE within the uncertainty intervals. The uncertainty bars are given by the standard deviation of the mean of the mIoU and the mECE regarding the test image batch of size 40. In addition, the MTL model shows similar performance trends regarding the Fourier optical metrics as the HRNet if the envelope functions and the subdomain fluctuations in Figure 4 are considered. Here, each data point represents the effect of a windshield, parameterized by a set of Zernike coefficients in the range ωn ∈ [−λ, λ], on a test image batch of size 20. The envelope function is obtained by binning the data and assigning the minimum value within a bin to the envelope function. This procedure can be performed for all head-specific KPIs, leading to the colored stack plot in Figure 4 after applying the uncertainty weighting. In order to indicate the local performance spread, additional boxplots are provided for the most relevant bins. The bin comprising the most severe optical aberrations is highlighted on the right-hand side in Figure 4. It can be concluded that the statistical mass allocations within each bin are significantly more clustered in the case of the Strehl ratio and the OIG as compared to the MTF at Nyquist half-frequency. Hence, Figure 4 shows evidence for the superiority of the Strehl ratio and the OIG as quality metrics in contrast to the MTF at Nyquist half-frequency. The results clearly indicate that the information density of the PSF is beneficial for defining an optical ADAS working requirement. For quality assurance purposes it would be required to ensure that the quality metric is bijective, which is the subject of future studies. Overall, the MTF at Nyquist half-frequency seems to be insufficient as a safety quality criterion for windshields. Calibration robustness The effect of optical aberrations on the reliability curve of the HRNet is visualized in Figure 5. In the diffraction-limited case (ωn = 0 ∀ n ∈ ℕ0), the HRNet shows an mECE of 15.6%. If optical aberrations are considered, then the average accuracy and the average confidence decrease with increasing perturbation magnitude, but the reduction is non-coherent, as demonstrated by Figure 3. Consequently, the neural network becomes more and more overconfident, which is underpinned by the increasing mECE in Figure 6. In the case of low prediction confidences and low prediction accuracies, the binned network accuracy seems to slightly increase if aberrated test data are used. This behavior is counterintuitive and represents an artifact of the visualization method. Essentially, optical aberrations drive the probability flow of the confidence distribution towards lower values, shifting the predictions into low-confidence bins. Since the domain is limited and quantized, the bin composition varies, affecting the binned accuracy. Physically, low-confidence predictions in semantic segmentation mostly correspond to class-area borders. If the image is perturbed, then the contours get blurred, as illustrated by Figure 1, which results in lower prediction confidences for these pixels. Conclusion The results presented in this paper reflect an initial assessment of the functional relationship between neural network benchmark metrics and optical aberrations parameterized by Zernike coefficients. From our experiments, we report evidence for the superiority of the Strehl ratio and the OIG as optical quality indicators for image-based neural network performance, in contrast to present functionality requirements in terms of the refractive power and the MTF at half-Nyquist frequency. In addition, the studies demonstrate that pure defocus influences the performance of a semantic segmentation algorithm more than astigmatic aberrations. Furthermore, the investigations on the HRNet from Google and the studies on the MTL model from CARIAD show similar sensitivities and functional dependencies on optical aberrations, which leads to the hypothesis that the results presented in this paper could be network-architecture independent.
Finally, it has to be emphasized that the optical threat model applied in this paper was tuned for telephoto objective lenses. As a consequence, the scalar product in the exponent of Equation (3) is assumed to be given by the product of the vector magnitudes. If wide-angle cameras are considered, then the optical threat model has to be adjusted by including the dependency on the field angle ψ as:

⃗xo · ⃗xa = |⃗xo| |⃗xa| cos(ψ). (12)

Figure 1. Toy example demonstrating the effect of the optical threat model applied to a real-world scene. The slanted-edge targets are shown enlarged on the bottom.
Figure 2. Validation of the optical threat model based on MTF measurements with the slanted-edge method. The confidence bands reflect the Poisson noise of the image sensor.
Figure 3. Pearson correlation between the weighted confidences and the weighted accuracies (bin-cardinality weighting scheme).
Figure 4. Multi-task performance versus (a) the MTF at half-Nyquist, (b) the Strehl ratio and (c) the OIG.
Figure 5. Calibration curve for the HRNet from Google.
Figure 6. The sensitivities of different convolutional neural network benchmark metrics, as well as the sensitivities of several optical KPIs, on wavefront aberrations parameterized by Zernike coefficients, quantified and visualized in terms of Shapley values. The impact of an induced defocus (Z4) surpasses the effect of oblique (Z3) and vertical astigmatism (Z5) for all merit functions studied in this paper.
Figure 7. The dependency of the mIoU and the mECE on different optical merit functions. The results are almost symmetrical around the baseline if the trend of the mIoU is considered in relation to the mECE, which is scientifically justified by Figure 3. In summary, the refractive power and the MTF at half-Nyquist frequency do not demonstrate a functional relationship w.r.t. the mIoU and the mECE. In contrast, the Strehl ratio and the OIG indicate a functional relationship, which might even fulfill the required bijectivity criterion.
Understanding resolution limit of displacement Talbot lithography Displacement Talbot lithography (DTL) is a new technique for patterning large areas with sub-micron periodic features at low cost. It has applications in fields that cannot justify the cost of deep-UV photolithography, such as plasmonics, photonic crystals, and metamaterials, and competes with techniques such as nanoimprint and laser interference lithography. It is based on the interference of coherent light through a periodically patterned photomask. However, the factors affecting the technique's resolution limit are unknown. Through computer simulations, we show the impact of the mask parameters on the feature sizes that can be achieved and describe the separate figures of merit that should be optimized for successful patterning. Both amplitude and phase masks are considered for hexagonal and square arrays of mask openings. For large pitches, amplitude masks are shown to give the best resolution; whereas, for small pitches, phase masks are superior because the required exposure time is shorter. We also show how small changes in the mask pitch can dramatically affect the resolution achievable. As a result, this study provides important information for choosing new masks for DTL for targeted applications. Introduction Periodic organisations of structures are useful for the creation of devices in many different fields such as plasmonics [1], photonic structures [2] or metamaterials [3]. Existing techniques are capable of easily patterning periodic sub-micron features, but each has its advantages and disadvantages. Deep-ultraviolet immersion lithography using a 193 nm excimer laser is widely used in industry and is capable of achieving a resolution of 14 nm [4], and extreme ultraviolet (EUV) sources with a wavelength of 13.2 nm are on the horizon to further decrease the minimum feature sizes [5]. However, the very high cost of these techniques limits their penetration into lower-volume industries and research organisations. Electron beam lithography is versatile and can achieve very high resolutions (< 10 nm), but the cost is prohibitive for full-wafer patterning due to the long patterning time [6]. However, reaching such high resolutions is not necessary for all applications. Alternative, cheaper nanopatterning methods have become available in recent years. Nanoimprint lithography is a promising technology for large-area patterning of features below 10 nm [7]. Thanks to the mechanical pattern transfer, the resolution is not limited by an optical system. However, the main drawback is the lifetime of the 3D master mould. Another approach is to use interference lithography, in which coherent sources of electrons [8] or photons [9] interfere, creating a periodic array of intensity. Since maintaining control of the sources before they interfere with each other can be a challenge, a solution is to derive the multiple sources close to the region of interference through diffraction from a periodic mask.
Displacement Talbot lithography is a recently developed technique for patterning large areas with sub-micron periodic features [10]. It is an extension of Talbot lithography, which uses the three-dimensional interference pattern created when monochromatic light diffracts through a periodic mask. Coherent light passing through a mask patterned with a periodic structure creates different diffraction orders that subsequently interfere, causing a self-imaging of the mask. This phenomenon is well known and called the Talbot effect [11]. Characteristic of the interference pattern is its repeating nature along the axis perpendicular to the mask, with a spatial period called the Talbot length. By itself, this interference pattern is difficult to use for photolithography directly due to the size and complexity of the pattern. However, introducing a displacement during a photolithography exposure along the axis perpendicular to the mask integrates the optical field and solves these problems. This technique is called Displacement Talbot Lithography (DTL) and has the advantage of a theoretically infinite depth of field [10]. The main disadvantages of this technique are: 1) the low contrast between exposed and unexposed regions on the sample due to the mixing of the self-image and other secondary constructive interference features, and 2) the restriction to simple periodic features. Nevertheless, the illumination process will not be sensitive to surface roughness, imperfect parallelism between the mask and the sample, or the depth of field, all of which are important parameters in conventional photolithography. Recently, more complex periodic structures have also been obtained using DTL [12], as well as sub-wavelength patterning [13]. Applications of this new patterning approach include metamaterials [3], III-V semiconductor photonic materials in the form of core-shell structures [14] and nanotube cavities [15], neuronal network formation [16] and nanoimprint master creation [17]. The minimum feature size that can be achieved is dependent on the source wavelength, with 125-300 nm features having been achieved for a near-UV laser source [18] and 75 nm features for a deep-UV source [19]. However, for any particular source wavelength, the resolution limit of this new lithography method has not yet been reported in the literature. Having a better understanding of such a limit will permit a greater use of this fast, cheap and flexible lithography technique. In this paper, we analyse the resolution limit using the results from computer simulations of the intensity pattern seen by the sample, where the model has first been validated through a comparison of simulated results with existing experimental data. As a result, we determine the smallest feature size achievable as a function of the mask parameters, such as whether it is a phase or amplitude mask, the mask pitch and the feature diameter. Thanks to this comparison between experiment and modelling, the conditions required to optimize the resolution will be discussed.
Simulation of the aerial image

A MATLAB computer model was developed to simulate the operation of a DTL machine (PhableR 100, EULITHA) in which the light source is a 375 nm UV laser. The optical system generates a plane wave illuminating a conventional lithography mask at normal incidence, so that the light arriving at the mask is homogeneous, unpolarised, and in phase. The complex field distribution has been represented by a scalar field. This allows the calculation of the field distribution in any plane parallel to the mask by use of a free-space propagation method realised in Fourier space [20]. Both amplitude and phase masks are considered in the modelling, and any impact of the metal or phase-shift layer thickness on the electric field has been neglected. Consequently, contributions from the different mask regions propagating behind the mask are given amplitudes of 1 and 0 for a chrome amplitude mask, and 1 and −1 for a phase mask.

Experimentally, the integration of the three-dimensional light field is realised 100 µm away from the mask; therefore, Fraunhofer conditions can be applied in the subsequent modelling to calculate the three-dimensional light field behind the mask, known as the Talbot carpet. The Fourier transform of the generated mask is calculated, as well as the 2D spatial wave vectors in the plane of the mask. All 3D wave vectors can be determined by knowing, at each step of the integration, the vertical position of the photo-sensitive layer. The electric field amplitude at a specific coordinate is calculated by applying the inverse Fourier transform to the combination of the 3D wavevectors, the Fourier transform of the mask and the depth positions [9]. The electric field is then multiplied by its conjugate to obtain the surface light intensity.

Figure 1 shows the results for the modelling of an amplitude grating with a periodicity of 600 nm and 200 nm openings. Figure 1(a) shows a cross-section in the x-z plane and illustrates the classical Talbot effect. By moving the sample through an integer number of Talbot lengths, $Z_T = \lambda \big/ \big(1 - \sqrt{1 - \lambda^2/p^2}\,\big)$ (where p is the pitch on the mask and λ is the laser wavelength), the intensity seen by the photoresist is integrated to remove the z-dependence, as shown in Fig. 1(b). To ensure accurate integration within the simulation, a vertical step resolution of 1/100 of the Talbot length is sufficient for a small-pitch mask (less than twice the laser wavelength), but this is insufficient for larger pitches due to the greater complexity of the Talbot carpet, since it is generated from a higher number of diffraction orders. Therefore, a step resolution of 1/200 has been chosen. A further increase in resolution only increases the computational time with no change in the simulated pattern. The resulting intensity in the x-y plane that will be transferred into the resist, called the aerial image, is shown in Fig. 1(c). The 600 nm pitch grating mask results in a 300 nm period grating on the sample.
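The propagation scheme described above can be condensed into a short sketch. The following is a minimal 1D illustration of the scalar angular-spectrum method and the DTL z-integration, not the authors' MATLAB code; the grid size, window width and mask builder are assumptions for illustration.

```python
import numpy as np

wavelength = 375e-9          # UV laser source
pitch = 600e-9               # 1D amplitude grating pitch (as in Fig. 1)
opening = 200e-9             # slit width
n = 1024                     # samples across the window (assumption)
width = 4 * pitch            # lateral extent of the simulation window

x = np.linspace(-width / 2, width / 2, n, endpoint=False)
# binary amplitude mask: 1 inside each slit, 0 under the chrome
mask = (np.abs((x + pitch / 2) % pitch - pitch / 2) < opening / 2).astype(float)

# Fourier transform of the mask and in-plane spatial frequencies
A = np.fft.fft(mask)
fx = np.fft.fftfreq(n, d=width / n)
kz2 = (1 / wavelength) ** 2 - fx ** 2
kz = 2 * np.pi * np.sqrt(np.maximum(kz2, 0))   # evanescent orders dropped

talbot = wavelength / (1 - np.sqrt(1 - (wavelength / pitch) ** 2))

def intensity_at(z):
    """Propagate the field a distance z behind the mask and return |E|^2."""
    field = np.fft.ifft(A * np.exp(1j * kz * z) * (kz2 > 0))
    return np.abs(field) ** 2

# DTL: integrate the Talbot carpet over one Talbot length, using the
# 1/200 vertical step resolution quoted in the text.
steps = 200
aerial = sum(intensity_at(i * talbot / steps) for i in range(steps)) / steps
print("aerial image period ~", pitch / 2, "m")  # 600 nm mask -> 300 nm pattern
```

Sampling `aerial` confirms the pitch-halving behaviour described for Fig. 1(c); a full 2D model replaces the 1D FFT with 2D transforms over the hexagonal or square mask.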
Definition of parameters on the aerial image

The computer simulations allow the aerial image to be determined for any mask, which is important since the results are not intuitive. As a function of the pitch, the filling factor and the nature of the mask, the aerial image evolves dramatically. This study has focused on two types of mask: square and hexagonal arrangements of circular features. Since the results from these two structures are quite similar, we primarily discuss the results for the hexagonal patterns, with the results from the square masks presented in the appendices. Figure 2 shows the simulation of the aerial image for a 1.5 µm pitch hexagonal amplitude mask with 800 nm openings. A feature of this mask pattern is that it creates an aerial image with the same hexagonal arrangement. The aerial image is then transferred into the resist and, in order to understand the size of the resist features that can be created, we define three figures of merit for comparing the aerial images from different masks: 1) the theoretical width of the pattern achievable, 2) the relative intensity of the background, and 3) the relative intensity of the maximum of the unwanted features within the images, which we call the secondary maxima. The latter two figures of merit are relatively easy to define by referring to the aerial image in Fig. 2. The theoretical width is defined with respect to an intensity threshold at which all resist is removed by the CD26 developer; experiments using hexagonal amplitude masks on a bottom antireflective layer were used to fix a suitable threshold in the model and to validate the minimum feature sizes achievable (Fig. 3; see the experimental data section for further details).

Impact of mask design on figures of merit

Using the computer model that has been developed, the influence of different parameters on the resolution has been analysed. Amplitude and phase masks have both been modelled for a range of pitches and mask feature sizes. Whilst amplitude masks are commonly used due to their cost and ease of manufacture, phase masks can improve the resolution of classical lithography. Therefore, it is important to understand whether the same is borne out for DTL.
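To make the three figures of merit concrete, the sketch below computes them from a 1D cut through a normalized aerial image. The definitions here (a 50% print threshold, the main peak taken as the contiguous region above threshold around the global maximum) are assumptions for illustration; the paper's exact threshold comes from the resist/developer calibration.

```python
import numpy as np

def figures_of_merit(aerial, dx, threshold=0.5):
    """Illustrative figures of merit for a 1D cut of a normalized aerial
    image: feature width at the print threshold, relative background,
    and relative intensity of the largest secondary maximum."""
    I = aerial / aerial.max()
    above = I >= threshold
    # 1) theoretical width: extent of the contiguous above-threshold
    #    region containing the global maximum
    i0 = int(np.argmax(I))
    left, right = i0, i0
    while left > 0 and above[left - 1]:
        left -= 1
    while right < len(I) - 1 and above[right + 1]:
        right += 1
    width = (right - left + 1) * dx
    # 2) relative background: lowest intensity anywhere, relative to peak
    background = I.min()
    # 3) relative secondary maximum: highest intensity outside the feature
    outside = I.copy()
    outside[left:right + 1] = 0
    secondary = outside.max()
    return width, background, secondary
```

A low `background` and low `secondary` value widen the usable dose window, which is why these two quantities are tracked alongside the achievable width.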
Amplitude mask

Simulations of the aerial images were generated for mask pitches between 0.5 and 3.2 µm and mask opening diameters from 20 nm up to 90% of the pitch, with a resolution step of 20 nm, in order to allow a good understanding of the impact of the different parameters on the aerial image. The integration was performed over four Talbot lengths to prevent artefacts arising from the grid definition. From the aerial images, the theoretical width, the relative intensity of the background and the maximum of the secondary patterns were determined, and these have been plotted in Figs. 4(a)-4(c), respectively, as a function of the pitch and feature diameter of the mask. Figure 4(a) shows that the smallest features occur when the mask openings are smaller than the wavelength. With these feature sizes, the diffraction can be considered to arise from an array of point sources with hemispherical wavefronts. However, the transmission is highly reduced for subwavelength holes, where it falls steeply with the opening diameter d (scaling as $(d/\lambda)^4$ for an ideal subwavelength aperture). A second valley giving small features also appears for a filling factor of 50%, though the theoretical width achievable is not quite as small. Sharp vertical and horizontal lines are also apparent along this valley, where the minimum feature size abruptly decreases with increasing pitch. These appear periodically at multiples of the laser wavelength, as a result of the incremental addition of further diffraction orders as the pitch is increased. The plot of the relative intensity of the background in Fig. 4(b) shows that the background is lower for smaller mask openings. Thus, the highest-resolution features can simultaneously be achieved with a low background.

In contrast to the previous two figures of merit, the plot of the relative intensity of the maximum of the secondary patterns (Fig. 4(c)) is not as simple. Low maxima of the secondary patterns (dark blue regions in Fig. 4(c)) can be obtained in two cases: for small mask openings and particular pitches, or for masks with a filling factor of 33%. This specific value can be explained by diffraction theory. Indeed, in the case of gratings, some diffraction orders disappear if the ratio of the grating width to the period is a rational fraction. The same phenomenon appears here with a 2D mask. By tuning the opening diameter, some diffraction orders can be cancelled, reducing the prominence of the secondary pattern. Furthermore, for small mask openings, all the diffraction orders are present and have more or less the same amplitude, which explains why the resolution is improved in Fig. 4(a).

Phase mask

The same analysis was carried out for phase masks, with the figures of merit plotted in Fig. 6. Larger openings lead to a lower relative intensity of the background (Fig. 6(b)), which can be explained by the ratio between the two phases and the destructive interference occurring between them. As in the case of the amplitude mask, there is also a valley of low secondary maxima for a filling factor of 33% (Fig. 6). However, because these valleys are not perfectly matched, the regions where both the background and the secondary maxima are low do not coincide consistently, and the resolution is compromised.

Discussion

In this study, with a source wavelength of 375 nm, feature sizes smaller than the laser wavelength are predicted to be achievable, and a developer with higher contrast is expected to improve the resolution further. Various assumptions have been made in the model: the metal layer on the mask, the dark erosion of the resist and the exact shape of the structures have been omitted, as has the rounding of the resist profile, so the results for very small mask openings should be treated with caution. Nevertheless, the modelling results agree with previous work [9-11]. The modelling of the Talbot effect predicted experimental results qualitatively; for high illumination doses for certain hexagonal masks, the secondary patterns were seen to merge to create ring structures after development. The shape of the ring obtained experimentally was found to correspond well with the modelled shape [15] and could be used to create resonant cavity modes in axial InGaN/GaN nanotube microcavities.

One of the benefits of DTL is being able to vary the exposure dose to create a large range of feature diameters, a phenomenon that arises from the shape of the intensity peaks in the aerial image. For example, the 1.5 μm hexagonal amplitude mask with 800 nm openings allows a range of feature sizes in resist from 250 to 650 nm, demonstrating the flexibility of DTL. For this purpose, having a low background and a less pronounced secondary pattern are more important factors than the resolution, and these are better satisfied with phase masks.

By comparing Figs. 4-6, it can be seen that for mask pitches smaller than two wavelengths, both phase and amplitude masks achieve the same resolution. In this case, a phase mask would be preferred because of the higher transmission through the mask, leading to a shorter illumination time. For higher pitches, an amplitude mask would offer the smallest features, but at the expense of a long exposure time.

Conclusion

Displacement Talbot lithography is a new lithography method that can pattern periodic features across a large area quickly and cheaply. The conditions to reach the smallest features with DTL are not trivial and in certain situations conflict with the optimisation of other figures of merit that influence the lithography process. For a specific i-line resist, the impact of the pitch and the size of the openings on the mask has been found and analysed, not only on the minimum feature size achievable, but also on the sensitivity of the resolution, the background, and the unwanted secondary pattern intensities. This study shows how sub-100 nm features can be achieved with DTL across large areas with conventional i-line resists and illumination. This limit is substantially below that of other classical photolithography methods using the same illumination wavelength.
Experimental data

Silicon wafers were coated with a bottom antireflective layer (Wide 30, Brewer Science) prior to a 240 nm positive resist (Ultra i-123, diluted from the as-supplied 800 nm formulation). The wafers were exposed via DTL using a hexagonal amplitude mask with either a 1.5 μm pitch and 800 nm mask openings or a 1 μm pitch and 550 nm openings. The Talbot lengths associated with these masks are 8.80 μm and 3.80 μm, respectively (see the hexagonal mask modelling section below). A Gaussian velocity integration was applied and a travel distance of 8 Talbot lengths was chosen to ensure homogeneous integration over several Talbot motifs. Multiple series of illuminations were carried out to ensure a high reproducibility of the process: 70 to 100 mJ/cm² for the 1.5 μm mask, and 130 to 180 mJ/cm² for the 1 μm mask, in 10 mJ/cm² steps. The wafers were immersed for 210 s in the developer (MF CD26). This development time was chosen to reach a 10% contrast in the resist profile, calculated using the Dill model [24-26].

The statistical data on the feature diameters in Fig. 3 were determined from image analysis of 3 SEM pictures per sample. The pictures were consistently taken around the centre of the wafer, and the magnification was chosen to observe around a hundred features in one picture.

Hexagonal mask modelling

The non-primitive unit cell for a hexagonal organisation has a specific characteristic: one cell edge is √3 times larger than the other. Because √3 is an irrational number and MATLAB matrices require integer dimensions, a rounding error occurs during the definition of the matrix size. This error can be mitigated by increasing the matrix resolution, balanced against the computational time.

Square and phase mask modelling

The use of square masks allows a smaller pitch, and these were simulated in the same way as the hexagonal masks, as were the corresponding phase masks. In Fig. 9(d), the intensity ratio between neighbouring patterns shows values similar to those of the amplitude mask, with some parts being almost the same. In order to follow a valley corresponding to a specific filling factor and obtain a good resolution, specific pitches and opening diameters should be targeted.

Discussion

For square masks, the smallest features are obtained with the amplitude mask, but the secondary pattern intensity is significant and could even be critical. For square phase masks with large openings, the secondary patterns do not reach intensities as high as in the hexagonal case, while reasonably small features remain achievable. So, for the patterning of a square array, a phase mask is the better choice for high resolution with limited parasitic effects.

Fig. 1. Modelling of a 600 nm grating amplitude mask with 200 nm openings. Normalized figures of a) the Talbot carpet, b) the Talbot carpet after integration over the Talbot length, and c) the aerial image.

Fig. 2. Simulated aerial image for the 1.5 µm pitch hexagonal amplitude mask with 800 nm openings, with a cross-section shown in Fig. 2(b).

Fig. 3. Feature diameters obtained with the 1.5 μm and 1 μm hexagonal amplitude masks as a function of illumination dose, compared with the simulated aerial images.
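As a numerical cross-check of the Talbot lengths quoted in the experimental data section, the non-paraxial Talbot formula given earlier reproduces the values 8.80 μm and 3.80 μm when applied to the hexagonal row spacing; taking the effective period as $d=\sqrt{3}\,p/2$ is our assumption, motivated by the geometry of the hexagonal unit cell.

```latex
\[
  Z_T \;=\; \frac{\lambda}{\,1-\sqrt{1-\lambda^{2}/d^{2}}\,},
  \qquad d=\frac{\sqrt{3}}{2}\,p .
\]
\[
  p=1.5~\mu\mathrm{m}:\;\; d=1.299~\mu\mathrm{m},\;\;
  Z_T=\frac{0.375}{1-\sqrt{1-(0.375/1.299)^{2}}}\approx 8.80~\mu\mathrm{m};
\]
\[
  p=1.0~\mu\mathrm{m}:\;\; d=0.866~\mu\mathrm{m},\;\;
  Z_T\approx 3.80~\mu\mathrm{m}.
\]
```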
Fig. 9. Theoretical a) width of the pattern achievable, b) relative intensity of the background, c) relative intensity of the maximum of the secondary patterns, and d) ratio of the maximum intensity between neighbouring features, as a function of the pitch and the hole diameter, for square amplitude and square phase masks. Conclusions similar to the hexagonal case can be drawn when using a square phase mask; small openings give a higher resolution.
4,751.6
2019-02-20T00:00:00.000
[ "Physics" ]
Transcriptome Survey of a Marine Food Fish: Asian Seabass (Lates calcarifer)

The Asian seabass (or barramundi; Lates calcarifer) is a marine teleost and a popular food fish in Southeast Asia and Australia. To date, comprehensive genome and transcriptome sequence information has not been available for this species in public repositories. Here, we report a comprehensive de novo transcriptome assembly of the Asian seabass. These data will be useful for the development of molecular tools for use in aquaculture of Asian seabass, as well as a resource for genome annotation. The transcriptome was obtained from sequences generated from organs of multiple individuals using three different next-generation sequencing platforms (454-FLX Titanium, SOLiD 3+, and paired-end Illumina HiSeq 2000). The assembled transcriptome contains >80% of the expected protein-coding loci, with 58% of these represented by a predicted full-length cDNA sequence when compared to the available Nile tilapia RefSeq dataset. Detailed descriptions of the various steps involved in sequencing and assembling a transcriptome are provided to serve as a helpful guide for transcriptome projects involving de novo assembly of short sequence reads for non-model teleosts or any species of interest.

Introduction

The Asian seabass (or barramundi; Lates calcarifer) is a marine teleost from the Latidae family. Apart from being a popular food fish in the Australian and Southeast Asian region [1], the species has several characteristics that make it interesting for scientific research, namely: (i) it is able to adapt and survive in a range of salinities [2]; (ii) it is catadromous, born in brackish water, moving to fresh water to spend the juvenile stages there and migrating back downstream to brackish water or seawater to breed [2]; and (iii) it is a protandrous sequential hermaphrodite, changing sex from male to female between the ages of 3 and 8 years [3,4].

For more than a decade, our group has been involved in the breeding and selection program for the Asian seabass. One particular focus has been studying the genetic information encoded by the protein-coding loci, which is vital for the development of molecular tools required for gene expression studies and genome annotation. The genome of the Asian seabass is estimated to be ~700 Mb and is currently being assembled and annotated [5], while the mitochondrial genome has previously been completely sequenced [6]. Transcriptome information for the Asian seabass to date is mainly represented by ~22,000 EST sequences in GenBank, along with a limited number of organ-specific transcriptome studies and repetitive sequence analyses [7-9].

Comprehensive sequence characterization of the transcriptome is an essential first step towards identifying protein-coding/regulatory regions of the genome and developing tools for gene expression studies. To this end, next-generation sequencing (NGS) technologies have enabled researchers to generate vast amounts of sequence data at ever-decreasing costs. However, there are several confounding variables pertaining to sample preparation and library construction that need to be optimized before sequencing is performed. Moreover, the subsequent bioinformatic analyses following the data generation phase may pose a challenge for many small laboratories lacking expertise and/or computational resources.
Here, we describe the multi-platform sequencing and de novo assembly of the Asian seabass transcriptome. A multi-tiered approach was used wherein over one billion reads from three NGS sequencing platforms were assembled in a step-wise manner. The assembled transcriptome comprised more than 200,000 contigs, about half of which could subsequently be annotated using BLAST searches. In addition to analyses of pathways and identification of organ-specific transcripts, the transcriptome was also inventoried for microsatellite sequences. Limited sequence information is available thus far for the Asian seabass. The present report will serve as a useful resource for future studies on this species, since it provides information on the expressed regions of the genome. In addition, we have also summarized our observations from comparing the various intermediate assemblies, to serve as useful indicators for small non-genomics laboratories embarking on sequencing and assembling a transcriptome.

Sample and Library Preparation, Sequencing and Quality Control

At the start of the project, transcriptome information was first generated using the 454-FLX Titanium (Roche Diagnostics, Branford, CT, USA) and SOLiD 3+ (Life Technologies, Inc., Carlsbad, CA, USA) next-generation sequencing (NGS) platforms. Subsequently, the dataset was augmented by additional sequences in the form of paired-end reads generated on the Illumina HiSeq 2000 platform (Illumina Inc., San Diego, CA, USA) to provide sequence depth in order to improve the assembly. The 454 and SOLiD sequence data (incorporated and reassembled here) have been published and released previously [8]. The assembly from the initial round of Illumina HiSeq sequencing (HiSeq Round 1, HR1) was also utilized as part of a survey of repetitive elements in Asian seabass [7].
For the second round of Illumina HiSeq sequencing (HiSeq Round 2, HR2), total RNAs were extracted using the RNeasy Mini Kit (Qiagen, Hilden, Germany) from the following organs of multiple Asian seabass individuals: adult brain (male and female); transforming gonads; testis; ovary; spleen (vaccinated and unvaccinated); head kidney (vaccinated and unvaccinated); intestine (from fish fed with various feeds); liver (from fish fed with various feeds); brain (from fish fed with various feeds); and intestine (with probiotics treatment). Total RNAs were digested with DNase to remove trace levels of DNA contamination. mRNAs were enriched by depletion of the ribosomal RNAs. The resulting mRNA samples underwent strand-specific cDNA library synthesis [17], followed by ligation of an adaptor suitable for Illumina sequencing that also incorporated a sample-specific barcode to mark the individual samples. Barcoded cDNA libraries were then pooled for efficient multiplex sequencing on the Illumina HiSeq 2000 platform to generate 2×100 bp paired-end reads with a well-defined mate-pair distance of up to 700 bp (NCBI SRA BioProject Accession Number: SRP053272). Quality trimming and filtering were performed using the fastq_quality_trimmer (with parameters: -t 25 -l 30) and fastq_quality_filter (with parameters: -q 20 -p 30) scripts available from the FASTX-Toolkit (http://hannonlab.cshl.edu/fastx_toolkit/). For this dataset, an additional filtering step was also performed to remove perfect duplicate reads (100% identical sequences) with counts >100, using PRINSEQ (with parameters: -min_len 30 -derep 1 -derep_min 101 -trim_tail_left 5 -trim_tail_right 5 -trim_ns_left 1 -trim_ns_right 1), in order to reduce the data size and facilitate assembly [18].

Filtering of Contaminating Reads

Following trimming of low-quality reads, the sequences were filtered for rRNA reads as well as those originating from microbial contaminants, which could have come from the environment of the fish or been introduced during sample collection. To perform this filtering, a database was created consisting of the Escherichia coli str. K-12 genome sequence (RefSeq accession number: NC_000913.2), 4,174 viral genome sequences from NCBI RefSeq, and 49 rRNA sequences of Asian seabass or zebrafish origin obtained from NCBI and SILVA [19]. All the reads and sequences incorporated in the assembly were first compared with this database, by mapping short reads using Bowtie with default parameters and by aligning long reads using BLAST [20]. Sequences identified to be of rRNA or microbial origin were removed only if they did not find a match in the zebrafish (Danio rerio) and Nile tilapia (Oreochromis niloticus) reference mRNA sequences (retrieved from UCSC and NCBI).
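The QC chain above can be expressed as a small driver script. This is a sketch wrapping the named tools with the exact parameters quoted in the text; the input and output file names are placeholders, and `prinseq-lite.pl` is the standard PRINSEQ lite script name (an assumption about the local installation).

```python
import subprocess

# 1) Quality trimming (FASTX-Toolkit; add -Q33 for Sanger-encoded qualities)
subprocess.run(
    "fastq_quality_trimmer -t 25 -l 30 -i reads.fastq -o trimmed.fastq",
    shell=True, check=True)

# 2) Quality filtering with the quoted thresholds
subprocess.run(
    "fastq_quality_filter -q 20 -p 30 -i trimmed.fastq -o filtered.fastq",
    shell=True, check=True)

# 3) PRINSEQ: drop reads <30 bp, trim tails/Ns, and remove exact duplicates
#    (-derep 1) seen more than 100 times (-derep_min 101)
subprocess.run(
    "prinseq-lite.pl -fastq filtered.fastq -min_len 30 "
    "-derep 1 -derep_min 101 -trim_tail_left 5 -trim_tail_right 5 "
    "-trim_ns_left 1 -trim_ns_right 1 -out_good qc_passed",
    shell=True, check=True)
```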
Sequence Assembly, Mapping and Redundancy Removal

The assembly of the filtered reads from the various platforms was performed in a step-wise manner, as shown in the flowchart in Figure 1. The 454 and SOLiD data were co-assembled using the "De novo Assembly" tool in CLC Genomics Workbench (version 5.1; 80% length fraction, 80% similarity fraction, with default insertion, deletion and mismatch costs), and then merged with ~22,000 Asian seabass EST sequences from NCBI GenBank (download date: 26 July 2012) as well as the published Asian seabass intestine assembly [9], using CAP3 (with default parameters). The resulting contigs were then further merged, using CAP3, with an earlier HiSeq-based version of the transcriptome [6] to produce the multiplatform (MP) assembly. The HiSeq-derived data were from strand-specific cDNA libraries, which marked the orientation of the reads with respect to the mRNA, greatly reducing the complexity of the assembly process. The data from HR2, comprising ten libraries, were assembled independently using Trinity (version 10/11/2013), and the resulting assemblies were combined and subjected to redundancy removal using cd-hit-est (CD-HIT version 4.6.1 with parameters -aS 0.98, -c 0.98) to produce a non-redundant dataset. Finally, the HR2 assembly was merged with the polished MP assembly (see Supplementary File 1 for polishing steps) to generate the "final multiplatform" (FMP) transcriptome sequence dataset [21].

GC Content and Microsatellites

The GC content of the Asian seabass transcriptome was calculated using BEDTools utilities with a 35 bp sliding window and further processed for plotting using in-house scripts [22]. The same analysis was performed on the RefSeq mRNA datasets of Japanese medaka (Oryzias latipes), Nile tilapia, zebrafish and zebra mbuna (Maylandia zebra) for comparison.

An inventory of the microsatellites in the transcriptome was obtained using Censor version 4.2.28 with the following parameters: censor.ncbi <filename> -nofilter -show_simple -bprg blastn -mode norm. Mononucleotide repeats were ignored since they would be difficult to distinguish from polyadenylation and sequencing errors.
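As a pure-Python stand-in for the BEDTools-based sliding-window GC calculation described above, the following sketch computes per-window GC fractions for one contig; pooling these values across contigs gives the distribution that was compared between species. The window of 35 bp matches the text; the step size is an assumption.

```python
def gc_windows(seq, window=35, step=1):
    """GC fraction in a sliding window along one contig (illustrative
    stand-in for the BEDTools-based calculation described above)."""
    seq = seq.upper()
    values = []
    for start in range(0, len(seq) - window + 1, step):
        chunk = seq[start:start + window]
        gc = sum(base in "GC" for base in chunk)
        values.append(gc / window)
    return values

# Example with a short made-up contig sequence:
print(gc_windows("ATGCGCGTATATGCCGGCTTAACGGTACGATCGATCGGC")[:5])
```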
Generating a Refined Nile Tilapia Sequence Dataset as a Reference for Asian Seabass Transcriptome Annotation

A sequence dataset from a closely related fish species, the Nile tilapia, was used to annotate the assembled Asian seabass transcriptome and estimate the completeness of the assembly. The Nile tilapia sequences were obtained from the NCBI RefSeq Protein database using the following search query: "Oreochromis niloticus"[PORGN: _txid8128] AND srcdb refseq [PROP], which resulted in a dataset of 46,501 sequences (downloaded in October 2013). However, this dataset contained redundant sequences and hence had to be filtered before it was utilized. Removal of exact sequence duplicates decreased the dataset to 39,796 sequences, and subsequently retaining only the longest sequence for entries with identical descriptions produced a refined Nile tilapia protein reference dataset of 26,675 sequences [21].

Sequence Annotation, Estimation of Completeness and Full-Length Sequence Prediction

Sequence annotation was performed using a BLASTX search of the assembled contigs against the Nile tilapia RefSeq protein dataset described above, using an e-value cutoff of 1e-6 and retaining only the top hit. Only hits with an alignment length of at least 65 amino acids were selected, and the number of unique reference protein sequences represented by transcripts in our transcriptome was used to estimate the completeness of the assembly.

To provide annotation for the contigs in the final assembly that did not have a BLASTX match to the Nile tilapia protein sequence dataset, the search database was extended and the following BLAST searches were performed: (i) BLASTN against the Nile tilapia RefSeq mRNA sequences; (ii) BLASTX against a database of RefSeq protein sequences of twelve additional ray-finned fishes, namely zebrafish, rainbow trout, Burton's mouthbrooder (Haplochromis burtoni), Japanese medaka (Oryzias latipes), channel catfish (Ictalurus punctatus), Pundamilia nyererei, spotted gar (Lepisosteus oculatus), Atlantic salmon, zebra mbuna, pufferfish (Takifugu rubripes), lyretail cichlid (Neolamprologus brichardi), and southern platyfish (Xiphophorus maculatus); and (iii) BLASTN against a database of RefSeq mRNA sequences from the same 12 species listed above.

Augustus (version 2.5.5 with default parameters) was used to predict ORFs in the contigs that remained unannotated after the BLAST searches described above [23]. We then aligned the remaining unannotated contigs without predicted ORFs to a rough draft of the Asian seabass genome [5], to verify whether they were bona fide Asian seabass sequences.

Prediction of full-length cDNAs in the final assembly was performed using Full-LengtherNEXT version 0.08 with the default parameters and the "vertebrates" taxon group as the reference database [24].

Sequence Conservation with Other Vertebrates

The final assembly was searched using BLASTX against seven vertebrate RefSeq protein sequence datasets, namely those from Nile tilapia, Japanese medaka, zebrafish, pufferfish, human, mouse and chicken, to evaluate the conservation between the Asian seabass and these other species. The analysis was also extended to include the predicted protein sequence dataset of the recently published European seabass [25]. The BLASTX alignment length cutoff was set at 65 amino acids.
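The top-hit filtering and completeness estimate described above can be sketched as a parser over standard tabular BLASTX output (`-outfmt 6`); the file name is a placeholder, and choosing the best hit by bitscore is an assumption about the tie-breaking rule.

```python
import csv

def annotate(blastx_tsv, min_aln_len=65):
    """Keep the top BLASTX hit per contig with alignment length >= 65 aa,
    then collect the unique reference proteins hit (the completeness
    numerator described in the text)."""
    best = {}
    with open(blastx_tsv) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            query, subject = row[0], row[1]
            aln_len, bitscore = int(row[3]), float(row[11])
            if aln_len < min_aln_len:
                continue
            if query not in best or bitscore > best[query][1]:
                best[query] = (subject, bitscore)
    hit_proteins = {subj for subj, _ in best.values()}
    return best, hit_proteins

# Usage (hypothetical file): completeness against the refined 26,675-sequence
# Nile tilapia reference dataset.
# best, hits = annotate("contigs_vs_tilapia.tsv")
# print(len(hits) / 26675)
```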
Pathway Distribution and Analysis of Organ-Specific Sequences

Pathway analysis was performed on the annotated subset of the final assembly using the KEGG Automatic Annotation Server (KAAS), with the parameters set to the eukaryotic representative set and the "bi-directional best hit" assignment method [26]. The output data were parsed using in-house scripts to obtain the percentage of KEGG pathway genes represented in our transcriptome [27]. For comparison, the KEGG KAAS analysis was performed on the European seabass protein dataset and the Nile tilapia RefSeq protein dataset, while the KEGG pathway genes represented in the common carp and crucian carp transcriptomes were incorporated from published data [11,28].

Since the second round of Illumina HiSeq sequencing (HR2) was done on individual samples from different organs, a pair-wise comparison of the BLASTX results was performed to identify the protein-coding loci that were represented in different subsets of the organ-derived transcriptomes. As a measure of sequence complexity, the cumulative contribution of each organ to the combined transcriptome was also determined.

Evaluation of the Asian Seabass Transcriptome as a Reference for RNA-seq Experiments

An evaluation of the usefulness of the Asian seabass transcriptome as a reference for differential expression was performed by studying the differential expression between the testis and ovary RNA-seq datasets that were generated and used for the assembly of the transcriptome. The reference dataset was created from the Asian seabass full transcriptome by selecting the sequences with the greatest percentage-aligned length to the Nile tilapia RefSeq protein sequences (26,675 sequences). This resulted in a dataset of 22,022 Asian seabass sequences [21]. The testis and ovary reads were mapped against the reference dataset using the "Map reads to reference" tool in CLC Genomics Workbench (version 8.0; 95% length fraction, 95% similarity fraction, with default insertion, deletion and mismatch costs). The BAM files were then imported into Partek® Genomics Suite® software (version 6.6, 2014) for differential gene expression analysis using the RNA-seq analysis workflow. As we did not have technical replicates in this RNA-seq experiment, the algorithm provides p-values using a chi-squared test under the assumption that the transcripts are evenly distributed across all samples. The p-values were then adjusted using the Bonferroni method. A list of differentially expressed genes between the testis and ovary was obtained with corrected p-value ≤0.05 and fold-change ≥5. The Gene Ontology terms of these differentially expressed genes were obtained using the Ensembl BioMart via the Nile tilapia RefSeq protein accession numbers [29].

Sequencing, Quality Control and Filtering of Reads

The Asian seabass transcriptome sequence data were generated using three NGS platforms, namely 454-FLX Titanium (~1 million reads), SOLiD 3+ (~38 million 50 bp reads) and paired-end Illumina HiSeq 2000 (~1 billion reads from the two independent rounds of sequencing; Table 1). Adaptor and quality trimming, followed by removal of rRNA and contaminating microbial reads, resulted in ~0.9 million, ~18.1 million, and ~993 million filtered reads from the 454, SOLiD and HiSeq platforms, respectively (Table 1, Supplementary File 1, Supplementary Figure S1).
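The replicate-free test described above can be illustrated as follows. This is a sketch of the idea (a 1-df chi-squared comparing a transcript's testis/ovary counts against the split expected from the library sizes, with Bonferroni correction and the quoted thresholds), not the Partek implementation; the counts and library sizes in the usage line are hypothetical.

```python
from scipy.stats import chi2

def de_chi2(counts_t, counts_o, lib_t, lib_o, n_tests, min_fc=5.0, alpha=0.05):
    """Replicate-free differential expression sketch: chi-squared test of
    observed testis/ovary counts vs the library-size-proportional split,
    Bonferroni-adjusted, with a fold-change filter."""
    total = counts_t + counts_o
    exp_t = total * lib_t / (lib_t + lib_o)   # expected counts under H0
    exp_o = total - exp_t
    stat = (counts_t - exp_t) ** 2 / exp_t + (counts_o - exp_o) ** 2 / exp_o
    p = chi2.sf(stat, df=1)
    p_adj = min(1.0, p * n_tests)             # Bonferroni correction
    # fold change on library-size-normalised counts (pseudocount avoids /0)
    fc = ((counts_t + 0.5) / lib_t) / ((counts_o + 0.5) / lib_o)
    significant = p_adj <= alpha and max(fc, 1 / fc) >= min_fc
    return p_adj, fc, significant

# Hypothetical transcript: 900 testis reads vs 40 ovary reads,
# tested against the 22,022-sequence reference (Bonferroni denominator).
print(de_chi2(counts_t=900, counts_o=40, lib_t=2e7, lib_o=2.2e7, n_tests=22022))
```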
Assembly of the Asian Seabass Transcriptome

The sequence data from the various platforms were assembled in a multi-step manner, as shown in Figure 1. The assembly using the combined 454 and SOLiD data generated 53,862 contigs, which were then merged with 22,322 NCBI EST sequences and 81,479 sequences from the previously published 454-based Asian seabass intestine assembly [9], resulting in 157,457 contigs. A co-assembly of these contigs with the previously reported HiSeq Round 1 (HR1) assembly [7] resulted in 362,369 contigs. Since we observed the presence of contigs that were identical except for short terminal overhangs, we performed sequence clustering using cd-hit-est to retain only the longest representative of highly similar contigs, resulting in a polished "multiplatform" (MP) dataset of 196,871 contigs (Figure 1, Table 2). The reads from HiSeq Round 2 (HR2) were assembled in ten parts (one assembly per sample), generating a total of 567,237 contigs. Clustering of these contigs with cd-hit-est to collapse highly similar and redundant contigs resulted in a dataset of 196,399 sequences (Figure 1). Finally, these contigs were merged with the MP assembly to produce the "final multiplatform" (FMP) Asian seabass transcriptome assembly of 267,616 contigs with an average sequence length of 979 bp (Table 3).

GC Content and Microsatellite Distribution

The GC content of the Asian seabass transcriptome was ~46%, and the GC-content distribution was similar to that of several related fish species (Table 3, Supplementary Figure S2). A total of 40,330 microsatellites (or simple sequence repeats, SSRs) were identified. The most common repeat unit types were dinucleotides (43.4%), followed by trinucleotides (30.6%). The most common motif was AC (30.6%), followed by GGA, AG and GCA (13.1%, 9.7% and 5.9%, respectively; Supplementary Tables S1 and S2).

Sequence Annotation of Transcriptome Contigs and Prediction of Full-Length cDNAs

The filtered Nile tilapia protein sequence dataset obtained from NCBI RefSeq (26,675 sequences) was used as a benchmark to evaluate the assembled Asian seabass transcriptome. Approximately 37% of the assembled contigs showed a match to this protein dataset using BLASTX (alignment length ≥65 amino acids; Supplementary Table S3). An additional 6% of the transcriptome contigs could be annotated by subsequent BLASTN/BLASTX searches against the Nile tilapia NCBI RefSeq mRNA dataset and datasets from 12 other fish species (Figure 2a). More than 99% of the contigs longer than 5 kb were annotated, while the majority of unannotated contigs were short (≤1 kb; Figure 2b).

An inspection of the remaining unannotated transcriptome contigs identified 1% with predicted ORFs, while the remaining contigs (56%) could be mapped to the Asian seabass draft genome sequence but could not be assigned ORFs with confidence (Figure 2a).
Of the 26,675 Nile tilapia reference protein sequences, 22,021 (83%) were represented by one or more contigs in our assembled transcriptome (Figure 3). Notably, 37,360 full-length cDNAs (FL-cDNAs) were predicted from our assembled Asian seabass transcriptome, corresponding to 15,459 Nile tilapia reference protein sequences (Figure 3).

Sequence Conservation with Other Vertebrate Species

To assess the degree of conservation between the Asian seabass and other teleost and vertebrate species, a BLASTX analysis of our final assembly was performed against five teleosts, namely European seabass, Nile tilapia, Japanese medaka, zebrafish, and pufferfish, as well as human, mouse and chicken. Of the 102,390 contigs (38% of the transcriptome) that had a match to at least one of the eight species, the European seabass had the largest number of BLAST-search matches with the Asian seabass transcriptome (Figure 4). Further, 82,970 (81%) of these contigs found BLASTX matches in all five fish species analyzed, and 76,780 (75%) found BLASTX matches in all eight vertebrate species (Figure 4).

Pathway Distribution of Transcriptome Contigs

Pathway analysis was performed on the annotated subset of the final assembly using the KEGG Automatic Annotation Server (KAAS). A total of 6,500 out of 15,682 (41%) KEGG pathway genes were represented by transcripts in the Asian seabass transcriptome (Supplementary Table S4). The following three KEGG pathway categories were well represented in our transcriptome: genetic information processing, cellular processes and organismal systems (75%, 71%, and 60%, respectively), while the metabolism and environmental information processing pathways had a lower representation (~25% each; Supplementary Table S4). A similar trend was observed when the results were compared to those of the European seabass protein dataset, the Nile tilapia RefSeq dataset and the transcriptome datasets of common carp and crucian carp (Figure 5).

Figure 4. Conservation of Asian seabass transcriptome sequences across eight vertebrate species, including five teleost fish species. Based on a BLASTX search of the final transcriptome assembly against the European seabass predicted protein dataset (Dicentrarchus labrax) and the RefSeq protein datasets of the Nile tilapia (Oreochromis niloticus), Japanese medaka (Oryzias latipes), zebrafish (Danio rerio), pufferfish (Takifugu rubripes), human, mouse and chicken, ~38% of the contigs were found to have a hit with at least one species, of which ~81% were found in all five teleosts and ~75% were found in all eight vertebrates.
Analysis of Organ-Specific Transcripts

Making use of the individual organ assemblies from HR2 (Figure 1), we identified protein-coding sequences that were common to all organs as well as those unique to specific organs, based on the BLASTX results against the Nile tilapia protein reference sequences. At the limit of detection, a total of ~15% of the reference sequences were represented by transcripts in a single organ, while only ~12% of them were represented by transcripts in all the organs studied, indicating the importance of including multiple organs to obtain a comprehensive transcriptome (Figure 6a, Supplementary Table S5). The brain contributed ~60% of the expected protein-coding sequences and had the largest percentage (6.6%) of unique reference protein sequences represented by transcripts (Figure 6a). The percentage of detected sequence homologs increased to ~71% when the testis was included. Subsequent inclusion of the other organs showed a stepwise improvement of <5% from each organ (Figure 6b).

Application of the Asian Seabass Transcriptome for RNA-seq Experiments

Following the mapping of RNA-seq reads from testis and ovary samples against a reference dataset of 22,022 sequences derived from the Asian seabass transcriptome, a total of 6,670 differentially expressed transcripts were obtained (Supplementary Table S6). These comprised 2,440 transcripts with lower and 4,230 transcripts with higher expression in the testis compared to the ovary. Gene Ontology terms of these differentially expressed genes included biological processes such as reproduction (GO:0000003), sexual reproduction (GO:0019953), the reproductive process (GO:0022414), the reproductive developmental process (GO:0003006), gamete generation (GO:0007276), fertilization (GO:0009566) and the reproductive cellular process (GO:0048610). Twenty-one of these transcripts have also been shown to have sex-related roles or differential expression in the gonads in previous studies (Table 4) [8,30-34]. This finding showed that the assembled transcriptome can be reliably used as a reference for differential expression analyses.

Figure 6. Analyses of the organ-specific assemblies. (a) Percentage of protein sequence matches that were detected in all organs versus individual organs. Percentages were calculated with respect to the number of detected protein sequence homologs in the HiSeq Round 2 (HR2) assembly (21,514). About 12% of the detected homologs were common to all organs, with the brain having the highest percentage of uniquely detected sequence homologs. (b) The cumulative contribution of each organ to the transcriptome. The x-axis shows the organs as they were successively added to the tabulation (from left to right). For instance, the "Testis" data point shows the percentage of Nile tilapia protein sequence homologs detected when the brain and testis data were considered, and so on. The order of successively adding the data from the individual organs was based on the order of their unique contributions as depicted in (a).
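The unique-contribution and cumulative-coverage tabulation behind Figure 6 can be sketched directly on sets of detected reference-protein IDs; the toy organ sets in the usage example are hypothetical.

```python
def organ_contributions(detected):
    """Given organ -> set of detected reference-protein IDs, report each
    organ's unique count and the cumulative coverage as organs are added
    in order of decreasing unique contribution (as in Figure 6)."""
    all_ids = set().union(*detected.values())
    unique = {o: len(s - set().union(*(detected[x] for x in detected if x != o)))
              for o, s in detected.items()}
    order = sorted(detected, key=unique.get, reverse=True)
    covered, cumulative = set(), []
    for organ in order:
        covered |= detected[organ]
        cumulative.append((organ, len(covered) / len(all_ids)))
    return unique, cumulative

# Toy usage with hypothetical protein-ID sets per organ:
demo = {"brain": {1, 2, 3, 4}, "testis": {3, 4, 5}, "liver": {4, 6}}
print(organ_contributions(demo))
```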
Discussion

The Asian seabass is an important food fish, with widespread aquaculture prevalence in the Indo-West Pacific region. Although a few selective breeding programs exist for this species, they are mainly constrained by the lack of sequence information, as well as by insufficient relatedness to fish species with available sequence data. Here, we present the sequencing and de novo assembly of the Asian seabass transcriptome, which was performed using data from three platforms in a multi-step manner.

The bulk of the sequences were generated on the Illumina HiSeq platform, with relatively smaller amounts of data from the 454 and SOLiD platforms as well as from NCBI ESTs and published data [9]. A multi-step approach was used to obtain a final assembly with 267,616 contigs, of which 43% could be annotated by BLASTX/BLASTN. The contigs that contained unannotated ORFs could either be novel Asian seabass transcripts or reflect the absence of sequence homologs in the public databases, while the remaining unannotated contigs, which aligned to the draft genome, could possibly represent retained introns or non-coding regions. It was also noteworthy that the two longest transcripts (~30 kb and ~31 kb; Table 2) showed sequence homology to the Nile tilapia titin and titin-like sequences. The human homolog of this gene is the largest known locus in the human genome, comprising 363 exons and encoding an exceptionally long mRNA transcript greater than 100 kb in length [35]. The assembled transcriptome will be useful for the ongoing annotation of the Asian seabass genome and will also serve as a source of information for numerous applications such as expression and comparative studies.

A large number of microsatellites were identified in the Asian seabass contigs, with the dinucleotide count being higher than that of the trinucleotides. This trend is similar to microsatellite inventories reported in other fish species [11,28,36]. However, the number of microsatellites identified in our study was considerably higher, possibly due to the larger amount of sequence data generated for the Asian seabass compared to the previously reported transcriptomes. These inventoried microsatellites will likely be a useful resource for the future development of markers to aid in marker-assisted selection and breeding.

The organ-specific sequence analyses demonstrated the importance of prioritizing and, more importantly, including organs with the highest contribution of unique transcripts (the brain in this study) in a transcriptome. It is equally important to incorporate as many organs and conditions as possible to achieve a comprehensive transcriptome, as seen in this study (~15% of the predicted protein-coding sequences appeared to be unique to a single organ).
Based on our effort to sequence and assemble the de novo transcriptome, and the observations from comparisons between the intermediate assemblies (Supplementary File 1), we have listed some guiding principles that would be useful for any non-genomics lab interested in embarking on a similar project (Supplementary File 2). In-depth reviews regarding assembly tools, metrics and pitfalls in dealing with transcriptome assemblies and analyses have also been provided previously by several groups [37-40]. Many transcriptome assemblies have relied on paired-end Illumina sequencing to achieve the sequencing depth needed to obtain a comprehensive transcriptome. However, a number of these assemblies, including ours, have resulted in a high number of fragmented contigs [41-43]. As one of the vital factors for achieving a good assembly is the read length, the advent of long-read technologies such as Pacific Biosciences' isoform sequencing could help to improve the contiguity of de novo transcriptome assemblies [44,45].

The transcriptome will be useful for the annotation of the genome, and can also be utilized for gene expression studies through the design of microarrays, as we have done for the Asian seabass [46], or by means of RNA-seq experiments [9,47,48]. In these RNA-seq studies, the transcriptome served as a reference for read mapping and quantification of differential expression.

Conclusions

In conclusion, we have sequenced and de novo assembled the transcriptome of the Asian seabass, a commercially important food fish species. The annotation and various analyses reported here illustrate the useful information that can be derived from a transcriptome. Additionally, we identified full-length cDNA sequences and inventoried microsatellite information. As a supplement to our study, we have provided our observations from the various approaches taken towards sequencing the transcriptome, as well as several recommendations for non-genomics labs intending to study the transcriptome of any species of interest. On the whole, the present study provides a comprehensive inventory of the Asian seabass transcriptome, which will be useful for the development of molecular tools to be used in aquaculture of the species as well as serving as an important resource for genome annotation.

Figure 1. Pipeline describing the Asian seabass transcriptome assembly from three next-generation sequencing platforms. The 454 and SOLiD sequence datasets were first co-assembled, and later merged with Asian seabass ESTs from NCBI and a 454-based Asian seabass intestine assembly [9]. These sequences were then merged with a previous version of the Asian seabass transcriptome from the first round of HiSeq data [7] to produce a "multiplatform" assembly, which was further polished to remove low-coverage and redundant sequences. Independently, the second round of HiSeq data was assembled (library-wise) using Trinity, and then clustered to remove redundancies. Finally, these contigs were merged with the polished multiplatform assembly to generate the final Asian seabass transcriptome dataset. The coloured boxes indicate the datasets that were used for the downstream comparisons and analyses.

Figure 2.
BLASTX/BLASTN-based annotation of the Asian seabass transcriptome. (a) Proportion of the transcriptome that showed BLASTX/BLASTN matches against the RefSeq datasets of Nile tilapia and 12 other teleosts. A total of 41% of the contigs had a BLASTX/BLASTN match against Nile tilapia, while another 2% found a match to the 12 other teleosts, and 1% were predicted to contain ORFs. The remaining unannotated contigs showed sequence similarity to the Asian seabass draft genome. (b) Distribution of annotated and unannotated contig lengths. The unannotated sequences were found to be mostly short contigs (≤1 kb), while the majority of the long contigs were annotated. All BLAST results were subjected to an alignment length cutoff: ≥200 bp for BLASTN and 65 amino acids for BLASTX.

Figure 3. Proportion of the Nile tilapia RefSeq protein dataset that was detected in the final Asian seabass transcriptome as well as the full-length cDNA (FL-cDNA) subset. Using BLASTX, ~83% of the Nile tilapia RefSeq sequence homologs were detected in the Asian seabass transcriptome, while ~58% were represented in the FL-cDNA subset.

Figure 5. Percentage of KEGG pathway genes detected in the Asian seabass final transcriptome assembly, shown in comparison to the European seabass protein dataset, the common carp and crucian carp transcriptomes, and the Nile tilapia RefSeq protein dataset. The KEGG pathways are shown in five main categories. The "genetic information processing", "cellular processes" and "organismal systems" categories were well represented in the Asian seabass transcriptome, while the "metabolism" and "environmental information processing" categories had a lower representation.

Table 1. Number of reads before and after quality check (QC) and filtering of rRNA and microbial reads.

Table 2. Statistics for the intermediate Asian seabass transcriptome assemblies.

Table 3. Assembly statistics for the final multiplatform (FMP) Asian seabass transcriptome assembly.

Table 4. Differentially expressed transcripts between the testis and ovary that show sex-related roles or expression.
7,063
2015-06-02T00:00:00.000
[ "Agricultural and Food Sciences", "Biology", "Environmental Science" ]
Bibliographic Computer Science Indexing Review of the Covid 19 Disease

Researchers conducting their research search publication homepages according to expertise, research collaborations, and research interests. At this time, the Covid 19 pandemic has become a trending topic for researchers in various scientific fields. This study classifies publications located on two homepage sources, namely Scopus and Google Scholar, by analyzing the following topics: Natural Language Processing, Text Mining, Remote Sensing, and Sentiment Analysis, using Named Entity Recognition to detect and classify named entities in text, and using the occurrence and link strength methods. The results on the scientific index literature about the Covid 19 disease show that Scopus has the most equitable percentages and good occurrence and link strength among the four scientific fields, namely Natural Language Processing 23.81%, Text Mining 19.05%, Remote Sensing 0%, and Sentiment Analysis 57.14%; for Google Scholar, the figures are Natural Language Processing 51.35%, Text Mining 0%, Remote Sensing 48.65%, and Sentiment Analysis 0%.

I. INTRODUCTION

Coronavirus disease (COVID 19) was first discovered in Wuhan, China, at the end of 2019 [1]. This type of virus is highly contagious and spread rapidly not only in various parts of China but also in Japan, Thailand, and South Korea in less than 1 month, through respiratory droplets and close contact [2]. It is a significant threat to the health of millions of lives worldwide [3], causing acute respiratory system disorders, and so was officially declared a global pandemic by the World Health Organization (WHO) on March 11, 2020 [4]. This pandemic has caught the world's attention because its uncontrolled spread caused cases to spike [5]. It was reported that as of April 30, 2020, there were 3.2 million confirmed cases with a total of 227,847 deaths in 185 countries [6]. Because the spread of this virus is very fast and uncontrolled, a large number of studies have been carried out and published [7] for free, to speed up research and assist governments in responding to the crisis [8]. Therefore, it is important to evaluate the literature with quantitative and qualitative measures to obtain literature patterns, identify gaps, and make use of the results.

Unstructured data in the form of entities, relations, objects, events, and many other types are extracted through Information Extraction to improve data analysis. The perspective of streaming data is very different from that of static data: static data has no connection between the time of initial processing and subsequent processing. Academic publications from researchers contain rich information, which enables many applications such as academic search, bibliographic analysis, and citation analysis. This research was conducted as a bibliometric analysis of scientific publications, which is a useful tool for understanding the generation and development of knowledge, as well as for evaluating the quality of a field of science and the impact it brings in the academic area [9]. In addition, bibliometric analysis can be used to map research that is being done or has already been done, as well as future opportunities [10].
The purpose of this research is to map research into several technology scopes by discussing several topic parameters concerning Natural Language Processing, Text Mining, Remote Sensing, and Sentiment Analysis published during the pandemic. The research mapping process is carried out in stages: selecting the objects, calculating how the objects interact, normalizing, creating and displaying maps, and evaluating the map [11]. VOSviewer, downloaded from www.vosviewer.com, is used to display the bibliometric map visualizations. Bibliometric maps are visualized with VOSviewer based on author or journal name with co-citation data, or based on keywords with co-occurrence data, with label, sketch and density map displays, and clusters [12]. Clusters in VOSviewer maps are presented with color differences. Each parameter is operated on by a clustering algorithm whose settings can be changed so that more or fewer clusters are generated [13].

A. Data Sources and Methods

Bibliometrics is the use of statistical methods to analyze books, articles, and other publications. Bibliometric methods are frequently used in the field of library and information science. The sub-field of bibliometrics concerned with the analysis of scientific publications is called scientometrics, itself a sub-field of informetrics. Major research issues include the measurement of the impact of research papers and academic journals, the understanding of scientific citations, and the use of such measures in policy and management settings. This bibliometric data collection was done from Scopus and Google Scholar, and a total of 2,991 indexed papers were analyzed with the keyword "Coronavirus with [topic]" or "Covid 19 with [topic]". Given the similarity to previous virus types, data retrieval was restricted to the pandemic period, with the search categories used being topics, titles, and abstracts. Research trends were analyzed using the VOSviewer software; the weighting methods used are occurrence, to see how much research exists on a topic, and link strength, to show the connectedness between research topics. Both methods analyze data based on the abstract and the author.

The use of named entity recognition on the publisher homepages has problems and complexities that are generally the same as those in English texts, especially when a machine learning approach is used. Fundamental differences exist when rule-based methods are used for completion, or when hybrid approaches combining rule-based methods and machine learning are used. This approach uses unsupervised learning, so it does not require labeled data for the learning process. The stages of the proposed method are as follows (a sketch of the context-window preparation step is given further below):

Data preparation for sequential pattern mining: in this step, sentences that contain named entities are prepared so that patterns can be generated at each appearance of an entity. To limit the number of patterns produced, the pattern extraction process is restricted to 5 words before and after the appearance of the entity.

Sequential pattern mining: in this step, an algorithm is applied to the existing learning data to produce the desired patterns.

Pattern matching and candidate extraction: datasets for testing are prepared and matched against the resulting patterns. The results are sorted according to their confidence and support.
Candidate Pruning: this process is carried out to improve the accuracy of the named entities produced. B. Citation Mapping Result Mapping is a process that allows one to identify knowledge elements and their configurations, dynamics, interdependencies, and interactions. Knowledge mapping is used for technology management purposes, which include the definition of research programs, decisions regarding technology activities, the design of knowledge base structures, and the creation of education and training programs. Citation mapping is a graphical representation that shows the citation relationships (cited references and citing articles) between a paper and other papers using various visualization tools and techniques. The citation mapping tool from Web of Knowledge tracks an article's cited and citing references through two generations. In a paper delivered earlier this year, Malcolm Tight examines the theoretical overlap between communities of practice and Becher's academic tribes and territories. He conducts a co-citation analysis of Higher Education research journals, focusing on author characteristics and areas, subjects, theories, methodologies, and methods, presented as a simple diagrammatic representation of his descriptive model. Comparable ideas of 'citation mapping' have been explored elsewhere, particularly in the natural sciences, and a form has recently been deployed in the citation and journal database ISI Web of Science. Information designer W. Bradford Paley's visualization of 800,000 scientific papers uses author citations to explore the relationships between scientific paradigms. Related to bibliometrics, science mapping is a method of visualizing a field of science. This visualization is done by creating a landscape map that can display topics from science [14]. Information visualization is a vital part of data science and is used in two fundamental stages of the data science cycle: at the beginning, during data exploration, and at the end, during result presentation. Even though the visualization techniques are the same, these two stages have different objectives. Data exploration starts from ignorance and tries to understand the data, to discover hidden facts, patterns, or outliers. Result presentation starts from the data and tries to communicate the message in the clearest and most effective way possible. Hence, even though they share the same techniques, the objective and the starting point are different. From the text records downloaded from Scopus and Google Scholar, we performed metadata analysis for data extraction. This included extraction of the title, author, year, and computer science topic. The morphological analysis allowed tagging of the data's potential use.
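To make the weighting concrete, the following is a minimal sketch of the occurrence and total-link-strength computation described in the methods above (Python; the keyword sets are hypothetical, and this is our own illustration of VOSviewer-style weights, not VOSviewer's code):

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword sets, one per paper (illustrative only).
papers = [
    {"covid", "natural language processing", "machine learning"},
    {"covid", "sentiment analysis", "twitter data"},
    {"covid", "sentiment analysis", "natural language processing"},
]

occurrence = Counter()        # how many papers mention each keyword
cooccurrence = Counter()      # how many papers mention each keyword pair

for kws in papers:
    occurrence.update(kws)
    cooccurrence.update(combinations(sorted(kws), 2))

# Total link strength of a keyword = sum of the weights of its co-occurrence links.
link_strength = Counter()
for (a, b), w in cooccurrence.items():
    link_strength[a] += w
    link_strength[b] += w

for kw in occurrence:
    print(f"{kw}: occurrence={occurrence[kw]}, total link strength={link_strength[kw]}")
```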
Various issues with the Covid-19 disease data were encountered while tagging, which are described as follows. Author field: computer science topic names related to Covid-19 are usually made up of two or three parts, and it is not always clear which criteria indicate the Covid-19 disease. A definition of Covid-19 disease names was created, without the punctuation mark (,): review, role, outbreak, diagnosis, approach, detection, chest x-ray, and pneumonia. Types of documents in the database: we identified the document types for Computer Science with the Covid-19 disease in Scopus and Google Scholar. For each type of document, we identified the mandatory fields and the other field values occurring in the database. The information extraction algorithm and the retrieval logic were based on these field values. A sample of the mandatory fields for each document type is tabulated in Table 1. The article data in Table 1 were then imported one by one into VOSviewer in txt format and compiled; the titles of the five fields of science were filtered with topics related to Covid-19 and then weighted using occurrence and total link strength, which is presented in the table below. VOSviewer analysis showed the connectedness of the NLP field, resulting in 4 clusters based on color differences in Figure 2 that were related to natural language, artificial intelligence, and machine learning. From this mapping, it can be concluded that there is no research link between NLP and Text Mining, Remote Sensing, or Sentiment Analysis. Text Mining for Covid-19: the field of Text Mining has different research links than the NLP field, with 3 clusters in red, green, and blue around the closely related topics of text, system, and classification. It can be concluded that there are still text mining research opportunities with drug, risk, and country, as well as with the 3 other fields, as visualized in Figure 4 below. VOSviewer produces a research mapping analysis into 5 clusters in red, blue, green, yellow, and purple, and displays a strong correlation with information in purple, including the topics of accuracy, change, factor, and erratum. In the field of sentiment analysis, the map displays very strong relationships, including the topics of neural networks, sentiment classification, models, papers, text, and algorithms. B. Scopus The second analysis was carried out on article data from Scopus. The visualization of this table can be seen in Figure 6, which is a network visualization, while Figure 7 shows the density visualization. In these figures, the topic of covid has a strong network with pandemic, is linked with Twitter data, sentiment analysis, outbreak, and impact, and is correlated with the text mining approach and with natural language processing. Figure 9 shows that Text Mining research has the strongest link with the topics of covid and tweet. There are only 2 clusters, namely the red cluster with the topics of covid and tweet, and the blue cluster with the topics of impact and person. This topic still has enormous research opportunities, especially related to Covid-19, and has links to other research, with the strongest link strength on Twitter data, tweets, covid, and its applications.
There are 4 clusters: the red cluster with the topics of covid, tweet, Twitter data, and era; the green cluster with the topics of outbreak, India, and lockdown; the blue cluster with the topics of impact and application; and 1 pandemic topic in the purple cluster. IV. CONCLUSION
2,965.2
2022-07-05T00:00:00.000
[ "Computer Science", "Environmental Science" ]
The Impact of Blue Inorganic Pigments on the Microwave Electrical Properties of Polymer Composites We present the results of measurements of the complex dielectric permittivity, in the microwave frequency region, of glass-reinforced polybutylene terephthalate (PBT) with blue inorganic pigments. The resonant cavity method was used to measure the shift in the resonant frequency of the cavity caused by the insertion of a sample, which can be related to the real part of the complex permittivity. The quality factor of the cavity also decreases with the insertion of a sample, and the change in the inverse of this quality factor gives the imaginary part. In order to predict the dielectric behavior of this composite, we developed a numerical simulation program to calculate the complex permittivity of the inclusion. By using several dielectric mixture laws (Maxwell-Wagner-Sillars, Hanai, Looyenga, inverse and direct Wiener, and Bruggeman), we can predict the dielectric behavior of the composite over a large range of inclusion volume fractions. Introduction Ultramarine blue is a nonhazardous pigment with a wide variety of industrial applications. Its manufacturing process, and the possibility of close control over its physical, chemical, and particular color characteristics, enable the production of several types of this blue pigment, which are readily accepted by the polymer, printing ink, paint, cosmetic, and many other industries, due to advantages over other organic pigments. Small quantities of conducting particles can increase the dielectric constant without exceeding the critical percolation concentration [9], that is, avoiding high conductivity. If electrical losses become high, heating of the plastic will occur, resulting in melting or even carbonization. For high frequencies, as in the microwave range, interfacial polarization mechanisms are not present, because such polarization occurs at frequencies lower than those typical of dipolar polarization [10]. In order to predict the electrical properties of the composite, different mixture laws can be used. Numerical simulation of mixtures can lead not only to a better understanding of the physics of dielectrics but also to improvements in the design of tailored materials without resorting to expensive trials. Numerous authors have studied the dielectric behavior of nonhomogeneous materials [11][12][13], and several theories can be applied, depending on the difference in the electrical properties of the host and inclusion materials [14,15]. Experimental The blue inorganic pigment, with chemical formula Na6Al6Si6O24S4, in the form of a powder with mean particle size 3.8 μm, was obtained from Kremer Pigments Inc., USA. Glass fibers of 10 to 20 μm reinforced the polybutylene terephthalate matrix, which was purchased from DuPont, USA. The powder samples were pressed at room temperature and 4 MPa to obtain cylinders of length 10 mm and diameter 4 mm.
The resonant cavity method was used to calculate the complex permittivity [16,17], ε* = ε′ − iε″. Two different rectangular cavities were used, operating in the TE3,5,5 and TE0,1,11 modes, with resonant frequencies of 2.7 and 5 GHz, respectively. In this technique, we measured the shift in the cavity resonant frequency, Δf, caused by the insertion of the sample inside the cavity, which can be related to the real part of the complex permittivity, ε′, and the change in the inverse of the quality factor of the cavity, Δ(1/Q), which is related to the imaginary part, ε″. The relations are simple when we consider only the first-order perturbation in the electric field caused by the sample [18], where f0 is the resonance frequency of the cavity, ε* the complex permittivity of the material, and Ei and E0 the electric fields inside and outside the material. The integration is performed over the volume of the sample, Vs, and over the volume of the cavity, V. Splitting the real and imaginary parts, we can obtain the expressions for ε′ and ε″, where K is a constant related to the depolarization factor, which depends on the geometric parameters. In fact, this factor introduces the effect of the shape and dimensions of the sample on the electromagnetic field perturbation. Maxwell's equations and the boundary conditions were used to deduce the previous equations [19], taking into account the depolarization field that appears outside the dielectric. To calculate the factor K, we used a sample of polytetrafluoroethylene (PTFE) of known dielectric constant (ε′ = 2.1 at microwave frequencies), with the same size and shape as the studied samples. In order to couple the microwaves to the cavity, we used quarter-wavelength flange joints and small circular irises (10 mm in diameter). For the measurements, we used an HP 8753D Network Analyzer with an excitation power of 1 mW. All measurements were carried out at constant temperature. The measured transmissions are the S12 parameters, which quantify how the microwave energy propagates through a multiport network; the subscript "12" refers to the ratio of the signal from port 1 (input) to port 2 (output). The samples, ultramarine blue pigments in concentrations up to 4% in a glass-reinforced polybutylene terephthalate matrix, were introduced into the cavity through circular holes milled in the centre of the cavity. There, the electric field is at its strongest value, and the insertion of the samples causes the maximal frequency shift; that is, a coupling between the samples and the electric field is produced in this region [19], resulting in a small perturbation of that field. The interest lies in cases in which the samples are homogeneous and their volumes are very small compared with the volume of the cavity. Figure 1 shows the electric field distribution in the 5 GHz cavity, without and with a sample, in a simulation made using the COMSOL software. The changes in the electric field in the center of the cavity, due to the insertion of a sample in this region, are clearly visible when comparing Figures 1(a) and 1(b). Results To study the linearity of the cavity, and consequently to infer the possibility of using the small perturbation theory [19], we carried out measurements using glass microtubes filled with distilled water. In Figures 2 and 3 we present, as an example, Δf/f0 and Δ(1/Q) for different volumes of distilled water (5 GHz resonant cavity), together with the linear fit parameters that confirm the possibility of using (2).
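To make the data reduction concrete, the sketch below (Python, with illustrative names; the proportionality forms and the calibration routine are our assumptions, consistent with the first-order perturbation relations described above, not the authors' code) converts a measured frequency shift and quality-factor change into ε′ and ε″, fixing the constant K with a PTFE reference:

```python
import numpy as np

def calibrate_K(df_over_f0_ref, vol_ratio_ref, eps_ref=2.1):
    """Fix the depolarization-related constant K from a PTFE reference
    of known dielectric constant (eps' = 2.1 at microwave frequencies),
    assuming df/f0 = (eps' - 1) * (Vs/V) / K."""
    return (eps_ref - 1.0) * vol_ratio_ref / df_over_f0_ref

def permittivity(df_over_f0, d_invQ, vol_ratio, K):
    """First-order cavity-perturbation estimate of the complex permittivity
    from the frequency shift and the change of 1/Q. The factor 1/2 on the
    loss term is a common convention and is an assumption here."""
    eps_real = 1.0 + K * df_over_f0 / vol_ratio
    eps_imag = 0.5 * K * d_invQ / vol_ratio
    return eps_real, eps_imag

# Hypothetical measurement on the 5 GHz cavity (illustrative numbers only).
K = calibrate_K(df_over_f0_ref=2.0e-4, vol_ratio_ref=1.0e-4)
eps_r, eps_i = permittivity(df_over_f0=3.5e-4, d_invQ=1.2e-5, vol_ratio=1.0e-4, K=K)
print(f"eps' = {eps_r:.2f}, eps'' = {eps_i:.3f}")
```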
The shift in the resonant frequency of the cavity remains in the linear regime for measurements up to 27 μL of water, which corresponds to Δf/f0 of 0.56%. The linearity in the inverse of the quality factor is observed up to 36 μL, that is, up to 5.7 × 10−3. The conjunction of both conditions permits us to conclude that the cavity can be used up to variations of Δf/f0 of 0.56% and Δ(1/Q) of about 4.2 × 10−3. If the measurements exceed this limit, the sample volume should be reduced until the results fall into the linear regime [20]. In Figure 4 we show the transmissions of the 5 GHz cavity, for the empty cavity and for the cavity loaded with samples of different concentrations of blue pigments. The most perturbing sample was the one with the highest concentration of blue inorganic pigments, corresponding to the highest complex permittivity. This means that the real and imaginary parts of the complex permittivity of the filler have considerable values. Discussion A very common mixture law was proposed by Hanai [21]. If the filler particles, with complex permittivity εf*(ω) and volume fraction ϕf, are dispersed in a matrix material with complex permittivity εm*(ω) and volume fraction ϕm, then, according to this theory, [(ε* − εf*)/(εm* − εf*)] (εm*/ε*)^(1/3) = 1 − ϕf. Another theory, developed by Maxwell-Wagner-Sillars, predicts the polarization process due to differences in the conductivity and permittivity of the constituents [22]. The complex permittivity of the mixture can be calculated from an expression in which n, with 0 ≤ n ≤ 1, is the shape factor of the dispersed particles in the direction of the electric field lines; for spherical particles, n = 1/3 [23]. The Bruggeman effective medium theory has the virtue of simplicity. It can be used under the assumption that the inclusions do not interact with each other and are randomly distributed in the matrix [24]. In its most commonly used, implicit form, where for spherical particles embedded in a host matrix A = 2, this law predicts a percolation threshold of conduction for ϕf = 1/(A + 3). Looyenga introduced a new mixture formula [25], usually known as the generalized Looyenga law, (ε*)^(1/t) = ϕf (εf*)^(1/t) + ϕm (εm*)^(1/t), where for spherical inclusions t = 3. For t = ±1, we arrive at the Wiener laws (direct and inverse) [15]. Table 1 summarizes the results obtained by fitting the data with the previous laws. The values for the real and imaginary parts of the complex permittivity of the charge indicate that the generalized Looyenga model is more accurate, which is confirmed by the obtained values of the standard deviation χ2. Figure 5 shows the measured real and imaginary parts of the complex permittivity for different concentrations of the blue inorganic pigment, at 2.7 GHz, and the fit with the different mixture laws, confirming the good accuracy of the fit using the generalized Looyenga law. Similar results, from the point of view of quality of the fit, are obtained at the 5 GHz measurement frequency. For these small quantities of blue inorganic particles, the percolation threshold is not observed and the composite remains an insulator. Finally, the small polarity present in PBT explains the measured complex permittivity. The obtained values make this material a good choice for applications in telecommunications products. The higher ε′ permits a reduction of the dimensions of the rods in dielectric antennas. In spite of the higher ε″, the values are still quite low, avoiding heating of the material.
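As an illustration of how such mixture-law fits can be performed, the minimal sketch below (Python; the data arrays and starting values are hypothetical, not the measured values behind Table 1) fits the generalized Looyenga law with t = 3 to complex permittivity data by least squares on the real and imaginary parts:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical measured data: filler volume fractions and complex
# permittivity of the composite at one frequency (illustrative only).
phi_f = np.array([0.00, 0.01, 0.02, 0.03, 0.04])
eps_meas = np.array([3.1 + 0.02j, 3.2 + 0.03j, 3.3 + 0.04j,
                     3.4 + 0.05j, 3.5 + 0.06j])

eps_m = eps_meas[0]  # matrix permittivity taken from the unfilled sample

def looyenga(phi, eps_f, t=3.0):
    """Generalized Looyenga: eps^(1/t) = phi*eps_f^(1/t) + (1-phi)*eps_m^(1/t)."""
    return (phi * eps_f ** (1.0 / t) + (1.0 - phi) * eps_m ** (1.0 / t)) ** t

def residuals(p):
    eps_f = p[0] + 1j * p[1]
    diff = looyenga(phi_f, eps_f) - eps_meas
    return np.concatenate([diff.real, diff.imag])

fit = least_squares(residuals, x0=[10.0, 1.0])
print(f"eps_f' = {fit.x[0]:.2f}, eps_f'' = {fit.x[1]:.3f}")
```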
Conclusions The cavity perturbation method presents good accuracy for evaluating the dielectric permittivity of low-loss materials. The dielectric function of two-phase materials can be accurately deduced using the generalized Looyenga model, in particular when the inclusion is conductive and at low filler volume fractions. In this case, the percolation critical concentration is not reached. With this law, we can choose adequate doping concentrations, and then control the electrical properties, in order to obtain the desired behavior for a particular application. Figure 1: Calculated electric field in the 5 GHz cavity, without (a) and with a sample in the center (b). The vertical bar shows that the highest electric field is about 50 kVm−1. Figure 2: Δf/f0 versus volume of water. f0 is the resonance frequency of the cavity and Δf the variation of the resonant frequency caused by the insertion of the samples. Figure 3: Δ(1/Q) versus volume of water. Q is the quality factor of the cavity, which is degraded by the insertion of the samples. Figure 4: Transmissions of the 5 GHz cavity, empty and loaded with samples of different concentrations of blue pigments. Figure 5: Measured real and imaginary parts of the complex permittivity for different concentrations of the blue inorganic pigment, at 2.7 GHz, and the fit with the generalized Looyenga (GL), Hanai (H), and Maxwell-Wagner-Sillars (MWS) mixture laws. Table 1: Calculated values for εf′ and εf″ of the filler, and the standard deviation χ2, obtained using the inverse Wiener, Hanai, Looyenga, Maxwell-Wagner-Sillars, and Bruggeman laws. A is the Bruggeman parameter and t the exponent of the generalized Looyenga law.
2,630.6
2012-02-06T00:00:00.000
[ "Materials Science", "Physics" ]
MACROECONOMIC FACTORS THAT AFFECT DEPOSITOR FUNDS OF SHARIA BANKS IN INDONESIA This study aims to determine the effect of macroeconomic factors on Islamic bank depositor funds in Indonesia. The macroeconomic variables used are economic growth, government debt, exchange rates, trade balance, money supply (M2), and foreign direct investment. Macroeconomic data are obtained from the publications of the Central Statistics Agency (BPS); depositor fund data are obtained from the Financial Services Authority (OJK). The population consists of all Islamic commercial banks and Islamic business units. The sampling technique used total sampling, and data analysis was performed using multiple linear regression on quarterly observations from January 2005 to December 2019. The results show that government debt and money supply (M2) positively and significantly affect depositor funds of Islamic banks in Indonesia. In contrast, economic growth, exchange rates, trade balance, and foreign investment do not significantly affect Islamic bank depositor funds in Indonesia. Introduction In carrying out their operations, banks utilize funding from three related sources: share capital, interbank loans, and third-party funding (Hadinoto, 2013). Funding from third parties, or depositors' funds, is a form of customer confidence in a bank; therefore, the higher the growth of third-party funds at a bank, the more positive an image it has built in the public's eyes, increasing confidence in the bank. Third-party funding is an important factor because the central bank has established a minimum liquidity reserve, known as the 4% Macroprudential Liquidity Support (PLM), for conventional commercial banks and Islamic banks, based on Bank Indonesia Regulation Number 20/4/PBI/2018. First-, second-, and third-party funds each have advantages and disadvantages. Depositor funds serve the bank's function as an intermediary, gathering funds and channeling credit to the public. A high ratio of deposit funds can improve the performance and quality of Islamic banking services; therefore, maximizing third-party funds is very important in increasing Islamic banks' profitability (Fitri, 2016). Banks with large third-party funds will be more willing to offer customers attractive interest rates. On the other hand, if deposit funds run low, banks will experience a liquidity squeeze, forcing them to offer the public high interest rates. Total depositor funds during the year 2019 (year to date) reached IDR 402.36 trillion; on an annual basis, depositor fund growth reached 13.03%. In December, the market share of Islamic banking assets increased to 6.01%, compared with 5.94% in September 2019. This growth is still lower than that of conventional banks, and in several provinces in Indonesia the amount of depositor funds has even decreased from the previous year. The development of depositor funds is influenced by internal factors such as customer satisfaction and service, and by external factors such as macroeconomic conditions. In this study, we set aside the religiosity factor. Previous studies have examined the relationship between macroeconomic factors and depositor funds. Jatnika (2020) examined the relationship of the macroeconomic variables exchange rate, inflation, interest rate, and Gross Domestic Product (GDP) to depositor funds.
Adim & Sukmana (2017) also examined the relationship between the macro variables interest rate, GDP, money supply (M2), and consumer price index and depositor funds. The difference between this study and previous studies is that we use the macroeconomic variables economic growth, government debt, exchange rate, trade balance, money supply (M2), and foreign direct investment (FDI); there are also differences in the study period. The object of this study is to determine the macroeconomic factors that affect depositor funds of sharia banks in Indonesia: economic growth, government debt, exchange rate, trade balance, money supply, and foreign direct investment. Literature Review Depositor funds are funds obtained from the public (collected in the form of current accounts, saving accounts, and time deposit accounts), in both rupiah and foreign currency (Hadinoto, 2013). The minimum amount of Macroprudential Liquidity Support (PLM) has been set by the central bank at 4%. Several macroeconomic factors affect depositor funds in Islamic banks in Indonesia, which are described as follows. The Relationship between Economic Growth and Depositor Funds of Sharia Banks in Indonesia Economic growth is the process of change in the economic condition of a country or a region toward a better situation over a certain period (Yuliani, 2019). According to Putong (2013), economic growth is an increase in national income (marked by a rise in per capita income) in a certain period. Economic growth indicates an increase in economic activity in a country compared to the previous period. Economic growth is categorized as positive if GDP in the observed year is higher than GDP in the previous year; conversely, it is negative if GDP in the observed year is lower than in the previous year. High economic growth indicates economic stability in the community. Therefore, the better the economic growth, the higher the level of saving in society. Also, good economic growth will increase the amount of credit extended to the public, and the community's ability to pay its obligations will also increase. With a high level of saving and creditors' ability to pay off debts, the optimization of third-party funds will increase. Previous studies by Prasetya et al. (2015); Sudin & Wan (2008); Hind & Joerg (2016); and Zirek et al. (2016) showed that economic growth has a significant effect on depositor funds. Therefore, we can formulate the following hypothesis: H1: Economic growth has a positive and significant effect on depositor funds of sharia banks in Indonesia. The Relationship between Government Debt and Depositor Funds of Sharia Banks in Indonesia Government debt is a liability in foreign currency to non-residents with an original maturity, or an extension, of more than one year (Mehran, 1985). According to Munandar (2014), government debt is public debt to non-residents (or foreigners) paid in currencies, goods, or internationally accepted services. Managing debt wisely is a good thing, provided the government uses every rupiah of debt to build infrastructure and facilities that generate long-term benefits. Debt policy is one alternative for financing urgent development, and debt can be a tool to accelerate economic growth. Likewise, foreign debt can be a stimulus for economic development, so that Islamic bank deposits originating from both domestic and overseas sources will increase.
Previous studies by Haslag (2020); Essien et al. (2016); Isibor et al. (2018); and Saifuddin (2016) showed that government debt has a significant effect on depositor funds. Therefore, we can formulate the following hypothesis: H2: Government debt has a positive and significant effect on depositor funds of sharia banks in Indonesia. The Relationship between Exchange Rate and Depositor Funds of Sharia Banks in Indonesia The exchange rate is a record of a foreign currency's market price in terms of the domestic currency. According to Effendie (2017), an exchange rate is the value of a country's currency against a foreign currency, determined in the foreign exchange market through the balancing of demand for and supply of the foreign currency against the country's currency. Exchange rate fluctuations can affect the development of depositor funds in Islamic banking in Indonesia. Depreciation does not always harm depositor funds, assuming foreign capital does not flee the country. The opposite can happen: depositor funds may increase because customers are interested in foreign currency savings, expecting a return on the margin of the rupiah exchange rate against the US dollar. Previous studies by Boon (2018); Humphrey (2016); Aysan et al. (2018); and Anureev (2015) showed that the exchange rate has a significant effect on depositor funds. Therefore, we can formulate the following hypothesis: H3: The exchange rate has a positive and significant effect on depositor funds of sharia banks in Indonesia. The Relationship between Trade Balance and Depositor Funds of Sharia Banks in Indonesia The trade balance is the amount received for net exports of goods and services (Mankiw, 2003). According to Diphayana (2018), the trade balance is the difference between exports and imports of goods. If the export value exceeds the import value, the trade balance is categorized as a surplus; conversely, if the import value is higher than the export value, it is classified as a deficit. The trade balance, whether in surplus or deficit, affects the development of depositor funds because it reflects import and export activity between countries. These activities involve banks as a means of payment between countries, and exporters and importers can take advantage of the spot, swap, and spread markets in conducting foreign currency transactions. Previous studies by Kabir & Chowdhury (2014, 2015) showed that the trade balance has a significant effect on depositor funds. Therefore, we can formulate the following hypothesis: H4: The trade balance has a positive and significant effect on depositor funds of sharia banks in Indonesia. The Relationship between Money Supply and Depositor Funds of Sharia Banks in Indonesia The money supply is generally categorized into two types, M1 and M2. M1 is defined as the monetary system's obligations to the domestic private sector, consisting of currency and demand deposits. M2 consists of currency, demand deposits, quasi-money, and securities, and does not include shares (Suseno, 2017). Monetary operations, or open market operations, used by Bank Indonesia to regulate the money supply will affect the development of depositor funds. Too much money in circulation causes the real value of money to fall, causing interest rates to decrease and consumption to increase; people will then prefer borrowing to saving, which of course reduces depositor funds.
If there is too little money in circulation, the real value of money will increase, but this harms individuals and entities that carry out export activities, and it can reduce the portion of depositor funds coming from exporters. Previous studies by Sudin & Wan (2008); Werner (2014); Anik & Prastiwi (2019); and Nastiti & Kasri (2019) showed that the money supply has a significant effect on depositor funds. Therefore, we can formulate the following hypothesis: H5: The money supply has a positive and significant effect on depositor funds of sharia banks in Indonesia. The Relationship between Foreign Direct Investment and Depositor Funds of Sharia Banks in Indonesia Foreign Direct Investment (FDI) is defined as investing to do business in the territory of the Republic of Indonesia, either fully using foreign capital or through joint ventures with domestic investors (Kairupan, 2014). Foreign ownership can provide strong capital for the sustainability of the Islamic banking business. In addition, foreign investment, both medium and large, can assist banks in applying technology. Foreign capital can strengthen banks internally through capital strength, maintaining bank liquidity, and the continuity of banking operations is a guarantee for customers who save money in the bank. Previous studies by Bahri et al. (2017); Sie (2018); and Hamza (2016) showed that FDI has a significant effect on depositor funds. Therefore, we can formulate the following hypothesis: H6: Foreign direct investment has a positive and significant effect on depositor funds of sharia banks in Indonesia. Conceptual Framework Based on theoretical studies and previous research, the conceptual framework of the research is presented in Figure 1. Data and Research Methods The research variables consist of independent and dependent variables. The independent variables are the macroeconomic variables economic growth (X1), government debt (X2), exchange rate (X3), trade balance (X4), money supply (X5), and foreign direct investment (X6). The dependent variable is depositor funds (Y). The research data were obtained from secondary sources. For the macroeconomic variables, economic growth (X1), exchange rate (X3), and FDI (X6) data were obtained from the publications of Statistics Indonesia (BPS); government debt (X2) data were obtained from the Ministry of Finance; and trade balance (X4) and money supply (X5) data were obtained from Ministry of Trade publications. Depositor fund data were obtained from the publications of the Financial Services Authority (OJK). Data were collected quarterly; the research data therefore run from the first quarter of 2005 to the fourth quarter of 2019. The measurement of each variable is presented in Table 1. Data analysis consisted of descriptive analysis, classical assumption tests, and multiple linear regression; a sketch of this pipeline is given after the methods description below. The descriptive analysis was used to assess the characteristics of the data. The normality test uses the Kolmogorov-Smirnov test, with the data categorized as normal if the significance value is > 0.05. The heteroscedasticity test uses the Glejser test, on the condition that each independent variable has a significance value > 0.05. The multicollinearity test requires that each variable's tolerance value be > 0.1 and its Variance Inflation Factor (VIF) value be < 10. The autocorrelation test uses the Durbin-Watson test, with the provisions DW > DU and (4 − DW) > DU. Hypothesis testing uses the F-test and the t-test. The F-test was performed by comparing the significance value with 0.05 and the F-statistic with the F-table value.
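As an illustration, here is a minimal sketch of the estimation pipeline described above (Python; the file name, column names, and data are hypothetical stand-ins, not the BPS/OJK series used in the paper):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical quarterly data set, 2005Q1-2019Q4 (60 observations).
df = pd.read_csv("quarterly_macro_depositor_funds.csv")
X = df[["X1_growth", "X2_debt", "X3_exchange", "X4_trade", "X5_m2", "X6_fdi"]]
y = df["Y_depositor_funds"]

# Multicollinearity check: tolerance > 0.1 and VIF < 10 for each regressor.
for i, col in enumerate(X.columns):
    vif = variance_inflation_factor(X.values, i)
    print(f"{col}: VIF = {vif:.2f}, tolerance = {1 / vif:.3f}")

# OLS estimation: Y = a + b1*X1 + ... + b6*X6 + e.
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())                       # F-test, t-tests, R-squared
print("Durbin-Watson:", durbin_watson(model.resid))
```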
The t-test was performed by comparing the significance value with 0.05 and the t-statistic with the t-table value. The regression model equation can be formulated as Y = α + β1X1 + β2X2 + β3X3 + β4X4 + β5X5 + β6X6 + ε, where Y is depositor funds, X1 economic growth, X2 government debt, X3 the exchange rate, X4 the trade balance, X5 the money supply, and X6 foreign direct investment. Findings and Discussion The results of the descriptive analysis are presented in Table 2. The minimum value for depositor funds (Y) is 35,913,557 (in billion IDR) and the maximum value is 1,227,311,000 (in billion IDR); the lowest depositor funds occurred in the first quarter of 2005 and the highest in the fourth quarter of 2019. The minimum value for economic growth (X1) is 4.31% and the maximum is 6.81%; the lowest economic growth occurred in the third quarter of 2009 and the highest in the fourth quarter of 2010. The minimum value for government debt (X2) is 26.20% and the maximum is 46.50%; the lowest government debt occurred in the first quarter of 2012 and the highest in the first quarter of 2005. The minimum value for the exchange rate (X3) is 8,597 and the maximum is 14,929; the lowest IDR exchange rate against the USD occurred in the second quarter of 2011 and the highest in the third quarter of 2018. For the trade balance (X4), the minimum value is −4,883.54 (in million USD) and the maximum is 11,885 (in million USD); the lowest trade balance occurred in the fourth quarter of 2018 and the highest in the fourth quarter of 2006. The minimum value for the money supply (X5) is 1,020,693.00 (in trillion IDR) and the maximum is 6,136,551.81 (in trillion IDR); the lowest money supply occurred in the first quarter of 2005 and the highest in the fourth quarter of 2019. The minimum value for Foreign Direct Investment (X6) is 457.40 (in million USD) and the maximum is 51,279.90 (in million USD); the lowest FDI occurred in the fourth quarter of 2009 and the highest in the fourth quarter of 2018. The result of the multicollinearity test is shown in Table 5. The tolerance value of each independent variable is > 0.1 and the Variance Inflation Factor (VIF) value is < 10, so it can be concluded that there are no symptoms of multicollinearity in the data. All the classical assumption tests have been completed, and the data fulfill the requirements for multiple linear regression. For the coefficient of determination, the test results are presented in Table 7. The value of R Square is 0.421, or 42.1%, indicating that the contribution of the independent variables economic growth (X1), government debt (X2), exchange rate (X3), trade balance (X4), money supply (X5), and foreign direct investment (X6) to the dependent variable, depositor funds (Y), is 42.1%; variables outside this research account for the remaining 57.9%. The F-test was performed to determine the simultaneous effect of the independent variables on the dependent variable. The F-test results, shown in Table 8, give a significance value of 0.000 < 0.05 and F-statistic > F-table (51.592 > 2.25).
This means that the variables economic growth (X1), government debt (X2), exchange rate (X3), trade balance (X4), money supply (X5), and foreign direct investment (X6) simultaneously have a significant effect on depositor funds (Y). The next step is to test the hypotheses partially with the t-test; the results are shown in Table 9. The significance value of economic growth (X1) is 0.470 > 0.05, t-statistic < t-table (0.728 < 2.00), so H1 is rejected. This indicates economic growth (X1) has no significant effect on depositor funds (Y), a result not in line with previous research by Prasetya et al. (2015); Sudin & Wan (2008); Hind & Joerg (2016); and Zirek et al. (2016). Low economic growth does not reduce people's interest in saving in sharia banks, and high economic growth does not increase it. In most cases, sharia bank customers behave in line with the theory of saving behavior: when the rate of economic growth is low, the public does not change its preference for lower-risk investments, while at high economic growth people spend more of their money on consumption, not on saving. When economic growth is low, people are less conservative and still seek returns from risky investments. The significance value of government debt (X2) is 0.000 < 0.05, t-statistic > t-table (4.505 > 2.00), so H2 is accepted. This indicates government debt (X2) has a positive and significant effect on depositor funds (Y), confirming previous research by Haslag (2020); Essien et al. (2016); Isibor et al. (2018); and Saifuddin (2016). Public debt may harm financial development if the government borrows heavily from the banking sector (Ismihan & Ozkan, 2012). The public responds to an increase in government debt by making safer, lower-risk investments, that is, by saving in banks. On the other hand, the government utilizes funds collected from the public through the issuance of state debt securities. Infrastructure development requires increased funding, and foreign debt is a funding source with low interest compared to domestic funding; such development greatly increases economic activity and can encourage people to deposit more. The significance value of the exchange rate (X3) is 0.295 > 0.05, t-statistic < t-table (−1.057 < 2.00), so H3 is rejected. This indicates the exchange rate (X3) has no significant effect on depositor funds (Y), a result not in line with previous research by Boon (2018); Humphrey (2016); Aysan et al. (2018); and Anureev (2015). When the rupiah strengthens against the dollar, the public converts rupiah into dollars; when the dollar rises, people tend to take short-term profits by converting rupiah to dollars to benefit from the exchange rate. On the other hand, when the dollar falls, people are more likely to travel abroad and spend more money; the price of gold also falls, and the public reacts by investing in gold because it is cheaper for holders of the rupiah. In these two cases, further research is needed. Similarly, H4 is rejected: the trade balance (X4) has no significant effect on depositor funds (Y). The tendency toward high imports has put the trade balance in deficit over the last few years, but this does not significantly affect saving preferences in sharia banks. Although the bank is the guarantor of payment, the liaison between exporter and importer, and the financier, this cannot increase depositor funds: export and import players only use banks as a medium of payment and do not use banks for foreign currency savings.
The significance value of the money supply (X5) is 0.000 < 0.05, t-statistic > t-table (20.312 > 2.00), so H5 is accepted. This indicates the money supply (X5) has a positive and significant effect on depositor funds (Y), confirming previous studies by Sudin & Wan (2008); Werner (2014); Anik & Prastiwi (2019); and Nastiti & Kasri (2019). When the money supply is normal, the public's saving behavior is also normal. When the money supply rises, the real value of money decreases relative to goods, and people are more likely to save than spend because the same item costs more money. The government also seeks to reduce the money supply by raising interest rates to reduce the circulation of money among the public, because a money supply that is too high results in uncontrolled inflation. The significance value of FDI (X6) is 0.119 > 0.05, t-statistic < t-table (1.587 < 2.00), so H6 is rejected. This indicates FDI (X6) has no significant effect on depositor funds (Y), a result not in line with previous research by Bahri et al. (2017); Sie (2018); and Hamza (2016). Foreign investment is expected to help the government build infrastructure, while depositor funds are expected to serve as an alternative so that the government does not depend on foreign capital. Moreover, high foreign direct investment is not accompanied by additional deposits in Islamic banks, because foreign parties tend to save at foreign banks, and joint ventures or similar cooperation have not significantly impacted the development of savings in sharia banks. The government needs to make regulations related to FDI that can provide benefits to banks in Indonesia. Conclusion The results show that the variables economic growth (X1), government debt (X2), exchange rate (X3), trade balance (X4), money supply (X5), and foreign direct investment (X6) simultaneously have a significant effect on depositor funds (Y). Partially, government debt (X2) and money supply (X5) positively and significantly affect depositor funds of sharia banks in Indonesia, while economic growth (X1), exchange rate (X3), trade balance (X4), and foreign direct investment (X6) have no significant effect on depositor funds of sharia banks in Indonesia. The government must strive to increase economic growth to accompany the rising trend of depositor funds. The government must also pay attention to the ratio of debt to GDP and must be able to find financing alternatives to foreign debt. Finally, the government must adopt the right monetary policy regarding the money supply, so that money is kept in banks, thereby reducing the circulation of money among the public.
5,224.8
2021-06-01T00:00:00.000
[ "Economics" ]
Solving the strong CP problem with non-conventional CP A very simple model is presented where all CP violation in Nature is spontaneous in origin. The CKM phase is generated unsuppressed and the strong CP problem is solved with only moderately small couplings between the SM and the CP violation sector or mediator sector, because corrections to θ̄ arise only at two loops. The latter feature follows from an underlying unconventional CP symmetry of order 4 imposed on the sectors beyond the SM, composed of only two vector-like quarks of charge −1/3 and one complex scalar singlet. No additional symmetry is necessary to implement the Nelson-Barr mechanism. Introduction The fact that Nature distinguishes left from right and particles from antiparticles at the energies so far explored was established long ago and represents a cornerstone of our ability to infer the most basic properties of the fundamental interactions. In its most subtle form this symmetry violation, called CP violation, has been experimentally confirmed only through the presence of one irremovable phase in the mixing matrix among quarks interacting with the W bosons in the weak interactions, a manifestation known as the CKM mechanism. In principle, the nontrivial vacuum structure of QCD would allow CP violation to appear through a nonzero value of the so-called θ̄ parameter, with possible contamination from phases in the weak sector. Why this parameter is experimentally constrained to be so small is known as the strong CP problem (see [1,2] for a review). In the simplest implementation of the Nelson-Barr idea, Bento, Branco and Parada (BBP) [33] enlarged the SM with only one complex singlet scalar and one vector-like quark of charge −1/3. The former is responsible for the spontaneous breaking of CP while the latter mediates this breaking to the SM. They found that a correction to θ̄ was generated at one loop, where f and λφS quantify the two portals to the SM: the former is the Yukawa coupling between the heavy quarks transmitting the CP violation from the scalar sector to the SM, and λφS is the Higgs portal coupling(s) to the CP violating scalar. So a sufficiently suppressed θ̄ required very suppressed portal couplings. See ref. [34] for more discussion of the naturality of this and similar schemes. Here we improve on this simple model by assuming the presence of a new order-4 CP symmetry, dubbed CP4 [35], acting on the new scalar and the heavy quark sector, which now requires two vector-like quarks. This symmetry further protects θ̄ = 0, which is now corrected only at two loops. Therefore, only moderately small portal couplings are necessary to obey the current bound θ̄ ≲ 10−10 [36][37][38]. The study and application of the CP4 symmetry has been actively pursued in the literature since the original model was proposed in the context of a 3HDM with irremovable complex parameters in the Higgs potential without explicit CP violation [35]. After that, it was extended to the quark sector in refs. [39,40], where irremovable phases appear in the Yukawa sector as well, and to the neutrino sector in ref. [41].
More recently, an algorithm to detect such a symmetry in the 3HDM was devised [42], its relation to mass degeneracy was studied [43], and the interplay between annihilation and conversion in setting the DM abundance was investigated [44]. We organize the rest of this paper as follows: in section 2 we review some aspects of CP4 which will be needed to construct the model. The model is presented in section 3. The vanishing of θ̄ at one loop is shown in section 4, together with the estimate at two loops. The conclusions are presented in section 5. Review of CP4 Here we review the action of CP4. We begin by analyzing the action on two complex scalars, which can be readily adapted to two chiral fermions. Our model, developed in the next section, will make use of two such pairs of chiral fermions together with a single complex scalar which, as reviewed below, is equivalent to a pair of real scalars transforming faithfully under CP4. Scalars The basic structure of the order-4 CP transformation, known as CP4 [35,39,40], can be defined by two complex scalar fields ϕ1, ϕ2 transforming as in (2.1), where x̄ denotes spatial inversion of x. The minus sign in the second relation is crucial because its square leads to a Z2 symmetry: hence an order-4 CP symmetry [35]. Taking combinations of these fields, we can recover more familiar transformation properties. For example, we can construct CP4-odd combinations, with an additional sign that can be eliminated by rephasing the ϕi. It is well known that multiplication by i on a complex field S can be represented by the real matrix ε = iσ2 in the real basis (Re S, Im S); the eigenvalues in complex space do not change, i.e., they remain ±i for S, S*. In the same way, in the basis (Re ϕ1, Re ϕ2, Im ϕ1, Im ϕ2), the transformation in (2.1) can be represented by a real 4 × 4 matrix, which also has eigenvalues ±i, each with multiplicity two. The combinations corresponding to eigenvalue +i can be constructed explicitly; their complex conjugates have eigenvalue −i. This "diagonalization" of CP4, however, is only possible for fields that do not carry other quantum numbers, as a separation into real and imaginary parts is necessary. If ϕ1 ∼ ϕ2 carry the same U(1) charge q, CP4 flips this charge as usual CP does. On the other hand, if ϕ1 ∼ q but ϕ2 ∼ −q, then CP4 commutes with this U(1). In fact, the entire SU(2) group acting on (ϕ1, ϕ2) commutes with CP4, as εU*ε† = U for any U ∈ SU(2). If ϕ1, ϕ2 carry no charge, CP4 does not mix the fields in (2.6) and only one of them can be chosen as a minimal component, i.e., a complex scalar S transforming as in (2.7), hence acting as a Z4 transformation on field space. In terms of its real components, S = S1 + iS2, they transform as the real version of (2.1). As the combination (2.4) transforms as a complex field under the usual CP transformation, we can also see that CP4 can be represented faithfully by (2.1), but it can also be represented unfaithfully by the usual CP transformation, which has order two for scalars and for this reason is sometimes denoted CP2 to distinguish it from CP4 [39,40]. Therefore, when we say that a model is CP4 symmetric, it means that at least one set of fields transforms faithfully as (2.1) or (2.7) (or equivalently for fermions), while others may transform under the usual CP2 transformation, which includes the usual CP-even or CP-odd behaviors for singlet scalars. These possibilities are akin to representing Z4 by fields that have charges ±i (faithful) or charges ±1 (unfaithful).
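As a quick cross-check of the real 4 × 4 representation mentioned above, the sketch below (Python with sympy; the explicit matrix is our own reconstruction of the real-basis action, assuming the transformation ϕ1 → ϕ2*, ϕ2 → −ϕ1*) verifies that the representation squares to −1 and has order four, with eigenvalues ±i of multiplicity two:

```python
import sympy as sp

# Real-basis representation of CP4 on (Re phi1, Re phi2, Im phi1, Im phi2),
# assuming phi1 -> phi2*, phi2 -> -phi1* (our reconstruction of eq. (2.1)).
X = sp.Matrix([
    [0, 1, 0, 0],    # Re phi1 -> Re phi2
    [-1, 0, 0, 0],   # Re phi2 -> -Re phi1
    [0, 0, 0, -1],   # Im phi1 -> -Im phi2
    [0, 0, 1, 0],    # Im phi2 -> Im phi1
])

assert X**2 == -sp.eye(4)    # (CP4)^2 generates a Z2: hence an order-4 symmetry
assert X**4 == sp.eye(4)
print(X.eigenvals())          # {I: 2, -I: 2}: eigenvalues +/- i, multiplicity two
```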
Fields with charges ±1 furnish the nontrivial and trivial representations of the Z2 subgroup of Z4. We disregard the possibility of CP symmetries of order higher than four [45]. It is also useful to track the action of (CP4)2 in (2.2), which generates a Z2 symmetry. For the faithful representations of CP4, the action of this Z2 is also faithful, but for the unfaithful representations this Z2 acts trivially, since (CP2)2 is equivalent to the identity transformation for scalars. Spinors We define the usual CP (CP2) transformation on chiral fermion fields ψ = ψL,R as in (2.10), with β = γ0 and C ≡ iγ0γ2 in the Dirac or Weyl representation. In contrast, we define the nonconventional action of CP, i.e., CP4, on a pair of chiral fields ψ1, ψ2 as in (2.11), where cp is the usual CP transformation (2.10). We will often pack the two fields into one doublet ψ = (ψ1, ψ2)T, which transforms as in (2.12), where iε = −σ2 acts in this degenerate space. See refs. [39][40][41] for more discussion of the action of CP4 on fermions. Again, the action of (CP4)2 is nonconventional, as shown in (2.13), while the conventional CP transformation (2.9) results in (2.14). Yukawa sector The Yukawa sector of the model coincides with the SM for charge 2/3 quarks but is modified for charge −1/3 quarks by the addition of a pair of vector-like quarks D1L, D1R, D2L, D2R and one singlet complex scalar S. Being hypercharged, each pair (D1L, D2L) and (D1R, D2R) can be considered a doublet of CP4 and transforms faithfully as in (2.12); we simply denote these pairs as DL and DR. The scalar S also transforms faithfully under CP4 as in (2.7). The rest of the fields of the SM transform under the usual CP2, i.e., as in (2.10) for fermions or through complex conjugation for scalars. Because only DL,R and S transform unconventionally under (CP4)2, the Yukawa interactions for charge −1/3 quarks will be partly secluded into ordinary and heavy quarks, as in (3.1), where qiL and djR are SM quark fields, i, j = 1, 2, 3, a = 1, 2, and Yd and µD are real due to CP4, with µD > 0.¹ In addition, the Yukawa coupling F is a 2 × 3 complex matrix and its barred counterpart F̄ is defined via ε = iσ2, the two-dimensional antisymmetric tensor. Unlike the original Bento-Branco-Parada (BBP) model [33], here we do not need an additional Z2, since this symmetry is already generated by CP4. We note that although the theory is CP conserving, the coefficients F, F̄ are intrinsically complex and cannot be transformed into real coefficients, much like the CP4 symmetric 3HDM potential proposed in ref. [35]. For the latter, intrinsically complex Yukawa couplings are also present if some quarks also transform faithfully under CP4 [40]. The model discussed here is complementary to those in that the parameters with irremovable phases appear exclusively in the Yukawa sector. One may choose F to be real, in which case F̄ is also real and the theory is additionally invariant under CP2 for all fields. We discard this possibility because in this case the scalar potential is not capable of spontaneously breaking this CP2 and the CKM phase cannot be generated; see sections 3.2 and 3.3. The most general reparametrization transformations that keep the CP4 (and CP2) transformations invariant are SU(2) transformations on DaL, DaR and O(3) transformations on djR and qiL. It is clear that these transformations leave the Yukawa interactions (3.1) form invariant. Rephasings of S, except by multiples of i, are forbidden because the potential needs to remain invariant; cf. section 3.2.
Under a transformation of this type, F and F̄ transform in the same way. Hence, we can see from the singular value decomposition of F that in the generic case not all complex phases of F can be removed. ¹The term involving µD can be more generic, i.e., a 2 × 2 matrix obeying ε µD* ε† = µD, but it can always be diagonalized with positive and equal values using appropriate SU(2) reparametrizations acting on DL,R. In any case, D1, D2 are degenerate in mass due to CP4. Scalar potential Apart from the SM Higgs doublet, we consider one complex scalar, a singlet under SU(2)L, transforming as in (2.7) under CP4. The most general potential invariant under CP4 consists of the usual SM scalar potential, a pure-S term whose coefficient λ2 can be made positive by absorbing its phase into a rephasing of S, and, finally, the interaction between φ and S, which occurs only through the Higgs portal. We can see that the phase of S appears only in the λ2 term, and the potential is then minimized when S is real and positive.² Since the potential contains only one phase-sensitive monomial, the manifest canonical CP symmetry cannot be broken spontaneously [46]. The CP4 symmetry, however, will be broken once S acquires a vev. Compared to the original BBP model, this scalar sector has the same number of fields but fewer free parameters, while our Yukawa sector in (3.1) has more fields and more parameters. At this point, we should remark that the symmetry structure of our model is crucially different from BBP-type models because it cannot be obtained by imposing the usual CP together with a Zn symmetry. The BBP model is based on Z2 and CP2, and it cannot forbid S2 terms in the potential (hence θ̄ at one loop), in contrast to our potential in (3.5) and (3.6). The same potential as ours can be obtained by using Z4, but then either the F or the F̄ term would be absent from the Yukawa Lagrangian (3.1). With the additional imposition of CP2, F or F̄ would be real and no CP violation could be generated to account for the CKM phase; cf. section 3.3. Since the physical predictions are different, we can see that CP4 is indeed a genuinely different CP symmetry that cannot be obtained from the usual CP and an additional discrete symmetry. Other examples can be seen in refs. [35,[39][40][41],43]. Let us now show the scalar spectrum. Defining the vevs as in (3.7), with v = 246 GeV the electroweak scale, we can write the minimization equations, which can be solved analytically. We have also defined λ12 ≡ λ1 − λ2 > 0. After shifting the fields by their vevs, we can define (S0, S1, S2) = √2 (Re φ0, Re S, Im S), where S0 corresponds to the SM Higgs direction. In this basis, the mass matrix has an evident block-diagonal structure, which follows from the usual CP conservation of the potential; hence CP4 ensures CP2 for this simple potential, although CP4 is spontaneously broken. As we are going to see, this conservation of CP2 will be crucial for the protection of θ̄ = 0 at one loop. The CP-odd field A = √2 Im S has a mass, while the CP-even fields s, h have masses whose approximate expressions are valid for vS ≫ v. The lighter scalar h corresponds to the 125 GeV Higgs boson discovered at the LHC. The mixing between the CP-even scalars is given in (3.13). Current LHC data constrain this angle to be small, satisfying |sα| ≲ 0.2 [47,48], which implies t2α ≲ 0.426 and then vS ≳ v/0.426 ∼ 600 GeV for order-one quartic couplings.
In fact, moderately suppressed F couplings will be needed for a suppressed θ̄, and since the heavy quarks are constrained to be above the TeV scale we will typically need vS ≳ TeV. Generating the CKM Considering the vevs (3.7), the down-type quark mass matrix is given in (3.14). For definiteness, we work in the basis in which the mass matrix for up-type quarks is diagonal. Note that MDS contains irremovable phases from F and thus CP is spontaneously broken. Nevertheless, MD obeys the Barr criteria [12,49]: a complex CKM matrix can be generated but θ̄ = 0 at tree level.³ We also note that MDS is always nonzero, as F̄ = −F implies F = 0; the same is true for any relation F̄ = e^(iα) F. The usual bidiagonalization, together with the hierarchical structure of the mass matrix, allows an approximate block diagonalization: the first matrix in (3.17) leads to the mass matrices for the SM down-type quarks and the heavy quarks. We can see that Md Md† contains complex phases unsuppressed by v/vS in the second term if µD ≲ MDS, analogously to the original BBP model [33]. These complex phases will lead to a complex CKM matrix VCKM, which diagonalizes Md Md†. At the same time, spontaneous CP4 breaking also leads to a mass-squared splitting of the heavy quarks in (3.15) proportional to vS². In the regime µD ≲ MDS we are interested in, this mass splitting is at least of the order of µD. This regime also means that the mixing matrix UR is not hierarchical and generically contains order-one mixing angles and phases. The presence of the vector-like quarks Da implies the existence of FCNC interactions through Z mediation, which are however suppressed by the ratio between the masses of the SM quarks and the heavy quarks [50]. Other effects, such as electroweak precision observables or deviations of SM couplings, are also suppressed for heavy quarks [51]. Current experimental searches at colliders constrain these heavy quarks to be heavier than the TeV scale [52,53]. 4 Loop corrections to θ̄ As the model implements spontaneous CP violation and satisfies the Barr criteria [12,49], both the contributions coming from QCD and from the electroweak sector to the CP violating parameter θ̄ vanish at tree level. Higher-order finite corrections are calculable, and we will quantify them in the following. We will conclude that the one-loop contribution vanishes and nonzero contributions arise only at two loops. If we denote by mR the generic tree-level quark mass matrix in the basis f̄L fR, it receives corrections at higher order as mR − δmR. This correction, if complex, will lead to a correction to θ̄ of the form given in (4.1). Only the corrections coming from the Yukawa interactions (3.1) lead to a potentially complex contribution at one loop [33]. Using dimensional regularization and MS-bar, we find the one-loop correction (4.2), which involves a loop function that depends on the possibly non-diagonal fermion mass matrix mR mR†. The calculation of the correction (4.2), which is detailed in appendix A, is the same as found in ref. [33] if we ignore the renormalization scale µ. The Yukawa couplings are defined by expanding the interactions (3.1) around the vevs.
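As an aside, the tree-level statement of the Barr criteria quoted above can be checked numerically. The sketch below (Python/numpy; the matrix dimensions follow the model, but the random entries are illustrative, not fitted values) verifies that arg det of the block-triangular mass matrix of eq. (3.14) vanishes even though the light-quark sector acquires complex phases:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative Nelson-Barr texture of eq. (3.14): the upper-right block
# vanishes, Y_d v and mu_D are real, and only M_DS carries complex phases.
Yd_v = rng.normal(size=(3, 3))                                  # real 3x3
MDS = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))    # complex 2x3
muD = np.abs(rng.normal()) * np.eye(2)                          # real, degenerate

MD = np.block([[Yd_v, np.zeros((3, 2))],
               [MDS, muD]])

# Block-triangular form: det M_D = det(Y_d v) det(mu_D) is real, so the
# tree-level contribution arg det M_D to theta_bar vanishes.
print("Im det M_D =", np.linalg.det(MD).imag)   # ~0 up to rounding

# Yet the hermitian combination M_D M_D^dagger has complex entries,
# which feed an irremovable phase into the CKM matrix.
H = MD @ MD.conj().T
print("max |Im (M_D M_D^dagger)| =", np.abs(H.imag).max())
```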
Ignoring the mass matrix of the up-type quarks, which is real, we can focus on the contribution for m_R = M_D. Then the correction can be written as in (4.5), where the index f runs through the charge −1/3 quark mass eigenstates and C is a matrix that transforms as f̄_{iL} f_{jL} in flavour space. The coefficients Ĉ are defined for Yukawa couplings Y^ϕ_R of the mass eigenstates ϕ ∈ {h, s, A}. The symmetry structure is more evident in the initial symmetry basis ρ ∈ {S₀, S₁, S₂}, connected through the mixing (3.12), which we write here as ρ = Σ_ϕ R_{ρϕ} ϕ (4.7), although R contains no mixing between A and the CP-even scalars. The matrix Ĉ_{ϕϕ′} can be written in terms of C_{ρρ′} in the symmetry basis. The Yukawa couplings in the symmetry basis can be easily extracted, and we note that among A_ρ only A_{S₀} is nonzero. Explicit calculation leads to the expressions in (4.13); since all entries involve A_ρ, it is clear that a nonvanishing matrix requires ρ′ = S₀. One can check that the diagonal elements of U_L† C_{ρρ′} U_L are real for (ρ, ρ′) = (S₀, S₀) or (S₁, S₀); see, for example, (4.14), where the hatted matrices denote the diagonalized masses in (3.15). The element C_{S₂S₀} leads to a potentially complex contribution, but the absence of mixing between S₂ and S₀, i.e., R_{S₂ϕ} R_{S₀ϕ} = 0 for all ϕ, makes all contributions to (4.5) vanish, and there is no correction to θ̄ at one-loop. This calculation is exact with respect to the mixing matrix U_L. Therefore, our model predicts a correction to θ̄ only at two loops, a feature that improves over the original BBP model.

In order to estimate the two-loop contribution to θ̄, we notice that the mixing between S₀ and S₂ can be induced at one-loop level, as shown in figure 1, leading to a loop-suppressed mixing angle, where λ_φS is the Higgs portal coupling of S and F here denotes a generic combination of F_aj. The dominant contribution to the h-A self-energy comes from the chirality-flipping part of the heavy-fermion propagators with insertion of m_{Da}. This mixing will induce a contribution to (4.5) coming dominantly from the lower-right block of (4.13c), because the diagonalization matrix (3.17) is approximately block diagonal. We arrive at the expression (4.16), which can be estimated parametrically. The absence of a neutron electric dipole moment constrains θ̄ ≲ 3.0 × 10⁻¹⁰ [36-38], so we just need a moderately small Yukawa coupling of the order F ∼ 0.05 λ_φS^{−1/4}, independently of the scale v_S, which contrasts with the model in [33]. The function I(m₁², m₂²) is the integral in (4.5), a dimensionless, slowly varying function which is of order one for a wide range of values and can be written as B(m₁², m₂², m₁²) in terms of the B function of Passarino and Veltman [54]. Explicit forms and asymptotic values are shown in appendix C. Finally, we can see that the expression in (4.16) vanishes if the D_a are degenerate, because Tr[(F − F̃)(F + F̃)†] = 0. In fact, the mass splitting of D_a only arises as a consequence of the spontaneous breaking of CP4 through M_DS. So, for a small mass splitting, δθ̄ is proportional to this mass splitting.

Considering other contributions, a few comments are in order:

• The key property in our model that guarantees θ̄ = 0 at one-loop is the automatic conservation at tree-level of usual CP, S₂ → −S₂, in the scalar potential (3.3) as a consequence of CP4. This means that the mixing between CP-even and CP-odd scalars, the S₁-S₂ mixing, arises only at one-loop through the graph in figure 1.
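As a numerical sanity check of the quoted coupling bound (a sketch under the assumption, our own, that the two-loop estimate has the parametric form δθ̄ ∼ λ_φS F⁴/(16π²)², with the order-one loop function I set to 1):

```python
import math

theta_bar_max = 3.0e-10      # nEDM bound on theta-bar
loop = 16 * math.pi**2       # one-loop suppression factor

# Assumed parametric form: delta theta-bar ~ lam_phiS * F**4 / loop**2
# => F <~ (theta_bar_max * loop**2)**0.25 * lam_phiS**(-1/4)
F_coeff = (theta_bar_max * loop**2) ** 0.25
print(f"F <~ {F_coeff:.3f} * lam_phiS**(-1/4)")   # ~0.052, matching F ~ 0.05
```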
• The symmetry S₂ → −S₂ is not spontaneously broken at tree-level, as ⟨S₂⟩ = 0, and it is only broken by F − F̃ in the Yukawa couplings.

• From rough estimates of representative graphs, a net complex contribution requires the interference between the Yukawa coupling of S₁ and that of S₂, and nonzero vevs for both. Thus ⟨S₂⟩ = 0 leads to the vanishing of the estimates of ref. [34] for θ̄ from (a) the one-loop threshold effect on the low-energy Yukawa couplings of d-type quarks (essentially the BBP contribution that we calculated) and (b) the two-loop complex contribution to the heavy-quark mass matrix (µ_D) from the dead-duck type diagram.

• Two-loop contributions to θ̄ unsuppressed by F, F̃ (but suppressed by SM Yukawas) will likely arise by using an effective theory at an intermediate energy scale between the electroweak scale and that of the heavy quarks, when the other degrees of freedom are much heavier (by, e.g., suppressing F₁ so that v_S ≪ M_{Da}). In the mass basis for these heavy quarks, all CP violation comes from the Yukawa interactions between light SM d-type quarks and the Higgs. Then corrections to θ̄ arise only from two-loop corrections to these complex Yukawa couplings [55].

Conclusions

A very simple model is presented where CP violation in Nature is spontaneous in origin, and then all CP-violation effects are calculable. While easily accommodating the observed CP phase residing in the CKM mechanism, the θ̄ parameter of the QCD vacuum structure vanishes not only at tree-level (the Nelson-Barr mechanism) but also at one-loop level, due to the imposition of a nonconventional CP symmetry of order 4, also known as CP4, on the fermion and scalar sectors beyond the SM. Thus the strong CP problem is solved with only moderate Yukawa couplings coupling the mediator heavy quarks, the SM quarks and the heavy scalars. The field content of the SM is enlarged by adding just two vector-like d-type quarks and one complex singlet scalar. No other symmetry is necessary besides the nonconventional CP. Therefore, this model improves on the minimal model of Bento-Branco-Parada [33] in two aspects: (a) there is no unrelated Z₂ symmetry and (b) corrections to θ̄ arise only at two loops.⁴

A Self-energy

Here we provide further details on the calculation of the fermion self-energy at one-loop level. Following similar arguments to those presented in [33], only the contribution due to the exchange of a scalar field will be relevant to θ̄. Therefore, we will only need Yukawa couplings, which can be generally defined for real scalar fields ϕ, where Y^ϕ_ij may contain γ₅ matrices in Dirac space. The coupling is hermitian in the sense that γ⁰ (Y^ϕ)† γ⁰ = Y^ϕ. We are mainly interested in the one-loop contribution to the f̄_i f_j two-point function. Assuming the ϕ are mass eigenstates, the amputated diagram, which also contains an internal fermion f_k, can be written down directly, where a sum over the scalars ϕ is implicitly assumed. Although the contribution to θ̄ is finite, a regularization is needed in order to deal with intermediate steps of the calculation. In this work, we adopt the dimensional regularization scheme, which yields (A.3). The calculation can be performed in a generic fermion mass basis as well, in which we can decompose the couplings and masses into chiral pieces, where R = (1 + γ₅)/2 and L = (1 − γ₅)/2. Notice that the hermiticity condition implies Y^ϕ_L = Y^ϕ†_R, and m_L = m_R†.
Therefore, the self-energy can be expressed as

iΣ_ij(p̸) = i[ p̸ R Σ^R_ij(p²) + p̸ L Σ^L_ij(p²) + Σ^m_ij(p²) ],  (A.5)

where we defined the corresponding scalar functions. As it stands, the self-energy still contains a divergent piece. We show in appendix B that this term will eventually drop out of the calculation of δθ̄; however, for definiteness, one can adopt a subtraction scheme such as MS-bar and explicitly remove the ε⁻¹ term. Finally, regarding δθ̄, the relevant quantity is the radiative correction to the mass matrix, namely m_R − δm_R. To obtain it, one needs to find the position of the poles of the corrected fermion propagator. Using the chiral decomposition and defining

Σ_eff(p²) = ½ m [L Σ^L(p²) + R Σ^R(p²)] + ½ [R Σ^L(p²) + L Σ^R(p²)] m + Σ^m(p²),  (A.8)

one obtains, to first order in the corrections, the shifted pole condition. Therefore, the position of the poles can be found with knowledge only of Σ_eff(p²), and the end result is (A.10), where we consider only the R projection of eq. (A.8), the sum over the scalars is explicitly introduced, and p² = m_R m_R† at leading order. Finally, since both Σ^L and Σ^R are hermitian, only the last term of eq. (A.10) contributes to δθ̄.

B One-loop correction to θ̄ is finite

Here we show that the one-loop correction in (4.2) is finite by explicitly retaining the 1/ε terms of dimensional regularization. The correction is given by (4.2), and the coefficient of (16π²)⁻¹ in δm_R coming from generic Yukawa couplings can be written out explicitly; its 1/ε pieces cancel in the combination entering δθ̄, as claimed.
6,602.4
2019-03-01T00:00:00.000
[ "Physics" ]
Interactive Skin Display with Epidermal Stimuli Electrode

Abstract. In addition to the demand for stimuli-responsive sensors that can detect various vital signals in epidermal skin, the development of electronic skin displays that quantitatively detect and visualize various epidermal stimuli, such as the temperature, sweat gland activity, and conductance, simultaneously is of significant interest for emerging human-interactive electronics used in health monitoring. Herein, a novel interactive skin display with epidermal stimuli electrode (ISDEE) allowing for the simultaneous sensing and display of multiple epidermal stimuli on a single device is presented. It is based on a simple two-layer architecture on a topographically patterned elastomeric polymer composite with light-emitting inorganic phosphors, upon which two electrodes are placed with a certain parallel gap. The ISDEE is directly mounted on human skin, which by itself serves as a field-responsive floating electrode of the display operating under an alternating current (AC). The AC field exerted on the epidermal skin layer depends on the conductance of the skin, which can be modulated based on a variety of physiological skin factors, such as the temperature, sweat gland activity, and pressure. Conductance-dependent field-induced electroluminescence is achieved, giving rise to an on-hand sensing display platform where a variety of human information can be directly sensed and visualized.

Electronic skin (E-skin) is an emerging human-machine interface that can play a key role in the numerous biomedical applications of human-activity monitoring and personal healthcare. [1-4] Besides the efforts toward high-performance sensors capable of detecting individual epidermal stimuli such as pressure, strain, and shear, as well as body temperature and sweat, great emphasis has been placed on developing single-platform multifunctional epidermal sensors in which diverse stimuli are detected independently. Most of them were, however, built by combining individual sensors on a common test bed. [5-10] Furthermore, visualization of the epidermal stimuli while sensing them can further extend the usefulness of an E-skin by offering novel functions, including not only shape and position recognition, but also dynamic visual monitoring of the stimuli. Numerous pixelated and nonpixelated interactive displays have been demonstrated, based on a variety of optical elements including light-emitting diodes (LEDs), [11-15] thermochromic, [16] electrochromic, [17,18] and triboelectrification devices, [19-21] and alternating current (AC)-driven electroluminescent (ACEL) devices. [22-34] Most of the interactive displays were again developed by physically combining stimuli sensors with the aforementioned display elements. Even the ACEL platforms were limited to the simultaneous detection and visualization of a certain individual stimulus of either pressure or temperature. [35-38] The development of an interactive sensing display capable of detecting and visualizing multiple epidermal stimuli in a single device, rather than ones combining individual sensors and displays on a common platform, is therefore in great demand.
We envision that a multifunctional physiological skin display can be accomplished when the epidermal layer itself acts as an interactive display, by replacing one of the electronic components of the device with human skin. Herein, we demonstrate a simple but robust interactive skin display with epidermal-stimuli electrode (ISDEE) capable of simultaneous sensing and display of multiple epidermal stimuli on a single device. Our ISDEE with a simple two-layer architecture consists of a topographically patterned elastomeric polymer composite with light-emitting inorganic phosphors, upon which two electrodes are placed with a certain parallel gap. When an ISDEE is directly mounted on the human skin, the skin surface by itself serves as a field-responsive floating electrode of the display working under an AC. The AC field exerted on the epidermal skin layer depends on the conductance of the skin, which can be modulated as a function of a variety of physiological skin factors, such as the temperature, sweat gland activity, and pressure. Conductance-dependent field-induced electroluminescence is achieved, giving rise to an on-hand sensing display, in which we are able to directly sense and visualize multiple types of human information including the temperature, sweat gland activity, and pressure, as well as the fingerprint. Our ISDEE was conveniently developed by fabricating an elastomeric poly(dimethyl siloxane) (PDMS) composite with periodic arrays of topological micropyramids, in which light-emitting inorganic ZnS:Cu microparticles are embedded with an alumina layer for their surface passivation, followed by the deposition of two parallel PEDOT:PSS electrodes on the composite, as shown in Figure 1a (Figure S1, Supporting Information). A 40 µm-thick ZnS:Cu/PDMS composite with topological pyramidal arrays was prepared by spin-coating a mixture on a micropatterned silicon mold, followed by thermal crosslinking of the PDMS. The replicated PDMS pyramids were successfully fabricated with a base area and height of 5 × 5 µm² and 5 µm, respectively, as evidenced in the scanning electron microscope (SEM) results shown in Figure 1b.
A cross-sectional view of the composite clearly shows that ZnS:Cu particles of ≈30 µm in diameter are embedded in the PDMS matrix (Figure 1c and Figure S2, Supporting Information). We extensively examined the mechanical properties of a neat PDMS and the PDMS composites with ZnS:Cu particles (Figure S3, Supporting Information). As expected, the rubbery, elastic PDMS became stiffer when ZnS:Cu particles were added, as shown in the stress-strain curves. The elastic moduli of the neat PDMS and the composites, determined from the slopes of the stress-strain curves, increased with the amount of ZnS:Cu particles in the composites. In addition, we confirmed that the strain-at-break values of the samples decreased with increasing ZnS:Cu loading. By conformally placing the composite with two parallel electrodes on human skin, the skin serves as a floating electrode, called an epidermal-stimuli electrode (EE), which consists of epidermal and dermal layers sensitive to pressure, temperature, and sweat stimuli from either the external environment or physiological signals of the internal body, as schematically illustrated in Figure 1a. Our ISDEE is ready to detect and visualize a variety of human-related stimuli when sealed with an adhesive tape, as shown in the photographs in Figure 1d and e. The AC field applied between the two parallel electrodes has little effect on the composite owing to the in-plane electric field. When a composite with two parallel electrodes is placed on epidermal skin, which is naturally conductive, the skin serves as a floating electrode, allowing for a sharing of the electric field and giving rise to a vertically driven electric field developed in the two overlapped areas under the AC field between the two electrodes. Under these circumstances, our device can detect either a change in capacitance or impedance depending upon a variety of vital skin information. The device scheme and electronic circuit diagrams in Figure 1f and h illustrate the equivalent circuit model of an ISDEE upon unloading and loading, in which one fixed capacitor (C_air) indicates the initial capacitance between the two in-plane PEDOT:PSS electrodes, whereas the variable one (C_s) and the epidermal resistor (Z_E) describe the active capacitance with the area of the electronic-epidermal contact (Figure S4, Supporting Information). The significant benefit of our device lies in the fact that all epidermal stimuli are visualized in EL arising from solid-state cathode luminescence of ZnS:Cu particles embedded in the PDMS, in addition to electrical detection. A change in either the capacitance or the impedance with epidermal stimuli can vary the electric field upon AC operation, giving rise to a change in EL intensity. The generation of an effective electrical field using the epidermal stimuli electrode of our ISDEE was confirmed through an electrical field calculation based on a finite element analysis (FEA), the results of which are shown in Figure 1g and i. The calculation shows that the electrical field built between the PEDOT:PSS electrodes before contact with the skin was significantly spread out and concentrated toward the two overlapped skin areas when a composite was conformally placed on the skin. Prior to monitoring the pressure exerted on the human skin using our ISDEE, we examined the pressure-sensing performance of a parallel-type AC device with a floating indium tin oxide (ITO) electrode instead of skin, as shown schematically in Figure 2a.
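To make the equivalent-circuit description concrete, below is a minimal sketch of the lumped model under AC drive: the fixed in-plane capacitor C_air in parallel with the series path through the two skin/electrode overlap capacitors C_s and the skin impedance Z_E. All component values are illustrative placeholders, not measured device parameters.

```python
import math

def device_impedance(freq_hz, c_air, c_s, z_skin):
    """Complex impedance of the lumped ISDEE model at a given frequency:
    C_air in parallel with the series chain (C_s, Z_E, C_s)."""
    w = 2 * math.pi * freq_hz
    z_cair = 1 / (1j * w * c_air)          # fixed in-plane capacitor
    z_cs = 1 / (1j * w * c_s)              # one skin/electrode overlap capacitor
    z_series = z_cs + z_skin + z_cs        # path through the skin floating electrode
    return (z_cair * z_series) / (z_cair + z_series)

# Illustrative values only: sweat or a higher temperature lowers the skin
# impedance, which lowers |Z| and (in the device) raises the EL intensity.
for z_e in (200e3, 50e3):                  # ohms; "dry" vs. "sweaty" skin (assumed)
    z = device_impedance(100e3, c_air=0.5e-12, c_s=20e-12, z_skin=z_e)
    print(f"Z_E = {z_e/1e3:6.0f} kOhm  ->  |Z| = {abs(z)/1e3:8.1f} kOhm")
```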
The dielectric constants of the ZnS:Cu/PDMS composites varied little with frequency, apart from a slight decrease in the high-frequency regime, which is much higher than the typical operation frequency of a parallel AC device, as shown in Figure 2b. The presence of ZnS:Cu particles in the composites was also advantageous for the sensitive detection of pressure in the capacitance, owing to the enhanced dielectric properties of a composite with semiconducting particles (Figure S5a and b, Supporting Information). The advantage of our pressure sensor operated with a parallel AC field lies in its extremely low initial capacitance, because this capacitance is the value of the in-plane capacitor PEDOT:PSS/air/PEDOT:PSS right before contact, as schematically shown in Figure 1f (Figure S6, Supporting Information). When a composite with two parallel electrodes is contacted on a floating electrode, the capacitance is field-driven and the second capacitance (C_s) becomes dominant (Figure 1h), as determined using a vertical capacitor of the floating electrode/composite with a pyramidal air gap/PEDOT:PSS. When further pressurized, the capacitance increases owing to both a decrease in thickness and an increase in the dielectric constant of the composite arising from the reduction of the pyramidal air gap (Figure S7, Supporting Information). As a consequence, the pressure sensitivity of the device was much higher than that of a flat composite capacitor, as shown in Figure 2c (Figure S8, Supporting Information). The highest sensitivity of ≈12 kPa⁻¹ was achieved in our parallel-type AC device, much greater than the values obtained from sensors with similar topological structures. [39-41] Fast capacitance response and relaxation times (≈100 ms) upon pressure were obtained (Figure S9, Supporting Information). In addition, the device exhibited an excellent load-unload endurance of over 5000 cycles under a pressure of 2.5 kPa, as shown in Figure 2d. Furthermore, we examined the pressure-sensing properties of an ISDEE containing a ZnS:Cu/PDMS (3/1) composite upon repetitive bending events with a bending radius of 10 mm. The performance of the ISDEE was rarely altered after 1000 bending cycles (Figure S10, Supporting Information). The pressure exerted on a parallel-type sensor was successfully visualized using field-induced ACEL while detecting the pressure in capacitance mode. EL spectra of a parallel-type device containing a blue-emission layer were obtained as a function of pressure; the results in Figure 2e clearly show that the intensity of the light emission increased with the applied pressure (Figure S11, Supporting Information). The inset plot of Figure 2e shows that the EL intensity increased linearly with pressure. The EL sensitivity of the ISDEE calculated from the slope of the plot was ≈0.0495 kPa⁻¹. The EL performance of the sensor was also examined as a function of the amount of ZnS:Cu microparticles as well as the frequency [31-34] (Figure S12, Supporting Information). The bottom electrode was readily replaced with an epidermal stimuli electrode by patching a bilayered topological composite with two parallel PEDOT:PSS electrodes placed on human skin, giving rise to an ISDEE, as shown in Figure 2f. First, finger-bending motion was monitored using an ISDEE attached to a finger knuckle, as shown in the photograph of Figure 2f.
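A short sketch of how a sensitivity of this kind could be extracted from calibration data, using the definition S_c = δ(ΔC/C₀)/δp adopted by the authors (see the Figure 2 caption below); the pressure and capacitance arrays are made-up placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical calibration data (NOT from the paper):
p = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])       # pressure in kPa
c = np.array([0.10, 0.70, 1.30, 2.40, 5.00, 8.00])  # capacitance in pF

c0 = c[0]                                # capacitance at zero pressure
rel_change = (c - c0) / c0               # delta C / C0

# S_c = d(deltaC/C0)/dp, evaluated pointwise by finite differences
s_c = np.gradient(rel_change, p)
for pi, si in zip(p, s_c):
    print(f"p = {pi:5.1f} kPa  ->  S_c = {si:6.2f} kPa^-1")
```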
In addition, the compression exerted on the knuckle upon bending was monitored in terms of the capacitance in real time, and the location of the maximum compression was at the same time visualized in the EL, as shown in Figure 2g. Both the capacitance and EL were enhanced with the bending angle, proportional to the compression (Figure S13 and Video S1, Supporting Information). Our ISDEE also allowed for monitoring the finger touch motion in terms of both the capacitance and EL intensity (Figure S14, Supporting Information). Furthermore, our ISDEE enabled us to detect human breath as well as a jugular venous pulse (JVP) signal for respiration (Figure 2h and Figure S15, Supporting Information). It should, however, be noted that, in spite of the capability of visualizing the pressure in EL over a broad range of pressure levels, the respiration modes involving delicate changes in pressure were difficult to visualize in EL, mainly owing to the limit of the EL sensitivity of our ISDEE. The operation voltage we used for an ISDEE might appear high enough to cause some damage to the skin. The electric field exerted on an ISDEE with a composite film of ≈30 µm in thickness was, however, 3.3 V µm⁻¹, which is rarely harmful. We also monitored the current level of an ISDEE upon multiple contact events at an AC voltage and frequency of 100 V and 100 kHz, respectively (Figure S16, Supporting Information). A current of ≈2.4 mA was detected upon every contact event, corresponding to a harmless current level. Because our ISDEE contains an epidermal stimuli electrode whose conductance can be varied with both body temperature and sweat, this information can be monitored in terms of the impedance change as long as the AC field exerted on the ISDEE is conductance-dependent. To prove this, we examined how the sensing performance of a parallel-type device is conductance-dependent with a PEDOT:PSS floating electrode, whose conductance varies with the amount of dimethyl sulfoxide (DMSO) mixed with PEDOT:PSS. [35]

Figure 2. Properties of pressure sensing and visualization of a parallel-type AC device. a) Schematic of a parallel-type AC device with a floating ITO electrode. b) AC frequency-dependent dielectric constant of ZnS:Cu/PDMS composites with different ratios. c) A plot of the change in capacitance and sensitivity as a function of pressure for a parallel-type AC device with a ZnS:Cu/PDMS composite at a weight ratio of 3:1. The sensitivity is defined as S_c = δ(ΔC/C₀)/δp, where p is the applied pressure, and C and C₀ are the capacitances with and without the applied pressure, respectively. d) Load-unload cycle endurance in capacitance changes over 5000 cycles at Δ2.5 kPa. e) EL intensity of the device under different applied pressures of 1 to 35 kPa. The inset shows a plot of the integrated EL luminescence as a function of pressure. The EL sensitivity is defined as S_EL = δ(ΔL/L₀)/δp, where p is the applied pressure, and L and L₀ are the integrated EL intensities with and without applied pressure, respectively. f) Photographs of an ISDEE mounted on a finger with EL upon finger motion as a function of the bending angle. g) Time-dependent capacitance changes of the ISDEE attached to a finger for human motion sensing. h) Measurement of JVP patterns by the ISDEE attached to the middle of the neck with a packing VHB film. The device was operated at an AC voltage and frequency of 100 V and 100 kHz, respectively. The ISDEE also has a ZnS:Cu/PDMS composite (3:1 in weight ratio).
Impedance (Z) changes of parallel-type AC devices with the floating PEDOT:PSS electrodes containing different amounts of DMSO were examined as a function of the AC frequency, and the results are shown in Figure 3a. As expected, the impedance decreased with the conductance of the floating electrode as a function of the DMSO concentration, representatively at a typical operation frequency of 100 kHz, as shown in Figure 3b. The conductance-dependent ACEL performance of the devices was also observed using PEDOT:PSS electrodes containing different amounts of DMSO, as shown in Figure 3c (also see Figure S17, Supporting Information). The detection and visualization of the change in conductance of a floating electrode in terms of the impedance and EL, respectively, allowed us to utilize our ISDEE as a sensing display for both body temperature and sweat gland activity; the skin impedance under an AC field is sensitive to the reorientation of lipids in the sweat glands underneath the epidermis, depending upon the salt concentration in sweat. As a result, the skin impedance decreases with both body sweat and temperature. For the demonstration, an ISDEE was fabricated using a finger epidermal stimuli electrode, as shown in the photographs of Figure 3d.

Figure 3. The ISDEE containing a topologically patterned ZnS:Cu/PDMS composite at a weight ratio of 3:1 was operated at an AC voltage and frequency of 100 V and 100 kHz, respectively. e) Time-dependent variation of the change in impedance and temperature immediately after release of the grasped cup on an ISDEE with temperature detection. The sensitivity is defined as S_T = δ(ΔZ/Z₀)/δT, where T is the applied temperature, and Z and Z₀ are the impedances with and without the applied temperature, respectively. f) Variation of the change in impedance as a function of sweat from human skin with a relative sweat concentration of 0-160 × 10⁻³ M. The sensitivity is defined as S_S = δ(ΔZ/Z₀)/δC, where C is the [Na⁺] concentration, and Z and Z₀ are the impedances at different concentrations. g) Photographs of EL intensity images with surface temperatures of 25 °C (left) and 100 °C (right). h) Photographs of EL intensity images with sweat concentrations of 10 × 10⁻³ M (left) and 160 × 10⁻³ M (right).

When a cup with hot water was grasped, the heat of the water was transferred to the skin electrode, giving rise to a change in impedance as well as EL upon the AC field. The emissions became brighter with the surface temperature, as shown in Figure 3g. A change in sweat-dependent impedance was also observed in our ISDEE, the results of which are shown in Figure 3f. When the ISDEE on a finger was in contact with skin under a high state of sweating ([Na⁺] of 160 × 10⁻³ M), the impedance of the device was abruptly reduced, and the original value was recovered when untouched. Multiple sweat sensing was accomplished with reliability. In addition, the concentration of sweat can be clearly visualized using EL, as shown in Figure 3h. We examined the effect of the relative humidity on the impedance of an ISDEE (Figure S18, Supporting Information). When the impedance of an ISDEE was monitored with the relative humidity ranging from 30% to 80% at room temperature, the impedance rarely varied with the relative humidity. The results imply that the epidermal electrode in an ISDEE, whose own humidity is as high as ≈90%, was hardly affected by the relative humidity of the surrounding environment.
It should be noted that, in spite of the multifunctional sensing and display of pressure, temperature, and sweat with a single ISDEE, it was not trivial to completely avoid coupling between the sensing modes. In fact, while the capacitance sensing of pressure was rarely affected by either temperature or sweat, the impedance change utilized for either temperature or sweat sensing was influenced by pressure (Figure S19, Supporting Information). Materials and device designs for a pressure-independent impedance change are under investigation. Furthermore, our ISDEE offers a useful way to achieve direct imaging of 2D conductive biological information when an epidermal stimuli electrode contains a 2D pattern, such as a naturally conductive fingerprint, as schematically shown in Figure 4a. Our ISDEE for visualizing fingerprint patterns was mounted on a finger, as shown in the photograph in Figure 4b. When the finger with the ISDEE gently touched any transparent surface, a distinct fingerprint instantly appeared with high resolution. It is also apparent that efficient quantitative pressure sensing was achieved as a function of the fingertip pressure in terms of capacitance (Figure S20 and Video S2, Supporting Information). Notably, the topographic pyramidal arrays of the composite layer are beneficial for high-resolution imaging of a fingerprint. Compared with the fingerprint images obtained from an ISDEE with a flat composite layer, the image from the one using the pyramidal arrays shows very discrete valleys and ridges in the fingerprint. In particular, the image was barely resolved under high pressure for the ISDEE with a flat composite layer, as shown in Figure 4c. Fingerprint imaging using our ISDEE is advantageous compared with a conventional fingerprint reader because the fingerprint can be shown on any transparent surface, as shown in the series of photographs in Figure 4d. In principle, fingerprint identification can be successfully achieved when an ISDEE is conveniently attached to a finger touching anywhere on a transparent device. In summary, this study presented a novel single device, the ISDEE, capable of visualizing a variety of body information such as touch, body temperature, and sweat with the simultaneous sensing of either the capacitance or the impedance. The ISDEE was developed using a topographically surface-structured elastomeric PDMS composite containing light-emitting ZnS:Cu particles, with two parallel PEDOT:PSS electrodes, attached on a normal epidermal skin surface. The utilization of an epidermal skin layer as one of the electrodes not only simplifies the device architecture (a bilayered structure) but also enables sensitive sensing and display. First, the device visualized tactile sensations and finger-bending motion in EL under an AC field while detecting these stimuli based on a change in capacitance. Furthermore, both body temperature and sweat, which are sensitive to the conductance of the epidermal-stimuli electrode, were monitored based on changes in AC impedance together with a direct visualization in EL. A single ISDEE exhibited multiple sensing performances of 0.49% °C⁻¹, 0.19% mM⁻¹, and 11.63 kPa⁻¹ for sensing temperature, sweat gland activity, and pressure, respectively, as well as a direct visualization of these epidermal stimuli. The ISDEE also allows for direct imaging of fingerprint patterns with a high resolution while achieving sensitive capacitance detection.
Our extremely simple but multimode, multifunctional skin display platform with high sensing performance (Table S1, Supporting Information) can be extended to numerous emerging biomedical applications of human-activity and health-monitoring systems.

Experimental Section

Materials: Green ZnS:Cu microparticles (D512C) were purchased from Shanghai KPT Co. PDMS and crosslinkers were purchased from Dow Corning. The PEDOT:PSS electrode (Clevios PH1000) was modified through mixing with 5 wt% DMSO and 1 wt% Zonyl surfactant (FS-300 fluorosurfactant from Aldrich) with respect to PEDOT:PSS, which promoted wetting on the ZnS:Cu/PDMS composite active layer. The 3M VHB tape (3M, VHB 4905) was used as received. Trichloro(1H,1H,2H,2H-perfluorooctyl)silane (FOTS) was purchased from Sigma-Aldrich. All other materials were purchased from Aldrich and also used as received.

Fabrication of an ISDEE: An interactive sensing display was developed using a parallel-type AC device architecture, as illustrated in Figure 1a. First, a micropatterned pyramidal relief mold was fabricated on a 4-inch silicon wafer (with 300 nm-thick thermally grown silicon oxide) using photolithography, followed by chemical etching. The arrays of the engraved pyramids were developed under P4mm symmetry, with the base area and height of each pyramid being 5 × 5 µm² and 5 µm, respectively. The micropatterned Si mold was treated by O₂ plasma at 40 W for 3 min before deposition of a FOTS solution, which facilitated the removal of the ZnS:Cu/PDMS composite layer from the Si mold. The ZnS:Cu/PDMS composite was prepared by mixing the ZnS:Cu powder with PDMS liquid and a curing agent (Sylgard 184) at a weight ratio of 10:1. The liquid mixture was spin-coated on the micropatterned Si substrates at 2000 rpm for 120 s and subsequently annealed at 80 °C for 12 h, followed by UV treatment for 20 min. High-conductivity PEDOT:PSS was modified by mixing it with 5 wt% DMSO and 1 wt% Zonyl surfactant with respect to PEDOT:PSS. A PEDOT:PSS layer of the modified solution was then spin-coated onto the composite film. The ≈200 nm-thick PEDOT:PSS film was subsequently annealed at 100 °C for 15 min in an ambient atmosphere. In addition, the conductive PEDOT:PSS layer was patterned into separate electrodes with an air gap using reactive-ion etching (RIE). Both the composite and PEDOT:PSS layers were peeled off from the Si mold and mechanically transferred onto a 3M VHB film.

ISDEE characterization: The cross-sectional morphology and thickness of the ZnS:Cu/PDMS composites were characterized using a field-emission scanning electron microscope (FESEM) (JEOL-7800F), as shown in Figure 1c. The capacitance and impedance were measured using a precision inductance, capacitance, and resistance (LCR) meter (Agilent E4980A). The frequency was varied from 100 Hz to 300 kHz. For measurements of the changes in capacitance and impedance as a function of pressure and conductance, respectively, a computer-controlled universal manipulator (Teraleader) was set up along with the LCR meter. The vertical spatial and force resolutions of the equipment are 1 µm and 10 mN, respectively. The luminance and EL spectra of the devices were obtained using a spectroradiometer (Konica CS 2000). A function generator (Agilent 33220A) connected to a high-voltage amplifier (TREK 623B) was used for EL driving of the ISDEE.
The current-voltage-luminance (I-V-L) characteristics of the devices were measured using a multichannel precision AC power analyzer (Zimmer Electronics Systems LMG 500). All measurements were conducted in a dark box under ambient conditions in air. Informed consent was obtained from the volunteer, who is one of the authors.

Supporting Information

Supporting Information is available from the Wiley Online Library or from the author.
6,050.8
2019-04-26T00:00:00.000
[ "Physics" ]
Optimized Minimum Spanning Tree for Secure Routing in MANET

Secure group communication transfers messages from one member to another confidentially. Key management for secure communication in wireless networks is a primitive based on cryptographic techniques. Inter-cluster routing was used to improve wireless network security. In the new scheme, a Minimum Spanning Tree (MST) and the GBest-BAT algorithm are computed to identify Cluster Heads (CH), with the concept of backup nodes being introduced for effective key management. This study proposes MST formation for inter-cluster routing, optimized with the GBest-BAT algorithm.

INTRODUCTION

Mobile Ad-hoc Networks (MANET) (Dalal et al., 2012a) are structureless, dynamic networks of mobile nodes without physical links. A MANET has many mobile wireless nodes, and communication is carried out without any centralized control. A MANET is a self-organized, self-configurable network without infrastructure, where nodes move arbitrarily. MANET nodes are mobile, due to which the topology changes dynamically (Singh and Rathore, 2013). A MANET is at risk of security attacks due to its topology, so a secure key management scheme is a prime need for MANETs.

Security is critical for ad hoc networks and is a largely unexplored area. As nodes use an open, shared radio medium in an insecure environment, they are prone to malicious attacks like Denial of Service (DoS). The lack of centralized network management or a certification authority means that the dynamically changing wireless structure is vulnerable to infiltration, eavesdropping and interference. Security is considered a major "roadblock" in the commercial application of ad hoc network technology (Jain et al., 2005). Conventional data protection methods based on cryptography face the task of key distribution and refreshing. Accordingly, research on security has concentrated on secure data forwarding. But security risks are related to the peculiar features of ad hoc networks, the most serious being the risk of a node being seized and compromised. Such a node would have access to the network's structural information and relayed data, and it can send false routing information, which can quickly paralyze the network. A current approach to security issues is building a self-organized public-key infrastructure for ad hoc network cryptography. Key exchange, however, raises scalability issues.

MANET security requirements are (Djenouri et al., 2005):

Availability: Ensuring that desired network services are available when expected, despite attacks. Systems that ensure availability combat DoS and energy starvation attacks, to be seen later.

Authenticity: Ensuring genuine inter-node communication. It ensures that a malicious node cannot act as a trusted node.

Data confidentiality: A core security primitive for ad hoc networks, ensuring that a given message can be understood by the recipient(s) only. Data confidentiality is enabled through cryptography.

Integrity: Denotes data authenticity when sent from one node to another, i.e., ensuring that a message from node A to node B is not modified by a malicious node C in transmission.

Non-repudiation: Ensures that the message origin is legitimate, i.e., when a node receives a false message, non-repudiation allows the receiver to accuse the sender of the false message and helps other nodes learn about it.
Key management manages cryptographic keys in a cryptosystem, including handling the generation, storage, exchange, use and replacement of keys. It also incorporates cryptographic protocol design, key servers, user procedures and relevant protocols (Singh and Rathore, 2013). In MANETs, key management is classified into two kinds. The first is based on a centralized or distributed Trusted Third Party (TTP) responsible for renewing, issuing, revoking and providing keying material to participating nodes, in situations where the key management process is done with threshold cryptography (Rafsanjani and Shojaiemehr, 2012). The second kind comprises self-organized key management schemes that allow nodes to generate their own keying material and issue public-key certificates to other network nodes based on their knowledge. Nodes store and distribute the certificates.

MANET key management schemes are classified into Symmetric Key Cryptography, Asymmetric Key Cryptography and Group Key Management (Xiong and Gong, 2011). Symmetric Key Cryptography applied to MANETs is based on keys deployed in advance, including a single key used by nodes; a node shares a single key with one or many nodes, and a deployed node possesses the corresponding key. Such schemes are divided into deterministic key management schemes and stochastic key management schemes.

To ensure MANET security, different key management schemes are used. Using and managing keys for security is crucial in MANETs due to their energy-constrained operations, limited security, variable-capacity links and dynamic topology. MANET speed depends on the application; for example, in commercial applications (short-range networks) speed is high, whereas in military applications (long-range networks) speed is low, i.e., speed is inversely proportional to network range. MANETs have special features: the network can work as a standalone intranet and can also be connected to the larger internet. It can cover an area bigger than a transmission range and is quickly deployable due to the use of internal routing (Dalal et al., 2012b).

Clustering divides a network into different virtual groups, based on rules to discriminate nodes allocated to different sub-networks. The goal is to achieve scalability in large networks and under high mobility (Anupama and Sathyanarayana, 2011). Cluster-based routing solves node heterogeneity and limits the routing information propagated inside a network. It increases route lifetime and decreases routing control overhead (Bakht, 2011; Narayanan et al., 2013). There are three types of nodes: cluster heads, cluster members and cluster gateways. Cluster Heads (CHs) coordinate the nodes in their clusters (intra-cluster communication) and also communicate with other cluster heads (inter-cluster communication). Cluster Members (CMs) are ordinary nodes that transmit information to their cluster heads, which aggregate the received information and forward it to a sink. Cluster gateways are non-cluster-head nodes with inter-cluster links that contact neighboring clusters to forward information.
The Tree-Based Multi-Channel Protocol (TMCP) is a greedy, tree-based, multi-channel protocol for data collection applications. It partitions a network into multiple sub-trees and reduces intra-tree interference by assigning different channels to nodes on different branches, starting from the top of the tree down to the bottom; aggregated data collection is then scheduled according to TMCP. Here, nodes on the leftmost branch are assigned frequency F1, the second branch frequency F2 and the last branch frequency F3. After the channel assignments, time slots are assigned to nodes with the BFS Time Slot Assignment algorithm. TMCP's advantage is that it is designed to support convergecast traffic and needs no channel switching. But contention inside branches is not resolved, as all nodes on the same branch communicate on the same channel. This study proposes GBest with BAT to optimize inter-cluster routing using a Minimum Spanning Tree (MST).

LITERATURE REVIEW

Noisy versions of the Minimum Spanning Tree (MST) problem were investigated by Gronskiy and Buhmann (2014), who compared the generalization properties of MST algorithms. An information-theoretic analysis of MST algorithms measures the information on spanning trees extracted from an input graph. Early stopping of an MST algorithm yields an approximate spanning tree set with increased stability compared to the minimum spanning tree. The framework provides insights for algorithm design when noise is unavoidable in combinatorial optimization.

A Modified Shuffled Frog-Leaping Algorithm (MSFLA) with Genetic Algorithm (GA) cross-over to solve the MST problem was proposed by Roy (2011). SFLA is a meta-heuristic search method inspired by natural memetics, combining the benefits of the meme-based Memetic Algorithm (MA) and the social-behavior-based Particle Swarm Optimization (PSO). SFLA was modified for the MST problem. Results reveal that the algorithm ensures accurate results with a minimum of iterations.

The selection process for scattered settlements composed of individual buildings, seen as a point cluster, was performed by Zheng et al. (2011). The selection was performed with properties like selectable, disposable and selectable-or-disposable. The point cluster selection was transformed into a simplification of a linear cluster, with the Ant Colony Optimization (ACO) algorithm being applied to simplify the linear objects. The experiment showed that the new method ensured feasible and effective results.

A bees-algorithm-based approach to handle the degree-constrained problem was proposed by Malik (2012). The Travelling Salesman Problem (TSP) was considered, and a set of 2-degree spanning trees was extracted from a graph and supplied to the new algorithm. The bees-algorithm-based approach optimized spanning trees based on cost values, with the fitness function pointing to cost-effective degree-constrained spanning trees. Experiments with TSP show that the new approach produces cost- and time-effective results.

An MST-based, new GA algorithm for optimal distribution network planning was presented by Li and Chang (2011). Two new operators were introduced to reduce computational time, avoid infeasible solutions and ensure that individuals are feasible solutions. A simultaneous optimization model for the electricity distribution network and feeder cross-sectional area selection dealt with the weight of the minimal-cost system tree. This combinatorial coding guarantees solution validity up to a global optimum.
The Minimum Energy Network Connectivity (MENC) problem, which reduces sensors' transmission power in wireless networks and lowers their energy consumption while simultaneously keeping global connectivity, was addressed by Abreu and Arroyo (2011). The MENC problem is NP-hard, and its hardness motivates the development of a PSO-based heuristic algorithm to get near-optimal solutions. The new heuristic was tested on a 50-instance problem set. Computational results show that the new approach performs better than the classical MST heuristic.

An improved Discrete PSO (DPSO) approach for mcd-MST, which compromises between key objectives in WSNs like energy consumption, reliability and QoS provisioning, was presented by Guo et al. (2009). GA's mutation and crossover operator principles were incorporated in the new PSO algorithm to achieve better diversity and escape from local optima. The new algorithm was compared to an enumeration method. The simulation shows that the new algorithm provides efficient, high-quality solutions for mcd-MST.

A study on PSO applied to an instance of the Multi-Level Capacitated Minimum Spanning Tree Problem was presented by Papagianni et al. (2009). A diversity-preserving global variant of the PSO meta-heuristic was specifically presented. The specific PSO variant includes Gaussian mutation to avoid premature convergence and alternative selection of the flight guide per particle. Results were compared to corresponding evolutionary approaches. Potential tree solutions were encoded and decoded with Network Random Keys.

An ant-based algorithm to find low-cost Degree-Constrained Spanning Trees (DCST), presented by Bui et al. (2012), uses a set of ants which traverse the graph and identify candidate edges from which the DCST is constructed. Local optimization algorithms improved the DCST. Experiments using 612 problem instances show improvements over current algorithms.

METHODOLOGY

The BAT optimization algorithm and the hybrid GBest-BAT are explained in detail in the following sections.

Minimum Spanning Tree (MST): A Minimum Spanning Tree (MST) (Upadhyayula and Gupta, 2006) is a sub-graph that spans all vertices of a graph without any cycle and with the minimum sum of weights over all edges. In MST-based clustering, the weight of every edge is taken as the Euclidean distance between the end points forming that edge. So, an edge that connects 2 sub-trees in the MST must be the shortest such edge. In such clustering, unusually long, inconsistent edges are removed from the MST. The MST's connected components obtained by removing edges are treated as clusters. Elimination of the longest edge results in 2-group clustering; removal of the next longest edge leads to 3-group clustering.

A packet transmitted by a node travels one hop at a time. To spend the least energy in packet transmission, a node transmits to its closest (by weight) neighbor towards the sink node. The energy consumed is given by Eq. (1):

E_total = K Σ_{(u,v)∈T} w(u, v)    (1)

where K is a constant for a packet traveling along the graph, w(u, v) is the weight of the link between nodes u and v, and T is a tree. In Eq. (1), E_total is minimized only when the total edge weight of T is minimum, i.e., when T is the minimum spanning tree.

Cluster head selection: Cluster formation is adapted from Karypis et al. (1999). The technique determines the similarity between each cluster pair C_i and C_j with their relative inter-connectivity RI(C_i, C_j) and their relative closeness RC(C_i, C_j). A hierarchical clustering algorithm merges a pair of clusters where both RI(C_i, C_j) and RC(C_i, C_j) are high. By this selection, Karypis et al. (1999) overcome the limitations of earlier algorithms.
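Before turning to the connectivity measures, here is a minimal sketch of the MST construction and the energy model of Eq. (1) above, using Prim's algorithm on Euclidean edge weights; the node coordinates and the constant K are illustrative placeholders.

```python
import heapq, math

def prim_mst(coords):
    """Prim's algorithm on the complete graph over the nodes, with
    Euclidean edge weights; returns the MST as a list of (u, v, w) edges."""
    n = len(coords)
    dist = lambda u, v: math.dist(coords[u], coords[v])
    in_tree = {0}
    edges = []
    heap = [(dist(0, v), 0, v) for v in range(1, n)]
    heapq.heapify(heap)
    while len(in_tree) < n:
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue                      # stale entry; v already connected
        in_tree.add(v)
        edges.append((u, v, w))
        for x in range(n):
            if x not in in_tree:
                heapq.heappush(heap, (dist(v, x), v, x))
    return edges

# Illustrative node positions (e.g., cluster heads on a 2D field)
coords = [(0, 0), (1, 0), (1, 1), (3, 1), (3, 3)]
tree = prim_mst(coords)
K = 1.0                                   # energy constant of Eq. (1), placeholder
e_total = K * sum(w for _, _, w in tree)  # E_total = K * sum of tree edge weights
print(tree, f"E_total = {e_total:.3f}")
```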
The inter-cluster connectivity between a pair of clusters C_i and C_j is defined as the absolute inter-cluster connectivity between C_i and C_j normalized by the internal connectivity of the two clusters C_i and C_j. The absolute inter-cluster connectivity between a pair of clusters C_i and C_j is defined as the sum of the weights of the edges connecting vertices in C_i to vertices in C_j; this is the Edge Cut (EC) of the cluster containing the two clusters mentioned above. The internal connectivity of a cluster C_i is captured by the size of its min-cut bisector (Karypis and Kumar, 1995, 1998). Thus, the Relative Inter-connectivity (RI) between a pair of clusters C_i and C_j is given by Eq. (2):

RI(C_i, C_j) = |EC(C_i, C_j)| / ((|EC(C_i)| + |EC(C_j)|)/2)    (2)

which normalizes the absolute inter-cluster connectivity by the average internal inter-connectivity of the two clusters. By focusing on the relative inter-cluster connectivity between clusters, this measure overcomes the limitations of existing algorithms that use static inter-cluster connectivity models.

Trust for cluster head selection: Trust is a basic level of security. It is calculated by a node, and the values are stored locally. Regular updating based on new interactions is performed. Trust values are expressed between 0 and 1, where 0 indicates complete mistrust and 1 indicates complete trust. When a new or unknown node y enters the neighbourhood of node x, the trust agent of node x calculates the trust value of node y.

A chosen cluster head checks the required network trust. The algorithm evaluates a node's trust by combining direct and indirect trusts to achieve a total trust. A trust value (T_threshold) is associated with the job, processed until all Cluster Heads (CH) are chosen. Trust (T) is tested against trust sources with the direct trust value (D_t), indirect trust value (I_t) and total trust value (T_t). When T_t is higher than or equal to the required trust value, a node is selected as CH, provided no 2-hop node has a higher trust value than the current node. The 2-hop node with the next highest trust value is named the backup node.

A CH is elected as follows. When a node X becomes a candidate cluster head, it checks whether it has had earlier experience with neighborhood nodes. If so, the direct trust value (D_t) is represented as in Eq. (3), where T_{yi}(x) is the sum of its trust values with its 2-hop neighbors. If D_t ≥ T_max, the associated risk is lower than the risk threshold, and node X becomes CH provided there is no node with a higher T value than the current node X. Otherwise, the indirect trust value (I_t) is represented as in Eq. (4), where T_y(x) is the trust value of node X based on recommendations from its 2-hop neighbors. If I_t ≥ T_max, the associated risk is again lower than the risk threshold, so node X becomes CH provided there are no neighbor nodes with higher T values. If the node X value T is lower than T_max, the total trust value (T_t) is computed as in Eq. (5):

T_t = W_A · D_t + W_B · I_t    (5)

where W_A and W_B are assigned weights. If T_t is greater than or equal to T_threshold, the process continues as above. If no CH is discovered, T_threshold is decreased.

When a CH is selected, trust value certificates are used by nodes when moving to adjacent clusters. This count computes indirect trust. The indirect trust uses the communication data rate (R_c), which is the rate of successful communication with evaluated nodes, with values between 0 and 1 and an initial value of 1. The data delivery rate (R_d) is the rate of successful packet delivery by the evaluated node. Indirect trust is a weighted sum of the trust value certificate and the communication data rate.
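A sketch of the trust-based CH election cascade described above, under the stated reading that the total trust is the weighted sum of Eq. (5); all trust values, weights and thresholds below are illustrative placeholders.

```python
def elect_cluster_head(d_t, i_t, t_max, t_threshold, w_a=0.6, w_b=0.4):
    """Return a label describing whether the node qualifies as cluster head,
    following the direct -> indirect -> total trust cascade."""
    if d_t >= t_max:             # direct trust alone is sufficient
        return "CH (direct trust)"
    if i_t >= t_max:             # 2-hop recommendations are sufficient
        return "CH (indirect trust)"
    t_t = w_a * d_t + w_b * i_t  # Eq. (5): weighted total trust
    if t_t >= t_threshold:
        return "CH (total trust)"
    return "not CH"              # caller may lower t_threshold and retry

# Illustrative nodes: (direct trust, indirect trust)
for node, (d, i) in {"X": (0.9, 0.7), "Y": (0.5, 0.8), "Z": (0.4, 0.3)}.items():
    print(node, "->", elect_cluster_head(d, i, t_max=0.85, t_threshold=0.6))
```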
The CH and the backup node are termed the "control set". The CH, backup node and all cluster members generate a TEK agreement using A-GDH.2 from the CLIQUES protocol suite (Gomathi and Parvathavarthini, 2010), based on the Diffie-Hellman (DH) key agreement method (Zhang et al., 2010) responsible for key authentication. A backup node maintains the CH's redundant details, and it becomes CH when the real CH leaves the cluster.

Proposed GBest-BAT algorithm: The Bat Algorithm proposed by Yang (2010) was inspired by the echolocation characteristic of bats. Echolocation is a sonar which bats use to detect prey and avoid obstacles. Bats emit a very loud sound and listen for the echo that bounces back from objects. Thus, a bat computes how far it is from an object. Also, bats can distinguish between an obstacle and prey in total darkness (Nakamura et al., 2012). To transform such bat behavior into an algorithm, Yang idealized some rules (Komarasamy and Wahi, 2012):

• All bats use echolocation to sense distance and to know the difference between food and background barriers. Bats fly randomly with velocity v_i at position x_i with a frequency f_min, varying wavelength and loudness A_0 to search for prey. They automatically adjust the wavelength (frequency) of emitted pulses and adjust the pulse emission rate r ∈ [0, 1], based on the target's proximity.

• Though loudness varies in many ways, it is assumed that the variance is from a large (positive) A_0 to a minimum constant value A_min.

Initialization of the bat population: The initial population is generated randomly from real-valued vectors with dimension d and number of bats n, by considering lower and upper boundaries as in Eq. (6):

x_ij = x_minj + rand(0, 1) · (x_maxj − x_minj)    (6)

where i = 1, 2, …, n, j = 1, 2, …, d, and x_minj and x_maxj are the lower and upper boundaries for dimension j, respectively.

Update process of frequency, velocity and solution: A frequency factor controls the solution step size in BA. This factor is assigned a random value for every bat (solution) between the lower and upper boundaries [f_min, f_max]. The solution velocity is proportional to the frequency, and a new solution depends on the new velocity, as in Eq. (7):

f_i = f_min + (f_max − f_min) β
v_i^t = v_i^{t−1} + (x_i^{t−1} − x*) f_i
x_i^t = x_i^{t−1} + v_i^t    (7)

where β ∈ [0, 1] indicates a randomly generated number and x* represents the current global best solution.

Update process of loudness and pulse emission rate: The loudness and pulse emission rate must be updated as the iterations proceed. As a bat (Yilmaz and Kucuksille, 2013) gets closer to its prey, the loudness A decreases and the pulse emission rate increases. The loudness A and pulse emission rate r are updated by Eq. (8):

A_i^{t+1} = α A_i^t,  r_i^{t+1} = r_i^0 [1 − exp(−γ t)]    (8)

where α and γ are constants, and r_i^0 and A_i^0 are factors consisting of random values; A_i^0 can be in [1, 2], while r_i^0 can typically be in [0, 1]. Initially, all bats fly randomly in the search space, producing random pulses. After each fly, each bat's position is updated as in Eq. (9) (Baziar et al., 2013), where Gbest is the best bat from the objective-function point of view and NBat is the number of bats in the population. To reach a better random walk, another random fly is simulated, where a random number β is generated. In each iteration, if the random value β is larger than r_i, then a new solution around X_i is generated as in Eq. (10):

X_i^new = X_i^old + ε Ā^t,  i = 1, …, NBat    (10)

where ε is a random value in the range [−1, 1] and Ā^t is the mean loudness of all bats. If the random value β is less than r_i, then a new position X_i^new is generated randomly. The new position X_i^new is accepted when Eq. (11) is satisfied. Also, the values of loudness and rate are updated as in Eq. (12), where α and γ are constant values and Iter is the number of iterations during the optimization.
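Putting the updates of Eqs. (6)-(8) and (10)-(12) together, below is a compact sketch of a GBest-BAT run; the objective, bounds and constants are illustrative placeholders, and the Gbest-guided random fly of Eq. (9), whose exact form is not reproduced in the text, is approximated here by a random walk around Gbest.

```python
import random, math

def gbest_bat(objective, dim=2, n_bats=20, iters=100,
              lo=-5.0, hi=5.0, f_min=0.0, f_max=2.0,
              alpha=0.9, gamma=0.9):
    # Eq. (6): random initial population inside [lo, hi]^dim
    x = [[lo + random.random() * (hi - lo) for _ in range(dim)] for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    loud = [random.uniform(1.0, 2.0) for _ in range(n_bats)]   # A_i^0 in [1, 2]
    r0 = [random.random() for _ in range(n_bats)]              # r_i^0 in [0, 1]
    r = r0[:]
    gbest = min(x, key=objective)[:]

    for t in range(1, iters + 1):
        a_mean = sum(loud) / n_bats
        for i in range(n_bats):
            # Eq. (7): frequency, velocity and position updates toward Gbest
            f_i = f_min + (f_max - f_min) * random.random()
            v[i] = [v[i][j] + (x[i][j] - gbest[j]) * f_i for j in range(dim)]
            cand = [x[i][j] + v[i][j] for j in range(dim)]
            # Eq. (10): local random walk around the current best solution
            if random.random() > r[i]:
                cand = [gbest[j] + random.uniform(-1, 1) * a_mean for j in range(dim)]
            # Eq. (11)-style acceptance: improving move, modulated by loudness
            if random.random() < loud[i] and objective(cand) < objective(x[i]):
                x[i] = cand
                # Eqs. (8)/(12): loudness decreases, pulse rate increases
                loud[i] *= alpha
                r[i] = r0[i] * (1 - math.exp(-gamma * t))
            if objective(x[i]) < objective(gbest):
                gbest = x[i][:]
    return gbest

sphere = lambda p: sum(c * c for c in p)   # toy objective
print(gbest_bat(sphere))                   # approaches the optimum at the origin
```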
RESULTS AND DISCUSSION

Tables 1 to 4 and Fig. 1 to 4 show the result values and graphs for the average packet delivery ratio, average end-to-end delay, average number of hops and jitter, respectively (Table 1: average packet delivery ratio; Table 2: average end-to-end delay; Table 3: average number of hops to destination; Fig. 2: average end-to-end delay).

Table 1 and Fig. 1 show that the average packet delivery ratio for Trust Cluster GBEST BAT MST with GDH performs better by 5% than DSR with GDH and by 3.04% than Trust Cluster BAT MST with GDH when the number of nodes is 75. Similarly, the average packet delivery ratio for Trust Cluster GBEST BAT MST with GDH performs better by 15.9% than DSR with GDH and by 9.92% than Trust Cluster BAT MST with GDH when the number of nodes is 450.

Table 2 and Fig. 2 show that Trust Cluster GBEST BAT MST with GDH performs better by reducing the average end-to-end delay by 16.92% compared with DSR with GDH and by 6.5% compared with Trust Cluster BAT MST with GDH when the number of nodes is 75. Similarly, the average end-to-end delay for Trust Cluster GBEST BAT MST with GDH improves by 149.3% over DSR with GDH and by 148.2% over Trust Cluster BAT MST with GDH when the number of nodes is 450.

Table 4 and Fig. 4 show that the jitter for Trust Cluster GBEST BAT MST with GDH performs better by 7.07% than DSR with GDH and by 7.07% than Trust Cluster BAT MST with GDH when the number of nodes is 75. Similarly, the jitter for Trust Cluster GBEST BAT MST with GDH performs better by 14.08% than DSR with GDH and by 12.12% than Trust Cluster BAT MST with GDH when the number of nodes is 450.

CONCLUSION

MANETs are susceptible to attacks by malicious nodes, resulting in packets being dropped. Key management is crucial in MANET security issues, as it is the basis for security services. This study uses inter-cluster routing to mitigate the network performance degradation caused by malicious nodes. Inter-cluster routing is a clustering criterion for MANET group key management. The GBest-BAT algorithm is a meta-heuristic, population-based optimization algorithm inspired by bats' search for food. Node mobility is then used to detect malicious group members. Experiments show that the new Trust Cluster GBEST BAT MST with GDH ensures an improved average packet delivery ratio, average end-to-end delay, average hops and jitter compared with Trust Cluster BAT MST with GDH and DSR with GDH. The average packet delivery ratio for Trust Cluster GBEST BAT MST with GDH is 5% better than DSR with GDH and 3.04% better than Trust Cluster BAT MST with GDH when there are 75 nodes; similarly, it improves by 15.9% over DSR with GDH and by 9.92% over Trust Cluster BAT MST with GDH when there are 450 nodes. The average end-to-end delay for Trust Cluster GBEST BAT MST with GDH is reduced by 16.92% compared with DSR with GDH and by 6.5% compared with Trust Cluster BAT MST with GDH when there are 75 nodes, and it improves by 149.3% over DSR with GDH and by 148.2% over Trust Cluster BAT MST with GDH when there are 450 nodes.
Philosophy of Science in Accounting Aspects and Its Development

This study explains the philosophy of science, especially in the field of accounting, scientific methods, and the development of accounting science, both conventional and sharia. Basically, accounting science develops within the framework of the philosophy of science, which provides its basis and direction, namely through ontology, epistemology, and axiology in the concept of the scientific method. The development of accounting science arises because accounting science is a category of social science that moves dynamically, following the development of the social environment in which the science is applied. The times have greatly influenced the development of accounting science, so the role of accounting research is needed in answering the phenomena that occur.

Introduction

Etymologically, the word philosophy, known in Arabic as falsafah, in English as philosophy, and in Greek as philosophia, derives from philos, meaning love, and sophia, meaning wisdom; a philosopher is therefore also called a lover or seeker of wisdom. In terminological terms, philosophy is a science that investigates everything that exists, in depth and using reason, down to its essence. Philosophy does not question symptoms or phenomena; what it seeks is the essence of a phenomenon. Philosophy examines what exists, and what may exist, deeply and thoroughly (Surajiyo, 2010). Further, Suriasumantri (2001) explains that philosophy is widely used as a foothold to develop science; this belongs to epistemology (the philosophy of knowledge) and specifically examines the nature of science (scientific knowledge). Philosophy is knowledge and inquiry, using reason, into the nature of everything that exists, its origin, and its law (Harafah, 2007). The tools of philosophical thinking are analysis and synthesis, which employ logic, deduction, analogy, and comparison (Harafah, 2018). Thus it can be said that philosophy is a science born from a love of knowledge and a high and deep curiosity, which continues to study theories and phenomena as they develop from time to time. In this article, the author explains and discusses the philosophy of accounting science.

The philosophy of accounting science is one of the branches of philosophy that has been widely used by accounting experts to develop accounting theory. Theoretically, accounting science is a combination of rationalism and empiricism: accounting uses reasoning to analyze accounting transaction data in preparing financial reports, where transaction data is something concrete that can be apprehended by the five human senses, and financial statements are very important for decision making by internal and external parties. This accords with the aspects of ontology, epistemology, and axiology (Yusnaini, 2016). Accounting science is divided into three areas (Yusuf, 2011):

a. Financial Accounting. Financial accounting is an accounting process that produces information useful for meeting the needs of parties external to the entity in making decisions.

b. Cost Accounting. Cost accounting is an accounting process that produces information about the costs of producing the output of a production process, used by the internal parties (management) of the company in making decisions.
However, information resulting from the cost accounting process can also be included as part of financial accounting information, to inform external parties in their decision making.

c. Management Accounting. Management accounting is an accounting process that produces information to meet the needs of the organization's internal parties.

The development of accounting science arises because accounting science is a category of social science that moves dynamically, following the development of the social environment in which the science is applied. In its development, however, many have asked where the accounting discipline is positioned within the structure of science. This is where the role of accounting research is needed: to answer the phenomena that occur. Through the research carried out, it is hoped that various new ideas can be born that creatively give accounting science a role in society.

Philosophy

Philosophy is a rational, systematic, and universal process of thought about everything that exists and that may exist. Philosophy has also been shaped by the growth of human civilization across a variety of sciences, such that the sciences have separated themselves from it and pursue their respective goals (Maksum, 2011). For this reason, philosophy is the link between science and its application. Based on its object, philosophy can be studied in two ways:

a. Material objects, i.e., everything that constitutes the problem, or everything that philosophy concerns itself with.

b. Formal objects, namely the attempt to seek radical insight into the material object of philosophy.

The function of studying philosophy is to save people from a misguided life when facing the effects of progress and the materialist lifestyle, and to release them from the confines of anxiety and meaninglessness.

Philosophy of Accounting

The development of science and technology is very fast, and it sometimes fosters the tendency to think that all problems can be solved by the scientific method, i.e., learning methods applied on the basis of particular theories (Gaffikin, in Abdullah, 2011). The development of accounting thought and theory is strongly influenced by the basic assumptions used; put differently, accounting thinking is also influenced by the way its thinkers are classified (Davis et al., in Abdullah, 2011). The philosophy of science reviews accounting as a science that is learned for the purposes of a profession: making reports in the financial field and analyzing transaction data. Data in accounting are concrete and carry proof of payment or receipt that affects a company's financial report (Abdullah, 2011). Furthermore, Morgan (in Bambang, 2014) states that accounting thinking comprises four paradigms of social reality: the functionalist, interpretive, radical humanist, and radical structuralist paradigms. The functionalist paradigm is based on theory for confirmation; it is objective, follows what the data say, regards the researcher as passive, sees no difference between theory and practice, and treats theory only as a tool, not a goal. The interpretive paradigm is based on the subjectivity of the researcher, who is directly involved in the reality; it seeks an understanding of social behavior that reaches to the core. The radical humanist and radical structuralist paradigms emphasize that the entity is influenced by the factors surrounding it.
Epistemological Aspects in Accounting

Every body of knowledge has specific characteristics regarding what it is (ontology), how it is obtained (epistemology), and what it is for (axiology). These three foundations are interrelated: the ontology of science is related to its epistemology, the epistemology of science is related to its axiology, and so on. Suriasumantri (2001: 105) further explains that any discussion of the epistemology of science must be related to the ontology and axiology of science. The core of the epistemological approach is to question the process by which science comes about, including its scientific means, scientific attitudes, methods, and scientific truth. This thinking is the main basis for scientific activities, combining the capacity of reason with experience and the data obtained during those activities.

Rationalism emphasizes the role of reason in acquiring knowledge. On this view, the source of human knowledge is reason, or ratio; knowledge that meets the requirements is that obtained through the activity of reason. The main characteristics of rationalism are: (1) the conviction that essential truth can be obtained directly by using reason as a tool; and (2) the use of logical explanation or deduction, intended to provide the most rigorous proof possible regarding all aspects of the field of knowledge, on the basis of what are considered the essential truths just mentioned (Koento Wibisono and Misnal Munir, in Lasiyo, 2007: 2). The idea of rationalism derives from idealism; it uses deductive methods, reason, the a priori, and coherence. The view that instead emphasizes experience as the source of human knowledge is called empiricism. On this view, human experience includes outward experience concerning the world and inner experience concerning the human person. Empiricism derives from realism, which uses inductive methods in seeking scientific truth. There are striking differences between these two ideologies, and the attempt to unite the two views gave rise to Criticism, pioneered by Immanuel Kant. On the critical view, knowledge is fundamentally the result of collaboration between the materials of sensory experience, which are then processed by reason, so that a causal relation obtains.

Approached epistemologically, the science of financial accounting has undergone many transformations, so many that we are now in the midst of one of the largest since Pacioli created double-entry accounting (Ismail & King, 2006). In its epistemological aspect, accounting science uses various methods as needed. An obvious example is the inductive method used in decision making: by examining the report, the authorities conclude what steps to take. Another is the positivist method used in preparing a financial report, which must rely on existing data, known explicitly and with accurate evidence in the form of notes and the like. The difference between bookkeeping and accounting is that the accounting process includes the functions of bookkeeping, whereas bookkeeping involves only the recording of economic events; bookkeeping is thus part of the accounting process.

Further examination of the sources of Islamic teachings reveals that Islam also addresses the science of accounting. Religion has been shown to answer human problems at both the macro and micro levels.
Religious teachings must be practiced in all areas of life, and translation and interpretation are needed to carry them out. In the world, the relevance of religion must be pursued so that it can color the cultural, political, and socio-economic life of the community; religion is thus not confined to the normative level. Since Islam is a religion of charity, its interpretation needs to shift from prescriptive scientific theory to fact-based scientific theory. We can see the existence of accounting in Islam in various historical evidence and in the Qur'an. Surah Al-Baqarah, verse 282, discusses the issue of muamalah, which includes buying and selling, debts, and leasing. From this we can conclude that Islam has ordained a recording system whose main emphasis is truth, certainty, openness, and justice between the two parties in a muamalah relationship; in accounting language, this is better known as accountability.

There is a great difference in the culture and values that developed in Islamic and Western societies. In Islamic society there is a value system underlying every communal and personal activity; this is not found in the life of Western society. These differences in culture and value systems result in different forms of society, practices, and patterns of relationship. Meanwhile, Triyuwono (2006) explains that the purpose of sharia accounting is the creation of a business civilization with humanist, emancipatory, transcendental, and theological insight. With sharia accounting, the social reality that is built contains the values of monotheism and submission to the provisions of Allah SWT.

Accounting Developments in Indonesia

Before the arrival of the Dutch on the island of Java in 1609, Indonesians knew media of exchange but did not yet have a universal currency; barter was the dominant trading activity at that time. The Dutch government introduced not only currency but also a double-entry bookkeeping system to Indonesia in the 17th century. The East Indies Company, a Dutch colonial company with very important influence on business regulation in Indonesia at the time, used a double-entry bookkeeping system known as the continental system. There were then no Indonesian accountants, so the books were still handled by foreign accountants. During the Japanese colonial period, the Japanese government made no contribution to the accounting system in Indonesia, which continued to use the continental system originating from the Dutch. After Indonesian independence (1945), the Dutch bookkeeping system remained in use until 1960. Accounting in Indonesia grew rapidly after 1957, when an organization accommodating Indonesian accountants was established, named the Indonesian Accounting Association (IAI). At that time, Indonesia began to adopt the United States accounting system, known as the Anglo-Saxon system. In 1960, the State College of Accountancy (STAN) began to change its accounting program from the Dutch system to the United States (Anglo-Saxon) system, and by 1975 all institutions, both private and government, had adopted the Anglo-Saxon system.
The development of the Anglo-Saxon accounting system in Indonesia is due to foreign investment in Indonesia, which had a positive impact on the development of accounting, because most foreign investors used the United States (Anglo-Saxon) accounting system. An overview can be given of the Dutch accounting system versus the American system.

Development of Accounting Science

Problems in the development of accounting science arise because accounting science is a category of social science that moves dynamically, following the development of the social environment in which the science is applied (Bambang, 2014). Skousen (2008) divides accounting research topics into several areas, including financial accounting, managerial accounting, auditing, taxation, and governance. A study can be called accounting research if it shows the effect of economic events on the process of summarizing, analyzing, verifying, and reporting standardized financial information, as well as the effect of the information presented on economic events. Research in the field of accounting that later became the basis for the development of further research includes the following:

In 1494, Luca Pacioli, in his book "Review of Arithmetic, Geometry, Ratio and Proportion", introduced an accounting recording system with a paired recording, or double-entry, system that is still studied in accounting science, namely bookkeeping and the accounting equation, or the debit-and-credit system, in which the amount/nominal recorded as a debit on the left must equal the amount/nominal recorded as a credit on the right (Alexander, 2002).

Around 1800, the balance sheet and income statement became very important reports used in evaluating companies and making decisions. Up to 1850, the development of auditing (accounting checks) accelerated, and audits were carried out on bookkeeping records and financial reports (Harahap, 2013).

Around 1900, the USA began to apply professional certification through a national exam; in this period, accounting also came to be considered able to provide a pattern for taxation (Harahap, 2013).

In 1925, government accounting and the supervision of government funds began to be known, cost-analysis techniques were introduced, financial statements began to be standardized, accounting audit norms began to be formulated, and manual accounting systems switched to EDP (Electronic Data Processing) systems with the introduction of the Punch Card Record (Harahap, 2013).

From 1950, the accounting process used computers for data processing; the formulation of principles, i.e., Financial Accounting Standards (SAK) or GAAP (Generally Accepted Accounting Principles), came into effect; cost-revenue analysis, or cost accounting, became increasingly recognized; tax services such as tax consulting and tax planning began to be offered by the accounting profession; and management accounting, as a field of accounting specifically for management purposes, became known and developed rapidly (Harahap, 2013).

In 1960 in the United States, the APB (Accounting Principles Board) gave rise to the concept of Full Costing, prompting many experts to offer their arguments about measurement systems in cost accounting (Bambang, 2014). In 1970 there was a shift in research orientation toward the capital market, in Capital Market Accounting, which studied investment, stocks, and securities (Bambang, 2014).
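The double-entry rule attributed to Pacioli above (total debits must equal total credits in every journal entry) can be stated as a one-line check. The entry format below is purely illustrative:

```python
def is_balanced(entry, tol=1e-9):
    """entry: list of (account, debit, credit) lines of one journal entry."""
    total_debit = sum(debit for _, debit, _ in entry)
    total_credit = sum(credit for _, _, credit in entry)
    return abs(total_debit - total_credit) < tol  # Pacioli's rule: debits == credits

# Example: purchase of equipment for cash
entry = [("Equipment", 500.0, 0.0),    # debit, recorded on the left
         ("Cash",        0.0, 500.0)]  # credit, recorded on the right
assert is_balanced(entry)
```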
In 1980, the Accountant (Ak) degree for PTS graduates (Bachelors of Economics) was opened through the mechanism of the State Accounting Examination (Anisa, 2010). From the beginning of 2000 until now, Islamic banking has developed, and the IAI (Indonesian Accountants Association) issued Exposure Draft PSAK No. 56 concerning the Basis for Compiling Financial Statements for Sharia Banking in 2002 (Fitria, 2016).

The accounting science that developed from the West is conventional accounting science, which is capitalist and secular (economic rationalism). Conventional accounting science has not been based on the nature of knowledge that comes from God, revealed through revelation (Yusnaini, 2016). The essence of accounting is recording, and Allah says in Surah Al-Baqarah, verse 282, which means: "O you who believe, when you incur a debt for an appointed time, write it down fairly, and let no scribe be reluctant to write it down as Allah has taught him" (Surah Al-Baqarah: 282). The practice of accounting has been implemented since the time of the Prophet Muhammad (610 AD), with recording techniques and the concept of justice. The words "justly" and "divine justice" in QS. Al-Baqarah: 282 contain three basic values: monotheism; Islam, in the sense of submission and surrender to Allah; and justice, in the sense of the belief that all human actions will be judged by Allah. Justice thus cannot be separated from ethical and moral values, which are none other than revelation, or God's laws (Raharjo, in Tryuwono, 2015).

In the context of accounting, an accountant must make the value of "divine justice" the basis for interacting with and constructing social reality. This means that accounting, as a discipline and a practice, cannot stand alone: accounting is always tied to the social reality in which it is practiced. This is because accounting is interpreted as a mirror used to reflect social reality; the mirror itself is a product of the ideological values under which it is made. Thus, accounting constructed on a different ideological basis will reflect reality in a different form. This situation becomes even more crucial when the results of these reflections (accounting information) are then consumed by others, eventually forming new realities (Morgan, Tricker, in Tryuwono, 2015). The ontological consequence that accountants must realize is that they must, critically, be able to free human beings from the bonds of pseudo-realities and then provide the reality of "divine justice" that binds humans in everyday life. It is thereby hoped that full self-awareness (self-consciousness) of one's obedience and submission to divine power can be awakened. With this self-awareness, accountants will always feel the presence of God in the dimensions of time and place wherever they are; this is what is meant by the ontology of monotheism. With the principle of divine justice, the social reality that is constructed contains monotheism and submission to the networks of divine power, all carried out from the perspective of khalifatullah fil Ardh, a perspective conscious of future accountability before God Almighty (Tryuwono, 2015). In its development, sharia accounting has been one of the more striking breakthroughs in the economic world in Indonesia.
Sharia accounting is a system that regulates the activities of recording, classifying, summarizing, reporting, and analyzing financial data using principles in accordance with the values of Islamic teachings. The application of Islamic principles to the economic sector does not occur only in bank products such as sharia savings. Sharia accounting, too, is inseparable from the application of principles in accordance with Islamic religious values, both in its cycles and in its recording. Sharia accounting is therefore very demanding of the accountant's accountability to the sharia principles applied when preparing the presentation of financial data. The differences between conventional accounting and sharia accounting are as follows.

Scientific Method

Accounting science belongs to the social sciences, and the objects studied by the social sciences have different characteristics from those of the natural sciences; the methods used will therefore be associated with philosophy. According to Mantra (2004), research methodology is the science of the framework for carrying out systematic research. A method is a systematic way of working in order to find scientific truth, while the function of philosophy is to test the methods used to produce valid knowledge. Procedures drawn from philosophical arguments are based on knowledge obtained from philosophy, and philosophical knowledge is generated from ontology, epistemology, and axiology (Abdullah, 2011).

Ontology is the concept of the subsystem of existence. The methodology derived from ontology relates to the nature of the "being" that is the object of investigation, thus answering the "what" question; hence the assumption that ontology and epistemology are the determinants of methodology. Epistemology is the determination of criteria for obtaining real knowledge about something real, with correspondence between real knowledge and concepts. In the scientific method, one begins with the emergence of a phenomenon, then casts it in the form of a set of logical statements as a hypothesis, and ends with a conclusion connected to the phenomenon studied. Axiology reveals the usefulness of science for human beings (Abdullah, 2011).

Abdullah (2011) states that knowledge is power, which is a reflection of how important knowledge is. The power possessed by knowledge, however, is highly dependent on humans as its users, because science itself is neutral. Every body of knowledge basically has specific characteristics regarding its ontology, epistemology, and axiology; these three foundations are interrelated, so any discussion of the epistemology of science must be related to the ontology and axiology of science (Suriasumantri, 2001). The core of the epistemological approach is to question how the process of science occurs, including its scientific means, scientific methods, and scientific truth. In its epistemological aspect, accounting science uses various methods according to its needs. Financial statements are the result of the recording and accounting processes and are used for decision making by the authorities. In compiling a financial report, one must use existing data, known firmly, with accurate evidence in the form of notes and other supporting evidence.
Conclusion

Basically, accounting science develops within the framework of the philosophy of science, which provides its basis, through ontology, epistemology, and axiology in the scientific method. Philosophy reviews accounting science as a science learned for the purposes of a profession: preparing financial reports and analyzing transaction data. In its epistemological aspect, accounting science uses various methods according to its needs. Islamic epistemology is based on the paradigm of monotheism; its fixed parameter is revelation (the Qur'an and the Sunnah). The times have greatly influenced the development of accounting science, and the role of accounting research is much needed in answering the phenomena that occur. Through the research carried out, it is hoped that various new ideas can be born that creatively give accounting science a decisive role in society, in both thought and feeling (heart).
Against ‘functional gravitational energy’: a critical note on functionalism, selective realism, and geometric objects and gravitational energy

The present paper revisits the debate between realists about gravitational energy in GR (who hold that gravitational energy can meaningfully be said to exist in GR) and anti-realists/eliminativists (who deny this). I re-assess the arguments underpinning Hoefer's seminal eliminativist stance and the responses of his realist detractors. A more circumspect reading of the former is proffered, which discloses where the real, so far not fully appreciated, challenges for realism about gravitational energy lie. I subsequently turn to Lam's and Read's recent proposals for such a realism. Their arguments are critically examined. Special attention is devoted to the adequacy of Read's appeals to functionalism, imported from the philosophy of mind.

Introduction

This paper scrutinises Read's recent claim that a functionalist strategy can support realism about gravitational energy in General Relativity (GR), i.e. the view that GR possesses local and global gravitational energy-stress in a robust physical sense (at least within a certain class of models). According to Read (2018), such a realism chimes with, and sanctions the use of, the notion of gravitational energy common in astrophysical practice. Read's arguments, I'll argue, aren't convincing, at least not as they stand. In particular, not only is gravitational energy explanatorily dispensable in GR, as Read admits; it is, I submit, a tenuous explanans.

In what follows, I pursue three goals: (1) to critique Read's proposal, (2) to plead for anti-realism about gravitational energy, and (3) to animadvert upon facile uses of functionalism. The first goal is to push back against Read's realism about gravitational energy: even if one is sympathetic to his overall general argument, one may well question whether gravitational energy satisfies its premises. My second goal is indirect: I'll take up the cudgels for Hoefer's eliminativism about gravitational energy (Hoefer 2000). Hoefer's arguments admit of a different, more circumspect formulation; thus re-formulated, they evade Read's objections. This also brings to the fore the more serious difficulties that realists about gravitational energy face. I'll argue that Hoefer's eliminativism is ultimately a more satisfactory stance than Read's. A third goal is to enhance our understanding of functionalism in the philosophy of physics. Section 4 provides a critical discussion of a recent application of functionalism, viz. Read's. This allows us to demarcate more sharply what functionalist strategies can and can't achieve (or at least sensitises us to pitfalls in this respect), and what suitable contexts for their application might be.

A recurrent theme will be a major, yet somewhat underappreciated, question in the extant GR literature¹: To what extent is GR special in comparison to, say, electromagnetism or Yang-Mills theories? Advocates of an "egalitarian" view deny that it is (e.g. Feynman 1995; Brown 2005). They can rightly point to the, at least occasional, fertility of such a position. (Think, for instance, of spin-2 derivations of the Einstein Equations, see e.g. Pitts 2016c, or the clarification of the misinterpretation of the cosmological constant as a mass term, Pitts 2019.) Likewise, egalitarians must be given credit for often exposing double standards frequently applied to GR. Contrariwise, advocates of "exceptionalism" affirm GR's privileged status.
They accentuate the distinguished explanatory (and, plausibly attendant, ontological) status of GR's spacetime structure (see e.g. Janssen 2009, whom I take to extend his views on Special Relativity to GR; Nerlich 2007, 2013). One's realist/anti-realist attitudes towards gravitational energy in GR tend to be fuelled by egalitarian/exceptionalist presuppositions. Usually, they remain implicit. It lies outside the present paper's ambit to adjudicate between GR egalitarianism and GR exceptionalism. Instead, I'll flag where one's verdict on the force of certain (counter-)arguments in the realism/anti-realism debate about gravitational energy hinges on such prior commitments. In this regard, I'll advance the following claim: in terms of realism/anti-realism about gravitational energy, Read's functionalism brings nothing new to the table that transcends the realism one may already cherish towards gravitational energy. Only if one is already attracted to realism about gravitational energy (undergirded by egalitarianism about GR) will one find Read's position attractive, too. It doesn't furnish, however, any independent arguments for such a realism.

The paper will proceed as follows. In Sect. 2, I'll introduce the dispute between realists about gravitational energy, such as Read, and anti-realists, such as Hoefer. The next section, Sect. 3, homes in on Hoefer's arguments (Sect. 3.1) and Read's responses to them (Sect. 3.2). Both are assessed in Sect. 3.3. Section 4 is devoted to Read's own realist proposal. I'll first, in Sect. 4.1, reconstruct the logical structure of his argument; Sect. 4.2 critically evaluates it. Finally, in Sect. 5, I'll outline two contexts not considered by Read: non-tensorial global notions of gravitational energy, and Ashtekar's asymptotics programme.

Setting the stage: Realism about gravitational energy

In this section, I'll delineate the tenets of realism about gravitational energy, as they appear in the debate between Read (who advocates it) and Hoefer (who discards it). The debate revolves around the following conundrum. GR's field equations can be derived from varying the Einstein-Hilbert action²

$$S = \int d^4x\, \sqrt{|g|}\,\big(L^{(g)} + L^{(m)}\big).$$

Applying Noether's 1st Theorem (or a suitable generalisation, what Brown and Brading call the "Boundary Theorem") to this total Lagrangian density, $L^{(m)} + L^{(g)}$, yields a continuity equation (see e.g. Barbashov and Nesterenko 1983 for details):

$$\partial_b\!\left[\sqrt{|g|}\,\big(T^{b}{}_{a} + \vartheta^{b}{}_{a}\big)\right] = 0.$$

Here $T^{b}{}_{a}$ denotes the energy-stress tensor associated with ordinary matter fields. $\vartheta^{b}{}_{a}$ is the canonical energy-momentum associated with the purely gravitational Lagrangian density $L^{(g)}$ (canonically, $\vartheta^{b}{}_{a} \propto \frac{\partial L^{(g)}}{\partial(\partial_b g_{cd})}\, \partial_a g_{cd} - \delta^{b}{}_{a}\, L^{(g)}$, up to normalisation); it's dubbed the Einstein pseudotensor. It transforms tensorially only under affine transformations; hence its qualifier "pseudo". Despite the Einstein pseudotensor's non-tensorial nature, the above continuity equation holds in all coordinate systems. Given its exactly analogous construction as canonical energy-momentum in other field theories, the Einstein pseudotensor is naturally construed as local (differential) gravitational energy-stress density. (Below, I'll suppress "density" for the sake of readability.) In consequence, the above continuity equation is naturally interpreted as local conservation of total energy-stress: total (gravitational plus matter/non-gravitational) energy-stress, $\mathfrak{T}^{b}{}_{a} := T^{b}{}_{a} + \vartheta^{b}{}_{a}$, has neither sources nor sinks.
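To make the passage (discussed below) from this local continuity equation to a global quantity explicit, here is a sketch in the conventions just introduced; the symbol $P_a$ and the constant-time hypersurface $\Sigma$ are my notation, not the paper's:

$$P_a \;:=\; \int_{\Sigma} d^3x\, \sqrt{|g|}\,\big(T^{0}{}_{a} + \vartheta^{0}{}_{a}\big),
\qquad
\frac{dP_a}{dt} \;=\; -\oint_{\partial \Sigma} dS_i\, \sqrt{|g|}\,\big(T^{i}{}_{a} + \vartheta^{i}{}_{a}\big),$$

with the boundary flux vanishing, and hence $P_a$ conserved, only if the fields fall off sufficiently fast at spatial infinity. This is the role that the "asymptotic flatness" conditions introduced below play.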
(A referee has insightfully pointed out that it might be more correct to say that the (pseudo-)density $\sqrt{|g|}\,\mathfrak{T}^{b}{}_{a}$ has no sinks/sources: after all, $\sqrt{|g|}\,\mathfrak{T}^{b}{}_{a}$, and not $\mathfrak{T}^{b}{}_{a}$, satisfies the continuity equation. For better or worse, however, I'll stick with the conventional usage in the literature (by Hoefer and Read in particular) that $\mathfrak{T}^{b}{}_{a}$ has no sinks/sources. Nothing in what follows essentially hinges on this.) Henceforth, I'll refer to this interpretation of the pseudotensor as "realism about local gravitational energy-stress" (REAL_LOC). I'll treat it as concomitant with the interpretation of $T^{b}{}_{a} + \vartheta^{b}{}_{a}$ as local total energy-stress. (For the purposes of this paper, I'll primarily discuss the Einstein pseudotensor as the pseudotensor to which (REAL_LOC) is committed. Other choices are possible; see below. For the most part, the arguments presented here carry over mutatis mutandis.)

Via an application of Gauß's Theorem, one may now try to convert the continuity equation into a conserved global (integral) quantity over a 4-volume. For the integrals to be well-defined, certain conditions must hold. Preliminarily, I'll subsume them under the label "asymptotic flatness"; more details will follow in Sect. 3.1. The view that this quantity denotes global total energy-stress and is conserved, attendant with the view that the integral over the pseudotensor denotes global gravitational energy-stress, will be referred to as "realism about global gravitational energy-stress and energy-stress conservation" (REAL_GLOB). This position is strictly weaker than its local counterpart: provided one counts a possibly divergent integral as a well-defined but infinite quantity, one can advocate (REAL_GLOB) without (REAL_LOC), but not vice versa.³

(Such a situation is familiar from other areas. Think, for instance, of entropy production in thermodynamics. In, say, a Carnot cycle, entropy production $\delta Q_{\mathrm{rev}}/T$, with the reversible heat-energy transfer $\delta Q_{\mathrm{rev}}$ and temperature $T$, is defined only up to "thermal gauge transformations", i.e. exact one-forms. Hence, "locally" entropy isn't well-defined; only "globally" is it, i.e. via the integral $\int \delta Q_{\mathrm{rev}}/T$. For a field-theoretical example, think of the self-current of Yang-Mills theories, $j^{A\mu} := -f^{A}{}_{BC}\, A^{B}{}_{\lambda}\, F^{C\lambda\mu}$. Being explicitly dependent on the connection $A^{B}{}_{\lambda}$, it's gauge-variant. By contrast, due to the sourceless Yang-Mills equations, $\partial_{\nu} F^{A\mu\nu} = j^{A\mu}$ (with $F^{A\mu\nu}$ denoting the curvature of the connection), the "charge" $Q^{A\mu} := \int d^{4}x\, \sqrt{|\eta|}\, j^{A\mu}$ is gauge-invariant.)

Hoefer and Read disagree over whether or not to adopt realism about (local and/or global) gravitational stress-energy. Read affirms both (REAL_GLOB) and (REAL_LOC) in certain contexts; Hoefer opposes them without qualification. What are their respective arguments?

Hoefer's eliminativism: Read's response

In this section, I'll first review a straightforward reconstruction of Hoefer's objections to gravitational energy, together with his rejoinder that in GR energy conservation should be abandoned (Sect. 3.1). Subsequently (Sect. 3.2), I'll inspect Read's responses. They are critically evaluated in Sect. 3.3. The analysis will cast into sharper relief the real problems that (Read's) realism about gravitational energy must address. They'll play a pivotal role in Sect. 4.2.
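The thermodynamic half of the analogy can be written out in one line (my gloss; $\mathcal{C}$ denotes a closed cycle such as the Carnot cycle): shifting the entropy one-form by an exact form $d\varphi$ changes the "local" expression while leaving every cyclic integral untouched,

$$\frac{\delta Q_{\mathrm{rev}}}{T} \;\longrightarrow\; \frac{\delta Q_{\mathrm{rev}}}{T} + d\varphi,
\qquad
\oint_{\mathcal{C}} d\varphi = 0
\;\;\Longrightarrow\;\;
\oint_{\mathcal{C}} \left(\frac{\delta Q_{\mathrm{rev}}}{T} + d\varphi\right) = \oint_{\mathcal{C}} \frac{\delta Q_{\mathrm{rev}}}{T},$$

mirroring the way (REAL_GLOB) can survive without (REAL_LOC).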
Hoefer's first point, (H1), is that realism about energy-stress conservation "goes against the most important and philosophically progressive approach to spacetime physics: that of downplaying coordinate-dependent notions and effects, and stressing the intrinsic, covariant and coordinate-independent as what is important" (p. 194). According to Hoefer, the pseudotensor featuring in (REAL_LOC) doesn't comply with this precept: "(I)ts non-tensorial nature means that there is no well-defined intrinsic 'amount of stuff' present at any given point" (ibid.). Neither does (REAL_GLOB) comply, presumably because asymptotic flatness, as Hoefer presents it, is formulated via the following coordinate conditions:

(a) For $r := \sqrt{x^2 + y^2 + z^2} \to \infty$, the coordinate system must be asymptotically Lorentzian, i.e. $g_{\mu\nu} \to \eta_{\mu\nu} = \mathrm{diag}(1, -1, -1, -1)$. In the interior, it can vary arbitrarily.

(b) The metric must decay sufficiently rapidly: $g_{\mu\nu} - \eta_{\mu\nu} = O(1/r)$, with correspondingly faster fall-off of its derivatives.

Hoefer's second objection, (H2) (ibid.), attacks the pseudotensor's ambiguity: it's not uniquely defined. Some elaboration is in order of what Hoefer may have had in mind. The pseudotensors are defined only up to a transformation of the form

$$\vartheta^{b}{}_{a} \;\longrightarrow\; \vartheta^{b}{}_{a} + \partial_c\, \Psi^{[bc]}{}_{a}.$$

Here, $\Psi^{[bc]}{}_{a}$ is a so-called superpotential, anti-symmetric in its upper indices (see Trautman 1965 for details). As a result, the pseudotensor is vastly underdetermined, thereby impeding (REAL_LOC).

Hoefer's third argument, (H3), targets (REAL_GLOB). His thought seems to be that (REAL_GLOB) hinges on realism about the conditions under which it's well-defined, i.e. asymptotic flatness. Hoefer correctly observes that our actual world isn't asymptotically flat. Realism about asymptotic flatness thus is mistaken. This, according to Hoefer, undercuts (REAL_GLOB). Hoefer's point straightforwardly carries over to (REAL_LOC). In flat spacetimes, pseudotensors are to some extent extricated from their unsettling non-tensorial transformation behaviour: in them, Poincaré transformations, i.e. at least a subgroup of linear transformations, are distinguished as relating physically equivalent frames. Alas, Hoefer might interject: our universe isn't flat, not even asymptotically. An advocate of (REAL_LOC) thus has to stomach non-tensoriality.

Based on his diagnosis of coordinate-dependence, ambiguity, and anti-realism about the formal prerequisites for defining global gravitational energy, Hoefer champions anti-realism/eliminativism about gravitational energy: we should relinquish the notion. Instead, we should just accept that in GR energy conservation no longer holds. How does Read respond to Hoefer's arguments?

Read's response

While for Hoefer the above reasons suggest that one abandon (REAL_GLOB) and (REAL_LOC), Read wants to resist this conclusion. He parries by (R1) rejecting Hoefer's ban on coordinate-based language, (R2) assuming that the non-uniqueness can be overcome (or, at least, isn't problematic), (R3) defending the use of idealisations, and (R4) baulking at the revisionary nature of Hoefer's eliminativism. First, Read takes Hoefer to outlaw the usage of coordinate-dependent notions (H1). Read rightly repudiates this as unwarranted (R1).
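Why this superpotential ambiguity leaves the conservation law untouched can be shown in one line (my elaboration, in the $\Psi$-notation of the reconstruction above): since partial derivatives commute while $\Psi^{[bc]}{}_{a}$ is antisymmetric in $b$ and $c$,

$$\partial_b \partial_c\, \Psi^{[bc]}{}_{a} \equiv 0
\quad\Longrightarrow\quad
\partial_b\big(\vartheta^{b}{}_{a} + \partial_c \Psi^{[bc]}{}_{a}\big) = \partial_b\, \vartheta^{b}{}_{a}.$$

Every relocalised pseudotensor thus satisfies the same (schematic) continuity equation, which is precisely why the formalism itself cannot single out one member of the family.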
The mere usage of coordinates is unproblematic⁴: "[…] (P)resentations of spacetime theories need not proceed in a coordinate-independent manner; rather, spacetime theories may be defined in terms of equations written in a coordinate basis and their transformation properties (this is what Brown […] and Wallace […] refer to as the 'Kleinian conception of geometry'), and explanations may be given by appeal to those laws, written in a coordinate basis." On this Kleinian conception, one characterises geometry via the class of privileged coordinate systems (see Wallace 2016 for details). In these, the dynamical equations preserve a particular (e.g. simplest) form. Such coordinate-based characterisations are as coordinate-independent as those not based on coordinates, i.e. those drawing on intrinsically geometric notions. Prima facie, Read thus effectively wards off Hoefer's first complaint. It might even appear that (H1) was unfounded for (REAL_GLOB) from the outset: while it's popular and expedient to define asymptotic flatness in a coordinate-based manner (see e.g. Jaramillo and Gourgoulhon 2010 for a more detailed presentation), this isn't necessary. Via conformal techniques, it's indeed possible to characterise asymptotic flatness in purely geometric, coordinate-free terms (e.g. Geroch 1972, Ch. 35-38; Wald 1984, Ch. 11; Ludvigsen 1999, Ch. 12), as Hoefer demands.

To Hoefer's complaint about the ambiguity of pseudotensors (H2), Read responds as follows (R2): "There are many distinct but non-equivalent choices for this pseudotensor, based on one's choice of superpotential. Hence […] we are implicitly supposing that a choice has been made from the family of possible candidates" (p. 11). (Below, I'll also consider a different response that Read may be read as endorsing.)

In his third response, (R3), Read rebuts Hoefer's attack on asymptotic flatness as an assumption not applicable to our universe (H3). Read acknowledges that it is "[…] undeniable […] that the entire universe is not asymptotically Minkowski" (p. 16). Yet, according to Read, asymptotic flatness is a good idealisation for certain approximately isolated subsystems.⁵ Read rightly underscores that "every theory of physics is an idealisation and does not 'apply to the actual world' in this strong sense" (p. 17). He takes Hoefer to reject asymptotic flatness as an ultimately inaccurate assumption. That, however, Read argues, demands too much of successful hypotheses for them to earn realist commitments: ultimate exactness is never attainable. Rather, Read suggests that this doesn't curtail the utility of asymptotic flatness as an idealisation.

Read's final response, (R4), is to avoid Hoefer's eliminativism due to its "potentially undesirable consequences". On the one hand (R4a), "such a claim would also commit one to the statement that there exists no genuine stress-energy conservation law in [Special Relativity, SR]-a theory in which the conservation of total stress-energy typically is taken to be uncontroversial" (p. 18). On the other hand (R4b), "the advocate of the Hoefer-type view is apparently committed to the denial of the claim that gravitational waves and other forms of purely gravitational radiation are energetic". Read avers that this is gratuitously revisionary.
Do Read's responses (the legitimacy of coordinate-based language, the implicit supposition that the non-uniqueness can be overcome, the legitimacy of asymptotic flatness as an approximation, and the rebarbative ramifications of Hoefer's eliminativism) effectively rebut Hoefer's worries? In the next section, I'll assess Read's answers, arguing that they don't.

Hoefer reloaded

I'll now critically examine Read's counters to Hoefer, (R1)-(R4). Each, I submit, misses the more subtle points of Hoefer's critique: (R1) conflates the mere usage of coordinates with a vicious coordinate-dependence; (R2) merely voices a hope, not an argument; (R3) ignores the distinction between approximations and idealisations; (R4) is in part unwarranted, both exegetically and systematically, and in part an appeal to majority consensus.

Let's begin with (R1), Read's rehabilitation of coordinate-based descriptions à la Klein. I deem it a red herring: it's an infelicity in Hoefer's presentation of his argument which invites the misunderstanding that Hoefer wishes to ban coordinate-based language per se. A more disconcerting real issue lurks behind his worry, though: pseudotensors, which figure in (REAL_LOC), are artefacts of conventions; something akin besets the integral over them. Read is certainly right that neither coordinate-relativity nor non-tensoriality need prevent us from ascribing physical significance to an object. The Levi-Civita connection coefficients,

$$\Gamma^{\lambda}{}_{\mu\nu} = \tfrac{1}{2}\, g^{\lambda\sigma}\left(\partial_{\nu} g_{\sigma\mu} + \partial_{\mu} g_{\sigma\nu} - \partial_{\sigma} g_{\mu\nu}\right),$$

attest to that.

Footnote 5 (continued): … defines a general-relativistic system as (materially and gravitationally) isolated because its total energy content is conserved; otherwise, one would regard it as (at least) gravitationally interacting. To avoid this vacuity, I take Read to make the more specific claim that certain subsystems of the universe that don't interact non-gravitationally are approximately asymptotically flat. To jump ahead a little: the preceding claim can't be universally true, as witnessed by textbook FLRW cosmologies: their matter sector is modelled by cosmic dust, i.e. a homogeneous, isotropic fluid with negligible non-gravitational interactions (see e.g. Hobson et al. 2006, Ch. 14). One therefore ought to understand Read's claim as this: there exists a physically relevant, and empirically well-corroborated, class of only gravitationally interacting systems that are asymptotically flat. We'll return to this in Sect. 4.
These oddities are highlighted by the fact that pseudotensorial 4-fluxes of gravitational energy-momentum, ϑ ν μ ξ μ (along the direction of ξ ) don't transform like 4-vectors under purely spatial, or under purely temporal transformations. But both amount to a merely conventional re-labelling of points in space, and continuous change in the rate and setting of a coordinate clock (Horský and Novotný 1969, p. 431), respectively. Neither should impact physical quantities-such as energy-momentum fluxes. By stressing merely their non-tensoriality, Read downplays the viciousness of pseudotensors: They aren't merely (and, as the Kleinian hastens to add: benignly) "frame-relative"; they are viciously frame-dependent. This has its exact counterpart in the frame-dependence of non-invariant quantities in SR: The latter don't represent objective features of the world; reifying them leads to the notorious "paradoxes" of SR (see Maudlin 2011, Ch. 2, 2012 for lucid explications). It's instructive to re-phrase the problem: Pseudotensors don't form geometric objects in the sense of e.g. Trautman (1965, Ch. 4.13, 1962. 7 A geometric object y on an N-dimensional manifold M is a correspondence y : p, {x μ } → y 1 , . . . , y N ∈ R N which associates with every point p ∈ M and every local coordinate system {x μ } around p an N-tuple y: y 1 , . . . , y N of real numbers (the object's components), together with a definite transformation rule that relates the components relative to the original coordinate system, and the components y : y 1 , . . . , y N relative to a different coordinate system x μ around p. The transformation rule only involves the object's components y, y relative to the coordinate systems {x μ } and x μ , and their Jacobi matrix ∂ x/∂ x . Requiring that the transformation rule depend only on the the components and the Jacobi matrix and the Jacobi matrix's gradient (i.e. y, y , ∂ x /∂ x,∂ 2 x /∂ x 2 ) is necessary for the mutual consistency of legitimate coordinate systems in the following sense: Whenever we can use different coordinate systems, the order in which we switch from one to the other them doesn't matter. This can be stated more precisely. Let the transformation rule T y, y , ∂y/∂ x, ∂x/∂ x ) for the coordinate transformation {x μ } → x μ depend on, say, ∂ y/∂ x. Consider now x μ as a third coordinate system. Suppose that relative to it, T y , y , ∂y /∂ x , ∂x /∂ x ) also holds. In general, it won't follow, however, that T y, y , ∂y/∂ x, ∂x/∂ x ) is satisfied (for details, see Kucharzewski and Kuczma 1964). In short: The components of geometric objects in arbitrary coordinates are uniquely determined by their components in one coordinate system and the transformations between the coordinates. By way of example, note that connection coefficients μ κλ form a geometric object. Under coordinate changes, they transform as By contrast, consider the vector field v μ . Then, the quantity ∂v μ ∂ x ν doesn't form a geometric object: Under coordinate changes, it transforms as That is: The transformation rule exhibits the prohibited dependence on v κ (rather than ∂v κ ∂ x λ ). One can indeed straightforwardly verify that in virtue of this dependence, successively applying the preceding transformation law to two coordinate transformations, In the same sense, being non-geometric objects, pseudotensors are viciously coordinate-dependent: The transformation rules of pseudotensors exhibit a dependence on the coordinates employed. 8 The consistency condition is violated. 
Geometric objects, however, constitute the standard framework within which the physical objects of contemporary field theories are couched (see Nijenhuis 1952; Schouten 1954; Anderson 1967, 1971; Torretti 1996, Ch. 4.3).⁹ They ensure that the intrinsic properties of physical entities, and all relations between them, are preserved under general coordinate transformations, i.e. the mere re-labelling of the manifold points: "Thus the components of a geometric object form a natural kind mathematically: they constitute faces of one and the same entity by virtue of being interrelated by a coordinate transformation law" (Pitts 2009, p. 610). (Note that this is compatible with the existence of special coordinates, in which the physical laws take a particularly simple form.) By contradistinction, the properties and relations of entities represented by non-geometric objects are, as it were, sensitive to the labels attached to spacetime points. But such labels are usually deemed merely conventional. (Equivalently: non-geometric objects presuppose more structure, viz. information encoded directly in the coordinates of the manifold points, than a manifold, standardly construed, contains.) Due to their non-geometric nature, the physical significance of pseudotensors, and hence the tenability of (REAL_LOC), thus becomes questionable.

To be sure, Read could stand by his guns: he might withdraw his allegiance to the geometric object programme.¹⁰ Suppose that a pseudotensor $\theta^{b}{}_{a}[\tau, \Sigma_{\tau}]$ is only meaningful relative to a given coordinate system. Let the latter represent a (3+1)-decomposition ("frame"), $(\tau, \Sigma_{\tau})$. Relative to a different frame, $(\tau', \Sigma_{\tau'})$, one obtains a distinct object, $\theta^{b}{}_{a}[\tau', \Sigma_{\tau'}]$. The vicious coordinate-dependence of $\theta^{b}{}_{a}$ has now lost its sting: $\theta^{b}{}_{a}[\tau, \Sigma_{\tau}]$ and $\theta^{b}{}_{a}[\tau', \Sigma_{\tau'}]$ represent distinct entities. What impedes the interpretation of such frame-relative objects is that no (3+1)-decomposition is distinguished over any other. To preserve this "frame-egalitarianism", one has two options. The first is to renounce realism about the $\theta^{b}{}_{a}[\tau, \Sigma_{\tau}]$s for all possible frames. This is tantamount to anti-realism towards pseudotensors. The second option is to extend one's realist attitude to every $\theta^{b}{}_{a}[\tau, \Sigma_{\tau}]$ for all frames. The idea is to lump the totality of all $\theta^{b}{}_{a}[\tau, \Sigma_{\tau}]$s for all possible frames, $\{(\tau, \Sigma_{\tau})\}$, into one formal object, of which each frame-relative pseudotensor figures, symbolically, as one of the uncountably infinitely many components. A realist about this object doesn't privilege any of its components; thereby, she respects frame-egalitarianism. (It is even a geometric object in a slightly relaxed sense.¹¹) Pitts (2009) has indeed made this astute proposal. Here, we needn't arbitrate between the anti-realist first option and the realist second option. It's clear, however, that at this stage (REAL_LOC) is staked on the plausibility of Pitts' proposal. Whether the latter is persuasive remains to be seen (cf. Curiel 2018, fn 27; Duerr 2018a, Sect. 3.3 for a critique). Read, at any rate, stays silent on the matter. (Plausibly, a defence of realism about Pitts' proposal deploys a double strategy akin to Read's: (1) to appeal to scientific utility to licence a realist stance towards it, and (2) to appeal to similarities with pre-relativistic notions of gravitational energy in order to identify Pitts' object as their genuine, general-relativistic analogue. Read's crucial, to date unaccomplished, task would then be to flesh all of this out in detail.)
In the same vein, Read's Kleinian vindication of coordinate-use doesn't allay a related worry for (REAL_GLOB): different coordinate choices can give rise to different (or even ill-defined) distributions of global (gravitational) energy-stress (see Xulu 2003 for a survey of explicit calculations).

Footnote 10: Contra Read's (2018, Sect. 3.1) remark, absent any explicit discussion in his work (to my knowledge), it's hard to say whether Einstein himself would have had qualms about non-geometric objects (possibly a rewarding reconstructive task, integrating his views on coordinates (see e.g. Norton 1989, 2002) and his interpretation of GR (see e.g. Lehmkuhl 2014)). While he repeatedly objected to the requirement that all meaningful objects be tensorial, that view is, as we saw, compatible with an insistence on geometric objects. Indeed, Torretti (1996, p. 316, fn 1) views Einstein's insistence on a "definite transformation rule" as essentially an endorsement of the geometric object framework. (This is compatible with the passage, cited by Read (fn 22), in which Einstein defends his pseudotensor against his colleagues' complaints. Therein, Einstein (1918, p. 449) critiques their view that "all physically significant quantities can be understood as scalars and tensor components" (my translation). It's not clear, however, that Einstein fully understood the non-geometric nature (avant la lettre) of his pseudotensor. Yet later on in the cited text (p. 452), Einstein seems to concede some unease about his pseudotensor: "[…] we thus come to ascribe more reality to the integral than to its differentials" (my translation). I thank James Read for pressing me on this.)

Footnote 11: Usually (e.g. Trautman 1965, p. 85, or Anderson 1967) one considers only geometric objects with finitely many components.

To maintain realism about pseudotensor-based integral quantities, one must cope with the ambiguity resulting from such coordinate-dependence. There are three options. The first is to remove the ambiguity by privileging certain coordinate systems (e.g. quasi-Cartesian ones). This seems to contravene frame-egalitarianism. (On the other hand, Read might argue, as does e.g. Pitts (2009), that our world, to a good approximation, does privilege quasi-Cartesian coordinates anyway. But, first, one may worry whether such an appeal to approximate symmetries is sufficiently robust: how good need the approximation be for it legitimately to privilege quasi-Cartesian coordinates? Secondly, as we'll see below (Sect. 4.3.2), quasi-Cartesian coordinates aren't privileged when applied to the universe as a whole, nor to generic subsystems. If, however, one restricts oneself to not-too-large spacetime regions that can be approximated as roughly Minkowskian, quasi-Cartesian coordinates are indeed privileged for those subsystems. But such a quasi-Minkowskian regime is contingent, and fairly arbitrarily stipulated: what then justifies Read in distinguishing it for characterising gravitational energy and/or energy conservation?) The other two options are in line with frame-egalitarianism. One is to adopt anti-realism about such integral quantities. This defeats Read's realist ambitions. He should therefore pursue the third option, the integral/global version of Pitts' proposal: all integral quantities are real. That is, he should adopt realism about infinite-component objects of the quasi-symbolic type introduced above, with the frame-relative integrals as components. As before, in the case of (REAL_LOC), the plausibility of Read's version of (REAL_GLOB) hinges on the plausibility of realism towards such objects.
Again, Read's position doesn't add a new argument for (REAL_GLOB). Rather, it crucially relies on a prior commitment to realism about the integral Pitts-object, for which no argument is given. In summary: a conservative framework for classical field theories is Anderson's geometric objects programme. Within it, a physical interpretation of pseudotensors, as envisaged by (REAL_LOC), is doubtful: they aren't geometric objects. (REAL_GLOB) fares no better: pseudotensor-based global notions of gravitational energy are coordinate-dependent artefacts of conventions. Read has two options: either to reject (or revise) the geometric object framework, or to extend his realism to Pitts' object. For either choice, we are owed an argument. These problems hold irrespective of one's predilection for a Kleinian or a coordinate-free approach to geometry. Read's response (R1) cuts no ice against them.

Let's continue with (R2), Read's second response, regarding the pseudotensor's ambiguity (H2): he simply assumes that one can learn to live with the plurality, or that uniqueness can be restored in a principled manner. The ambiguity of pseudotensors bodes ill for (REAL_GLOB): different pseudotensors can also yield different global energy distributions (see again Xulu 2003, also for further references). One response is, of course, to accept the ambiguity about gravitational energy-stress. But such pluralism has a drastic consequence: via the First Law, it threatens to subvert the uniqueness of thermodynamic states more generally. Read shies away from this (pers. comm.). Read's hopes should therefore be set on a way of coping with the non-uniqueness. But he remains silent on how to achieve this. Why believe Read's "implicit supposition" (R2)? Two possible reasons spring to mind: one is that perhaps uniqueness can be restored; the other is to bite the bullet: perhaps the non-uniqueness is a feature, not a bug. It's certainly conceivable that the list of viable pseudotensors can be further whittled down. For instance, in view of its anomalous factor, it's plausible to exclude the Møller pseudotensor (Katz 1985). More general arguments for a unique expression are collated in works by Katz (2005), Katz et al. (1997) and Petrov (2008). (Note that these authors use a background metric. Vis-à-vis such auxiliary structure, one may already ponder: does its introduction compromise the result?) While an enticing project, a comprehensive analysis of such an agenda is pending. Another possibility for coping with the non-uniqueness is "to try to find meaning in it" (Pitts). In this spirit, Pitts (2017), following Nester (2004) and collaborators, suggests that the pseudotensors' ambiguity is a blessing in disguise: their differences correspond to different free energies and the like under different boundary conditions. It remains to be seen whether this proposal is convincing (cf. Duerr 2018a, p. 11). At present, it too is an enticing project, not a clear-cut argument in Read's favour. In short: as it stands, Read's response (R2) falls short of being an argument. At present, whether uniqueness for gravitational energy can be restored is an open question; likewise, whether non-uniqueness is an advantage remains controversial.

Read's third response (R3) takes Hoefer to reject asymptotic flatness for not applying to our universe. Read seeks to legitimise its use as an approximation.
This way of portraying Hoefer's criticism, however-as a demand for excessive rigour-glosses over a deeper concern: To what extent may we assume that the universe possesses the relevant structures that gravitational energy presupposes? We can render the question's import more transparent by dint of Norton's distinction between approximations and idealisations (Norton 2012). The former denotes an inexact description of the target system; the approximation's referent coincides with the target. An idealisation, by contrast, is an (inexact) description of a surrogate system that mimics the target system in relevant regards. An idealisation's referent is thus distinct from the target system. Given a supremely successful model, an inference to the best explanation (IBE) entails different realist stances towards it, depending on whether one classifies it as an approximation or an idealisation (cf. ibid., Sect. 2.4; Torretti 1990, Ch. 3.6). In the first case, an IBE licenses realism about the model totaliter: The target system can be assumed to actually possess, at least roughly, the properties of the model. By contradistinction, an IBE about an idealisation licenses only a "selective realism": We may only assume that the target object shares some structural features with the model-those responsible and indispensable for the model's explanatory success, its "working posits" (see e.g. Vickers 2016, 2017 for details). Only they-not the model tout court-merit realism. Norton's distinction affords a refined reading of Hoefer's objection to asymptotic flatness: Rather than an intolerably imprecise approximation, asymptotic flatness is an idealisation of our actual world. Even when successful, an IBE about asymptotically flat models consequently doesn't warrant an unqualified realism: Only their working posits merit realism. Hoefer's criticism (H3) is thus best construed as the view that asymptotic flatness is an idle posit of an idealisation. By contrast, Read's response (R3) is more sanguine: Asymptotic flatness is either an approximation, or a working posit of an idealisation. Hence, it literally (albeit only approximately) depicts real structures in the world. Neither Hoefer nor Read proffers arguments for their respective verdicts, thus construed. I'll arbitrate between them in Sect. 4.2. To summarise: Hoefer's objection to asymptotic flatness is best interpreted as the view that the explanatory successes of relativistic astrophysics and cosmology don't justify the belief in approximate asymptotic flatness. Read champions this belief. Neither backs up his stance with arguments. With his fourth response, (R4), Read goes on the offensive. He points to two unappealing alleged consequences of Hoefer's position. Neither argument strikes me as cogent. Read's first claim, (R4a), is that Hoefer's eliminativism implies a failure of energy conservation also in Special Relativity (SR), provided one demands that an acceptable conservation principle hold in every frame. I agree with Read about this conditional claim. But I find no textual evidence that Hoefer endorses the antecedent condition.12 But even if he did: His eliminativism isn't inherently tied to the (implausible) doctrine that conservation principles must hold in all frames.13 In fact, for the same reasons why fictitious forces in Classical Mechanics (e.g. the Coriolis force) are artefacts of descriptions in ill-adapted, generic coordinate systems that needn't disturb us (see e.g. Maudlin 2012, p. 23, fn. 7), we needn't be worried by a conservation principle formulated in special coordinate systems.
If we thus drop the doctrine of the equality of all frames, SR's conservation law remains untouched, as Read (2018, Sect. 2.4) admits: Owing to the existence of a timelike Killing field $\xi$ in Minkowski spacetime, a bona fide, tensorial local and global conservation law is straightforward (the current $J^\mu := T^{\mu\nu}\xi_\nu$ is covariantly conserved; e.g. Straumann 2013, Ch. 3.4). The coordinates adapted to these symmetries are the familiar (globally defined) Lorentz coordinates. In them, the matter energy-stress tensor satisfies an ordinary continuity equation, $\partial_\mu T^{\mu\nu} = 0$, with its standard interpretation. Read's second claim, (R4b), is that on Hoefer's view, the standard interpretation of binary systems must be jettisoned: In this account, gravitational energy evidently is a central explanans (e.g. Hobson et al. 2006, Ch. 18). An eliminativist about gravitational energy, however, abjures it. Read's lesson: So much the worse for Hoefer's eliminativism. But Read's argument is an argumentum ad verecundiam: It merely cites orthodoxy in the community. What are the cogent, systematic reasons to subscribe to it? I'll return to this in Sect. 4. The insights gained here will pave the ground for our discussion of Read's own position, the topic of our next section.

Functional Gravitational Energy and its discontents

Here, I'll first (Sect. 4.1) lay out Read's functionalist approach to gravitational energy. Its logical structure will be made explicit. Subsequently (Sect. 4.2), I'll critically examine three of its crucial premises. I reject them for multiple reasons. Notwithstanding my sympathies to his overall functional approach, and to the Dennettian ontological framework, I conclude that Read's realism should be dismissed.

Functional Gravitational Energy

Here, I'll expound Read's realism about gravitational energy-stress (Read 2018, Sects. 3.3.2, 3.3.3), and the logical structure of his argument for it. Read proposes to embrace the background-relativity of gravitational stress-energy (in the sense of Sect. 3.3). As this background-relative notion is both useful and satisfies the functional role of gravitational stress-energy, according to Read, we should be realists about it. By "background", Read (and Lam, see below) mean (asymptotic) symmetries, encapsulated in asymptotic Killing fields, and suitable fall-off conditions, both implemented via asymptotic flatness. Lam and Read suggest that one should regard local and global gravitational and total energy as quantities well-defined relative to this background. Let's unravel his reasoning in more detail. Read picks up an earlier intimation by Lam (2011): On the one hand, "[…] within [GR] all meaningful notions of (gravitational and nongravitational) energy-momentum […] require the introduction of some background structures" (p. 1023); on the other hand, if these structures are present, genuine gravitational and non-gravitational energy exists: "they make only sense in particular (but very useful) settings" (ibid.). Read's realism, (REAL_LOC) & (REAL_GLOB), can now be cashed out as positive, principled answers to the following two questions (p. 19): (a) Does the pseudotensor $\vartheta_a{}^b$ in (REAL_LOC) and its associated integral ("charge") in (REAL_GLOB) represent anything real? Are these formal terms grounded in physical (but not necessarily fundamental) quantities? (b) Suppose a positive answer to (a). Are we then licensed to identify the quantities that $\vartheta_a{}^b$ and its associated charge represent as gravitational energy-stress?
"(I)s it correct to call the quantity appearing in [the continuity equation of (REAL LOC ) and its integral form in (REAL GLOB )] […] 'gravitational stress-energy'"? The questions in (a) require a reality criterion. Echoing Lam, Read appeals to the explanatory and predictive utility of the gravitational pseudotensor and its associated charge: "[…] (they) are only well defined in a certain subset of [dynamically possible models, DPMs] of GR"; (n)evertheless, in such instances it is extremely useful to make use of this term, within that subclass of DPMs. Hence, at a practical level, it is legitimate to call such a quantity gravitational stress-energy." This is an instance of the following principle for realist commitment towards a theoretical, higher-level concept Q (cf. Dennett 1991a; Ladyman and Ross 2007, esp. Ch. 4)-what Wallace (2012, Ch. 2) dubs "Dennett's Criterion": Whenever Q is definable and explanatorily or predictively useful, it captures a real structure ("real pattern") in the world. Real patterns are higher-level structures: They are formulated in non-fundamental terms. (Think of molecules and their shapes as treated in chemistry. A satisfactory fundamental account isn't available at present (see Hettema 2012 for the chemical case). Of course, this doesn't imply that real patterns are "strongly autonomous" (Fodor), i.e. unrelated to the most fundamental level.) To complete his affirmative answer to (a), Read needs to assume that the quantities conventionally labelled "(formal) gravitational energy", gravE f, 14 indeed satisfy (DC): For certain DPMs, gravE f is definable and explanatorily/predictively useful. It now follows from (DC) that gravE f captures a real pattern in the world ("is real"): Having established the reality of formal gravitational energy, Read's next step is to affirm (b): The real pattern gravE f captures should be identified as genuine gravitational energy-stress; it represents gravitational energy-stress also in a substantive, physical sense. Read's rationale encompasses three elements: a general functionalist principle for characterising quantities, a particular functional profile for genuine gravitational energy-stress, and the premise that gravE f exhibits this profile. Read deploys what he terms a "functionalist" (p. 20) general strategy: "In our view, it is plausible to maintain that in situations such as those in which [the integral conservation law] holds, there exists a quantity in GR that fulfils the functional role of gravitational stress-energy" (pp. 19). That is, Read adopts the following "functionalism about gravitational energy-stress": For a quantity Q to be (represent, " . ") genuine gravitational energy-stress is to exhibit a certain profile F(gravE) of functional roles: How to flesh out the functional profile of gravitational energy-stress, F(gravE)? Read determines it to comprise two functional roles: (F(gravE)) (i) balancing the non-gravitational energy such that the sum is conserved & (ii) "(bearing) some relation to the 'gravitational' degrees of freedom in the theory in question" (p. 20). To complete his argument, a final premise is needed-viz. that gravE f plays the preceding two functional roles: (F(gravE))[gravE f ] gravE f instantiates the profile (F(gravE)). By construction, gravE f obeys a (formal) balance equation. Hence, (i) is satisfied. Likewise, (ii) looks harmless: It's customary (e.g. Misner et al. 
It's customary (e.g. Misner et al. 1973, passim) to identify the metric with the gravitational degrees of freedom (the "gravitational field"); gravE_f is directly and solely built from it. From the conjunction of (FUNC_gravE) and (F(gravE))[gravE_f] it now follows that gravE_f earns the label "gravitational energy": It represents genuine gravitational energy-stress. In summary, Read has thus given a formally valid argument for (REAL_LOC) & (REAL_GLOB). Based on the alleged expedience of the gravitational pseudotensor and its associated charge, Read argued for a realist stance towards them. Furthermore, meeting his functional desiderata for gravitational energy, they indeed represent, on his proposal, gravitational energy-stress. What to make of Read's proposal? Is the appeal to functionalism convincing? Does gravitational energy-stress in GR really satisfy the functional roles stipulated by Read? Does his proposal overcome the difficulties that undergird Hoefer's eliminativism (Sect. 3.3)? To these questions we now turn.

Objections

In this subsection, I'll evaluate Read's realism about gravitational energy. Apart from Dennett's Criterion (DC), and the fact that the formal notions of gravitational energy play the two functional roles stipulated by (F(gravE))[gravE_f], I'll question each assumption in his reasoning sketched above. I'll discuss each premise separately and in increasing order of generality: (F(gravE)), (FUNC_gravE) and (DC)[gravE_f].

Is Read's functional characterisation of gravitational energy-stress adequate?

Consider first Read's functional profile of gravitational energy-stress, i.e. (F(gravE)): Are the functional roles of gravitational energy-stress adequately characterised by (i) and (ii)? I dispute that: They are neither jointly sufficient nor necessary. Two facts cast doubt upon the view that (i) and (ii) are jointly sufficient: the triviality of continuity equations, and ambiguity, respectively. Read may demur at continuity equations thus constructed, as they hold irrespective of any field equations (and, furthermore, also depend on the matter degrees of freedom). They are indeed mathematical identities. Read might parry by supplementing (i) with a proviso: the conservation law must not be a mathematical identity (and must not directly depend on the matter degrees of freedom).15 This doesn't alleviate the above worry, though: The previous argument can just be rehashed for

$\tilde{\Gamma}^{\mu\nu} := \partial_\sigma\partial_\tau\big(\gamma^{\mu\nu}\gamma^{\sigma\tau} - \gamma^{\nu\sigma}\gamma^{\mu\tau}\big) - \tfrac{1}{\kappa}\sqrt{|g|}\,G^{\mu\nu}, \qquad \gamma^{\mu\nu} := \sqrt{|g|}\,g^{\mu\nu}.$

The continuity equation continues to hold-but now in virtue of the Einstein Equations. Another problem arises from ambiguity. Recall from Sect. 2.3: There exist infinitely many pseudotensors satisfying a local continuity equation. All are built solely from the metric. One needn't even restrict oneself to pseudotensors. Nothing in Read's proposal seems to prevent one from introducing e.g. additional flat background metrics, an orthonormal tetrad or a flat connection (see Pitts 2011b for a survey of such options). Ditto quasi-local notions (see e.g. Szabados 2009).16 Objects with the functional profile F(gravE) abound.

15 This meshes with common practice in the literature on conservation laws: Therein, one distinguishes between "improper" (Hilbert) or "strong" (Bergmann) conservation laws on the one hand, and "proper" or "weak" conservation laws on the other (see Brading and Brown 2000; Brading 2005). However, whether "proper conservation laws" have physical significance eo ipso is a delicate question (Sus 2017). As Read's counter-manoeuvre would arguably seek to ensure physical significance, the proviso would have to be formulated carefully.
16 Quasi-local approaches are plagued by ambiguities of their own, as both Hoefer (2000, p. 196) and Lam (2011, p. 1022) correctly point out.

Unless their mutual consistency can be established, this proliferation of candidate objects that satisfy F(gravE) should unsettle Read. (Recall our discussion of (R2) in Sect. 3.2.) I therefore conclude: (i) & (ii) is an insufficient characterisation of the functional profile of gravitational energy. Further scepticism about the functional roles in F(gravE) is in order. 1. Conserved quantities are contingent on symmetries; hence, criterion (i) isn't necessary. 2. Criterion (ii) is bedevilled by general fuzziness, as well as by equivocation about the gravitational degrees of freedom. I'll first argue that (i) imparts a spurious essentiality to a contingent feature of our most familiar spacetime settings. Underlying Read's stipulation is the intuition that total energy should be conserved. This intuition stems from our habituation to classical theories in flat spacetime (cf. Nerlich 1991). Why expect this to carry over to GR? The principal motivation stems from the Noether theorems. They establish a general correlation between symmetries of the action and conserved quantities (see e.g. Brading and Brown 2000). Due to its general covariance, GR's action has infinitely many rigid symmetries (see Bergmann 1949, 1958; Brown and Brading 2002; Brading 2005). The Noether theorems then guarantee, at least formally, infinitely many conservation laws of the pseudotensorial type. To take these infinitely many formal conservation laws seriously, i.e. to regard them as also physically meaningful, leads us back to Pitts' proposal. Whether it deserves realism remains controversial, as we saw. One source of reservations about the infinitely many conservation laws may derive from GR's general covariance. Because of the latter, they belong to the so-called "improper conservation laws" (Hilbert). These arise from Noether's theorems for all theories with a local symmetry group that has a global subgroup (see e.g. Bergmann 1949; Brading and Brown 2000). Their interpretation and physical significance is, as Hilbert's label intimates, subtle: Under certain circumstances, they seem to be (at least individually17) trivial, i.e. mathematical identities. Whether in our world we should take these infinitely many conservation laws seriously thus depends on whether we should believe that our world instantiates such asymptotic background structure (recall from Sect. 4.1 that the relevant background consists in asymptotic symmetries and fall-off conditions). And indeed, I'll argue below that one should; however, the asymptotic structure is that of a de Sitter space. But that entails two problems. The first is that the integrals of the pseudotensor-based continuity equations diverge. Thereby, the conserved global/integral charges aren't well-defined. But with the symmetries of de Sitter space, also the motivation for a local/differential conservation law based on pseudotensors becomes moot: Using the associated so-called Killing vectors (see e.g. Read 2018, Sect. 2.4), one can define bona fide (covariant) matter energy-stress fluxes that are covariantly conserved, with no (overt) gravitational contributions (Duerr 2018a, Sect. 2). The connection with Killing vectors can be developed further along a different direction. Unless the spacetime possesses symmetries to which special coordinates could be adapted (see e.g.
Pooley 2017, sect.), i.e. coordinates that would be able to single out pseudotensor-based continuity equations, the pseudotensorial conservation laws seem to lack intrinsic meaning. But such spacetime symmetries are contingent: Generic spacetimes lack them; indeed, most spacetimes do. Why, therefore, cling to energy conservation as a default? It seems more natural to reverse the familiar explanatory asymmetry: Energy conservation, not its failure, needs explanation, in terms of a spacetime's special symmetries (see Carroll 2010 for a slightly brutal way of putting it; cf. Duerr 2018a, Sect. 2).18 Of course, one might resist this whole reasoning by pointing to the mathematical fact that, due to general covariance, GR's action has symmetries. But as mentioned before, it's unclear that this, by itself, warrants wider-reaching physical conclusions. (Also bear in mind that most will hesitate to regard an action as more than a merely auxiliary construct, not a physical quantity. Hence, inferences from its properties to properties of physical systems must be handled with care.) Let's move on to Read's second functional characteristic of gravitational energy, (ii). It can be opposed for two reasons. One is its vagueness: What exactly is the relation that should hold between a candidate for gravitational energy-stress and the gravitational field? A second worry is more subtle: What are the gravitational degrees of freedom, the "gravitational field"?19 Which quantity represents them: e.g. the metric $g_{\mu\nu}$, the connection coefficients $\Gamma^\lambda{}_{\mu\nu}$ (Einstein's choice, see Lehmkuhl 2014), the Riemann tensor (Synge's choice, Synge 1960), or the deviation from flatness $g_{\mu\nu} - \eta_{\mu\nu}$ (Pooley's choice, Pooley 2013, fn. 20)?20 Each choice has some merits in its favour (Lehmkuhl 2008). Read rightly cautions against any premature a priori preference for one. Yet, it's not obvious that his second functional role for gravitational energy, (ii), can avoid an a priori choice. The pseudotensors in (REAL_LOC) are the canonical energy-momenta associated with the metric as the gravitational field. Suppose, however, that we identify the connection coefficients $\Gamma^\alpha{}_{\beta\gamma}$ as the gravitational field. Then, the associated canonical energy-stress is the Palatini pseudotensor. One may object: By comparing the Einstein pseudotensor and the Palatini pseudotensor, aren't we comparing apples and oranges?

18 One may flesh this out further in terms of Strevens' (2011) notion of difference-making.

19 An anonymous referee has voiced misgivings that this isn't a serious question for physics: Although various definitions are possible for the gravitational potential, the proposed choices don't essentially affect the canonical pseudotensor, understood as the canonical energy-stress associated with the gravitational degrees of freedom. To identify the latter, according to her or him, only the gravitational field (whose role is presumably analogous to that of the electromagnetic field) should be used. I beg to differ. First, arguments from analogy are notoriously defeasible. It's therefore unclear to me that speaking of gravitational potentials is legitimate, let alone illuminating. Secondly, even if we trusted the analogy, how should we flesh it out? Which object represents the potential, which the field? For better or worse, we find various options considered in the pertinent physics literature (cf. Lehmkuhl 2008).

20 Read (p. 20, fn. 35) acknowledges this.
The Palatini pseudotensor $\vartheta_\mu{}^\nu[\Gamma]$ is based on the full Einstein-Hilbert Lagrangian, not (as is the Einstein pseudotensor) on the truncated "ΓΓ" Lagrangian $\bar{L}(g)$.21 If one determines the corresponding Einstein pseudotensor for the full Einstein-Hilbert Lagrangian, both expressions coincide (Novotný 1993). Prima facie, this is satisfying (and a remarkable property of the Einstein-Hilbert Lagrangian!). Nonetheless, it spells a dilemma for Read. One horn is that Read's criterion seems incomplete: It can't decide between the Palatini pseudotensor and the metric-based pseudotensors. If then, in light of the above considerations, one rules out the former, one thereby has to identify the metric as the gravitational field. (Prima facie this isn't implausible: It surely plays a privileged role. For instance, one cannot write down matter coupling to gravity locally using only a connection; one also needs the metric or something equivalent.22) But even so, the metric's special status doesn't by itself justify its elevation to the gravitational field, as Read himself admits.23 The worry about the right identification of the gravitational field is even more general: Why assume that in GR there exists an unambiguously identifiable gravitational field to begin with? It's not implausible that no choice for the gravitational field is ultimately unique across different contexts (Rey 2013). In short, Read's second functional role, (ii), on pain of incompleteness, cannot remain neutral on the identification of a gravitational field, against his express intentions.

21 The connection needn't be assumed to be metrically compatible ab initio. A variation of the Lagrangian with respect to both the metric and the connection as independent variables enforces metric compatibility. This variational method is called the Palatini approach (e.g. Hobson et al. 2006, Ch. 19.10). The Palatini pseudotensor $\vartheta_\mu{}^\nu[\Gamma]$ naturally emerges within this approach, hence its label.

22 It's worth recalling that also fermions essentially couple to the connection determined by the metric, not any other connection; cf. Pitts (2012) for details.

23 What about the analogy between GR and Yang-Mills theory? On the one hand, it would indeed strengthen the identification of the metric as the gravitational field variable. On the other hand, GR isn't a Yang-Mills theory, at least not in the standard sense (see e.g. Aldrovandi and Pereira 2013, passim). So, from the outset the analogy harbours important subtleties. I therefore side with Read's admonition to caution: What we identify in GR as the gravitational field requires explicit arguments, and can't simply be read off from the analogy with Yang-Mills theories.

Is a functionalist strategy appropriate for gravitational energy-stress?

Now turn to (FUNC_gravE): Why appeal to functionalism in the specific context of gravitational energy in GR? I'll launch two lines of attack against it: First, I'll rebut Read's explicit argument for it; secondly, I'll rehearse the reasons that motivate functionalism in the philosophy of mind, and try to ascertain their analogues. First, let's examine Read's own argument for a functionalist stance towards gravitational energy. I reject it as unfounded.
What primarily bolsters (FUNC_gravE) for Read is the sterility of its negation: "[…] the alternative to functionalism is to say that 'the structure of certain DPMs of GR is such that it appears that there exists gravitational stress-energy in those models, but really there is no such stress-energy there'; the payoff to be gained from making such a claim is unclear" (p. 20). In particular, he cautions that without (FUNC_gravE), one may be barred from potentially more perspicuous avenues for explaining some gravitational phenomena, e.g. binary systems. I concur with Read on the infertility of dogmatically boycotting higher-level explanations from the outset. Sundry examples from non-gravitational physics attest to that (e.g. Falkenburg 2015; Knox 2016; Knox and Franklin 2018). Yet, the use of higher-level concepts doesn't per se imply functionalism.24 The latter is a specific thesis about the meaning and/or the ontological nature of certain quantities (depending on the strain of functionalism, see below). The purported explanatory pay-off of recourse to gravitational energy-stress as a non-fundamental explanans doesn't per se warrant functionalism about gravitational energy-stress. Moreover, not even the explanatory pay-off of gravitational energy as a higher-level explanans is obvious. Read concedes that appeal to gravitational energy-stress isn't necessary: "one could indeed explain all general relativistic phenomena, in any model of the theory, simply using the apparatus used to pick out the DPMs of the theory" (p. 20).25 The existence of two alternative explanations prompts the question: Which of the two achieves the pay-off that Read extols? (Contrast this with the case of quasiparticles. Fundamentally, they are collective excitations in a solid. In some regards, they behave like particles. A bottom-up, statistical-mechanical treatment would require utopian computational power: We'd have to solve typically $\sim 10^{23}$ coupled differential equations. The pay-off of the higher-level description is manifest.)

24 Read's source of inspiration for functionalism in the philosophy of physics is Wallace (2012), whom he quotes (op. cit., p. 58): "Science is interested with interesting structural properties of systems, and does not hesitate at all in studying those properties just because they are instantiated 'in the wrong way'. The general term for this is 'functionalism' […]." This is a gross simplification of functionalism. Wallace's project is primarily concerned with a realist ontology for higher-level/emergent entities. Functionalism is first and foremost the doctrine that what makes an entity be of a particular type doesn't depend on the entity's composition. Structural realists, such as Wallace (cf. op. cit., p. 314), are eo ipso functionalists about all entities, including higher-level ones. Those with different metaphysical penchants, however, can avail themselves of higher-level explanantia without being functionalists about them.

25 Schutz (2012, p. 7), for instance, writes: "We know today that it is perfectly possible to describe the generation of gravitational waves and their action on a simple detector without once referring to energy; the quadrupole formula for the generation of the waves and the geodesic equation for their action on a simple detector are all one needs […]."

What about binary stars, which Read adduces as an example? The case isn't as clear-cut as Read suggests.
GR predicts that two stars revolving around each other emit gravitational radiation and increase their orbital frequency. This has been confirmed with marvellous accuracy. In line with Read's claim, the standard account indeed invokes gravitational (wave) energy as an explanans (cf. e.g. Hobson et al. 2006, Ch. 18): The gravitational wave is supposed to carry away the binary system's total (kinetic plus gravitational) energy; as a result, the stars' orbital frequency increases, with the stars spiralling in towards each other. In a recent detailed analysis, however, Duerr (2018b) compares this standard interpretation of the binary stars to the alternative without gravitational (wave) energy which Read adumbrates. The latter is found to trump the former on the four explanatory virtues of parsimony, scope, depth and unificatory power. At least pro tempore, this diminishes the force of Read's argument, or even shifts the burden of proof onto Read's shoulders.26 Two caveats are in order. First, examples might eventually be found in favour of Read's claim (e.g. in a similar analysis of instabilities in rotating neutron stars, induced by gravitational radiation; see e.g. Schutz and Ricci 2010, Sect. 6.2).27 But for the dialectic of the debate to progress, detailed case studies of such examples are needed. At the moment, they aren't available. Secondly, some of the persuasiveness of Duerr's (2018a, b) arguments depends on whether one shares his GR-exceptionalist creed (see Sect. 1). But Read gives no explicit reasons for or against it. My second line of attack against (FUNC_gravE) adverts to the motivation for functionalism in the philosophy of mind. I submit, it doesn't carry over to the case at hand. The functionalism which Read (via Wallace) imports into the philosophy of physics stems from the philosophy of mind (see e.g. Van Gulick 2009; Levin 2013; Braddon-Mitchell and Jackson 2007, Parts I, II, IV). It's mainly motivated by two difficulties, to which I turn below.

26 It's possible to gainsay this conclusion, and still uphold an argumentative asymmetry in favour of Read's position. According to the pragmatic account of explanation developed by Van Fraassen (1980), what counts as a good explanation is always relative to a particular context. From this angle, I seem to make the stronger claim that there is no context in which explanatory recourse to gravitational energy-stress pays off. Therefore, it would appear incumbent on me to corroborate it. Moreover, the heuristic and didactic benefits seem obvious, not least since appeal to gravitational energy is an almost undisputed practice in the physics literature. On the one hand, I readily acknowledge some merits of explanations with gravitational energy in certain contexts. On the other hand, the above counter doesn't sway me, for two reasons. Firstly, Read would be ill-advised to be wedded to one particular, invariably controversial, account of explanation (cf. e.g. Woodward 2014). Secondly, the conjunction of the context-relativity of explanations and Dennett's Criterion entails an outré context-relative ontology. If what counts as a successful explanans depends on the context, and if the role that a successful explanans plays determines what the explanans is, then it depends on the context what constitutes a successful explanans. Combine this now with (DC): Successful explanantia merit realist commitment. A kaleidoscopic ontology ensues: The world is populated with a plethora of motley entities; depending on the context, what exists in the world would vary.
This lack of coherence strikes me as unpalatable in an ontology.

27 The most promising place for such an argument is arguably black hole thermodynamics. I forgo the topic for two reasons. Firstly, the current paper's ambit is classical GR; I steer clear of any non-classical/quantum aspects. Secondly, the status of black hole thermodynamics is a current topic of dispute, cf. Dougherty and Callender (2016) and Wallace (2017). Hence, it's unclear what inferences to draw from the putative significance of gravitational energy for it.

The two difficulties are the non-intersubjectivity of mental states, and the identity theory's failure to account for multiple realisability, respectively. The first is a general and epistemological point: We can't directly know other people's mental states. They defy inter-subjectivity: A toothache is inherently "private". At best, we can infer mental states indirectly from external indicators (screams, tears, etc.). If we thus want to attribute mental states to other people, prima facie we have to postulate them as entities whose intrinsic nature is elusive. (Mental states might, at best, be accessible introspectively.28) It's sound philosophical advice to strive to minimise the gap between our speculations about the world and our knowledge. How then to accommodate mental states? A second motivation for functionalism arises from a shortcoming of the preceding identity theory. According to the latter, mental states (or properties) are identical with physical states (or properties). But mental states are multiply realisable: It seems unduly chauvinistic to decree a priori that organisms can't be ascribed the same (or sufficiently similar) mental states, despite neuroanatomical and neurophysiological differences. Why shouldn't, say, Read and an octopus both be able, at least in principle, to experience pain and pleasure? On the identity theory, however, it remains mysterious how two intrinsically sufficiently different brain states can be identical with the same mental state. Both difficulties can be circumvented by characterising mental states not via intrinsic properties of brain states, but via their function: They are individuated by the structural roles they play in a (neuronal) network. Do these two motivations have counterparts for the case of gravitational energy-stress in GR? Three disanalogies speak against it: its absence from the manifest image, its non-privacy, and the absence of multiple realisability. First, on the one hand, gravitational energy-stress, unlike mental states, isn't an empirical phenomenon that needs to be accounted for. On the other hand, unlike (say) belief states, even as a theoretical concept gravitational energy scarcely counts as a robust folk-theoretic notion in our manifest image that an adequate scientific theory in one way or another must save.29 Read himself acknowledges that it's, at least conceivably, dispensable.

28 Dennett (1991b) denies even that.

29 Herein lies a key difference to other areas in philosophy of physics where functionalist strategies are deployed. Consider first Everettian quantum mechanics. One of its major challenges is how to recover our manifest image of macro-objects in 3-dimensional space, like crystals and anteaters, from the scientific image of a single, richly structured entity, defined on a higher-dimensional so-called configuration space. The appearance of three-dimensionality is a robust phenomenon that, on pain of empirical incoherence, arguably needs to be accounted for (e.g. Ney 2010).
To achieve this, Everettians routinely appeal to functionalism (e.g. Wallace 2012, Ch. 2; Ney forth.). Another example is quantum theories of gravity devoid of familiar (e.g. smooth) spatiotemporal structure (Lam and Wüthrich 2018; cf. Le Bihan forth. for a detailed analysis of the relation between functionalism in the philosophy of mind and in the philosophy of quantum gravity). Given that the latter structure is a robust phenomenon, one must arguably be able to give a story of how to recover it from our "a-spatiotemporal" scientific image. Notice also that, by contradistinction, Knox's "inertial frame functionalism" clearly doesn't aim at recovering the manifest image. In this regard, then, Wallace's and Lam and Wüthrich's projects are closer to functionalism in the philosophy of mind than are Knox's or Read's respective uses.

Secondly, being a physical quantity, gravitational energy doesn't suffer from the privacy of mental states: Nobody is endowed with a privileged introspective access to gravitational energy-stress, opaque to lesser mortals. A less quirky sense of "privacy" in this context takes its cue from Dennett (1991a, b; cf. Ladyman and Ross 2007, p. 161).30 For him, it's typical of real patterns to become visible only on higher levels of description. On the fundamental level, one may lose their salience out of sight: One doesn't see the wood for the trees. (This is the sense in which the higher-level explanations discussed by Knox (2016, 2017) and Knox and Franklin (2018) reveal the salient features, otherwise opaque on the microphysical level.) Is gravitational energy "private" in this sense? Can it only be properly understood on the coarse-grained, higher level which Read's functionalist perspective envisions? That, too, I impugn. Formal notions of gravitational energy aren't higher-level concepts in the relevant sense: They are non-fundamental only in that they are definable merely in a certain subclass of models. Again, the motivation from "privacy" founders. Thirdly, multiple realisability has no obvious counterpart. Recall that it's an inter-level relationship: It links higher-level and lower-level (more fundamental) entities. (FUNC_gravE) presupposes that the functional profile of gravitational energy is supplied by gravitational theories other than GR. The most straightforward such "reference theory" is Newtonian Gravity. GR reduces to it in the weak-gravity limit.31 Hence, the functional role would be fixed by GR itself in a particular regime. (One may already ponder: Isn't it ad hoc to accord an ontological privilege to this particular regime? The more modest goal of identifying rough-and-ready functional counterparts of quantities in antecedent theories is, of course, harmless; see below.) Suppose now that in another regime, GR exhibits some structural similarity to the weak-field regime. This similarity doesn't constitute an inter-level relationship of the kind required for multiple realisability: It doesn't link a fundamental and a less fundamental level of description. Rather, it's an intra-level relationship. The same applies to different reference theories, say massive graviton gravity.32 Both GR and it vie for providing the best description of the same domain. They operate on the same ontological level. Again, we aren't dealing with multiple realisability. It's terminological confusion to say that GR "instantiates" or "realises" some quantity, defined in massive graviton gravity.
Of course, one could meaningfully ask: "What structures of a GR spacetime are the (rough) analogues or counterparts of some quantity defined in such a theory?"

30 I thank James Read (Oxford) for alerting me to this possibility.

31 Herein lies another, albeit arguably peripheral, difference to the philosophy of mind: It's not even clear how best to conceptualise a potential reduction of the mental to the physical, let alone whether it can successfully be carried out (cf. e.g. Beckermann 2008, Chs. 8, 9).

32 Massive spin-2 graviton theory (e.g. Hinterbichler 2012; de Rham 2014) happens to be empirically adequate for suitable field masses (Pitts and Schieve 2007; Pitts 2011a, 2016a).

To my mind, this can only be satisfactorily gauged through detailed case studies (e.g. of energy-extraction processes in black holes, see e.g. Geroch 1973). Nonetheless, a strong argument for eliminativism can already be made, turning on a wide range of astrophysical and cosmological phenomena. Recall that Read's (REAL_GLOB) hinges on realism about asymptotic flatness. In Sect. 3.3, I suggested that the disagreement between Hoefer and Read over the acceptability of asymptotic flatness is best understood as a disagreement between different classifications: Whereas for Read asymptotic flatness is a good approximation, for Hoefer it's an idle posit in an idealisation. The bone of contention is therefore: Do the salient features of empirically confirmed asymptotically flat models successfully refer? Primarily in light of contemporary cosmology, I contend, they don't. On the one hand, many spacetimes utilised for modelling the exterior of stationary astrophysical objects are indeed asymptotically flat. Apart from the Schwarzschild metric, the Kerr-Newman solution for the exterior of a rotating, charged black hole is a case in point (cf. Reiris 2014 for a proof covering a large class of spacetimes). But unfortunately, no interior solution for the (uncharged) Kerr metric is known whose source is a perfect fluid, the simplest model for a star. This may merely be regrettable (perhaps even a temporary issue). But more generally, Christodoulou and Klainerman (1993, p. 10) warn: "[…] (I)t remains questionable whether there exists any nontrivial (non-stationary) solution of the field equations that satisfies the Penrose requirements [i.e. the geometric conditions encoding asymptotic flatness]. Indeed, his regularity assumptions translate into fall-off conditions of the curvature that may be too stringent and thus may fail to be satisfied by any solution that would allow gravitational waves. Moreover, the picture given by the conformal compactification fails to address the crucial issue of the relationship between the conditions in the past and the behaviour in the future." The only known non-stationary, asymptotically flat solutions (e.g. within the Robinson-Trautman class of metrics describing expanding gravitational waves) are marred by singularities. This threatens their physicality. There are two responses to this. One is that singularities may not be as calamitous as orthodoxy (e.g. Earman 1995, p. 12) has it (Curiel and Bokulich 2009, Sect. 2; Lehmkuhl 2017). Another reaction points to approximate solutions based on perturbative methods. Via them one can determine the spacetime of, say, an in-spiralling compact binary system, yielding a spacetime that is non-stationary and asymptotically flat. This leads us to the major objection to asymptotic flatness as an approximation: cosmology.
Prior to that, though, let's briefly dwell on the perturbative approximation schemes featuring in the treatment of binary systems. In a nutshell (see e.g. Maggiore 2007, Ch. 5; Poisson and Will 2014 for details): in the astrophysical system's neighbourhood, one employs the so-called post-Newtonian approximation scheme, an expansion in powers of a small parameter ($1/c^2$), to determine the system's near field. But this near-zone expansion is a singular perturbation theory: For distances tending to infinity, higher-order terms blow up; the post-Newtonian scheme isn't uniformly valid for all distances. In particular, it cannot incorporate the no-incoming-radiation boundary conditions, apt for gravitationally radiating objects. One therefore adopts a different approximation scheme for the so-called "far-field zone". In the intermediate region, both expansions are then smoothly glued together ("matched asymptotic expansion"). Which boundary conditions to impose for the far-field zone? A standard choice is asymptotic flatness. Here lies the principal reason for classifying asymptotic flatness as an idealisation: According to today's best cosmological model, we live in an FLRW universe with a positive cosmological constant Λ. It leads to infinite (indeed accelerated) expansion in our universe's long-term future: Our universe is asymptotically de Sitter; it's not asymptotically flat (see e.g. Carroll 2003; Rubin and Hayden 2016 for details). Already for the exterior of the simplest, i.e. spherically symmetric, star model, immersed in a de Sitter spacetime, asymptotic flatness breaks down. Does this vitiate all (well-confirmed!) calculations based on an asymptotically flat far field? Luckily, no: Far from the source, but still much closer than cosmological scales, spacetime is approximately flat, for all practical purposes. So the usual techniques apply, as long as one doesn't venture too "far out" in space and time (Ashtekar et al. 2016; Bonga and Hazboun 2017). Asymptotic flatness is therefore an idealised extrapolation of the ambient spacetime at a particular phase of a star's life: One ignores its future beyond a certain point, prescinding from the star's cosmic embedding. The referents of asymptotically flat spacetimes are therefore ahistorical, fictional objects. The practising physicist uses them as convenient surrogates for the real target objects, e.g. a pulsar, a galaxy, etc., because they share with the latter the relevant structural features up to cosmological scales. (It's this omission of actual history that physicists mean when taking asymptotic flatness to characterise isolated systems. An object in an asymptotically flat spacetime is dynamically isolated in the sense that it quiesces into a stationary state.36) Asymptotically flat spacetimes thus are idealisations. Even when successful, they describe surrogate systems, distinct (with respect to their past or future evolution) from the physically real ones. More importantly, the working posits of successful asymptotically flat models aren't their fall-off behaviour at infinity; rather, they are the right fall-off behaviour up to cosmological scales: All empirical content is garnered from the properties of a finite patch of an asymptotically flat spacetime. But it's, of course, the behaviour at infinity that is distinctive of asymptotic flatness. Asymptotic flatness is therefore an idle posit. Recourse to (DC) is thus blocked. In short: Gravitational energy in Read's proposal contravenes both conditions of Dennett's Criterion.
Owing to its coordinate-dependence and ambiguity, local and global gravitational energy is ill-defined (unless Read's position collapses onto Pitts', for which he should then argue explicitly). Moreover, asymptotic flatness is an idle posit. Hence, it doesn't yield the explanatory mileage that a realist would urge. I conclude that Read's argument for a realism about pseudotensor-based global and local gravitational energy fails. In consequence, vis-à-vis Read's proposal, Hoefer's alternative seems preferable. It cuts the Gordian knot: We should indeed be eliminativists about gravitational energy, and recognise that in GR, energy simply ceases to be conserved as a default (see Schroedinger 1950, p. 105 for a "singularly striking example"; cf. Misner et al. 1973, Sect. 19.4).

Outlook

While I argued that Read's proposal should be rejected, his general functionalist approach to gravitational energy can be salvaged, and may prove fecund in two slightly different contexts. For that, though, we must be clear on what it is: a scheme that allows us, in a principled manner, to (1) assess when a (cautious) realist stance towards certain non-fundamental quantities is apposite, via Dennett's Criterion; and (2) identify those quantities as counterparts of Newtonian gravitational energy in other theories, via Lewisian functional definitions. One such promising context concerns non-pseudotensorial approaches to global gravitational energy-stress; the other concerns a research programme inaugurated recently by Ashtekar and collaborators. Whilst Read doesn't mention them, three other candidates for global gravitational energy lend themselves to his agenda (as, I believe, it ought to be understood): the Komar mass, the Bondi-Sachs mass and the ADM mass. Being non-pseudotensor-based, they circumvent the two main defects of pseudotensors discussed above. To gauge the prospects, I'll comment on each. (I'll skip the technical details; for them, I refer to Wald 1984, Ch. 11.2; Poisson 2004, Ch. 4.3; Jaramillo and Gourgoulhon 2010, Ch. 3, and references therein.) Start with the Komar mass. I submit, it violates either the first or the second antecedent condition of Dennett's Criterion. For stationary (and asymptotically flat) spacetimes, it furnishes a well-defined, coordinate-independent notion of global gravitational energy. But this augurs only a Pyrrhic victory for Read. The casualty is physical significance: Stationarity precludes astrophysical processes like stellar evolution, and gravitational or electromagnetic radiation. Realism about the Komar mass thus is Pickwickian: Its limited applicability is at variance with any demand for explanatory utility, i.e. the second condition of (DC). The only spacetimes capable of describing in any sense realistic systems, and hence the only ones capable of empirical confirmation, are of course non-stationary. But for non-stationary, asymptotically flat spacetimes, the Komar energy requires a gauge-fixing in the following sense: For the integral to be well-defined, a coordinate condition needs to be imposed on the representative of the equivalence class of so-called Bondi-Metzner-Sachs time translations. (Loosely speaking, the latter encode translations at infinity; see e.g. Wald 1984, p. 283 for details.) This gauge-fixing amounts to privileging a certain subclass of time translations. (NB: The problem isn't the imposition of a coordinate condition per se. Rather, it's the fact that thereby one singles out time translations along directions that aren't intrinsically distinguished.)
This looks like a drastic, if not ad hoc, restriction of the concept of energy. It thus seems that for empirically relevant contexts, the Komar mass violates the demand for well-definedness, i.e. the first condition of (DC). A second approach to gravitational energy is the Bondi-Sachs mass. For asymptotically flat spacetimes, it's defined as an integral at "null infinity". Roughly speaking: One evaluates the solution-valued Hamiltonian of GR on the limiting surface at infinity along the light cone. (Equivalently, one can conceive of the Bondi-Sachs quantities as Noether charges, associated with the symmetries of asymptotically flat spacetimes at null infinity.) The Bondi-Sachs mass captures the energy that electromagnetic or gravitational radiation carries off to infinity. In the presence of an outward energy flux, the Bondi-Sachs mass decreases. But it always remains non-negative. It's also bounded from above by the third candidate for gravitational energy in asymptotically flat spacetimes: the ADM mass. In contrast to the Bondi-Sachs mass, the latter is defined at "spatial infinity": One evaluates the solution-valued GR Hamiltonian in the limit of spacelike hypersurfaces stretching to infinity. The ADM mass is a suitable candidate for the total energy-momentum of spacetime. By construction, it's conserved. A celebrated result of mathematical physics is that the ADM mass can be shown to be positive (for matter satisfying certain energy conditions). Furthermore, under suitable conditions, it initially coincides with the Bondi-Sachs mass. Accordingly, the latter can be interpreted as the residual ADM energy after gravitational and electromagnetic wave energy has been extracted from the system. What to make of the Bondi-Sachs and ADM mass in the present context? The reflections on the status of asymptotic flatness as an idealisation in Sect. 4.3.2 curtail rash hopes: With asymptotic flatness as their prerequisite (Jaramillo and Gourgoulhon 2010), neither the Bondi-Sachs nor the ADM mass seems to merit realist commitment. In short, Read's functionalism about gravitational energy doesn't fare significantly better for the three standard non-pseudotensorial notions of global gravitational energy. The Komar mass is either ill-defined or deficient in explanatory power. Both the (standard) Bondi-Sachs and the ADM mass presuppose asymptotic flatness. With the latter being an idle posit of an idealisation, neither seems to merit realist commitment. Another context, however, deserves greater attention. In it, Read's proposal (understood as sketched above, and with suitable amendments to the characterisation of the functional profile of gravitational energy) may prove valuable: the framework for the asymptotic structure of spacetimes with a cosmological constant Λ > 0, recently developed in a series of papers by Ashtekar et al. (2015a, b, c). It promises to circumvent some of the problems diagnosed for Read's approach. In particular, given that our universe is arguably asymptotically de Sitter, the asymptotic structure on which Ashtekar et al.'s framework relies may well count as a working posit of an approximation. It remains to be seen whether the symmetries of de Sitter space allow for a satisfactory formal definition of gravitational energy, and what the functional roles are that it plays. But supposing that gravitational energy does admit of a well-defined expression in this context, prima facie it's a counterpart of Newtonian gravitational energy, about which we should be realists.
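Before closing, and for concreteness: the Komar and ADM masses discussed above have the following standard textbook forms (a sketch for orientation only; sign and normalisation conventions vary across sources, $\xi$ denotes the timelike Killing field of a stationary spacetime, and $h_{ij}$ the deviation of the spatial metric from the flat Euclidean one):

$M_{\mathrm{Komar}} = \frac{1}{4\pi G}\oint_S \nabla^\mu\xi^\nu\,\mathrm{d}\Sigma_{\mu\nu}, \qquad M_{\mathrm{ADM}} = \frac{1}{16\pi G}\lim_{r\to\infty}\oint_{S_r}\big(\partial_j h_{ij} - \partial_i h_{jj}\big)\,\mathrm{d}S^i.$

Both expressions wear their special presuppositions on their sleeves: the Komar integral requires the Killing field $\xi$, the ADM integral the asymptotically flat fall-off of $h_{ij}$.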
Acknowledgements: A thank you to Brian Pitts (Cambridge) for many inspiring and instructive conversations. This research was generously supported by a doctoral scholarship of the British Society for the Philosophy of Science, and the Heinrich Hertz scholarship in History and Philosophy of Physics (Bonn).
AIDE: Accelerating image-based ecological surveys with interactive machine learning

1. Ecological surveys increasingly rely on large-scale image datasets, typically terabytes of imagery for a single survey. The ability to collect this volume of data allows surveys of unprecedented scale, at the cost of expansive volumes of photo-interpretation labour.
2. We present Annotation Interface for Data-driven Ecology (AIDE), an open-source web framework designed to alleviate the task of image annotation for ecological surveys. AIDE employs an easy-to-use and customisable labelling interface that supports multiple users, database storage, and scalability to the cloud and/or multiple machines.
3. Moreover, AIDE closely integrates users and machine learning models into a feedback loop, where user-provided annotations are employed to re-train the model, and the latter is applied over unlabelled images to e.g. identify wildlife. These predictions are then presented to the users in optimised order, according to a customisable active learning criterion. AIDE has a number of deep learning models built-in, but also accepts custom model implementations.
4. Annotation Interface for Data-driven Ecology has the potential to greatly accelerate annotation tasks for a wide range of research projects employing image data. AIDE is open-source and can be downloaded for free at https://github.com/microsoft/aerial_wildlife_detection.

To this end, software solutions have been proposed, such as Trapper (Bubnicki et al., 2016), Aardwolf (Krishnappa & Turner, 2014) and camtrapR (Niedballa et al., 2016). While these facilitate data management, they lack labelling assistance and require users to carry out all annotation work manually. On a different track, some interfaces were designed with an explicit focus on annotation, like VATIC (Vondrick et al., 2013), LabelImg,1 VGG Image Annotator (Dutta & Zisserman, 2019), VIOLA (Bondi et al., 2017), LabelMe (Russell et al., 2008) and commercial tools like LabelBox.2 A few of them have some form of simple annotation assistance; for example, both VATIC and VIOLA offer interpolation for video data to reduce the number of annotations required. However, more elaborate labelling assistance is often absent. Recently, computer vision research has focused on automatically interpreting ecological imagery (Kellenberger et al., 2018; Norouzzadeh et al., 2018; Schneider et al., 2019; Tabak et al., 2019; Willi et al., 2019) through machine learning (ML) models, in particular convolutional neural networks (CNNs; LeCun et al., 2015). CNNs are a family of deep learning models designed for recognition tasks in images, such as image classification (Krizhevsky et al., 2012) or object detection (Lin, Goyal, et al., 2017), and have become the most widely used variant of ML models in computer vision tasks.3 However, employing these models requires substantial programming effort, as well as a very large collection of labelled images for training. In ecological applications, data acquisition campaigns often result in large quantities of images, but no annotations, which prevents CNN training. Furthermore, although methodologies like pre-training and transfer learning exist that can reduce the required number of images and annotations (Kornblith et al., 2019), obtaining a model that can generalise across an entire image dataset still requires large amounts of annotated data from the target image campaign.
This can be attributed to the visual heterogeneity of the objects of interest in an image, as well as of the images themselves: for example, objects (animals, plants, etc.) may exhibit viewpoint or pose variations, they may be of different sizes depending on their age and distance to the camera, or they might have different fur colours and patterns. Similarly, images may be taken with different camera models, at different resolutions, or during the day or at night. ML models need to be exposed to these variations by means of training data and labels for them to be able to generalise and yield high-quality predictions throughout the full dataset. These data may not be readily available for image labelling campaigns, which limits the usefulness of CNNs, unless they can be included in the annotation process and incrementally trained on new annotations provided by the users. In this work we address both problems, the tedium of manual photo-interpretation and the constraints of ML models, by unifying them into one labelling framework, which we denote Annotation Interface for Data-driven Ecology (AIDE). AIDE is a web-based, open-source collaboration platform that integrates a versatile labelling tool and ML models for image annotation, without the requirement of writing code. The incorporation of ML models into annotation platforms has been proposed before, e.g. by the camera trap image tool Timelapse (Greenberg et al., 2019). However, AIDE does so by means of a feedback loop, leveraging a heuristic known as active learning (AL; Settles, 2009). In AIDE, the ML model is repeatedly trained on the latest, user-provided annotations. Once training has finished, the model is used to obtain predictions on (yet) unlabelled images. Critically, the images are further sorted by an AL criterion, which e.g. prioritises images that contain highly uncertain ML model predictions. The promise of using AL, then, is that fewer annotated images are required to train an ML model for the task at hand. AIDE has a number of CNN-based ML models and AL criteria built-in, but also accepts custom, user-provided implementations. The result is a collaborative platform that (a) has the potential to greatly accelerate large-scale image annotation projects and (b) allows training ML models with potentially lower amounts of training data. To the best of our knowledge, AIDE is the first open-source software suite that integrates ML models in an AL manner for image annotation.

| Overview

Annotation Interface for Data-driven Ecology is a web-based, collaborative annotation platform that includes humans and a prediction model in a loop, with both parties reinforcing each other for accelerated label retrieval. Figure 1 illustrates this loop and the key components of AIDE, including:
• Labelling interface, the primary access point for annotators and a window into the dataset to be annotated (Section 2.2).
• Database, the storage solution for annotations and metadata (Section 2.3).
• Integrated model training, which allows training an ML model on user-provided annotations and obtaining predictions on (yet) unlabelled images (Section 2.4).
• Active learning (AL) criterion, responsible for ordering the model predictions, e.g. to maximise model accuracy gain during retraining (Section 2.5).
By default, AIDE iterates this loop until the entire dataset has been annotated. The annotation process can also be terminated earlier, e.g. upon satisfactory prediction quality of the model.
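To make the feedback loop concrete, the following minimal Python sketch mimics the train-predict-rank cycle described above. It is purely illustrative, not AIDE's actual code: the model, the annotation request and the two-class confidence scores are dummy stand-ins, and only the uncertainty-based ordering reflects the kind of AL criterion discussed in the text.

import random

def least_confidence(scores):
    """AL criterion: 1 minus the highest class confidence (higher = more uncertain)."""
    return 1.0 - max(scores)

class DummyModel:
    """Stand-in for AIDE's ML backend; train() and predict() are placeholders."""
    def train(self, labelled):
        pass  # re-train on the latest user-provided annotations

    def predict(self, image):
        p = random.random()
        return [p, 1.0 - p]  # fake confidence scores for two classes

def request_annotation(image, proposal):
    """Placeholder for the labelling interface: a user confirms or corrects the proposal."""
    return max(range(len(proposal)), key=proposal.__getitem__)

def active_learning_loop(model, labelled, unlabelled, batch_size=2):
    while unlabelled:
        model.train(labelled)                                    # 1. re-train on annotations
        preds = {img: model.predict(img) for img in unlabelled}  # 2. predict on unlabelled data
        ranked = sorted(unlabelled,                              # 3. order by the AL criterion,
                        key=lambda i: least_confidence(preds[i]),
                        reverse=True)                            #    most uncertain first
        for img in ranked[:batch_size]:                          # 4. annotate the top batch
            labelled[img] = request_annotation(img, preds[img])
            unlabelled.remove(img)

active_learning_loop(DummyModel(), labelled={},
                     unlabelled=["img_001.jpg", "img_002.jpg", "img_003.jpg"])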
The following sections outline this loop and the individual components.

| Labelling interface

The labelling interface (Figure 2) is written in JavaScript with the jQuery library and is accessible through any modern web browser. Since the main target of AIDE is to obtain labels in the most efficient way, multi-step workflows, nested dialogues and pop-up messages have been avoided as much as possible.

| Annotation types

Annotation Interface for Data-driven Ecology supports a number of annotation types, namely image labels, points (with pixel coordinates), bounding boxes and segmentation maps (where every pixel gets assigned a label). The interface and tool set are automatically adjusted depending on the annotation type selected for a project. AIDE has been designed to allow one type of annotation per project, rather than e.g. a fully customisable cascade of dialogues or annotation tags. This allows for a leaner annotation interface and a more straightforward integration of the ML model (Section 2.4). Figure 3 illustrates examples of the interface set up for the four currently supported annotation types.

Figure 3: AIDE's labelling interface can be customised in many ways and supports multiple annotation types (clockwise, from top left): image labels, points, bounding boxes and segmentation masks.

| Annotating images

Users can create, modify and delete annotations; the precise interaction depends on the annotation type. Most of the labelling tools are assigned keyboard shortcuts, so that the user can keep their focus on the images, without having to look around to find the necessary tool. This also applies to the list of label classes, whose entries can be organised into hierarchical groups, collapsed and searched. For instance, the search field can be accessed through a keystroke; this way, users can keep the mouse cursor in the image view and select the desired label class through simple keyboard operations, without having to scroll through the list of classes. After a user annotates a set of images, clicking 'Next' commits the annotations to the database (see Section 2.3.1 below) and presents a new set of images. Metadata related to the annotation process are stored as well, e.g. annotation author, image view count, date and time of creation, time required, browser agent, window size, number of interactions and more. Clicking 'Previous' re-displays the image (or batch of images, depending on the configuration) the user has seen before and allows modifying the annotations therein. Finally, the platform also supports re-visiting existing annotations, filterable by date and annotation presence/absence to skip empty images.

| Server backend

Annotation Interface for Data-driven Ecology stores annotations and metadata in a relational database (RDB), specifically Postgres (https://www.postgresql.org), an open-source database system. RDBs enable concurrent (i.e. multi-user) access, scalability and security on the one hand, but also facilitate tabular data download for further analyses on the other. Note that images are only referenced through the database, but stored as files on disk for easier organisation. Images can be uploaded and managed through the web browser; large images can automatically be split into patches on a regular grid during upload, if requested. Data input and output between the RDB and the annotation interface is handled by the server-side logic of AIDE, which is written in Python and based around bottle.py (https://bottlepy.org), a lightweight web server engine.
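As a concrete illustration of this backend design, the sketch below shows how a bottle.py route could commit a batch of annotations to Postgres. It is a minimal sketch under stated assumptions: the table and column names, the request payload shape and the connection credentials are invented for illustration and are not AIDE's actual schema or endpoints.

```python
# Minimal sketch of a bottle.py route writing annotations to Postgres.
# Table name, columns, payload shape and credentials are assumptions,
# not AIDE's actual schema or API.
import json
import psycopg2
from bottle import Bottle, request

app = Bottle()
db = psycopg2.connect(dbname="aide", user="aide", password="secret")

@app.post("/annotations")
def commit_annotations():
    payload = request.json or {}
    with db, db.cursor() as cur:          # commits on success, rolls back on error
        for ann in payload.get("annotations", []):
            cur.execute(
                """INSERT INTO annotations
                       (image_id, username, label, geometry, time_required)
                   VALUES (%s, %s, %s, %s, %s)""",
                (ann["image_id"], payload.get("username"), ann["label"],
                 json.dumps(ann.get("geometry")), ann.get("time_required")),
            )
    return {"status": "ok"}               # bottle serialises dicts to JSON

# app.run(host="0.0.0.0", port=8080)  # uncomment to serve
```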
| User performance evaluation

Expertise and diligence of annotators may vary, which might become a challenge in collaborative labelling projects. To assist project administrators, AIDE offers tools for assessing the performance and annotation accuracy of users. All users' annotations can be compared to each other (including those of project administrators) through the web interface (Figure 4). The returned statistics are calculated on the server and adjusted to the annotation type: for image labels and segmentation masks, the overall accuracy is returned; for points and bounding boxes, AIDE provides precision and recall scores, as well as average spatial point distances and intersection-over-union (IoU) scores, respectively; a minimal IoU sketch is given after this section. Furthermore, AIDE also allows the specification of 'golden questions', which are images that serve as a reference for evaluation: project administrators can flag an arbitrarily large set of images as 'golden questions'. Every annotator then first sees only the golden question images when they begin the labelling process in a specific project. The platform can further be configured to only allow new users to continue if they pass a certain accuracy criterion (e.g. a recall of 80% or more) on the golden questions, or after explicit admission by the project administrator.
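The intersection-over-union score mentioned above has a compact definition for axis-aligned bounding boxes. The sketch below is a minimal illustration; the (x_min, y_min, x_max, y_max) box format is chosen here for convenience rather than taken from AIDE.

```python
# Minimal IoU sketch for axis-aligned boxes given as (x_min, y_min, x_max, y_max).
# The box format is an assumption for illustration.

def iou(box_a, box_b):
    """Return the intersection-over-union of two boxes (0.0 if disjoint)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)      # intersection rectangle
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

# Two unit-height boxes sharing half their width overlap with IoU = 1/3.
assert abs(iou((0, 0, 2, 2), (1, 0, 3, 2)) - 1 / 3) < 1e-9
```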
| ML backend

At the heart of AIDE lies its capability of training ML models based on the annotations provided by the users. Including ML models in the labelling process provides a number of potential advantages. For example, model predictions can be shown in the images; although we did not observe any speedups or accuracy improvements for annotators when showing predictions in the images in our tests, the option is available. Another advantage is acceleration: AIDE can alter the order of images based on the model predictions, e.g. to prioritise particularly difficult images (i.e. with low-confidence predictions) or images with a high number of predictions (Figure 6). Annotation Interface for Data-driven Ecology is designed to accommodate any ML model, as long as it can be trained in a supervised way on images annotated by the users of the interface. To this end, AIDE comes with a number of ML models built-in (Section 2.4.2), but also accepts third-party models (Section 2.4.3).

| Model training

Upon project creation, or later on, administrators can select one of the available model types that is compatible with their project's selected annotation and prediction types. AIDE has a number of ML models built-in, but these built-in models can be replaced by almost any user-provided ML model (Figure 7). Furthermore, all statistical evaluation functionalities described in Section 2.3.2 are also available for evaluating model performance.

| Built-in models

Annotation Interface for Data-driven Ecology has a number of deep learning models built-in that have been shown to yield high performance on computer vision tasks. These include:
• RetinaNet for object detection and classification with bounding boxes (Lin, Goyal, et al., 2017). RetinaNet is an evolution of Faster R-CNN (Ren et al., 2015), which is widely used in computer vision research and ecology (Schneider et al., 2018). RetinaNet provides two advantages over Faster R-CNN. The first is a sequence of layers called the 'Feature Pyramid Network', which enables obtaining both high-resolution and semantically expressive features for each location in the image, for object detection with high accuracy. The second is the 'Focal loss', which reduces the penalty for correct predictions whose confidence is not perfect but already good enough, allowing the model to become more robust to datasets that exhibit strong class imbalances. RetinaNet has been successfully used for aerial wildlife counting (Eikelboom et al., 2019) and coral detection (Modasshir et al., 2018).
• U-Net for semantic segmentation (Ronneberger et al., 2015). U-Net contains a sequence of encoder and decoder, which map the image to a lower spatial resolution but high-dimensional feature representation.

| Custom models

In some cases, the built-in models of AIDE might not be adequate, or else users of the system may already have an ML model available that they would like to use in the annotation process. For these cases, AIDE supports the integration of third-party models.

| Active Learning for human-machine collaboration

In most ML workflows, a model is trained once on parts of a dataset and then kept static during a prediction phase on the rest of the images. While this may work if sufficient data have been labelled, it is less than optimal for situations where the initial number of existing annotations is low, or where a model is to be re-used over e.g. a new set of images whose visual appearance is very different from that in the training set. In this case, specific domain adaptation strategies can be devised to compensate for the domain shift (Tuia et al., 2016), but at the cost of custom-built ML models that are difficult to use for non-specialists. Instead, AIDE integrates prediction models in an active learning (AL) loop (Kellenberger et al., 2019; Settles, 2009), also known as a 'human-in-the-loop' system (Brodley, 2017). When users click 'Next', they are automatically presented with the newly predicted images, sorted by the priority score. In the end, this means that more relevant images are shown to the user with higher priority throughout the entire labelling process, with the notion of relevance depending on the task. As an example, AL can be used to improve model performance after a given number of annotated images. Figure 9 shows precision-recall curves of CNNs on large mammal detection in aerial images, before fine-tuning (grey) and after five (dashed) and ten (solid) iterations with different AL criteria (see Appendix 5.4 for details). Note that the prediction quality of the CNN improves with all tested criteria, including simple random image ordering (i.e. no AL) over the original base model (grey), but the improvement after only five AL iterations is the highest with a dedicated AL criterion (Breaking Ties; Luo et al., 2005).

Figure 8: AIDE allows configuring model options for each project through the web browser. If model settings are provided in the right format, they will be rendered with graphical elements and can incorporate explanation texts and links for each parameter; this is also available for third-party models (see Appendix 5.2).

Figure 9: Precision-recall curves of an object detector CNN with initial performance (grey) and after five (dashed) and ten (solid) AL iterations with three different AL criteria.
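As an illustration of such a criterion, the Breaking Ties heuristic cited above scores a prediction by the gap between its two most probable classes: the smaller the gap, the more ambiguous (and hence more informative) the image. The function shape below is an assumption for illustration, not AIDE's built-in implementation.

```python
# Minimal sketch of the Breaking Ties AL criterion: priority grows as the
# margin between the two most likely classes shrinks.
import numpy as np

def breaking_ties_priority(class_probabilities):
    """Return a score in [0, 1]; higher means a more ambiguous prediction."""
    p = np.sort(np.asarray(class_probabilities, dtype=float))[::-1]
    return 1.0 - (p[0] - p[1])       # small top-two margin -> high priority

print(breaking_ties_priority([0.95, 0.03, 0.02]))  # confident -> ~0.08
print(breaking_ties_priority([0.40, 0.38, 0.22]))  # ambiguous -> ~0.98
```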
| AIDE FOR COMMUNITY DEVELOPMENT

Integrating ML models into the labelling process eventually results in model states that are highly optimised for the data at hand. This is particularly helpful for large-scale image campaigns, where a well-trained model may result in reduced annotation efforts. However, the benefits of ML reach further: once trained, models can be used across individual projects. Oftentimes, ecologists conduct image campaigns with similar targets in mind, e.g. with images containing the same species, comparable types of background, or taken from the same viewpoint (ground-based, airborne, etc.). In these cases, re-using ML model states from other, similar projects provides a starting point that has the potential to accelerate labelling campaigns even further. To this end, an upcoming release of AIDE will include a 'model marketplace' where users will be able to share trained ML model states across projects. At the start of each annotation project, users will be able to browse through a catalogue of available model states. Each state is accompanied by a description, a list of label classes the model supports and other related metadata. This way, users can select the most appropriate model state as a starting point and obtain higher quality predictions right from the start of the annotation process. Likewise, once a user decides that the model in their own project is sufficiently trained, they can share its state with others by providing the mentioned metadata (name, description, etc.) and publishing it on the marketplace. For privacy reasons, only the aforementioned metadata and model parameters will be shared, which prevents conclusions from being drawn about the images of the originating project. Also, model states have to be shared explicitly by a project administrator and will be shareable either only across the administrator's own projects, or globally. Owners of the model states can further discard any information about the origin, such as their AIDE account name. Eventually, we foresee AIDE and the model marketplace as a platform to enhance ecological image analysis in a collaborative way, beyond the individual project. Once a sufficient number of applications and image types have been covered by shared model states, labelling efforts will be reduced to a minimum for any new image campaign. This will enable ecologists to allot more time to data interpretation, rather than to the annotation process.

| LIMITATIONS OF AIDE

Annotation Interface for Data-driven Ecology was designed to enable large-scale, collaborative annotation projects for ecological applications by means of interactive integration of ML models in an easy-to-use manner. Effectively, AIDE does not require users to write a single line of code if they decide to use one of the built-in or contributed third-party models. However, AIDE is still a growing project, and as such has a number of limitations, including the following:
• Annotation Interface for Data-driven Ecology currently only supports RGB images and is not compatible with multi-band images, georeferenced data or other media types like videos.
• Only the four annotation types mentioned are supported at this moment. We plan to add compatibility for other types, such as more complex polygons or instance segmentation maps, in upcoming releases, and will also include appropriate ML models for them.
• Models need to be trained to a certain degree on the data to be useful for interactive setups. In the case of deep learning models, this requires a comparably large set of existing labels, limiting their use at the start of annotation projects. If a new project is started with a completely untrained deep learning model, the latter will usually provide random labels per image
or per pixel (in the cases of image classification and semantic segmentation, respectively), or predictions in all possible locations of the image for points and bounding boxes. We intend to address this obstacle in a future release of AIDE through the 'model marketplace' highlighted in Section 3.
• While AIDE offers tools to train ML models and evaluate model predictions and user performance (cf. Section 2.3.2), it does not guarantee high-quality annotations or well-performing ML models by itself. Eventually, it will always be the project administrators' responsibility to verify the accuracy of provided annotations, and to ensure that ML models are trained to the degree required for the individual annotation project.
Finally, we would like to note that AIDE is still work in progress and will grow in functionality over time. We hope to deliver a solution that facilitates using ML models in as many ecological applications as possible.

| CONCLUSION

Ecological research increasingly relies on large-scale visual datasets, which can dramatically scale the spatial coverage of wildlife surveys, but require tedious and expensive photo-interpretation of the acquired images. ML models, in particular convolutional neural networks (CNNs), have demonstrated high potential for accelerating this manual work. However, they often require involved coding efforts, which has likely prevented broad adoption in many ecology projects. In this study we presented Annotation Interface for Data-driven Ecology (AIDE), an open-source web framework that integrates a flexible and easy-to-use annotation platform with CNN-based prediction models. AIDE is a versatile labelling tool that offers a high degree of customisability, support for various annotation types and support for multiple users. It is also one of the first annotation platforms that employs ML models to assist annotators in their task. Critically, AIDE employs these models through active learning, where humans and the machine work hand-in-hand: humans provide annotations the model can learn from, and the model returns suggested predictions and prioritises images with respect to their relevance. Annotation Interface for Data-driven Ecology is under active development and will be expanded in functionality in upcoming releases. This includes addressing the shortcomings mentioned above, such as support for more annotation types and the ability to share pre-trained models across projects, as well as implementing new functionalities that have the potential to enhance image labelling projects for ecology. Annotation Interface for Data-driven Ecology is an open-source platform that is free to use. The source code is available at https://github.com/microsoft/aerial_wildlife_detection.

ACKNOWLEDGEMENTS

The authors would like to acknowledge the SAVMAP consortium.

PEER REVIEW

The peer review history for this article is available at https://publons.

DATA AVAILABILITY STATEMENT

The proposed platform (AIDE) is open source and available for download at https://github.com/microsoft/aerial_wildlife_detection. The version of AIDE used in this manuscript (Microsoft, 2020) can be obtained at https://doi.org/10.5281/zenodo.4028309. Note that this is a frozen version of the code that will not contain the latest updates and developments beyond its state at publication of this manuscript. For the official and latest release, please refer to the official GitHub link.
The images used for the studies behind Figures 6 and 9 are available at https://doi.org/10.5281/zenodo.1204408 (Reinhard et al., 2015). Labels are available from the authors upon request.
5,290.6
2020-11-01T00:00:00.000
[ "Computer Science" ]
The Development of Tetrazole Derivatives as Protein Arginine Methyltransferase I (PRMT I) Inhibitors

Protein arginine methyltransferase 1 (PRMT1) can catalyze protein arginine methylation by transferring the methyl group from S-adenosyl-L-methionine (SAM) to the guanidyl nitrogen atom of protein arginine, which influences a variety of biological processes. The dysregulation of PRMT1 is involved in a diverse range of diseases, including cancer. Therefore, there is an urgent need to develop novel and potent PRMT1 inhibitors. In the current manuscript, a series of 1-substituted 1H-tetrazole derivatives were designed and synthesized by targeting the substrate arginine-binding site on PRMT1, and five compounds demonstrated significant inhibitory effects against PRMT1. The most potent PRMT1 inhibitor, compound 9a, displayed a non-competitive pattern with respect to either SAM or substrate arginine, and showed strong selectivity for PRMT1 compared to PRMT5, which belongs to the type II PRMT family. It was observed that compound 9a inhibited the functions of PRMT1 and related factors within this pathway, and down-regulated the canonical Wnt/β-catenin signaling pathway. The binding of compound 9a to PRMT1 was carefully analyzed by using molecular dynamics simulations and binding free energy calculations. These studies demonstrate that 9a is a potent PRMT1 inhibitor, which could be used as a lead compound for further drug discovery.

Chemistry

A series of 1,5-substituted tetrazole derivatives 9a-f, 10a-e, 16a-e, 18a-e and 20 were synthesized as illustrated in Schemes 1-3. In Scheme 1, the commercially available substituted anilines 1a-f were respectively reacted with ethyl oxalyl monochloride in the presence of triethylamine in anhydrous dichloromethane to generate compounds 2a-f in good yields (90%-95%). Treatment of 2a-f with triphenylphosphine under reflux in carbon tetrachloride gave compounds 3a-f, which were reacted directly with sodium azide in acetonitrile to generate 1-substituted phenyl-1H-tetrazole-5-carboxylate ethyl esters 4a-f in moderate yields over two steps (63%-65%). Compounds 4a-f were then reduced by diisobutylaluminium hydride (DIBAL-H) to form 1-substituted phenyl-1H-tetrazole-5-aldehydes 5a-f.

The syntheses of tetrazole derivatives 16a-e are depicted in Scheme 2. Compounds 4c-d could be converted into 11c-d by demethylation using boron tribromide in 67% yield. Treatment of compounds 11c-d with methanesulfonyl chloride in the presence of triethylamine (sulfonylation) generated 12a-b.
Meanwhile, compounds 11c-d were reacted with 2,2,2-trifluoroethyl trifluoromethanesulfonate in the presence of potassium carbonate (alkylation) to give 13a-b. The intermediates 12a-b and 13a-b were directly reduced by DIBAL-H to form 14a-d in moderate yields over two steps (75%-78%). Subsequently, similarly to the procedure in Scheme 1, compounds 14a-d were respectively reacted with side chains 6a and 6b by reductive amination to generate compounds 15a-e, which were deprotected with saturated hydrochloric acid ethanol solution to afford the target compounds 16a-e in good yields (92%-95%).

The syntheses of tetrazole derivatives 18a-e and 20 are illustrated in Scheme 3. The side chains 6c-e were commercially available, and the preparation of side chains 6f-g followed reported methods [24,27]. Compound 5a was reacted with the various side chains 6c-g by reductive amination to generate compounds 17a-e in moderate yields (75%-85%). The target compounds 18a-e were obtained by deprotection with saturated hydrochloric acid ethanol solution in good yields (92%-95%). Moreover, compound 18a was reacted with 1,3-bis(tert-butoxycarbonyl)-2-methyl-2-thiopseudourea in the presence of triethylamine and mercury dichloride to give compound 19 in 71% yield. Finally, the target compound 20 was afforded by deprotection with saturated hydrochloric acid ethanol solution in 92% yield.

In vitro PRMT1 Inhibition Assays and Selectivity Assays

A series of 1,5-substituted tetrazole derivatives were synthesized to investigate the structure-activity relationship (SAR) of PRMT1 inhibitors (Table 1). Initially, we designed and synthesized a series of molecules (9a-e, 16a-d, Group I) as shown in Table 1, which contained a substituted phenyl on the 1H-tetrazole and an ethylenediamine side chain. The chemical modification of these compounds mainly focused on the substituted benzene ring. The initial screening of these compounds was carried out using the radioactive PRMT1 methylation inhibition assay, which measured the amount of methyl groups transferred from [3H]-SAM to a biotinylated histone H4 peptide (ac-SGRGKGGKGLGKGGAKRHRKVGGK(Biotin)). In the assay, SAH and AMI-1 were used as the positive controls. It was found that, in Group I, the inhibitory activities of the para-substituted compounds were in general better than those of the meta-substituted ones. Three compounds (9a, 9f, 16c), which contained 4-OCH(CH3)2, 4-OCF3 and 4-OCH2CF3 groups, respectively, showed strong inhibitory effects (over 47%) at 10 µM against PRMT1 in the initial screening and were selected for IC50 determinations; their IC50 values were 3.5 µM, 23.8 µM and 19.9 µM, respectively. It should be noted that the IC50 of compound 9a is around seven times weaker than that of SAH but around 20 times stronger than that of AMI-1 (Table 1). In the second step, the Group II compounds were designed and synthesized by extending the ethylenediamine side chain to propylenediamine. However, the inhibitory activities of all the compounds in Group II (10a-e, 16e) were either totally abolished or remarkably reduced in comparison with those of Group I. In the third step, in order to evaluate the influence of other amino side chains, we designed and synthesized a series of compounds (18a-e, 20, Group III) based on the structure of compound 9a.
Among these compounds, 18a and 18e exhibited IC50 values of 10.0 µM and 29.0 µM, respectively, against PRMT1, while the other compounds (18b-d, 20) showed relatively poor activities, with inhibitory effects all lower than 39% at a concentration of 10 µM. To further investigate the selectivity profiles of compounds 9a (represented in Figure 4), 9f, 16c, 18a and 18e, the IC50 values against PRMT5 (a type II PRMT) were measured and were all above 100 µM, as illustrated in Table 1. This suggests that, in comparison with the type II PRMT, these inhibitors show good selectivity for PRMT1.

Mechanism of Action (MOA) Study of Compound 9a

The most potent PRMT1 inhibitor in the current study, compound 9a, was selected for the mechanism of action (MOA) study. As illustrated in Figure 5, various concentrations of SAM or peptide were used to evaluate the potency (IC50) of 9a, and no significant changes were observed in response to the concentration changes of either the SAM or the peptide. The results of the MOA studies showed that compound 9a is a noncompetitive inhibitor with respect to either the cofactor SAM or the peptide substrate. Does this result indicate that compound 9a inhibits PRMT1 by binding to an allosteric site? Both PRMT1 and PRMT6 are type I PRMTs, and their structures share a high degree of similarity. The co-crystal structure of PRMT6 with its inhibitor MS023, which also contains an ethylenediamine moiety, has been solved and clearly shows that the ethylenediamine moiety occupies the substrate arginine binding site. Based on the crystal structure, it would be predicted that MS023 should be a competitive inhibitor with respect to the peptide. However, similar to our result, the MOA results showed that MS023 was noncompetitive with either SAM or the peptide substrate [28]. Therefore, we believe that compound 9a may occupy the substrate arginine binding site, although the MOA results did not show competitive inhibition with respect to the peptide. There are potential explanations for these apparently contradictory results. First, the main binding contributions of the substrate may come from regions outside the arginine-binding site; compound 9a would then occupy the arginine-binding site without affecting the binding of the whole substrate peptide. Second, the binding of compound 9a possibly induces major protein conformational changes, which prevent the peptide from entering the binding site to compete with compound 9a, even at high peptide concentrations.

Molecular Modeling Study of the Interactions between Compound 9a and the hPRMT1-SAH Complex

To gain more detailed information regarding the interactions between compound 9a and the hPRMT1-SAH complex, the compound was docked into the protein using the position of the substrate arginine binding site as the reference. A 100 ns molecular dynamics simulation was performed on the complex, and the stable trajectory was then collected for the binding free energy calculations (−9.64 kcal/mol) using MM-PBSA and Normal Mode analysis, as shown in Table 2, which indicated strong binding of compound 9a. It was noticed that, in the stable MD trajectory, compound 9a occupied the substrate arginine binding site (Figure 6A), in which the N,N'-dimethylethylenediamine mimicked the guanidyl group of the arginine substrate and bound closely to the SAH, and the binding pose of compound 9a was stably maintained during the whole simulation process.
The free energy decomposition was performed, and the residues whose binding free energy contributions were more favorable than −0.70 kcal/mol were recorded (Figure 6C). It was noticed that Tyr47, Ile52, Met56, Asp84, Glu108, Glu152, Met154, Tyr156 and Tyr160 formed strong interactions with compound 9a (Figure 6B). Among these residues, Tyr47, Ile52, Met56 and Tyr156 formed strong van der Waals interactions, in which Tyr47 and Tyr156 formed π-π interactions with the aromatic ring of compound 9a, and three negatively charged residues (Asp84, Glu108 and Glu152) contributed large electrostatic interactions because of the positive charges of compound 9a. The MD simulations also showed that compound 9a could form three stable hydrogen bonds with Tyr156, Met154 and Glu152 along the stable trajectory. On the basis of the molecular dynamics simulations, we believe that the binding site of compound 9a is at the substrate arginine binding position, so that the methylation process can be disturbed. However, the molecular dynamics simulation results need to be further validated by experimental data, such as mutations of the key residues in the active site or co-crystallization, which will be considered in our following work.

Figure 6: (B) The interactions between key residues (carbon atoms colored in cyan) and compound 9a (carbon atoms colored in green); the hydrogen bonds are represented by black dotted lines. (C) The residues whose binding free energy contributions were greater than −0.7 kcal/mol.
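The per-residue filter described above (keeping only contributions more favorable than −0.70 kcal/mol) is straightforward to reproduce once the decomposition results are exported to a table. The sketch below assumes a generic CSV with residue and total columns; this is an illustrative assumption, not the actual MM-PBSA output format used in this study.

```python
# Minimal sketch of the residue filter described above: keep residues whose
# per-residue binding free energy contribution is more favorable (more
# negative) than -0.70 kcal/mol. The file name and column labels are
# assumptions; real MM-PBSA decomposition output would need its own parser.
import csv

def favourable_residues(path, cutoff=-0.70):
    hits = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):        # expects columns: residue, total
            total = float(row["total"])
            if total <= cutoff:               # stabilising contributions only
                hits.append((row["residue"], total))
    return sorted(hits, key=lambda r: r[1])   # strongest contributors first

# e.g. favourable_residues("decomp.csv") might return
# [("GLU152", -2.1), ("TYR156", -1.4), ("ASP84", -1.1), ...]
```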
The Alteration of PRMT1 Patterns by 9a at the Cellular Level

Western blotting was performed to confirm that compound 9a altered the arginine methylation patterns of PRMT1 in the highly metastatic breast cancer cell line MDA-MB-231, as it has been reported that the expression of PRMT1 is high in MDA-MB-231 cells [29]. Before the western blotting experiment was performed, the potential cytotoxic effect of compound 9a against MDA-MB-231 (breast cancer) cells was examined using WST-8/CCK8 cell viability assays. The results showed that the IC50 of compound 9a against MDA-MB-231 at 48 h was much higher than 500 µM. We then extended the time to 72 h, and the IC50 was 541.1 ± 6.5 µM. Therefore, the top concentration of compound 9a for the western blotting experiment was chosen as 200 µM, and during the experiment, no obvious cell damage was observed. As shown in Figure 7, after the treatment of MDA-MB-231 cells with 9a for 48 h, the global level of ADMA, which is mainly produced by the type I PRMTs, was significantly decreased in a concentration-dependent manner. It was also noticed that the level of dimethylarginine dimethylaminohydrolases (DDAH), which can metabolize more than 90% of ADMA [30], was significantly decreased in a concentration-dependent manner. These results showed that the decrease in ADMA was due to the inhibition of PRMT1 rather than to metabolism by DDAH, which indicated that compound 9a influenced the PRMT-ADMA pathway. Asymmetric dimethylation of histone H4 at arginine 3 (H4R3me2a) is mediated by PRMT1, and it was observed that compound 9a decreased the global level of H4R3me2a in a concentration-dependent manner. The global level of SDMA, which is catalyzed by type II PRMTs, was not significantly affected. Combining the above information, it could be confirmed that 9a has significant influences on the PRMT pathway at the cellular level.

Figure 7: Western blotting showed that the global levels of asymmetrical dimethylarginine (ADMA), dimethylarginine dimethylaminohydrolases (DDAH) and H4R3me2a were significantly decreased in concentration-dependent manners, and the global level of symmetrical dimethylarginine (SDMA) was not significantly changed. On the Wnt pathway, the western blotting results showed that the global levels of Wnt3a and β-catenin were significantly decreased in concentration-dependent manners, while the level of Wnt5a/b was not changed. The experiments were repeated in triplicate.

It has been reported that PRMT1 is required for canonical Wnt signaling [31], and another recent paper reported that the depletion of PRMT1 blocked Wnt-induced macropinocytosis [32], which prompted us to examine whether the inhibition of PRMT1 by compound 9a influences the Wnt signaling pathways. The global levels of Wnt3a/β-catenin (canonical Wnt signaling) and Wnt5a (non-canonical Wnt signaling) were evaluated; the levels of Wnt3a and β-catenin were decreased in concentration-dependent manners, while the level of Wnt5a did not change significantly. The results suggested that the inhibition of PRMT1 by compound 9a selectively down-regulated the canonical Wnt signaling pathway, which is of potential medical interest. To further confirm the relationship between PRMT1 inhibitors and the Wnt pathway, more factors or biomarkers need to be evaluated in future work, such as matrix metallopeptidase 2 (MMP2), matrix metallopeptidase 9 (MMP9), and methylation of Ras GTPase-activating protein-binding protein 1 (G3BP1).

General Information

All the required chemicals were purchased from commercial sources and used without further purification. TLC was performed on silica gel 60 pre-coated aluminium plates (0.20 mm thickness) from Macherey-Nagel, and visualisation was accomplished with UV light (254 nm). Compounds were purified by flash column chromatography using 80-100 mesh silica gel. 1H NMR and 13C NMR spectra were obtained on Bruker AVANCE III 400 spectrometers using chloroform-d, DMSO-d6 or deuterium oxide as solvent. The chemical shifts, given as δ values, are quoted in parts per million (ppm); 1H NMR chemical shifts were measured relative to internal tetramethylsilane; multiplicities are quoted as singlet (s), doublet (d), triplet (t), quartet (q) or combinations thereof as appropriate. HRMS spectra were obtained on a Thermo Q-Exactive Orbitrap mass spectrometer. Melting points were determined on a WRS-1B apparatus and are uncorrected.

Synthesis of 9a-f and 10a-e

General procedure for the preparation of compounds 2a-f. Triethylamine (2.5 eq) was added to the substituted aniline 1a-f (1.0 eq) dissolved in anhydrous dichloromethane (1.5 mL/mmol). The reaction mixture was cooled to 0 °C and ethyl oxalyl monochloride (1.2 eq) was added dropwise to the solution. Subsequently, the reaction was warmed to room temperature and stirred for 1 h. The reaction mixture was quenched with water and extracted with dichloromethane; the organic layer was dried over anhydrous Na2SO4, filtered and concentrated. The residue was purified by column chromatography on silica gel using EtOAc/petroleum ether as eluent to give 2a-f.

General procedure for the preparation of compounds 3a-f. 2a-f (1.0 eq) was dissolved in CCl4 (2.0 mL/mmol), and a solution of triphenylphosphine in CCl4 (0.8 mL/mmol) was added dropwise to the reaction flask at room temperature. The reaction was refluxed and stirred for 6 h, and then cooled to room temperature. The precipitate was filtered off and washed with CCl4.
The filtrate was concentrated in vacuo and used without further purification to give 3a-f.

General procedure for the preparation of compounds 4a-f. To a solution of 3a-f (1.0 eq) in acetonitrile (1 mL/mmol) was added sodium azide (1.5 eq), and the reaction was stirred at room temperature for 3 h and monitored by TLC. The mixture was quenched with ice water, concentrated and extracted with ethyl acetate. The combined organic layers were washed with water, dried over anhydrous Na2SO4, filtered and concentrated. The residue was purified by column chromatography on silica gel using EtOAc/petroleum ether as eluent to give 4a-f.

General procedure for the preparation of compounds 5a-f. To a solution of 4a-f (1.0 eq) in anhydrous dichloromethane (1.8 mL/mmol) was added diisobutylaluminium hydride (1.0 M in hexanes, 2.0 eq) dropwise at −78 °C, and the reaction was stirred for 30 min. The mixture was quenched with methanol, concentrated and extracted with ethyl acetate. The combined organic layers were washed with water and 1 M HCl, dried over anhydrous Na2SO4, filtered and concentrated. The residue was purified by column chromatography on silica gel using EtOAc/petroleum ether as eluent to give 5a-f.

General procedure for the preparation of compounds 7a-f and 8a-e. To a solution of 5a-f (1.0 eq) in 1,2-dichloroethane (8.5 mL/mmol) were added sodium triacetoxyborohydride (2.0 eq) and 6a or 6b (1.0 eq), and the reaction was stirred for 12 h at room temperature. The reaction mixture was diluted with water and extracted with dichloromethane. The dichloromethane layer was dried over anhydrous Na2SO4, filtered and concentrated. The residue was purified by column chromatography on silica gel using EtOAc/petroleum ether as eluent to give 7a-f or 8a-e.

General procedure for the preparation of compounds 13a-b. To a solution of 11c-d (0.40 g, 1.71 mmol) in dry DMF were added K2CO3 (0.28 g, 2.05 mmol) and 2,2,2-trifluoroethyl trifluoromethanesulfonate (0.27 mL, 1.88 mmol) at room temperature, and the mixture was stirred for 3 h. The mixture was quenched with water and extracted with ethyl acetate. The organic layer was dried over Na2SO4, and the solvent was removed under reduced pressure to give 13a-b, which were used without further purification.

General procedure for the preparation of compounds 15a-e. To a solution of 14a-d (1.0 eq) in 1,2-dichloroethane (8.5 mL/mmol) were added sodium triacetoxyborohydride (2.0 eq) and 6a or 6b (1.0 eq), and the reaction was stirred for 12 h at room temperature. The reaction mixture was diluted with water and extracted with dichloromethane. The dichloromethane layer was dried over anhydrous Na2SO4, filtered and concentrated. The residue was purified by column chromatography on silica gel using EtOAc/petroleum ether as eluent to give 15a-e.

Radioactive Methylation Assay and MOA Study

The enzyme inhibitory activities were measured by a radioisotope assay at Shanghai ChemPartner Co. Ltd. according to the standard protocol; a similar procedure was recently described [25]. The radioactive methylation assay was performed in 1× assay buffer (modified Tris buffer) containing enzyme (PRMT1/5), peptide, [3H]-SAM solution and compounds on the assay plate. After 250 nL of compound solutions were added to the assay plate, 15 µL of PRMT1/5 enzyme solution (or 1× assay buffer for the negative control) was transferred to each well of the prepared compound stock plates, and the whole system (final concentration of PRMT1 or PRMT5 of 0.5 nM or 2 nM, respectively) was incubated for 15 min at room temperature.
Then, 10 µL of the peptide and [3H]-SAM mixed solution was added to each well to start the reaction (final concentration of [3H]-SAM of 0.25 µM), and the reaction was incubated for 60 min at room temperature. Afterwards, the reaction was stopped by the addition of 5 µL of cold SAM solution to each well. Then, 25 µL per well was transferred from the assay plate to a FlashPlate and incubated for a minimum of 1 h at room temperature. Finally, the FlashPlate was washed three times with dH2O and 0.1% Tween-20, and the plate was read in a MicroBeta counter using the 3H-FlashPlate program. The data were analyzed in GraphPad Prism 5 to obtain IC50 values. The MOA study of compound 9a was performed against PRMT1 with respect to SAM and the peptide substrate independently. In brief, the peptide concentration was kept at 100 nM (10× Km) and the IC50 values of 9a were determined at different SAM concentrations (0.5, 1, 2, 6, 20 and 60× Km). Then, the SAM concentration was kept at 250 nM (1× Km) and the IC50 values of 9a were monitored at different peptide concentrations (0.5, 1, 3, 10, 30 and 100× Km). The method is consistent with the procedure described above.
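For readers without GraphPad Prism, the same IC50 estimation can be sketched with an open-source four-parameter logistic fit. The data points below are invented for illustration, and the 4PL form is a standard choice rather than the exact model used by the authors.

```python
# Minimal sketch of IC50 estimation via a four-parameter logistic (4PL) fit,
# as an open alternative to GraphPad Prism. Data points are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """4PL dose-response: activity as a function of inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])         # inhibitor, uM
activity = np.array([98.0, 92.0, 75.0, 48.0, 22.0, 8.0])  # % enzyme activity

params, _ = curve_fit(four_pl, conc, activity, p0=[0.0, 100.0, 3.0, 1.0])
print(f"estimated IC50 = {params[2]:.2f} uM")              # ~3 uM for these data
```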
Molecular Modeling

Since the crystal structure of human PRMT1 (hPRMT1) has not been solved, the homology model of hPRMT1 in complex with SAH, built using the crystal structure of rat PRMT3 (rPRMT3, PDB 1F3L) as the template according to our previous publication [23], was used for the current molecular modeling study. By overlaying the crystal structure of PRMT6-MS023 (PDB ID 5E8R) with hPRMT1, the position of MS023 was used as the reference to indicate the binding site, and compound 9a was then docked using the Glide module in Schrödinger Release 2017-4 with default settings [33]. The compound 9a and hPRMT1-SAH complex was then prepared for molecular dynamics simulations. QM calculations were performed using B3LYP with the 6-31G* basis set within Gaussian 16 [34] to optimize the molecular geometries of compound 9a and SAH, and the atom-centered point charges were calculated to fit the electrostatic potential using RESP [35]. The system was explicitly solvated in a truncated octahedral TIP3P water box (12 Å from the complex, to avoid periodic artefacts) using Amber 16 with the Amber ff14SB force field [36] and the Generalized Amber Force Field (GAFF) [37]. Eleven K+ ions were added to neutralize the charges of the system. The whole system was first optimised by energy minimisations and equilibrations according to our standard protocol [23], and then a 100 ns free production molecular dynamics simulation followed in the NPT ensemble (T = 300 K; P = 1 atm). The long-range electrostatic effects were treated using periodic boundary conditions (PBC) and the particle-mesh Ewald (PME) method, and the temperature was coupled to an external bath using a weak coupling algorithm [38]. The non-bonded interaction cutoff was set to 8 Å, and the bond interactions involving H atoms were constrained using the SHAKE algorithm. The time step for solving Newton's equations was chosen as 2 fs, and trajectory frames were collected every 100 ps for subsequent analysis. MM-PBSA and Normal Mode calculations were performed for the binding free energies, based on 300 snapshots collected from the stable trajectory. The binding free energy contribution of each residue was calculated, and only those more favorable than −0.7 kcal/mol were recorded for detailed interaction analysis.

Cell Culture

The MDA-MB-231 cell line used in this study was obtained from the American Type Culture Collection (ATCC, Rockville, MD, USA) and was grown in Dulbecco's Modified Eagle Medium (DMEM, Gibco, Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 10% fetal bovine serum (FBS) (Gibco). The cells were incubated at 37 °C and 5% CO2.

Cell Viability Assay

The cytotoxicity was determined by the CCK8 assay. Briefly, MDA-MB-231 cells (5 × 10^3 cells/well) were seeded in 96-well plates in DMEM containing 10% FBS and grown for 24 h. The exponentially growing cells were incubated with various concentrations of compounds for 48 h/72 h in serum-free DMEM at 37 °C (5% CO2, 95% humidity). CCK8 reagent (10 µL) was added to each well and incubated for a further 2 h, and the absorbance was then measured in a multiwell-plate reader (BioTek ELx800) at 450 nm.

Western Blotting

MDA-MB-231 cells were maintained on 6-well plates at the appropriate density. After attachment, cells were treated with 9a at the indicated concentrations, or with DMSO as control, for 48 h. Total cell lysates were separated by SDS-PAGE and transferred onto nitrocellulose membranes. The blots were blocked with 5% nonfat milk for 30 min, and the target proteins were probed with the appropriate specific antibodies overnight at 4 °C. The blots were washed with TBST three times and then incubated with anti-rabbit secondary antibody (HRP-conjugated) for 1 h. After another three washes, bands were detected in a ChemiScope3400 imaging system using ECL substrate (Millipore, Burlington, MA, USA). Primary antibodies used were as follows: anti-PRMT1 (Cell Signaling Technology no. 2449, Danvers, MA, USA), anti-ADMA (Cell Signaling Technology no. 13522), anti-SDMA (Cell Signaling Technology no. 13222), anti-GAPDH (Cell Signaling Technology no. 5174), anti-H4R3me2a (Abcam, ab9231), Wnt3a (Cell Signaling Technology no. 2721s), anti-DDAH (Abcam, ab9231, Cambridge, UK), Wnt5a/b (Cell Signaling Technology no. 2530s), β-catenin (Cell Signaling Technology no. 8480s), MMP9 (Cell Signaling Technology no. 1366s). The results were obtained from multiple membranes. No significant differences were observed among the loading controls; therefore, a representative loading control blot image is presented in the figure.

Conclusions

In summary, through computer-aided drug design, we designed and synthesized 22 1,5-substituted tetrazole derivatives. Among them, five compounds (9a, 9f, 16c, 18a, 18e) showed strong inhibitory effects on PRMT1. Compound 9a was identified as the most potent PRMT1 inhibitor (IC50 = 3.5 µM) in the current study, and it showed strong selectivity for PRMT1 (a type I PRMT) over PRMT5 (a type II PRMT). The MOA assay showed that compound 9a did not compete with either SAM or the peptide, but based on the crystal structure of human PRMT6 with its inhibitor, combined with the molecular dynamics simulation study, it is believed that compound 9a binds to the substrate-arginine binding site. Western blotting confirmed that compound 9a inhibited the PRMT1 pathway and that the canonical Wnt/β-catenin signaling pathway was down-regulated. The discovery of compound 9a is likely to prove important for the understanding of PRMT1 function, and 9a is a potential lead compound for future drug design efforts targeting PRMT1.
5,934.6
2019-08-01T00:00:00.000
[ "Chemistry", "Medicine" ]
Chitosan Sensitivity of Fungi Isolated from Mango (Mangifera indica L.) with Anthracnose

In Mexico, the mango crop is affected by anthracnose caused by Colletotrichum species. In the search for environmentally friendly fungicides, chitosan has shown antifungal activity. Therefore, fungal isolates were obtained from plant tissue with anthracnose symptoms from the state of Guerrero in Mexico and identified with the ITS and β-Tub2 genetic markers. Isolates of the Colletotrichum gloeosporioides complex were further identified with the markers ITS, Act, β-Tub2, GAPDH, CHS-1, CaM, and ApMat. Commercial chitosan (Aldrich, lot # STBF3282V) was characterized, and its antifungal activity was evaluated on the radial growth of the fungal isolates. The isolated anthracnose-causing species were C. chrysophilum, C. fructicola, C. siamense, and C. musae. Other fungi found were Alternaria sp., Alternaria tenuissima, Fusarium sp., Pestalotiopsis sp., Curvularia lunata, Diaporthe pseudomangiferae, and Epicoccum nigrum. The chitosan showed a 78% degree of deacetylation and a molecular weight of 32 kDa. Most of the Colletotrichum species and the other identified fungi were susceptible to 1 g L−1 chitosan; however, two C. fructicola isolates were less susceptible. Although chitosan has antifungal activity, the interactions between species of the Colletotrichum gloeosporioides complex and their effect on chitosan susceptibility should be studied on the basis of genomic changes, with molecular evidence.

Introduction

Diseases caused by phytopathogenic fungi during pre- and post-harvest storage lead to significant losses for farmers and generate conditions for food insecurity [1]. Farmers have managed to minimize losses in the production of horticultural products with the use of agrochemicals, for example, fungicides such as azoxystrobin, fludioxonil, captan, merivon, imazalil, propiconazole, fosetyl-Al, orthophenylphenol, prochloraz, pyrimethanil, methylthiophanate, and thiabendazole, among others [1-4]. However, some of the disadvantages of using these products are the resistance that fungi may develop [5], as well as damage to health and to the environment [6]. This highlights the need to control pre- and post-harvest diseases caused by phytopathogens with compounds that contribute to the success of sustainable agriculture and reduce the use of harmful agrochemicals [7]. The development of alternatives to traditional fungicides aims to reduce the use of environmentally harmful products for the control of phytopathogenic fungi [8]. In this regard, some compounds of natural origin, such as essential oils, methanolic extracts, plant extracts, lipoproteins, and chitosan, have shown antifungal effects [7,9,10]. Chitosan is the direct derivative of chitin; it is a natural, biodegradable, non-toxic compound with fungicidal effects that also induces defense mechanisms in plant tissues [11]. Likewise, chitosan has been evaluated against phytopathogenic fungi, showing antifungal activity against Fusarium, Rhizopus, Aspergillus, Alternaria, and Colletotrichum [12-15]. The benefits of chitosan in agriculture encourage its use for the pre- and post-harvest control of horticultural fruits [16]. However, the sensitivity of different fungal strains to chitosan often varies according to intrinsic characteristics of each species, e.g., particularities in cell wall and membrane composition.
One of the most important fruits in Mexico is the mango (Mangifera indica L.); its 2019 production of 2,089,041 t positioned Mexico as the sixth-largest mango producer worldwide, and the state of Guerrero is one of the leading producers nationwide [17]. However, farmers in Mexico still report losses related to various fungal diseases, one of them being anthracnose caused by fungi of the Colletotrichum genus [18,19]. Earlier studies have shown the antifungal effect of chitosan on Colletotrichum isolates [20], most of them identified as part of the Colletotrichum complexes; nevertheless, there is still little information on chitosan sensitivity at the level of the Colletotrichum species. A Colletotrichum complex can be identified by genomic alignment of at least one gene, while species identification requires at least three genes [21]. This research aimed to evaluate the in vitro chitosan sensitivity of fungal isolates obtained from anthracnose injuries in mango from Guerrero, Mexico. The species were identified with seven genes using a genomic alignment approach.

Identification of Fungal Isolates

The sequences of ITS and β-Tub2 (first genomic alignment) allowed us to classify the fungal isolates obtained from leaves and fruit into seven main clades consisting of different fungal genera, in which seven isolates belong to the C. gloeosporioides complex, the causal agent of anthracnose in mango (Figure 1). Likewise, the non-Colletotrichum fungal isolates found in the present work belong to species associated with mango infections (Figure 1). The second genomic alignment, using ITS, Act, β-Tub2, GAPDH, CHS-1, CaM, and ApMat sequences from the isolates of the Colletotrichum complex, allowed the identification of four species of the Colletotrichum genus (Figure 2). Among the isolates belonging to the Colletotrichum complex species, only one was isolated from fruit, while the rest were found in infected leaves. The C. gloeosporioides complex is more adapted to infecting vegetative tissues than fruit, contrasting, for example, with the C. acutatum complex, which is more adapted to fruit infection [21]. Concerning the fungal identification of mango isolates, Tovar-Pedraza et al. [22] earlier obtained isolates from the C. gloeosporioides complex and found the species C. alienum, C. asianum, C. siamense, and C. tropicale using Apn2/MAT intergenic spacer sequences. In contrast, our results suggest the species C. fructicola, C. chrysophilum, C. musae, and C. siamense as causal agents of anthracnose in mango. These differences may be related to the sample size and the goals of each study. Tovar-Pedraza et al. [22] obtained samples from eight Mexican states (Sinaloa, Nayarit, Colima, Michoacan, Guerrero, Oaxaca, Chiapas, and Veracruz); their goal was to determine the distribution of Colletotrichum species in mango, while our study considered samples from one Mexican state (Guerrero) to obtain Colletotrichum isolates from mango for evaluating their sensitivity to chitosan. Additionally, Li et al. [23] reported C. asianum, C. fructicola, and C. siamense on mango in China. Studies on anthracnose disease in mango have shown that the Colletotrichum species belong to the C. gloeosporioides complex, and multiple markers are necessary for proper species identification [24].
In addition to the Colletotrichum species related to anthracnose in mango, the other identified fungal isolates belong to six different genera, including Alternaria, Fusarium, Pestalotiopsis, Curvularia, Diaporthe, and Epicoccum (Figure 1). These fungi can act as saprophytes and have been reported to occasionally cause diseases in mango, with some symptoms resembling anthracnose [25-30].

Chitosan Characterization and Sensitivity of Isolated Fungi

The FT-IR spectrum corresponding to the chitosan sample (Figure 3) allowed us to calculate a degree of deacetylation of 78.5 ± 0.1%, and the molecular weight was 32.0 ± 6.4 kDa as determined with a capillary viscometer. Usually, chitosans in the 10-200 kDa range are considered to be of low molecular weight [31]. The degree of deacetylation and the molecular weight are related to the type of biological activity of chitosan against fungi, as is well documented; Grande-Tovar et al. [32] summarize the three main mechanisms proposed in recent years: (1) interaction between the amino groups of chitosan and anionic groups on the cell wall surface; (2) interaction of the positive amino groups of chitosan with the negative charges of phospholipids; and (3) binding of DNA.
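The viscometric molecular weight reported above follows from the Mark-Houwink relation [eta] = K·M^a, inverted to M = ([eta]/K)^(1/a). The sketch below uses K and a values typical for chitosan in an acetate buffer system; these constants and the example intrinsic viscosity are assumptions for illustration, not values from this study.

```python
# Minimal sketch of viscometric MW from the Mark-Houwink relation
# [eta] = K * M^a. K, a and the example intrinsic viscosity are assumed
# literature-style values for chitosan, not data from this study.
K = 1.81e-3   # mL/g (assumed solvent system)
a = 0.93      # dimensionless exponent (assumed solvent system)

def viscosity_average_mw(intrinsic_viscosity_ml_per_g):
    """Invert [eta] = K * M^a to get the viscosity-average molar mass (g/mol)."""
    return (intrinsic_viscosity_ml_per_g / K) ** (1.0 / a)

# An intrinsic viscosity near 28 mL/g would correspond to roughly 32 kDa:
print(f"{viscosity_average_mw(28.0) / 1000:.1f} kDa")
```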
The isolate H1-3 was sensitive, but the C. fructicola isolates H4-1 and 003 were less sensitive to 1 g L−1 chitosan (Figure 4, Table 1). All the other Colletotrichum isolates were sensitive to chitosan (Table 1). The C. fructicola isolate 003 was the only one obtained from fruit. Its high growth on PDA-lactic acid contrasts with its growth on PDA, because it may be adapted to developing in tissues where organic acids are present (such as those in the fruit). Lactic acid can mimic the organic acids present in the fruit, and the absence of these acids in the artificial PDA medium without added lactic acid may be a factor affecting mycelial development. However, there are not enough data to hypothesize about what happens with this isolate, since it is the only one obtained from fruit.

Table 1 shows the radial growth rates in the log phase and the percentage of radial inhibition at 120 h of the Colletotrichum species causing anthracnose on mango from Mexico, exposed to chitosan at a concentration of 1 g L−1. Most isolates were susceptible to chitosan except for two C. fructicola specimens. The effect of chitosan on fungal biological systems in relation to molecular weight (low, medium, or high) is well known: low-molecular-weight chitosan can be more effective against mycelial growth [31]. Additionally, other studies have suggested that concentration may also be a factor that generates diverse defense responses in fungi. In general, over 1 g L−1 of chitosan inhibits 80-100% of fungal growth [33,34], and complete in vitro inhibition occurs from 10 g L−1 [35]; however, after several hours, growth recovers [36]. In contrast, low concentrations of chitosan (1 g L−1 and below) inhibit fungal growth [37-39], but there are other effects on the cell at low concentrations: chitosan binds to the negatively charged cell surface and disturbs the cell membrane, inducing leakage of intracellular components [40], stimulates respiration, and produces the efflux of significant amounts of cations [36]. In Colletotrichum species, inhibition of around 25% at concentrations of 1 g L−1 indicates a fungistatic effect [41]. In this study, the antifungal capacity against mycelial growth was evaluated at concentrations of 0.1 to 1 g L−1, so as to cause a moderate attack of chitosan on the fungal cells, i.e., fungistatic activity (not total inhibition or fungicidal activity). Chitosan inhibited most of the isolates exposed to 0.75 and 1 g L−1 of chitosan (Table 2). The results for the other concentrations tested are included in Table 2. The tested range of chitosan concentrations and inhibitions was insufficient to estimate the MIC against the isolated Colletotrichum; adjusting final chitosan concentrations to 10.0, 5.0, 2.5, 1.25, 0.625, 0.312 and 0.0 g L−1 would allow estimating the MIC for Colletotrichum gloeosporioides [42]. Among the Colletotrichum isolates, the causal agents of anthracnose, it was possible to differentiate less susceptible strains of the same species at low concentrations (H4-1 and 003). This fact is relevant for future studies in our research group to elucidate the mechanisms of susceptibility, or the possible resistance, of Colletotrichum isolates to chitosan, which should be studied on the basis of genomic changes with molecular evidence. The percentage of radial growth inhibition at a 1 g L−1 concentration of chitosan was greater than 10%, and the log-phase radial growth rates with respect to PDA-acid were reduced, except for the H4-1 and 003 isolates (Table 1). Earlier studies have shown the effect of chitosan on the radial growth of Colletotrichum species. Ramos et al. [43] reported radial growth inhibition at concentrations higher than 5 g L−1 of chitosan with 40 kDa molecular weight and 85% DD in the species C. asianum, C. fructicola, C. tropicale, and C. siamense. The radial extension rate decreased in most of the Colletotrichum species growing in vitro with chitosan. The growth of fungi includes four distinctive phases: the lag phase (I), the log phase (II), the slow-down phase (III), and the steady growth phase (IV) [44]. During the log phase of balanced growth, the mycelium of fungi undergoes primary metabolism [45], so a decrease in rate is indicative of fungistatic activity through inhibition of the primary metabolism.
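The inhibition percentages in Tables 1 and 2 follow the usual definition, comparing colony radii on chitosan-amended and control media at the same time point; the sketch below is a minimal illustration with invented radii.

```python
# Minimal sketch of the radial growth inhibition percentage: colony radius on
# the chitosan-amended medium vs the control medium at the same time point.
# The example radii are invented for illustration.
def radial_inhibition(control_radius_mm, treated_radius_mm):
    """Percent inhibition relative to the control."""
    return 100.0 * (control_radius_mm - treated_radius_mm) / control_radius_mm

# e.g. 40 mm on PDA-lactic acid vs 30 mm with 1 g/L chitosan at 120 h:
print(f"{radial_inhibition(40.0, 30.0):.0f}% inhibition")  # 25%
```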
However, radial growth of the C. fructicola isolate H4-1 from leaves was less inhibited, while that of isolate 003 from the fruit was not inhibited by 1 g L−1 chitosan at 120 h; their log-phase radial extension rates were not affected by the low-molecular-weight chitosan. No earlier studies were found in which the effect of chitosan was evaluated on C. fructicola isolates identified using more than three genetic markers, which is necessary to ensure proper identification to the species level in the genus Colletotrichum. Ramos et al. [43] reported inhibition of C. fructicola at a 5 g L−1 chitosan concentration; this species was among the least inhibited by chitosan, although it showed hyphae with granular and corrugated surfaces when exposed to chitosan. The lower inhibition that chitosan exerted on isolates H4-1 and 003 may also be related to the virulence of the fungus; for instance, C. fructicola was more aggressive than C. siamense on peach [46], but in mango from Mexico it has been reported that C. siamense and C. asianum have higher virulence than C. fructicola [22]. These variations in the degree of virulence of the fungus on its host can also be reflected in the sensitivity to the fungicides that are applied, so one of the aspects to consider for the control of Colletotrichum complex species is the execution of fungicide sensitivity tests [21] with species-level identification using more than three genetic markers. In our study, two strains of C. fructicola were less susceptible. Therefore, it would be interesting to evaluate whether there is a relationship between the degree of virulence (high or low) in mango and the sensitivity to chitosan, as a hypothesis to be tested in future studies.

Concerning the fungi that did not belong to the Colletotrichum genus, all the isolates found were susceptible to chitosan (Table 3; Figures 8-13 and Table 2). In Mexico, anthracnose is the primary disease caused by Colletotrichum that affects mango crops, but it cannot be ruled out that infections caused by other fungal genera can generate problems for mango production; in that case, chitosan may be an alternative to evaluate. Furthermore, the presence of these fungi could affect fruit quality by worsening necrotic signs at sites initially injured by Colletotrichum.

Fungal isolates of plant tissue with anthracnose in mango from Mexico belong to the Colletotrichum complex. The use of seven genetic markers in the genomic alignment identified the species C. fructicola, C. musae, and C. chrysophilum. The fungi of the Colletotrichum complex are susceptible to chitosan, except for two isolates of the species C. fructicola that showed less susceptibility. Likewise, the genera Alternaria, Fusarium, Pestalotiopsis, Curvularia, Diaporthe, and Epicoccum, which cause other diseases in mango, showed susceptibility to chitosan in all cases.
Therefore, chitosan is an alternative to be evaluated for the control of anthracnose and other fungal infections in mango. However, given the demonstrated lower susceptibility to chitosan of some C. fructicola specimens, the interactions between the species of the C. gloeosporioides complex in anthracnose and their effect on the susceptibility or resistance to chitosan must be considered.

Table 3. Chitosan sensitivity of other fungal species isolated from anthracnose on mango. Radial growth on PDA, PDA-lactic acid (0.05 M), and PDA-lactic acid (0.05 M) with 1 g L−1 chitosan at 25 °C.

Identification of Fungal Isolates

Colletotrichum isolates were obtained in the 2019 agricultural cycle from Cuajinicuilapa, Guerrero, Mexico. Mango leaves with anthracnose symptoms from the lower foliage of the tree were cut from the petiole and stored individually on paper towels. A leaf with anthracnose disease shows black necrotic spots of irregular shape on both sides of the leaf. The infected leaves were transferred to the laboratory at room temperature to obtain fungal isolates. Likewise, samplings were carried out on commercial-maturity mango fruit with anthracnose from Guerrero. A mango with anthracnose shows deep, prominent, and generally rounded dark brown to black spots [20,47]. From the infected leaves and fruits, 0.5 × 0.5 cm tissue sections were cut and disinfected with 0.5% (v/v) sodium hypochlorite (NaOCl) for 2 min; they were then washed with sterile distilled water and dried with sterile filter paper. Each fragment was individually deposited in the center of a Petri dish with Potato Dextrose Agar (PDA) culture medium and incubated at 25 °C in the absence of light for 8-10 days, until mycelium had developed. DNA extraction was carried out from the mycelium of each colony: a sterile scalpel was used to obtain 100 mg of mycelium, which was placed in Eppendorf tubes (5 mL). Mycelium disruption was carried out through three methods: in the first, the mycelium sample was crushed in liquid nitrogen using a mortar; in the second, glass beads and a vortex were used, crushing the mycelium for 1 min; and the third used a grinder with pellet pestles. The third method gave the best DNA extraction yields. The fungal isolates were identified by a multi-locus sequence analysis scheme based on five genes (actin, Act; beta-tubulin 2, β-Tub2; glyceraldehyde-3-phosphate dehydrogenase, GAPDH; chitin synthase 1, CHS-1; calmodulin, CaM), the ribosomal internal transcribed spacer (ITS), and the Apn2-Mat1-2 (ApMat) intergenic spacer region. Genomic DNA was extracted with the Plant/Fungi DNA Isolation Kit (Norgen Biotek Corporation, Canada) following the manufacturer's instructions and used as a template for PCR reactions using GoTaq® Flexi DNA Polymerase (Promega, USA) and specific primers for each gene (Table 4).

Table 4. Primer sequences for identification of fungal isolates.
Chitin synthase 1 (CHS-1): CHS1-79F (TGGGGCAAGGATGCTTGGAAGAAG), CHS-1-354R (TGGAAGAACCATCTGTGAGAGTTG) [48,49]
Apn2-Mat1-2 intergenic spacer (ApMat): AMF1 (TCATTCTACGTATGTGCCCG), AMR1 (CCAGAAATACACCGAACTTGC) [50]

The PCR products were purified with GFX columns (Amersham Biosciences, Piscataway, NJ, USA) and sequenced at Macrogen Inc. (Seoul, Korea). Sequences were analyzed against the GenBank database with the Blast tool (https://blast.ncbi.nlm.nih.gov/Blast.cgi), and the DNA sequences of the top-hit matches were used as reference organisms for phylogenetic analysis.
DNA sequence alignments were performed with the Clustal W function. Two concatenated phylogenetic trees were constructed with MEGA-X software v. 10.2.6 using the maximum-likelihood method and the general time-reversible model with gamma distribution and proportion of invariable sites (GTR + G + I) to estimate the evolutionary distances (1000 bootstrap replicates) [49,51]. For the first tree, the combined sequences of ITS and β-Tub2 were used to determine the genus of the fungal isolates. The second tree was constructed using the ITS, Act, β-Tub2, GAPDH, CHS-1, CaM, and ApMat sequences and was directed at characterizing the members of the Colletotrichum complexes associated with anthracnose disease in Mangifera indica L.

Chitosan Characterization and Sensitivity of Isolated Fungi

Low-molecular-weight chitosan (Aldrich, lot # STBF3282V, Saint Louis, MO, USA) was mixed and triturated with 120 mg of KBr for 10 min. The mixture was compacted with a hydraulic press (8 tons of pressure for 16 h), and the resulting tablet was analyzed using a Fourier-transform infrared (FT-IR) spectrometer (Spectrum GX FT-IR System, Perkin Elmer™, Shelton, CT, USA). The spectrum was recorded in the frequency range of 4000-400 cm−1 [52]. The DD was calculated by the method proposed by Brugnerotto et al. [53], using reference baselines in the FT-IR spectrum at 1320 and 1420 cm−1 with equations (1) and (2) of Table 5. The molecular weight (kDa) was obtained using the Mark-Houwink-Sakurada equation (Table 5), where [η] is the intrinsic viscosity of the polymer, Mv the viscosity-average molecular weight in Da (g mol−1), k = 0.070 g mL−1, and α = 0.81, according to the DD of the chitosan [54,55]. The intrinsic viscosity was calculated by extrapolating the Huggins and Kraemer equations (Table 5) to zero concentration [56,57], using an Ubbelohde viscosimeter (0B-L123, CANNON) submerged in a recirculating water bath at a constant temperature of 25 °C. Chitosan solutions were prepared at concentrations of 0.002-0.003 g L−1 [54], using a solution of 0.3 M acetic acid and 0.2 M sodium acetate as the solvent [58].

Table 5. Equations for determining the deacetylation degree and molecular weight of chitosan.
(1) A1320/A1420 = 0.3822 + 0.03133 DA (DA: acetylation degree)
(2) DD = 100 − DA (DD: deacetylation degree)
(3) Relative viscosity: ηrel = t(flux, chitosan solution) / t(flux, solvent)
(4) Specific viscosity: ηsp = ηrel − 1
(5) Huggins: [η] = ηsp / C
(6) Kraemer: [η] = (ln ηrel) / C
(7) Mark-Houwink-Sakurada: [η] = k (Mv)^α

The chitosan activity on mycelial growth was evaluated on the fungal isolates obtained. A chitosan solution in lactic acid (0.05 M) was prepared; the pH was adjusted with NaOH (1 N) to 5.6, and the solution was sterilized for 15 min at 121 °C. Likewise, a potato dextrose agar (PDA) culture medium was prepared and sterilized. At 45 °C, the chitosan solution and the PDA culture medium were mixed, and 20 mL were poured into sterile Petri dishes and cooled until solidified. The treatments obtained were PDA culture medium, acidified PDA culture medium (0.05 M lactic acid), and acidified PDA culture medium (0.05 M lactic acid) with chitosan (0.1-1 g L−1). Spores from colonies after 10 days of growth were extracted with a sterile microbiological loop and inoculated at the center of the culture medium within the Petri dish. Plates were incubated at 25 °C, and colony diameters were manually measured every 24 h until the colony covered 80-90% of the surface of the plate.
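To make the spectroscopic and viscometric calculations above concrete, the following is a minimal Python sketch of the Table 5 equations; the function names, the dilution-series interface, and the use of the Huggins intercept alone are illustrative assumptions, not the authors' code.

```python
import numpy as np

def deacetylation_degree(a1320, a1420):
    """Brugnerotto FT-IR method, eqs. (1)-(2) of Table 5:
    A1320/A1420 = 0.3822 + 0.03133*DA, DD = 100 - DA."""
    da = (a1320 / a1420 - 0.3822) / 0.03133
    return 100.0 - da

def intrinsic_viscosity(conc, t_solution, t_solvent):
    """Huggins extrapolation to zero concentration, eqs. (3)-(5) of Table 5.
    conc and t_solution are arrays over the dilution series; the intercept
    of eta_sp/C vs. C approximates [eta]."""
    eta_sp = np.asarray(t_solution, dtype=float) / t_solvent - 1.0
    slope, intercept = np.polyfit(np.asarray(conc, dtype=float),
                                  eta_sp / np.asarray(conc, dtype=float), 1)
    return intercept

def viscosity_average_mw(eta_intrinsic, k=0.070, alpha=0.81):
    """Mark-Houwink-Sakurada, eq. (7): [eta] = k * Mv**alpha, solved for Mv."""
    return (eta_intrinsic / k) ** (1.0 / alpha)
```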
Subsequently, logarithmic growth kinetics were obtained (logarithm of the colony radius vs. time), and the log phase (exponential growth) was then used to calculate the radial growth rate of this growth stage [44]. The log phase is the most suitable stage for testing the susceptibility of filamentous fungi to antifungal compounds [59]. Likewise, the percentage of radial growth inhibition of the acidified PDA treatment with chitosan (0.1-1 g L−1) was calculated with respect to the acidified PDA culture medium. Finally, the strains that showed sensitivity and resistance to chitosan were scored. The experimental study design was completely randomized; the study factors were the fungal isolate and the culture medium composition. Analyses of variance (ANOVA) and Tukey tests, with the significance level set at p < 0.05, were carried out using JMP version 5.0 (SAS Institute Inc., Cary, NC, USA).
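As a sketch of the two quantities computed above (log-phase radial growth rate and percent radial inhibition), the snippet below shows one plausible implementation; the window boundaries and variable names are assumptions for illustration.

```python
import numpy as np

def log_phase_rate(time_h, radius_mm, t_start, t_end):
    """Slope of log(colony radius) vs. time over the identified
    exponential (log-phase) window [t_start, t_end] in hours."""
    t = np.asarray(time_h, dtype=float)
    r = np.asarray(radius_mm, dtype=float)
    mask = (t >= t_start) & (t <= t_end)
    slope, _ = np.polyfit(t[mask], np.log(r[mask]), 1)
    return slope  # h^-1

def radial_inhibition_percent(radius_control, radius_treated):
    """Percent radial growth inhibition at a fixed time (e.g., 120 h),
    relative to the acidified-PDA control."""
    return 100.0 * (radius_control - radius_treated) / radius_control
```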
5,488
2022-02-01T00:00:00.000
[ "Environmental Science", "Biology" ]
Dualising consistent IIA / IIB truncations

We use exceptional field theory to establish a duality between certain consistent 7-dimensional truncations with maximal SUSY from IIA to IIB. We use this technique to obtain new consistent truncations of IIB on $S^3$ and $H^{p,q}$ and work out the explicit reduction formulas in the internal sector. We also present uplifts for other gaugings of 7-d maximal SUGRA, including theories with a trombone gauging. Some of the latter can only be obtained by a non-geometric compactification.

Introduction

The consistent Kaluza-Klein truncation of higher-dimensional (super)gravity to lower-dimensional theories is an old and generically difficult problem due to the highly non-linear gravitational field equations [1]. Typically, consistent truncations require very particular backgrounds together with very particular matter couplings of the higher-dimensional theory, see e.g. [2-4]. Recent progress has come from the realisation of non-toroidal geometric compactifications via generalised Scherk-Schwarz-type compactifications on an extended spacetime within duality covariant formulations of the higher-dimensional supergravity theories [5-12]. In this language, finding consistent Kaluza-Klein reduction Ansätze translates into the search for Scherk-Schwarz twist matrices satisfying a number of differential consistency equations in the physical coordinates. Most recently, this has been used to work out the full Kaluza-Klein reduction for the AdS$_5 \times S^5$ reduction of IIB supergravity in the framework of exceptional field theory [13].

In this paper we use this framework to study consistent truncations from IIA and IIB supergravity down to seven-dimensional gauged supergravities. Specifically, we establish a duality relating consistent IIA and IIB truncations for certain gaugings of maximal 7-dimensional supergravity. We then employ this duality to derive new consistent truncations of the type IIB theory on the three-sphere $S^3$, as well as on hyperboloids $H^{p,q}$, which lead to compact SO(4), non-compact SO(p,q) and non-semisimple CSO(p,q,r) gaugings, respectively. Finally, we discuss new uplifts to type IIA / IIB of gauged supergravities involving gauging of the trombone scaling symmetry. In this final set of gaugings, we find that some can only be obtained by non-geometric compactifications, in a set-up reminiscent of that recently discussed in [14].

Let us get more specific about the 7-dimensional theories discussed in this paper. In general, the fluxes in half-maximal supergravity are parametrized by an antisymmetric tensor $X_{ABC}$ of the T-duality group SO(d,d) [15], which encodes the T-duality chain of [16],

$$H_{abc} \;\longrightarrow\; f_{ab}{}^{c} \;\longrightarrow\; Q_{a}{}^{bc} \;\longrightarrow\; R^{abc}\,,$$

as well as two SO(d,d) vectors $X_A$ and $f_A$ [17], the latter of which encodes the trombone gaugings. Because the trombone symmetry is an on-shell symmetry, theories with non-zero $f_A$ can only be defined at the level of the equations of motion [18]. For d = 3, i.e. reduction to seven dimensions, $X_{ABC}$ splits into two irreducible representations, $20 \rightarrow 10 + 10'$, which can be parametrized in terms of the SO(3,3) Γ-matrices (or 't Hooft symbols, see for example appendix B of [19]) and symmetric matrices $M_{\alpha\beta}$, $\tilde M^{\alpha\beta}$. Here the indices α, β = 1, …, 4 are fundamental SL(4) ≃ Spin(3,3) spinor indices. Similarly, the vectors can be written in terms of the 6 of SL(4). For simplicity's sake we will take $X_A = f_A = 0$ in the following discussion, although we will reintroduce them later on.
Depending on the choice of $M_{\alpha\beta}$, $\tilde M^{\alpha\beta}$, there are various one-parameter families of seven-dimensional gaugings, most of which are of locally non-geometric origin [19]. A distinguished role is played by the theories satisfying the condition

$$M_{\alpha\beta}\,\tilde M^{\alpha\beta} = 0\,. \qquad (1.4)$$

First, these can be consistently embedded into the maximal theory, and second, the subset where either $M_{\alpha\beta}$ or $\tilde M^{\alpha\beta}$ is non-degenerate allows for a geometric uplift to the type-I theory in ten dimensions, as compactifications on the sphere $S^3$ and the hyperboloids $H^{p,q}$. For the sphere case, the reduction formulas have been worked out in [4] and later explained in the context of generalized geometry / double field theory [19,9,20]. The duality

$$M_{\alpha\beta} \;\longleftrightarrow\; \tilde M^{\alpha\beta} \qquad (1.5)$$

is a symmetry of the quadratic constraints ensuring consistency of the gauging, as a manifestation of a particular triple T-duality [19,21], generated by an element of O(3,3) rather than SO(3,3).

In this paper, we will study the embedding of these structures in the maximal theory with U-duality group SL(5). The above representations are embedded into the U-duality representations according to (1.6). Now the duality (1.5) is no longer a symmetry of one and the same theory. Instead, the different embeddings (1.6) into the representations of the U-duality group induce inequivalent maximal seven-dimensional theories, with gauge groups CSO(p,q,1) for the IIA background and SO(p,q) for the IIB background, respectively [22]. These theories only coincide after truncation to the half-maximal sector. The IIA uplift has been given in [11] via a generalized Scherk-Schwarz Ansatz in an exceptional space in the framework of exceptional field theory [23]. Here we realise the duality (1.5) as an outer automorphism of SL(4) acting on the Scherk-Schwarz twist matrices, and thereby derive the full IIB reduction Ansatz. In particular, the duality exchanges the IIA and the dual IIB coordinates within the 10 coordinates of the exceptional space [24,25],

$$10 \;\longrightarrow\; 3_{\rm IIA} + 3'_{\rm IIB} + 3 + 1\,. \qquad (1.7)$$

We will also show how the triple T-duality acting on the 6's [19],

$$\xi_{\alpha\beta} \;\longleftrightarrow\; \tilde\xi^{\alpha\beta} = \tfrac12\,\epsilon^{\alpha\beta\gamma\delta}\,\xi_{\gamma\delta}\,,$$

is realised in the maximal theory.

The paper is organized as follows. In section 2 we briefly review the pertinent structures of the relevant exceptional field theory and its generalized Scherk-Schwarz reduction Ansatz. In section 3 we realize the duality (1.5) on the Scherk-Schwarz twist matrix, relating consistent IIA / IIB truncations. As an application we work out the full truncation Ansätze for the internal sectors of the IIA and IIB reductions. In particular, this establishes the consistency of the $S^3$ reduction of the IIB theory. In section 4 we discuss the extension of the duality to the gaugings that do not survive the projection to the half-maximal theory. Finally, in section 5 we extend the analysis to the construction of more general twist matrices and obtain new uplifts of various maximal supergravities, including those in which the trombone scaling symmetry is gauged.

EFT and 7-dimensional maximal gauged SUGRA

Our key tool for the study of consistent truncations is the 'exceptional field theory' (EFT) [23,26-28] with its associated extended geometry, see [24,29,30]. This is the duality covariant formulation of higher-dimensional supergravity which renders manifest the exceptional symmetry groups that are known to appear under dimensional reduction [31]. The formulation of interest for studying reductions to maximal seven-dimensional supergravity is the SL(5) exceptional field theory.
Apart from the metric and scalars, it carries 10 vectors $A_\mu{}^{ab}$, as well as 5 two-forms $B_{\mu\nu\,a}$ and 5 three-forms $C_{\mu\nu\rho}{}^{a}$, all fields depending on 7 external and 10 internal coordinates $\{x^\mu, Y^{ab}\}$, with μ = 0, …, 6 and a = 1, …, 5, and all fields subject to the section constraint (2.1) [32]. Three-forms enter the Lagrangian only under internal derivatives, as $\partial_{ab} C_{\mu\nu\rho}{}^{b}$. While the full SL(5) exceptional field theory has not yet been worked out (see [33-35] for EFTs in higher dimensions), its scalar sector has been given and studied in [24,36,37]. Its generalised diffeomorphisms act on weight-zero tensors in the fundamental representations of SL(5).

The section constraint (2.1) admits two solutions [25]. Breaking the U-duality group SL(5) down to the geometric SL(3), the internal coordinates decompose such that the physical coordinates are the $Y^{m4}$ for IIA and the $Y^{mn}$ for IIB, respectively (2.4). Depending on the higher-dimensional origin, it is convenient to parametrize the scalar matrix $M_{ab}$ in a IIA or IIB basis according to (2.5), where for IIA $g_{mn}$ is the metric, $C_m$ is the Ramond-Ramond one-form, $B^m = \tfrac12 \epsilon^{mnp} B_{np}$ is the dualised Kalb-Ramond two-form, $C = \tfrac{1}{3!}\epsilon^{mnp} C_{mnp}$ is the dualised Ramond-Ramond three-form, and φ is the dilaton. For IIB, we follow the conventions of [25], so that all internal indices are placed "upside-down". Thus, $g^{mn}$ represents the metric, $C^{m\,u} = (B^m, C^m) = \tfrac12 \epsilon^{mnp}(B_{np}, C_{np})$ represents the SL(2) doublet formed from the Kalb-Ramond and Ramond-Ramond two-forms, and $H_{uv}$ is the SL(2) matrix parameterised by the dilaton φ and the Ramond-Ramond scalar $C_0$ (2.6). Throughout the paper the metric will be given in Einstein frame, unless otherwise specified.

The EFT formulation of supergravity is a powerful tool for the study of consistent truncations, since a number of geometrically non-trivial reductions can be reformulated as generalized Scherk-Schwarz reductions on the extended space [5,6,19,7,38,8-11]. In the reduction Ansatz, all dependence on the internal coordinates is carried by an SL(5)-valued twist matrix $U_a{}^{\bar a}(Y)$, with the scalar fields reducing according to (2.7) and the remaining EFT fields factorizing as in [11], with a scalar function ρ(Y). The 7-dimensional metric of the full 10-dimensional type II theory, $g_{\mu\nu}$, is related to $G_{\mu\nu}$ above by (2.9), where |g| is the determinant of the metric in the internal directions and $G_{\mu\nu}(x)$ is the metric of the 7-dimensional gauged SUGRA.

Consistency of the reduction Ansatz translates into the set of differential equations (2.10) for the twist matrices (we use the conventions of [11]), with $U_{ab}{}^{\bar a\bar b} \equiv U_{[a}{}^{\bar a} U_{b]}{}^{\bar b}$, and constant tensors $S_{\bar a\bar b}$, $Z^{\bar a\bar b,\bar c}$, $\tau_{\bar a\bar b}$, transforming in the 15, 40', and 10 of SL(5), respectively [7]. These tensors form the torsion of the Weitzenböck connection of EFT [29,39,37,9] and correspond to the embedding tensors of maximal D = 7 supergravity, which describe the allowed gaugings of the seven-dimensional theory [22]. The quadratic constraints which these tensors need to satisfy for consistency are a direct consequence of their definition (2.10) together with the section constraint (2.1), and ensure that the gauge group closes. For later convenience, we spell out these identities in (2.11). In particular, they imply (2.12), where (2.13) defines the "intertwining tensor" coupling the two-forms to the vector field strengths [40], whose rank encodes the number of massive two-forms in the theory. The $X_{\bar a\bar b}$ are the gauge generators evaluated in the vector representation, which take the form (2.14) in terms of the embedding tensors (2.10).
With $\tau_{\bar a\bar b} = 0$, the corresponding theories are the conventional gaugings of D = 7 supergravity constructed in [22]. In particular, the gaugings triggered by $S_{\bar a\bar b}$ correspond to CSO(p,q,5−p−q) gauge groups. The corresponding twist matrices for their D = 11 embedding have been provided in [11]. The gaugings triggered by $Z^{\bar a\bar b,\bar c}$ contain theories with gauge groups CSO(p,q,4−p−q) × (U(1))^{4−p−q} and IIB origin. We will construct the corresponding twist matrices in this paper. A non-vanishing $\tau_{\bar a\bar b}$ corresponds to a gauging of the trombone scaling symmetry of the D = 7 theory, resulting in a theory that can be defined at the level of the equations of motion but does not admit an action [18], while still allowing for an uplift to the IIA/IIB equations of motion.

Dualising IIA / IIB truncations

In the above we have reviewed how consistent truncations of the IIA/IIB theory are encoded in Scherk-Schwarz twist matrices on the extended space (1.7) satisfying the consistency conditions (2.10) and the section constraint (2.1). In this section, we will first realize the duality (1.5) on the twist matrices and on the coordinates of the extended space, in order to map consistent IIA truncations into consistent IIB truncations. In particular, this will provide the full non-linear reduction Ansätze for the reduction of the IIB theory on $S^3$ and the hyperboloids $H^{p,q}$. At the level of the effective seven-dimensional theories this duality is realized on the embedding tensors that define the maximal gaugings, decomposed under the T-duality subgroup SL(4) ⊂ SL(5) as

$$15 \rightarrow 10 + 4 + 1\,,\qquad 40' \rightarrow 10' + 20' + 6 + 4'\,,\qquad 10 \rightarrow 6 + 4\,. \qquad (3.1)$$

We will now discuss the O(3,3) transformation (1.5) that exchanges the 10 ↔ 10' and maps the two 6's into themselves. We will show that it corresponds to a duality between IIA and IIB truncations. This transformation extends SL(4) ≃ Spin(3,3) to Pin(3,3), acting on SL(4) as the outer automorphism.

Duality as an outer automorphism

In order to consider type II truncations, we first perform a dimensional reduction of the exceptional space (1.7). In terms of SL(4) irreducible representations, the coordinates decompose as $Y^{ab} \rightarrow \{Y^{\alpha\beta}, Y^{\alpha 5}\}$. We assume no dependence on the $Y^{\alpha 5}$, i.e. we reduce the exceptional space to the doubled space of DFT [41,43-45], see [46]. Depending on the choice of the physical coordinates among the remaining $Y^{\alpha\beta}$, the theory is of IIA or IIB origin according to (2.4). Let us also introduce the notation

$$\tilde Y_{\alpha\beta} = \tfrac12\,\epsilon_{\alpha\beta\gamma\delta}\,Y^{\gamma\delta}\,,$$

where $\epsilon_{\alpha\beta\gamma\delta}$ is the 4-dimensional totally antisymmetric symbol. The flip between IIA and IIB coordinates in (1.7) is then realized as the exchange $Y^{\alpha\beta} \leftrightarrow \tilde Y_{\alpha\beta}$ (3.3).

We start from a block-diagonal GL(4) Ansatz (3.4) for the SL(5) Scherk-Schwarz twist matrix, built from $V_\alpha{}^{\bar\alpha} \in$ SL(4) and a scale factor ω. It follows from (2.10) that this Ansatz can only produce gaugings in the 10's and 6's of the decomposition (3.1). The corresponding embedding tensors are given in terms of the twist by (3.5). The explicit form of these equations shows that combining the dualisation of the coordinates (3.3) with the outer automorphism acting on the twist induces the duality (1.5) on the embedding tensor. Concretely, this takes

$$M_{\bar\alpha\bar\beta} \;\longleftrightarrow\; \tilde M^{\bar\alpha\bar\beta}\,,\qquad \tau_{\bar\alpha\bar\beta} \;\longleftrightarrow\; \tilde\tau^{\bar\alpha\bar\beta}\,,\qquad \xi_{\bar\alpha\bar\beta} \;\longleftrightarrow\; \tilde\xi^{\bar\alpha\bar\beta}\,, \qquad (3.6)$$

where $\tilde\tau^{\bar\alpha\bar\beta} = \tfrac12\epsilon^{\bar\alpha\bar\beta\bar\gamma\bar\delta}\tau_{\bar\gamma\bar\delta}$ and $\tilde\xi^{\bar\alpha\bar\beta} = \tfrac12\epsilon^{\bar\alpha\bar\beta\bar\gamma\bar\delta}\xi_{\bar\gamma\bar\delta}$. Additionally, the dualisation of the coordinates (3.3) exchanges the IIA and IIB sections, so that this duality relates IIA and IIB truncations. The NS-NS sector remains invariant under the duality, since in the half-maximal theory both the 10 and the 10' lie in the same O(3,3) orbit [19]. Within the maximal theory, the duality (1.5) relates consistent truncations of maximal theories which in general have different gauge groups, vacua, and fluctuations.
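To illustrate the ε-dualisation entering (3.6) numerically, here is a small Python toy (not from the paper; index placement is treated purely numerically and the tensors are hypothetical) showing that dualising a 6 twice returns the original antisymmetric tensor, as required for the duality to square to the identity on the 6's.

```python
import numpy as np
from itertools import permutations

# Numeric Levi-Civita symbol on four indices
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = np.linalg.det(np.eye(4)[list(p)])  # +1/-1 by permutation parity

def dualise_6(xi):
    """xi~^{ab} = (1/2) eps^{abcd} xi_{cd} for an antisymmetric 4x4 tensor,
    the epsilon-dualisation acting on the 6's in (3.6)."""
    return 0.5 * np.einsum('abcd,cd->ab', eps, xi)

# hypothetical antisymmetric tensor in the 6
xi = np.triu(np.arange(1.0, 17.0).reshape(4, 4), 1)
xi = xi - xi.T

# dualising twice returns the original tensor: the map squares to the identity
assert np.allclose(dualise_6(dualise_6(xi)), xi)

# the 10 <-> 10' part of (3.6) simply swaps the two symmetric matrices, so the
# pairing M_ab * Mt^ab entering the constraint (1.4) is manifestly invariant
M, Mt = np.diag([1.0, 1.0, 1.0, -1.0]), np.zeros((4, 4))
print(np.tensordot(M, Mt, axes=2))  # stays zero under the swap
```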
The gaugings above are the only ones that survive the Z₂ projection to the half-maximal theory [19]. In section 4, we will also discuss gaugings which do not survive the Z₂ projection (these are the 20', the 4's and the 1), and we will show that the duality above does not, in general, hold for these cases.

Example: IIA and IIB on S³ and H^{p,q}

Before discussing the duality further, let us apply it to work out the consistent truncation of IIB SUGRA on S³ and on warped H^{p,q} manifolds. According to the above discussion, these are dual to the consistent truncations of IIA on these manifolds and yield a gauging in the 10' ⊂ 40' with gauge group CSO(p,q,r) × U(1)^r, where p + q + r = 4.

Let us begin by reading off ω and $V_\alpha{}^{\bar\alpha}$ from the IIA twist matrices for CSO(p,q,r) gaugings given in [11]. Here and throughout this paper we will order the rows and columns of $V_\alpha{}^{\bar\alpha}$ as (i, 4, x), with i, j = 1, …, 4 − r and x, y = 5 − r, …, 3. The twist takes the form given in [11], and we make no distinction between un/barred and upper/lower indices on the IIA coordinates $y^m$. From (3.5), we further see that setting ρ = ω induces a vanishing trombone parameter $\tau_{\bar a\bar b}$, as required for these gaugings. Together, the twist matrix then induces the CSO(p,q,r) gauging in the 10 ⊂ 15. The function K(u,v) appearing in the twist satisfies a differential equation in u = δ_{ij} y^i y^j, which can be solved analytically for all allowed values of p, q. The internal spaces corresponding to these truncations are warped hyperboloids H^{p,q} together with r flat directions [11].

We now apply the duality (3.3), (3.6) to obtain the IIB truncations on H^{p,q} which give rise to the CSO(p,q,r) gaugings in the 10' ⊂ 40'. The IIB twist matrices are thus given by (3.12), with ρ = ω, where now the ỹ_i are IIB coordinates (2.4), ũ = δ^{ij} ỹ_i ỹ_j and ṽ = η^{ij} ỹ_i ỹ_j. Using (2.7) and the parameterisation (2.5), we can read off the internal space of the compactification. At the origin of the scalar coset, $M_{\bar a\bar b}(x) = \delta_{\bar a\bar b}$, we find the background (3.13). Here we let m, n = 1, 2, 3 and we denote by B̊_{mn} the Kalb-Ramond form and by φ̊ the dilaton. We recall that, following the conventions of [25] and matching the indices of the IIB coordinates (2.4), the four-dimensional IIB indices are "upside-down" compared to the usual placement. The internal space here is a warped product H^{p,q} × R^r, where H^{p,q} is the surface satisfying η^{ij} ỹ_i ỹ_j + z² = 1 in R^{4−r}, with z an additional coordinate. This coincides with the IIA background for this truncation, see [11]. The background Kalb-Ramond field strength H̊ follows upon using (3.10).

Using (2.7), we can furthermore determine the full truncation Ansatz for the internal fields as fluctuations about the background (3.13). To simplify the notation, for this discussion we will not distinguish between barred and un-barred indices, and we will simply refer to the IIB coordinates as y_i, i.e. drop the tilde. Let us start by considering the case p + q = 4. The truncation Ansatz can be elegantly formulated in terms of the harmonics (3.15), the auxiliary metric g̃ (3.16), (3.17), and the auxiliary two-form (3.18); see section 3 of [13] for details of the construction. We note that only in the sphere case, when η_{ij} = δ_{ij}, do these auxiliary structures coincide with the background (3.13). Furthermore, it will be useful to decompose the scalar fields $M_{ab}(x)$ as in (3.19), with SL(4) matrix $m_{\alpha\beta}$.
The truncation formulae for the internal components of all IIB fields are then read off from (2.5), (2.7) and yield (3.20), in terms of the objects (3.15)-(3.19) and with a warp function ∆. It is straightforward to verify that at the scalar origin $M_{ab}(x) = \delta_{ab}$ these formulae reduce to the background (3.13).

Let us now compare this result to the IIA truncation formulae on the dual background. Define, now in terms of the IIA coordinates, the harmonics, the auxiliary metric (as before but now with the reverse position of indices) with volume form ω̃ (3.24), and the auxiliary two-form. With the same scalar matrix (3.19), the truncation formulae for the internal components of all IIA fields are again read off from (2.5), (2.7) and yield (3.26), in terms of the above objects and with the analogous warp function. We can now see that the full reduction formulae of the IIA and IIB truncations coincide in the NS-NS sector and are related by the same SL(4) outer automorphism we have used for the twists, extended to the scalar fields (3.19).

Finally, let us also give the reduction formulae when 2 ≤ p + q ≤ 4. To keep the notation more compact, it will now be useful to work with the dualised form potentials of the IIB and IIA fields. Let us once again start with the IIB reduction. Recall that our convention is that m = (i, x), where η_{xy} = 0 and η_{ij} is non-degenerate. With g̃_{ij}, g̃^{ij} as before, we obtain the IIB truncation formulae (3.32). The corresponding IIA formulae read (3.34).

A no-go theorem on IIA/IIB uplifts

We have just shown that the IIA truncations with gaugings in the 10 ⊂ 15 induce dual IIB truncations with gaugings in the 10' ⊂ 40', according to the embedding (3.1). A natural question to ask is whether it is possible to obtain the 10 ⊂ 15 gauging by a IIB truncation, or equivalently the 10' ⊂ 40' by a IIA truncation. We will now show that this cannot be done, by analysing the symmetries of the embedding tensor. To use these symmetry properties, we work in the 10-dimensional representation, in which $\tau_{\bar a\bar b,\bar c\bar d}$ represents the embedding tensor as given in (2.14), and the consistency equations (2.10) can conveniently be computed in terms of the twist in the 10. We first assume that the twist only depends on the IIA coordinates $Y^{m4}$; the resulting consistency equations imply that a necessary requirement for gaugings to be lifted to IIA is a definite symmetry of $\tau_{\bar a\bar b,\bar c\bar d}$ under the exchange of its index pairs. For completeness, let us also consider the analogous consistency equations for the IIB theory, written in terms of the corresponding IIB quantities. Here $L_{\bar a\bar b}$ denotes the standard IIB Lie derivative, i.e. with upside-down indices (see for example [25]), with the diffeomorphism parameter $K_{i,\bar a\bar b}$. We see that the right-hand side of the first equation is antisymmetric under the exchange of the pair of indices $\bar a\bar b \leftrightarrow \bar c\bar d$. Thus, we find that for a gauging to be of IIB origin, we must have the corresponding antisymmetry condition on the embedding tensor.

Let us now return to the question of whether the 10' ⊂ 40' can come from IIA. To differentiate between the IIA and IIB theories we require dependence on all three internal coordinates, and so we consider the case where the gaugings in the 10' are non-degenerate. Using (3.40) it is easy to show that when $\tilde M^{\bar\alpha\bar\beta} = \eta^{\bar\alpha\bar\beta}$ is non-degenerate, (3.41) can only be satisfied by a vanishing twist matrix. Thus these gaugings cannot be obtained from a IIA truncation. In particular, this applies to the SO(4) theory. By the duality established above, a non-degenerate $M_{\bar\alpha\bar\beta} = \eta_{\bar\alpha\bar\beta}$ can in turn not be obtained from a IIB truncation.
This is interesting in the light of the half-maximal theory, where there is a family of SO(4) gaugings involving non-degenerate gaugings in both $M_{\bar\alpha\bar\beta}$ and $\tilde M^{\bar\alpha\bar\beta}$, i.e. in both the 10 and the 10' [19]. The result here suggests that such gaugings can only be obtained by violating the section condition, as the corresponding twist matrix would be required to depend both on the IIA coordinates and on their dual IIB ones. Indeed, this has been shown for the half-maximal theory in [19,47].

Dualising the 4's

Recall from (3.1) that the embedding tensor also contains two 4's and one 4'. Can the duality discussed above be extended to these gaugings? Let us begin by relaxing the Ansatz (3.4) in order to allow non-zero 4's. Consider first the extended Ansatz (4.1). The consistency equations are then (4.2), where $\bar A_{\bar\alpha} = V_{\bar\alpha}{}^{\alpha} A_\alpha$. We see that the equations for the 10's and 6's are unchanged, but additionally the 4' ⊂ 40' can be gauged. If we instead take the Ansatz (4.3), we again find the same 10's and 6's as in (3.5), but additionally the 4 ⊂ 15, the 4 ⊂ 10 and the 20' ⊂ 40' can be gauged (4.4). The SL(4) (co-)vectors $A_\alpha$ and $B^\alpha$ should be exchanged by the outer automorphism of SL(4). This maps a solution of the equations (4.4) to a solution of (4.2), but not vice versa. Thus, it is not in general possible to map a twist that gauges the 4' ⊂ 40' into a twist gauging the 4 ⊂ 15, 4 ⊂ 10 and 20' ⊂ 40'. Furthermore, if we start with a gauging of the 4' ⊂ 40' that satisfies the quadratic constraints (2.11) and perform the duality to obtain a gauging in the 4 ⊂ 15, 4 ⊂ 10 and 20' ⊂ 40', then this dual gauging does not in general satisfy the quadratic constraints, and hence the dual gaugings do not define a consistent gauged SUGRA. We will see an example of this in section 5.2.

Further examples

We will now use our twist Ansätze (3.4), (4.1) and (4.3) and the duality discussed above to obtain new uplifts of various maximal gauged SUGRAs. This is not an exhaustive list of solutions to the quadratic constraints, but rather a selection of examples for which uplifts to type II SUGRA can be constructed nicely with the twist Ansätze we have considered so far. The gaugings we consider are summarised in table 1. Each value of α in the range −π/2 ≤ α ≤ π, each λ taking the values λ = 1, 1/2, 0, each η, η' = ±1 and each a ∈ R labels a different inequivalent orbit. Note that for orbits 1 and 7-9 we have indicated that the gaugings in the 4 vanish. This is because any non-zero gaugings in the 4 allowed by the quadratic constraints (2.11) can be removed by an SL(5) transformation, and thus lead to equivalent 7-dimensional theories. Orbits 6-9 involve the trombone gauging (when λ ≠ 1), and thus the 7-dimensional theories they represent do not admit an action principle. We will see in section 5.5 that in some cases their uplifts are non-geometric, with the trombone scaling symmetry used to patch together the solution.

Orbits 1 and 2

In section 3.3 we showed that non-degenerate gaugings in the 10 descend from IIA and those in the 10' descend from IIB. Let us now uplift gaugings which mix the 10 and the 10'. The quadratic constraint is again $\tilde M^{\bar\alpha\bar\beta} M_{\bar\alpha\bar\beta} = 0$, and the solutions are given by orbits 4-11 of [19].

Orbit 1

This orbit corresponds to an embedding of orbits 6 and 9 (with α = π/4) of [19] into the maximal theory. The twist matrices take the block-diagonal form (3.4), with ρ = ω, where v = η y₁² and u = y₁². From (2.5) we can read off the internal space in string frame. Note that when η = 1 the background is the Kaluza-Klein circle encountered in (3.13).
However, the internal space will be different at other points in the scalar moduli space. It is of course also possible to generate the dual gaugings by applying the duality discussed in section 3.1. As before, the internal space remains the same under the duality.

Orbits 3 and 4

When $M_{\bar\alpha\bar\beta}$ and $Z^{\bar\alpha 5,5}$ are the only non-zero gaugings, the quadratic constraint can be solved in terms of a constant c. We could use an SL(5) transformation to set c = 1, but we will not do so here, in order to keep track of c in the internal space; the reader should keep in mind that all values c ≠ 0 correspond to equivalent 7-dimensional theories. From the no-go theorem (3.41) one finds that this gauging cannot be obtained by a IIA truncation. It can, however, be lifted to 10-dimensional IIB SUGRA using the Ansatz (4.1), with the same $V_\alpha{}^{\bar\alpha}$ as in (3.12) with r = 1, where $y_3 = Y^{12}$ is the third IIB coordinate; recall that the other two coordinates are given by (2.4). At the origin of the scalar coset we can read off the background for this truncation. As before, we use the convention of [25] where IIB indices are placed "upside-down", and C̊_{ij} labels the Ramond-Ramond two-form. The metric here is the T-dual of the H^{p,q} solutions in (3.13). Furthermore, only the two-form depends on c, and the NS-NS sector remains invariant as c is turned on.

For the second of these orbits, we again use the Ansatz (4.1) with $V_\alpha{}^{\bar\alpha}$ as in (3.12) with r = 2, and solve the gauging of the $Z^{\bar\alpha 5,5}$ by a suitable choice of $A_1$, with all other $A_\alpha = 0$ for α ≠ 1. The twist now only depends on $y_1 = Y^{14}$, $y_2 = Y^{24}$ and $y_3 = Y^{34}$, and so gives an uplift to IIA supergravity. From (2.5) the internal space is found to be the same circle / hyperbola reduction as in (3.13), but with an additional Ramond-Ramond one-form C̊_1 turned on. Similar to orbit 3, only the Ramond-Ramond one-form depends on c and d.

To conclude the discussion of these orbits, let us consider the dual gaugings. The duality would give gaugings of the 4 ⊂ 15 and 4 ⊂ 10, as well as possibly the 20'. However, these gaugings violate the quadratic constraints (2.11), and hence they do not define a consistent gauged SUGRA.

Orbit 6

To keep our formulae simple we will uplift a representative family of gaugings, with inequivalent gaugings for λ = 1, 1/2, 0. We can obtain these gaugings easily using the block-diagonal Ansatz (3.4) for the twist matrix and by choosing the scalars ρ and ω appropriately. Reading off the internal space in string frame, we see that the string-frame metric is independent of λ and the dilaton tunes between the different gaugings. In particular, when λ = 1 we have a standard 7-dimensional gauged SUGRA, whereas for the cases λ = 0 and λ = 1/2 the 7-dimensional theory does not have an action principle, even though it can still be uplifted to 10-dimensional SUGRA. For each λ the outer automorphism discussed in section 3.1 relates equivalent gaugings.

Orbits 7-9

The gaugings we consider here involve some of the gaugings encountered previously in this paper together with both 6's. These can be uplifted using almost the same twist matrices as without the 6's. In particular, we keep $V_\alpha{}^{\bar\alpha}$ unchanged but modify the scale factors ρ = ω. Let us write ω = h ω₀, where ω₀ is the value of ω at which the 6's vanish. The function h then has to satisfy

$$2\tau_{\bar\alpha\bar\beta} = -2\xi_{\bar\alpha\bar\beta} = 5\,V_{\bar\alpha\bar\beta}{}^{\alpha\beta}\,\partial_{\alpha\beta}\ln h\,.$$

For a = 0 these are S¹ and H¹ reductions. Solving for h and reading off the internal space in string frame, we see that when η = 1 the internal space is non-geometric, because the dilaton is not globally well-defined. Instead, it is patched by the trombone scaling symmetry of the equations of motion.
This is reminiscent of the non-geometric construction in [14], albeit in seven dimensions.

Conclusions

In this paper we studied consistent truncations of type IIA and IIB SUGRA to 7-dimensional maximal gauged SUGRA using exceptional field theory. By using a GL(4) Ansatz for the twist matrices, we showed that IIA / IIB consistent truncations are related by the outer automorphism of SL(4), which acts on the irreducible representations of the embedding tensor as

$$10 \;\longleftrightarrow\; 10'\,,\qquad 6_{40} \;\longleftrightarrow\; 6_{40}\,,\qquad 6_{10} \;\longleftrightarrow\; 6_{10}\,. \qquad (6.1)$$

Here $6_{40}$ and $6_{10}$ denote the 6's coming from the 40' of SL(5) and from the trombone gauging, respectively. We also showed that this duality between IIA and IIB consistent truncations always exists when the embedding tensor has vanishing components in the 4' of SL(4); otherwise, the dual gaugings will in general not satisfy the quadratic constraints (2.11).

We used this duality to prove the consistency of the truncation of IIB on S³ and H^{p,q}, by constructing twist matrices that give rise to the relevant CSO(p,q,r) gaugings with embedding tensor in the 40'. The twist matrices are dual to those describing the IIA uplift of gaugings in the 15 [11]. Using the dictionary between EFT and IIA / IIB fields, we used the twist matrices to derive the full truncation Ansätze for the internal sectors of the IIA and IIB reductions. They were shown to coincide in the NS-NS sector. This is a general feature of the duality: it relates truncations with the same NS-NS sector. Finally, from the form of the consistency equations we derived no-go theorems showing that non-degenerate gaugings of IIA origin cannot also be uplifted to IIB, and vice versa.

In the second part of this paper we further generalised the twist matrices of [11] to uplift other gaugings of 7-dimensional maximal gauged SUGRA to type II SUGRA. These examples include gaugings of the 15 and 40' simultaneously, and of the trombone, where the gauged SUGRA does not admit a Lagrangian. In the latter case, the internal space of the truncation is only well-defined up to the R⁺ scaling symmetry of the equations of motion. Among the direct applications of these uplift formulas is the higher-dimensional embedding of vacua found in the lower-dimensional theories, such as [48].

The twist matrices used throughout this paper are defined in local patches. For the truncation to be consistent, these twist matrices must yield a generalised parallelisation [9]. To show this we would have to patch our twist matrices together to obtain globally well-defined vector fields. A patching prescription for exceptional field theory is still lacking, although one is known for double field theory [49-52]. Whatever this covariant patching prescription turns out to be, it should consist of the global SL(5) × R⁺ symmetries of the 7-dimensional SUGRA. We can thus argue that our twist matrices are well-defined by checking that the internal space they define is well-defined up to these symmetries.

It would also be worthwhile to understand when the dual truncations are genuinely distinct, i.e. if IIA and IIB reductions give rise to inequivalent lower-dimensional theories. A natural starting point for further investigation are 3-dimensional maximal gauged SUGRAs. These are known to have two inequivalent SO(8) gaugings, expected to arise from S⁷ reductions of IIA / IIB [53]. Indeed, the full EFT has been constructed for this case [54], so that the full reduction Ansätze of the S⁷ truncations could then also be derived.
It would also be interesting to cast into this framework consistent truncations of the massive IIA theory, such as [55], which would require a (modest) dependence of the twist matrices on one of the non-physical coordinates, cf. [56]. Finally, it would be interesting to try to find a systematic procedure for the construction of twist matrices for all gaugings allowed by the quadratic constraints (2.11). An interesting proposal for the case of half-maximal gauged SUGRA appeared in [20]; however, the resulting twist matrices are not O(d,d)-valued, so that it is not immediately clear how to find the associated reduction Ansätze.
7,820.4
2015-10-12T00:00:00.000
[ "Physics" ]
Sampling-Noise Modeling & Removal in Shape From Focus Systems Through Kalman Filter

Shape from Focus (SFF) is one of the passive techniques to recover the shape of an object under consideration. It utilizes the focus cue present in a stack of images obtained by a single camera. In SFF, when the images are acquired, the inter-frame distance, also known as the sampling step size, is assumed to be constant. However, in practice, due to mechanical constraints, the sampling step size cannot remain constant. The inconsistency in the sampling step size causes the problem of jitter and produces Jitter noise in the focus curves. This Jitter noise is not visible in the images, because each pixel in an image (of the stack) is subjected to the same error in focus; thus, traditional image denoising techniques will not work. This paper formulates a model of the Jitter noise, followed by the design of system and measurement models for a Kalman filter. The jittering problem for SFF systems is then solved using the proposed filtering technique. Experiments are performed on simulated and real objects. Ten noise levels are considered for the simulated objects, and four for the real objects. RMSE and correlation are used to evaluate the reconstructed shapes. The results show the effectiveness of the proposed scheme.

I. INTRODUCTION

Three-dimensional shape recovery using two-dimensional images is a well-established research problem in computer vision applications, robot and machine vision, bio-informatics, medical imaging, consumer cameras, microscopy, and so forth [1]-[6]. In recent years, many techniques have been proposed to recover depth maps from acquired images, as natural scenes under different conditions produce different cues [7]. These cues are distinguished from each other, and can be measured, depending on various factors. One of these cues is focus, which is measured by determining the blur degree of the image. The method by which object shapes within scenes are estimated, by accommodating focus cues by means of fixed-axis multiple images, is referred to as shape from focus (SFF). This area of research has progressed extensively in recent years. All SFF methods broadly consist of three main steps: image acquisition, focus measure (FM) application, and shape improvement techniques, which are discussed briefly below.

Shape from Focus systems are modeled by the simple lens equation, given as:

1/f = 1/u + 1/v,    (1)

where f is the focal length of the imaging device, u is the distance of the object point from the imaging device, and v is the position where the object point is best focused by the lens. Fig. 1 illustrates this. In SFF, the image stack is acquired by manipulating one of the factors of (1), while keeping the other two factors constant. Conventionally, the images are acquired by changing u of the system. An example of such a system is an optical microscope. In other systems, the focus settings (focal length f) can also be changed to capture images; examples of these systems can be found in [8]. Either f or u is changed in small steps, and an image (of dimensions l × m) is obtained and stored in the image stack, giving the total number of images as n. Changing v for image acquisition is quite challenging, or mostly not feasible [9]. However, whatever factor is manipulated, the magnification of the imaging system should remain constant, while the depth of field should be as shallow as possible [10].
When all the images are acquired, the result is an image stack I of dimensions l × m × n, and each pixel in the stack is represented by P_{i,j}(k), where 1 ≤ i ≤ l, 1 ≤ j ≤ m and 1 ≤ k ≤ n are the indices in the l, m, and n directions. P_{i,j}(k) also represents the pixel curve along the optical axis. This is shown in Fig. 2. The number of images n is determined by the sampling step size Δ, as given in (2); the step size expression for a change in u is provided in [9].

The main idea of SFF is to estimate the shape of the object under consideration using the focus cues present in the image stack. The sharpness of focus in an image is measured by a sharpness criterion, called the focus measure (FM) operator. After the image stack is obtained, the FM is applied to each pixel, to measure the amount of focus each pixel possesses:

Φ_{i,j}(k) = F(P_{i,j}(k)),    (3)

where F is the FM transformation of pixel P_{i,j}(k) to obtain the focus value Φ_{i,j}(k) in the k-th image, and Φ_{i,j} represents the focus behavior or (in other words) the focus curve of the pixel. There are numerous FMs proposed in the literature, summarized by [8], each designed to suppress the out-of-focus regions and enhance the in-focus points in each image. For example, the Sum of Modified Laplacian (SML) utilizes squares of the second derivative of the images, Tenengrad or Tenenbaum (TENG) utilizes the first derivative of the image, and Gray Level Variance (GLVA) uses a statistical method, computing the variance as the focus measurement, whereas Image Curvature (CURV) calculates the curvature of the image surface. There are other FMs as well, such as Image Contrast (CONT) and the 3D Laplacian.

The key role of the FM in SFF systems is to provide a sharp focus curve (parallel to the optical axis) for every object point in the image stack. Conventionally, the initial depth map D_{i,j} is obtained by maximizing the focus curve along the optical axis, i.e., by obtaining the value of k where Φ_{i,j}(k) is maximum:

D_{i,j} = arg max_k Φ_{i,j}(k).    (4)

When the images are acquired, the shape of the object is discretized into image frames, causing loss of information between two consecutive frames. To address this issue, many techniques have been proposed in the literature. Traditional techniques used SML as the focus measure and applied a Gaussian interpolation technique to compute intra-frame values for better focus [11], [12]. The concepts of focused-image-surface and curved-focused-image-surface [13], [14] utilize piecewise curved surface approximation. Alternatively, Neural Networks and Deep Neural Networks have also been employed [2], [15], [16]. Kim et al. in [17] proposed a method to improve the efficiency of neural networks by introducing a weight passing method. Muhammad & Choi in [10] proposed a method based on Bezier surface approximation. Ali et al. in [18] increased the accuracy of 3D shapes by applying 3D weighted least squares to enhance the image focus volume. Ali et al. in [19] also used the wavelet transform method to improve shape reconstruction. Guided image filtering for depth enhancement in SFF is proposed by [20]. Also, Ali et al. in [21] recovered several 3D shapes by applying different FMs and combining their results into one final shape. Fan et al. in [22] used a combination of 3D steerable filters to treat texture-less regions. Since the reconstructed 3D shape quality in SFF depends on the applied FM, another FM based on the analysis of the 3D structure tensor of the image sequence is proposed by [23].
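To make steps (3) and (4) concrete, the following is a minimal Python sketch of an SML-style focus volume and the initial depth map; the window size, the wrap-around border handling, and the omission of the thresholding used in some SML variants are simplifying assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sml_focus_volume(stack, window=9):
    """SML-style focus volume for an l x m x n image stack: a modified-
    Laplacian response per frame, summed over a local window (implemented
    via a mean filter, equal to the windowed sum up to a constant scale)."""
    fv = np.empty(stack.shape, dtype=np.float64)
    for k in range(stack.shape[2]):
        img = stack[:, :, k].astype(np.float64)
        # second differences along each image axis
        d2x = np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0))
        d2y = np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
        fv[:, :, k] = uniform_filter(d2x + d2y, size=window)
    return fv

def initial_depth_map(fv):
    """Eq. (4): the frame index k maximising each pixel's focus curve."""
    return np.argmax(fv, axis=2)
```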
Ma et al. in [24] proposed a method for depth reconstruction that utilized a non-local matting Laplacian along with a Markov Random Field. Yan et al. [25] utilized a pulse-coupled neural network to aggregate shape using focus. Jang et al. in [26] proposed shape optimization through non-parametric regression.

The paper is structured as follows. Focus measurement and focus curve models are discussed in the next section, followed by the motivation of the work. Section IV addresses Jitter noise modeling. Section V presents the proposed methodology, while Section VI presents the results and discussion. Section VII then concludes the work.

II. FOCUS MEASUREMENTS & FOCUS CURVE MODELS

The focus curves, or the focus behavior of every individual pixel, depend on the FM used, the nature of the cue the FM utilizes, the camera (imaging device) parameters, and, most importantly, the image texture around the object point [8], [9]. If the images are acquired properly, then these focus curves are bell-shaped [12]. The Gaussian model is given by:

Φ(k) = A_G exp( −(k − B_G)² / (2 C_G²) ),    (5)

the Lorentzian-Cauchy model is given by:

Φ(k) = A_L / ( 1 + B_L (k − C_L)² ),    (6)

and the Quadratic model is given by:

Φ(k) = A_Q k² + B_Q k + C_Q,    (7)

where the A's, B's and C's are the parameters of each model. The unification of these models into the Quadratic model has been provided by [27]: if the logarithmic transformation is applied to (5) and the result is simplified, (5) transforms into the form (7) [27]; similarly, the reciprocal transformation, when applied to (6), results in (7) after simplification.

This paper utilizes the Quadratic model (given in (7)) to model the Jitter noise in SFF systems in the next section. The Quadratic model provides a computational advantage over the Gaussian and Lorentzian-Cauchy models, due to its simplicity and robustness [27].

III. MOTIVATION

In SFF, when the shape is discretized into image frames by sampling the object in the scene, the step size for sampling is presumed constant [9]. Although shape from focus has been thoroughly investigated in recent years, there still exist several insufficiently solved problems that impact the performance of the system. One of these problems is an unstable or non-constant sampling step size. This can be due to the mechanical structure of the imaging device and the lens-focusing methods. The resultant variation in the amplitude of the signal, due to instability in the sampling step size, is referred to as jittering or Jitter noise. Jang et al. in [28] proposed the removal of Jitter noise using a Kalman filter. Since then, many variants of their method have been proposed [29]-[33]. However, all of these methods used scalar models for the Kalman filter (i.e., the system matrix was taken as 1), and ignored the dynamic nature of focus cues. For each step, multiple images were acquired to eliminate the jitter, i.e., if there were n images (of dimensions l × m) in the stack, they required 100 samples for each step, and therefore n × 100 samples were required for each focus curve. This increases the complexity of the system, and a huge computational cost has to be paid; it also impacts the practical use of their methods. Also, Jang et al. in [28]-[33] considered only symmetric bell-shaped distributions for the vibrational noise in the translational stage, and their designed measurement model measures only a constant (each step position k); in such a case, taking the mean of the measurement values at every step position k can provide similar results.
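As a concrete illustration of the model unification discussed in Section II, here is a minimal sketch of the classical Gaussian-interpolation step: fitting the quadratic form (7) to log-transformed focus values around the peak to refine the depth estimate between frames. The three-point window and the function name are illustrative assumptions.

```python
import numpy as np

def quadratic_peak(focus_curve, k0):
    """Refine the peak of a focus curve using the quadratic model (7).
    Fits a parabola to log focus values at k0-1, k0, k0+1 (the log
    transform linearises the Gaussian model (5)) and returns the vertex.
    Assumes k0 is an interior maximum of the curve."""
    k = np.array([k0 - 1, k0, k0 + 1], dtype=float)
    y = np.asarray(focus_curve[k0 - 1:k0 + 2], dtype=float)
    y = np.log(np.maximum(y, 1e-12))      # guard against zero focus values
    a2, a1, _ = np.polyfit(k, y, 2)       # y ~ a2*k^2 + a1*k + a0
    return -a1 / (2.0 * a2)               # sub-frame peak position
```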
The authors also considered the Jitter noise to have Normal and Levy distributions [28], [31]; however, in practice, the resultant Jitter noise due to the vibrational noise does not follow (Normal or Levy) symmetric bell-shaped distributions. In this paper, the nature of Jitter noise is studied, and necessary conditions for approximating this noise are proposed and discussed. Jitter noise in SFF systems is position dependent and varies according to the focus position. The manuscript models the Jitter noise and concludes that it follows a gamma (Γ) distribution (a non-symmetric distribution) with a constant mean and a position-dependent variance (discussed in Section IV). A Kalman filter is then designed for removing this noise from SFF systems. The system matrix is formulated using the Taylor series, followed by the explanation and design of the measurement model. The shape recovery expression is then provided. In the proposed scheme, a single measurement is taken at each step. Thus, for n images (of dimensions l × m) in the image stack, the proposed method requires only n samples for each focus curve, and utilizes 100 times fewer images as compared to previous methods, while providing better shape recovery results in terms of correlation and RMSE (provided in Section VI).

IV. JITTER MODELING IN SHAPE FROM FOCUS SYSTEMS

In SFF systems, jitter occurs when there is uncertainty or unevenness in the step size of u or f. In this section, we discuss the step size in both situations of image acquisition, i.e., change in the object distance from the lens (Δu) and change in the focal length of the imaging device (Δf). After this, we discuss the types of jitter, followed by the proposed model for Jitter noise. The next section utilizes this proposed Jitter noise model and a Kalman filter to remove the effects of jittering on focus curves.

A. STEP SIZE IN SFF IMAGE ACQUISITION

The step size expression for Δu is provided by [9]. In their system, the object is moved towards (or away from) the imaging device in small constant steps of Δu, keeping the focal length and magnification constant, and also keeping the depth of field as shallow as possible. Their simplified expression (10) gives the maximum limit for the step size Δu in terms of the depth of field DoF of the system and the constant ρ = 2.9957, as provided by [9]. The ideal example of such a system is an optical microscope.

The images can also be acquired by changing the focal length of the system in small, constant increments of Δf [34]. In this type of image acquisition for SFF, the object is held static in front of the imaging device, and the device's focal length is changed. Mostly, auto-focusing algorithms utilize this type of technique to search for the best focal lens position for a single point. This can also be used for depth and shape estimation of the object under consideration [35]. In both of the above cases, an image is stored at every step to obtain a stack of images, as discussed in Section I.

B. MODELING JITTER IN SFF

To model the jitter, consider again the Quadratic function:

g(k) = a₂k² + a₁k + a₀,    (11)

where a₂, a₁ and a₀ are the equation parameters, g(k) is the quadratic function, and 1 ≤ k ≤ n represents the sample points of this function. The step size is Δk = k − (k − 1), and is considered as 1.
To model the Jitter noise, consider the uncertainty in the step size as ε ∼ N(0, σ²); then (11) can be written as:

g(k + ε) = a₂(k + ε)² + a₁(k + ε) + a₀.    (12)

Expanding the squared terms in (12) and simplifying using the Taylor series, the following is derived:

g(k + ε) = g(k) + (2a₂k + a₁)ε + a₂ε².    (13)

Using (11) and (13), the following is obtained:

g(k + ε) − g(k) = g′(k)ε + (g″(k)/2)ε².    (14)

Equation (14) shows that the noise terms on the RHS multiply the first and second derivatives of the function, leading to the conclusion that the Jitter noise in SFF systems depends on the slope and concavity of the focus curves. If ε is Normal (N(0, σ²)), then ε² follows a chi-square distribution. Equation (14) can be rewritten as:

g(k + ε) − g(k) = η_N(k) + η_χ(k),    (15)

where η_N(k) and η_χ(k) are given by:

η_N(k) = (2a₂k + a₁)ε,    (16)

and

η_χ(k) = a₂ε²,    (17)

where η_N(k) is normally distributed with mean μ_N = 0 and variance σ²_N = (2a₂k + a₁)²σ². Meanwhile, η_χ(k) follows a gamma (Γ) distribution, with mean μ_χ = a₂σ² and variance σ²_χ = 2a₂²σ⁴. Therefore, the total resultant noise η has mean μ_η = a₂σ² and variance σ²_η = (2a₂k + a₁)²σ² + 2a₂²σ⁴. The derivation of the mean (μ_η) and variance (σ²_η) is based on standard manipulations with probability theory formulas and properties of expectations.

The variance of η_N(k), σ²_N, is different at every k-th step, and becomes zero at:

k = −a₁ / (2a₂).    (18)

The direction of η_χ(k) is always towards the concavity of the function. The range R_χ of η_χ(k) depends on g″(k). If g″(k) > 0, then 0 < η_χ < ∞; similarly, if g″(k) < 0, then −∞ < η_χ < 0; and if g″(k) = 0, the effect of this noise is zero. However, the physical limitations and restrictions of the imaging device keep the jitter below the step size. The noise factor η_χ(k) remains present throughout the function g(k + ε); however, the sign of η_χ depends on the sign of g″(k), and is always towards the concavity of the function.

For values of k other than (or not near) the value given in (18), η_N(k) is more significant than η_χ(k). But as the function approaches the value given in (18), the effect of η_N(k) diminishes, while the contribution of η_χ(k) becomes significant. However, if the variance of ε is chosen according to the condition given in (19), the effect of η_χ can be ignored, making η_N the only contribution to the noise, and the resultant noise is Normal. If the condition given in (19) is violated, then η_χ can play a significant role, and thus cannot be ignored. The combination of both η_N(k) and η_χ(k) results in a non-Normal noise.

V. THE PROPOSED METHODOLOGY

In the previous section, the Jitter noise was modeled and explained in detail. In this section, the proposed method is presented. The proposed scheme can be applied in two ways: before FM application (as pre-FM application, i.e., on the pixel curves P_{i,j}(k)) or after FM application (as post-FM application, i.e., on the focus curves Φ_{i,j}(k)). To fully remove the Jitter noise from the pixel/focus curves, the Kalman filter is designed as follows.

A. KALMAN FILTER DESIGN

To model and design the Kalman filter, the proposed method utilizes the cubic equation for the system and measurement equations of the filter, in order to remove the Jitter noise from the pixel/focus curves. Although the quadratic equation could also be utilized, the cubic equation gives the system model more robustness and flexibility, since it is of higher degree. The system and measurement models are derived in the following subsections.
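Before the filter derivation, the Jitter model of Section IV can be checked numerically. The following is a minimal Monte Carlo sketch (with hypothetical curve parameters) verifying the predicted mean a₂σ² and variance (2a₂k + a₁)²σ² + 2a₂²σ⁴ of the total noise in (14)-(17).

```python
import numpy as np

rng = np.random.default_rng(0)
a2, a1, a0 = -0.5, 20.0, 3.0   # hypothetical quadratic focus-curve parameters
sigma, k = 0.2, 15.0           # step-size uncertainty std and sample position

g = lambda x: a2 * x**2 + a1 * x + a0
eps = rng.normal(0.0, sigma, size=1_000_000)
eta = g(k + eps) - g(k)        # total Jitter noise, eq. (14)

mu_pred = a2 * sigma**2                                        # predicted mean
var_pred = (2*a2*k + a1)**2 * sigma**2 + 2 * a2**2 * sigma**4  # predicted variance
print(eta.mean(), mu_pred)     # the empirical moments match
print(eta.var(), var_pred)     # the closed-form predictions
```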
1) SYSTEM MODEL

In order to derive the system model for the Kalman filter application, the cubic equation is considered:

h_k = a3 k³ + a2 k² + a1 k + a0, (20)

where a0, a1, a2, and a3 are the coefficients of the equation and h_k is the cubic function. The first, second, and third derivatives of (20) are

h'_k = 3a3 k² + 2a2 k + a1,  h''_k = 6a3 k + 2a2,  h'''_k = 6a3. (21)

Using the Taylor series again, the equation for h_{k+1} is written as (22); expanding the powers in (22) and rearranging gives (23); utilizing (20) and the set of equations in (21), and noting that Δk = 1, (23) can be rewritten as

h_{k+1} = h_k + h'_k + (1/2) h''_k + (1/6) h'''_k. (24)

Using (24) and repeating the process for h'(k+1), h''(k+1), and h'''(k+1), a similar set of equations is obtained for h_k and its derivatives:

h'_{k+1} = h'_k + h''_k + (1/2) h'''_k,  h''_{k+1} = h''_k + h'''_k,  h'''_{k+1} = h'''_k. (25)

Thus, utilizing the set of equations in (25), the system equation for the Kalman filter can be written as

X_{k+1} = A X_k + ω_k, (26)

where A is the system state matrix and X_k is the state vector. The predicted state noise at k is given by ω_k ~ N(0, Q), and h represents the focus curve (F_{i,j}(k)) or pixel curve (P_{i,j}(k)) values. The manuscript assumes that there is no system noise and that the only noise present is due to jitter in the measurements; therefore, Q (the process covariance matrix) is assumed to have a very small but nonzero value. The state vector X_k and system matrix A are given by

X_k = [h_k, h'_k, h''_k, h'''_k]^T,

A =
[ 1  1  1/2  1/6 ]
[ 0  1   1   1/2 ]
[ 0  0   1    1  ]
[ 0  0   0    1  ]. (27)

The covariance prediction equation is

P_k^- = A P_{k-1} A^T + Q, (28)

where P_k^- is the predicted (estimated) covariance matrix at k, P_{k-1} is the covariance matrix at k − 1, and Q is the process covariance matrix.

2) MEASUREMENT MODEL

The next step in the proposed methodology is to design the measurement model for the Kalman filter. For this purpose, (20) is rewritten using the Taylor series and the steps explained in Section IV as (29); rearranging and utilizing (21) gives (30), which contains terms in ε, ε², and ε³. Utilizing the condition explained in (19), the ε² and ε³ factors can be ignored, resulting in the simplified measurement model

Y_k = C X_k + η_k, (31)

where Y_k represents the measurement of the pixel curve or focus curve of pixel P_{i,j}(k) after FM application at the k-th image, C is the state measurement matrix (given as C = [1 0 0 0]), and η_k is the noise in the measurement due to jitter, as modeled in Section IV. As the filter can be applied in two ways:
• when applied before the FM (pre-FM application), i.e., on pixel curves: Y_k = P_{i,j}(k);
• when applied after the FM (post-FM application), i.e., on focus curves: Y_k = F_{i,j}(k).

3) UPDATED STATES AND KALMAN GAIN

The Kalman gain is computed at every step from (31) as

K_k = P_k^- C^T (C P_k^- C^T + R)^{-1}, (32)

where K_k is the Kalman gain at k and R is the measurement covariance matrix. The optimal state estimate is computed using

X̂_k = X̂_k^- + K_k (Y_k − C X̂_k^-), where X̂_k^- = A X̂_{k-1}.

B. SHAPE RECOVERY

After the n-th step iteration for a focus curve is completed, the depth of every pixel is recovered to obtain the shape of the object under consideration. As presented in the previous section, the filter can be applied in two ways, pre- or post-FM. If the filter is applied as a pre-FM application, then before recovering the depth map, the FM is applied to P̂_{i,j}(k) to obtain F̂_{i,j}(k) using (3). If the filter application is post-FM, then the depth map can be recovered directly using F̂_{i,j}(k).
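The filtering and peak-refinement machinery just described can be sketched compactly. The following is a minimal illustration, not the authors' implementation: Q, R, the initial state, and the fitting window are assumed values, and numpy's polyfit stands in for the least-squares solve used in the shape-recovery step.

import numpy as np

# Constant-jerk cubic model, step dk = 1: state X = [h, h', h'', h''']^T.
A = np.array([[1.0, 1.0, 0.5, 1/6],
              [0.0, 1.0, 1.0, 0.5],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])      # system matrix, Eq. (27)
C = np.array([[1.0, 0.0, 0.0, 0.0]])      # measure only the curve value h_k
Q = 1e-6 * np.eye(4)                      # small but nonzero process covariance
R = np.array([[0.5]])                     # jitter (measurement) covariance

def kalman_denoise(y):
    """Filter one pixel/focus curve y[0..n-1] with a single measurement per step."""
    x = np.array([y[0], 0.0, 0.0, 0.0])   # assumed initial state
    P = np.eye(4)
    out = np.empty(len(y))
    for k, yk in enumerate(y):
        x, P = A @ x, A @ P @ A.T + Q                    # predict, Eq. (28)
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)     # gain, Eq. (32)
        x = x + (K @ (yk - C @ x)).ravel()               # update
        P = (np.eye(4) - K @ C) @ P
        out[k] = x[0]
    return out

def refine_depth(curve, window=3):
    """Refine the focus peak by a local cubic fit around k* = argmax."""
    k_star = int(np.argmax(curve))
    lo, hi = max(0, k_star - window), min(len(curve), k_star + window + 1)
    a3, a2, a1, a0 = np.polyfit(np.arange(lo, hi), curve[lo:hi], deg=3)
    roots = np.roots([3 * a3, 2 * a2, a1])               # stationary points, h'(k) = 0
    roots = roots[np.isreal(roots)].real
    return float(roots[np.argmin(np.abs(roots - k_star))]) if roots.size else float(k_star)

A curve is denoised with kalman_denoise, and its sub-frame peak position, i.e., the refined depth, is read off with refine_depth.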
For every object point (i, j), the coefficients of (20) (around k*) are estimated by the least-squares solution

M̂_{i,j} = (Ψ_{i,j}(k*)^T Ψ_{i,j}(k*))^{-1} Ψ_{i,j}(k*)^T F̂_{i,j}(k*), (33)

where k* is the position at which F̂_{i,j}(k) is maximum, obtained as

k* = arg max_k F̂_{i,j}(k). (34)

In (33), the vector M̂_{i,j} represents the collection of parameters of h_k, F̂_{i,j}(k*) are the values of F̂_{i,j}(k) around k*, and Ψ_{i,j}(k*) is the coefficient matrix, all for object point (i, j). The refined and filtered depth KD_{i,j} for every P_{i,j} is then recovered from the maximum of the fitted cubic h_k, i.e., the stationary point of h_k nearest k*.

VI. RESULTS AND DISCUSSION

This section analyzes and discusses the experimental results in detail. The section is divided into three subsections. First, details of the experimental setup are provided, followed by the depth map and shape assessment criteria and the metric measures used. A detailed analysis of the effects of Jitter noise on SFF is provided at the end of the section.

A. EXPERIMENTAL SETUP

Experiments for shape reconstruction analysis are performed on seven objects. Table 2 provides a summary of the objects used in the 3D shape analysis. Ten datasets of the Simulated Cone are generated with different lens positions and Jitter noise levels using camera simulation software (AVS). The details of AVS are provided in [8], [36], [37], and the Matlab code used can be downloaded from [8]. All the datasets consist of 97 images of 360×360 pixels. The AVS software is provided with the depth map, texture image, and camera parameters. The texture map consists of concentric circles of two alternating black and white stripes. The depth maps and texture images used for image generation via AVS are the same for all Simulated Cone sequences; the difference between the datasets is the uncertainty in the step size Δu used to generate the sequences, in order to study the effect of Jitter on shape reconstruction. The values of the variance σ² of Δu are 0, 0.1, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, and 2.0. The real datasets contain real objects: Real Cone, Real Plane, LCD-TFT Filter, Groove, Coin, and Image-I. These image sequences were originally in gray-scale. Fig. 4 provides the ground truths of the Simulated and Real cones, and Fig. 5 shows the 10th frame of each image sequence. These image sequences have been widely used by many researchers, including [19], [38]-[42]. Images of Real Cone were taken using the CCD camera system [13], with dimensions of 200 × 200 × 97. The Real Plane image sequence was obtained in a similar way and contains 87 image frames, each of 200×200 pixels. Sixty images of the LCD-TFT filter were taken by the microscopic control system (MCS), with each image consisting of 300 × 300 pixels. The Coin sequence consists of magnified images of Lincoln's head on a US penny. The LCD-TFT filter images were microscopic images of an LCD color filter, also obtained by means of the MCS. The system consisted of a personal computer integrated with a frame grabber board (Matrox Meteor-II) and a CCD camera (SAMSUNG CAMERA SCC-341) mounted on a microscope (NIKON OPTIPHOT-100S). Computer software acquired the images by controlling the lens position through a stepper motor driver (MAC 5000) possessing a 2.5 nm minimum step length; the images were captured by varying the object plane and stored in sequence at every step. The Image-I sequence is the letter I engraved in a metallic surface. This sequence consists of 60 images and was obtained via the same system under a magnification of 10×. The dimensions of this image sequence are 300×300 pixels.
The Groove image sequence is of a V-groove engraved in a metallic surface. The dimensions of this image sequence are 300 × 300 pixels, with 60 images.

B. METRIC MEASURES

Shape reconstruction quality is the characteristic that measures the perceived difference between the reconstructed shape and the ideal shape. As the difference increases, the quality of shape reconstruction decreases. In this article, the quality of the depth map obtained by using different focus measures under various levels of Jitter noise is analyzed. In the ideal case, the obtained depth map is indistinguishable from the original map, the difference is zero, and hence the quality of the map is at its maximum. Several quality metrics have been described in the previous literature [43]. In this manuscript, RMSE and correlation are used to compare the proposed method combined with various FM operators under different levels of Jitter noise.

Root Mean Square Error (RMSE) is the square root of the variance of the residuals of the data under observation, and indicates how close the perceived shape is to the original shape:

RMSE = sqrt( (1 / (l·m)) Σ_{i=1..l} Σ_{j=1..m} (G_true(i, j) − D_obtained(i, j))² ),

where G_true is the ground truth, D_obtained is the obtained depth map, and l × m are the dimensions of the depth maps. The higher the value of RMSE, the larger the error in shape reconstruction; for better results, the value should be close to zero.

Correlation, or Pearson correlation, is a linear relationship or similarity measure between two shapes [44], given by

Cor = cov(G_true, D_obtained) / (σ_G σ_D),

where cov is the covariance, and σ_G and σ_D are the standard deviations of G_true and D_obtained, respectively.

In Table 3, the proposed method is compared with the previous methods provided by Jang et al. in [28]-[33]; the results are provided for σ² = 0.1. The first of these is a scalar Kalman filter method, followed by scalar versions of modified Kalman filters. The correlation results of the proposed scheme are better than or comparable to those of the other methods. Only the correlation values of the IMCCKF method are better than the proposed method, as it uses more data, but its RMSE results are poor in comparison to the proposed scheme. This is because the proposed scheme utilizes only the basic Kalman filter model; the results can be further improved if advanced variants of the Kalman filter are used with the proposed scheme.

For further experimentation, for each level of noise and each FM, three scenarios are considered: FM only (i.e., without the proposed filter), the proposed filtering as a pre-processing step (i.e., applied before the FM), and the proposed filtering as a post-processing step (i.e., applied after the FM). Each table has seven columns: σ² in the first column, followed by one column each for correlation and RMSE for each scenario.

Table 4, along with Fig. 6, shows the results of shape reconstruction for Simulated Cone under various levels of Jitter noise. The solid lines with markers show the correlation and RMSE values, whereas the dotted lines show the general linear trend of the data. From the table and the figure, it is clear that when SML is used without the proposed scheme, the results become poorer as the noise level increases. At lower noise levels, the values of correlation are similar for all three scenarios; when the noise levels increase, the correlation values without the proposed filter decrease sharply, by 11.6%. The decrease in correlation values with the filter used as a pre- or post-step is merely 0.22% and 0.23%, respectively.
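As an aside, both metrics just defined are straightforward to compute from a ground-truth depth map and an obtained depth map; the following is a minimal numpy-based sketch.

import numpy as np

def rmse(g_true, d_obtained):
    """Root mean square error between ground truth and obtained depth map."""
    diff = np.asarray(g_true, float) - np.asarray(d_obtained, float)
    return float(np.sqrt(np.mean(diff ** 2)))

def pearson_correlation(g_true, d_obtained):
    """Pearson correlation: cov(G, D) / (sigma_G * sigma_D)."""
    g = np.asarray(g_true, float).ravel()
    d = np.asarray(d_obtained, float).ravel()
    return float(np.corrcoef(g, d)[0, 1])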
The RMSE values of shape reconstruction when using SML start at 7.339 and increase at a high rate of 17.95%, whereas the RMSE values when using the proposed scheme start at the lower values of 6.952 and 7.194 and decrease by 13.94% and 7.69% with increasing noise levels. The graphs and tables clearly show that when the proposed scheme is applied as a pre- or post-step, the results are better than when using SML alone. Fig. 8 shows the shape reconstruction of Simulated Cone using SML under various noise levels in all three scenarios, along with the cross-sections of these shapes. The blue line in the cross-section figures represents the ground truth for the Simulated Cone.

Table 5 and Fig. 7 provide the results of shape reconstruction when TENG is used as the FM. A similar trend is observed: the values of correlation at lower noise levels are similar for all three scenarios, whereas when the noise levels increase, the correlation values without the filter decrease sharply, by 0.83%. The decrease in correlation values with the filter used as a pre/post step is merely 0.18% and 0.19%. The RMSE values of shape reconstruction when using TENG start at 7.315 and increase by 9.43%, whereas the RMSE values when using the proposed scheme start at the lower values of 6.829 and 6.912 and decrease by 1.27% and 7.25% with increasing noise levels. The graphs and tables clearly show that when the proposed scheme is applied as a pre- or post-step, the results are better than when using TENG alone. Fig. 9 shows the shape reconstruction of Simulated Cone using TENG under various noise levels in all three scenarios, along with the cross-sections of these shapes.

Table 6 and Fig. 10 give the results of shape reconstruction when GLVA is used as the FM. When the noise level increases and GLVA is used without filtering, the correlation values decrease by 0.83%. The correlations with the filter in both pre- and post-application remain similar, with a decrease of 0.015% over the increase of noise from 0 to 2.0. When no filter is used, the RMSE values show a similar trend of a 10.86% increase; when the proposed technique is used, they decrease by about 3% and 10%.

Table 7 and Fig. 11 provide the results of WAVS as the FM under all three scenarios. As noise increases, the correlation of WAVS decreases at a rate of 1%, but when combined with the proposed technique, it increases at 0.7% with increasing noise. At lower values of noise, however, WAVS behaves similarly with or without filtering. The trend in the RMSE values also suggests that WAVS combined with the proposed technique offers better performance. The shape reconstruction results are provided in Fig. 12 and Fig. 13, respectively.

Table 8 and Fig. 14 show the RMSE and correlation results when using CONT as the FM; it behaves poorly in all three scenarios. Table 9 and Fig. 15 provide the results of the shape reconstruction of Simulated Cone when GRA3 is used as the FM; here too, the proposed scheme yields better correlation and RMSE values, as compared to using FMs only. The RMSE and correlation graphs are shown in Fig. 16 and Fig. 17, respectively.

Table 14, with Fig. 18 and Fig. 19, shows a similar comparison for Real Cone with and without the proposed scheme, for Jitter noise with variance 0 ≤ σ² ≤ 0.75. The table and graphs clearly suggest the effectiveness of the proposed scheme: across noisy conditions, the correlation values for the proposed scheme are higher than when using FM(s) only, whereas the RMSE values are lower.
Figures 20 to 22 show the shape reconstruction of Real Cone with and without the proposed scheme. The shapes reconstructed using the proposed scheme are smoother than those reconstructed using FM(s) only, as the latter exhibit surface roughness produced by jitter, which the FMs alone cannot remove. Fig. 23 compares shapes reconstructed using FMs only with those obtained when the proposed filter is used as a post-FM application. In the reconstructed shape of Real Plane, a similar smoothing phenomenon (as for Real Cone) can be observed in Fig. 23: the roughness in shape due to jittering when using only the FM is smoothed by the filter, i.e., the jitter effect is removed. In the LCD-TFT filter, the cylindrical shape of the filter is preserved, and the surface around it is smoothed by the filtering process. In the case of the Image-I dataset, not much difference can be observed visually, as the jitter in this sequence is quite low. In the case of the Coin sequence, a depth abnormality can be observed in Fig. 23, near the vertical-axis value of 175, in the shape reconstructions using all FMs; when the proposed filter is applied, it is removed. The Groove image sequence is a challenging problem in shape reconstruction [2]. The sides and center of this image sequence are over-exposed, resulting in texture degradation, which is critical in SFF systems. The slopes in the middle are the only regions that exhibit a change in focus levels. Fig. 25 shows the shape reconstruction of Groove using different FMs, along with the proposed filter applied pre- and post-FM. The results presented in the tables, graphs, and reconstructed shapes clearly show that Jitter noise affects the overall accuracy of SFF systems, and that it can be removed by applying the proposed Kalman-filter-based filtering technique. The proposed scheme shows promising results.

VII. CONCLUSION

In SFF, when the shape of the object is discretised into image frames, a constant inter-frame distance is assumed. In practice, however, this inter-frame distance is prone to errors, due to mechanical errors in the gear assembly of the translational stage or the lens-focusing assembly of the imaging device. The resulting errors are referred to as Jitter noise in the literature. Jitter noise is not visible in the images themselves, because every pixel in an image is subjected to the same error in focus; thus, traditional image-denoising techniques do not work. In this paper, Jitter noise is first modeled, and the mean and variance of this noise are formulated. It is also shown that Jitter noise depends on the first and second derivatives of the focus/pixel curves and follows a gamma distribution. The system and measurement models for the proposed Kalman-filter scheme are then designed and applied to the focus curves to remove this noise. The proposed scheme can be applied in two ways, as a pre-FM or a post-FM application. Unlike previously proposed techniques for Jitter noise removal in SFF systems, the proposed scheme utilizes a single measurement per step and uses a dynamic approach with the Kalman filter; thus, it is faster and more accurate than its predecessors. The experiments are performed on seven objects: one simulated and six real. Ten noise levels are tested on the simulated object, and four levels on the real objects. Both pre- and post-applications are tested, and the results are presented using RMSE and correlation as metric measures. The experiments show the effectiveness of the proposed scheme.
Dichotomy Between Orbital and Magnetic Nematic Instabilities in BaFe2S3

Nematic orders emerge nearly universally in iron-based superconductors, but elucidating their origins is challenging because of intimate couplings between orbital and magnetic fluctuations. The iron-based ladder material BaFe2S3, which superconducts under pressure, exhibits antiferromagnetic order below TN ~ 117 K and a weak resistivity anomaly at T* ~ 180 K, whose nature remains elusive. Here we report angle-resolved magnetoresistance (MR) and elastoresistance (ER) measurements in BaFe2S3, which reveal distinct changes at T*. We find that the MR anisotropy and the ER nematic response are both suppressed near T*, implying that an orbital order promoting isotropic electronic states is stabilized at T*. Such an isotropic state below T* competes with the antiferromagnetic order, as evidenced by the nonmonotonic temperature dependence of the nematic fluctuations. In contrast to the cooperative nematic orders in the spin and orbital channels in iron pnictides, the present competing orders can provide a new platform to identify the separate roles of orbital and magnetic fluctuations.

I. INTRODUCTION

The discovery of low-dimensional superconductivity in iron-based ladder compounds provides a new point of view in the study of iron-based superconductors [1]. Reduced dimensionality changes electronic structures dramatically, and the ground states of iron-based ladder materials show insulating properties, in stark contrast to the bad-metal behavior of the typical quasi-two-dimensional BaFe2As2 (122) system. In spite of the totally different ground states, the ladder materials still host a stripe-type antiferromagnetic order similar to that of the 122 system, suggesting common physics between the two systems. More intriguingly, pressure-induced superconductivity emerges with the suppression of antiferromagnetism [1,2]. These common features suggest unconventional pairing mechanisms of iron-based superconductivity that are robust against dimensionality. The insulating nature of the ladder materials implies strong correlation effects due to their reduced dimensionality. The heavily hole-doped 122 system also has strong correlation effects, since the nominal electron filling of 3d^5.5 in the iron atoms, which is closer to half filling than the 3d^6 of the undoped 122 system, pushes the system towards the putative Mott insulating phase [3]. In contrast to the single-orbital case of, e.g., the cuprates, where only the Hubbard interaction U is dominant, in multiorbital systems Hund's coupling J_H plays an important role in increasing orbital-dependent correlation effects, leading to orbital-selective Mott states [3,4]. Incoherent bad-metal conduction in these systems can be considered a precursor phenomenon in the vicinity of Mott phases, and indeed some spectroscopy measurements reveal strongly orbital-dependent renormalized bands [5-7], although the system is still located far away from the parent Mott insulating state. The ladder materials show insulating behaviors, and thus dimensionality may also serve as another promising parameter to tune the system into the Mott regime, which gives a new route to the study of orbital-selective Mott phases. Superconductivity in BaFe2S3 under pressure occurs without significant crystal structure changes [1], indicating that the electronic state at ambient pressure smoothly connects to the superconductivity.
Therefore, understanding the electronic states of BaFe2S3 at ambient pressure is fundamentally important for studying the effects of dimensionality in iron-based materials. It has been established that BaFe2S3 exhibits an antiferromagnetic transition at TN ~ 117 K, with ferromagnetically aligned spins along the rung direction [1]. One of the unsolved mysteries in BaFe2S3 is the so-called T* anomaly, characterized by a weak bump-like feature in the temperature dependence of the resistivity at ~ 180 K. The origin of the T* anomaly possibly derives from an orbital-involved phase transition, as magnetization, neutron scattering, and muon spin relaxation measurements do not detect any magnetic signatures of a phase transition [8-10]. Orbital ordering has been intensively discussed in terms of electronic nematic orders in iron-based superconductors, as one of the promising candidates for producing large in-plane anisotropy in the electronic state [11]. Therefore, it is effective to evaluate the anisotropy of the electronic state in order to test for orbital order. Polarization-dependent X-ray absorption spectroscopy (XAS) measurements have shown anisotropic electronic states, suggesting the existence of orbital order [12]. However, the observed polarization differences persist up to 300 K, and there is no characteristic change in the spectra around T*.

FIG. 1. Three experimental geometries represent out-of-ladder, in-ladder, and as-grown plane anisotropy measurements, as illustrated in the upper insets of (a-c), including our definitions of the x, y, z axes in this study and the field angle θ with the field rotation plane in each geometry. The angle dependence of the MR can be fitted to Eq. (1) (solid lines are the fitting results). The temperature dependence of the two-fold oscillation term is shown in (d) and (e) (broken lines are guides for the eyes), with error bars based on the standard deviation obtained from the fittings. The guide line for T* is determined from the ER data, as shown in Fig. 2(d).

One possibility is that the T* anomaly involves a change of low-energy states outside the scope of the XAS measurements, which generally probe valence bands whose energy levels lie deep below the Fermi level. It is therefore desirable to measure the anisotropy of transport properties, which are very sensitive to low-energy quasiparticle excitations. Here we report on the anisotropy of the electronic states obtained by measuring both field-angle-resolved magnetoresistance (MR) and elastoresistance (ER) at zero field. From the MR measurements, we can estimate the degree of anisotropy in the order parameter; in contrast, the ER couples to the anisotropic electronic instability corresponding to the nematic fluctuations. Measurements of these two physical quantities provide complementary information on the rotational symmetry of the electronic states. The electronic properties of BaFe2S3 are known to be quite sensitive to the growth conditions [13]. Here, by employing the method of Ref. 8, we prepare single crystals without iron deficiency, showing TN ~ 117 K and T* ~ 180 K, which guarantees that we acquire intrinsic information on the orbital order. Our measurements find an anisotropic state above T*, indicating the formation of leg-directed orbital ordering at high temperatures, consistent with the XAS measurements. Around T*, a more isotropic electronic state forms, implying the formation of another orbital ordering (most likely of d_{x2-y2} orbitals).
Approaching TN far below T*, we find an enhancement of nematic fluctuations that is distinctly different from the high-temperature fluctuations, suggesting the presence of competing orbital and magnetic orders in this system.

II. FIELD-ANGLE RESOLVED MAGNETORESISTANCE

MR was measured up to 17.5 T, and its field-angle dependence was resolved by using rotating probes in a superconducting magnet with three geometric configurations, labeled G1, G2, and G2', in which the magnetic field H is rotated within the out-of-ladder plane ((001) or xy plane), the in-ladder plane ((100) or yz plane), and the as-grown plane ((110)-leg plane), respectively. Hereafter, we define the z and y axes along the leg and rung directions, respectively, to discuss the electron d orbitals [upper insets in Figs. 1(a-c)]. In terms of the space group Cmcm of BaFe2S3 [15,16], our x, y, z directions correspond to rung (a = y), plane (b = x), and leg (c = z), where a, b, c are the crystal axes. In all cases, the direction of the current I is along the leg ([001] or z) direction. The magnetic-field-angle dependence of the resistivity can be fitted by Eq. (1), where θ is the angle from the rung axis, as shown in Figs. 1(a-c), and the angle dependence of each term is constrained by the orthorhombic symmetry [17]. Here, ρ_H represents the contribution from the Hall effect, possibly caused by the strongly anisotropic structure. The ρ_M term is the amplitude of the angle dependence of the MR and can be unambiguously separated from the Hall contribution because of its different parity with respect to the magnetic field. The ratio ρ_M/ρ_0 represents the degree of field-angle dependence, reflecting the anisotropy of the electronic states in the field-rotation plane.

The field-angle dependence of the MR in the G1 geometry starts to grow just above TN, and the MR shows a maximum (minimum) when the magnetic field H is applied perpendicular to the ladder plane (parallel to the rung direction). Magnetic moments aligned ferromagnetically along the rung direction [1] can cause such an angle dependence. The temperature dependence of the ratio ρ_M/ρ_0 shows a sharp peak at TN, as shown in Fig. 1(d). This suggests that antiferromagnetic fluctuations evolve towards TN, and that such fluctuations start to develop below T*. To study the in-plane anisotropy, the G2 and G2' geometries are used, in which the field rotation plane is parallel to and slightly tilted from the ladder plane, respectively; both measurements reproduce essentially the same behavior, as shown in Fig. 1(e). Below TN, the MR shows a strong angle dependence similar to that of the G1 geometry and takes a minimum when H is along the rung direction. At higher temperatures above T*, however, ρ_M/ρ_0 has the opposite sign, and the magnitude of the MR decreases with decreasing temperature, becoming very small around T*. These results indicate that BaFe2S3 has an anisotropic electronic structure above T*, distinct from that in the antiferromagnetic state below TN, and exhibits a phase transition around T* leading to a more isotropic electronic state.

III. ELASTORESISTANCE

ER, the resistance change induced by uniaxial strain, is a powerful probe of nematic fluctuations [18,19]. We apply this technique to BaFe2S3 to study the putative orbital ordering at T*. The ER measurements require an ideal geometry of thin bar-shaped samples to control the strain through direct attachment of the samples to a piezoelectric device. Despite the fragile nature of the single crystals, these requirements are met by cutting the samples with a wire-saw and polishing them carefully.
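The exact form of Eq. (1) is lost in extraction here, so as an illustration only, the sketch below fits an assumed parameterization ρ(θ) = ρ0 + ρ_H sin θ + ρ_M cos 2θ, with an odd-parity Hall-like term and a two-fold term as described above; the coefficients in the synthetic check are made up.

import numpy as np

def fit_mr_angle(theta_deg, rho):
    """Least-squares fit of rho(theta) = rho0 + rhoH*sin(theta) + rhoM*cos(2*theta).

    theta_deg: field angles measured from the rung axis (degrees).
    rho:       measured resistivity at each angle.
    Returns (rho0, rhoH, rhoM); rhoM / rho0 quantifies the two-fold anisotropy.
    """
    th = np.deg2rad(np.asarray(theta_deg, dtype=float))
    # Design matrix: constant, odd-parity (Hall-like), and two-fold terms
    M = np.column_stack([np.ones_like(th), np.sin(th), np.cos(2 * th)])
    coeffs, *_ = np.linalg.lstsq(M, np.asarray(rho, dtype=float), rcond=None)
    return tuple(coeffs)

# Synthetic check with made-up coefficients
theta = np.linspace(0, 360, 73)
rho = 10.0 + 0.05 * np.sin(np.deg2rad(theta)) + 0.2 * np.cos(2 * np.deg2rad(theta))
print(fit_mr_angle(theta, rho))  # ~ (10.0, 0.05, 0.2)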
In this study, both the strain and the current are applied along the [001] leg direction [Fig. 2(c)]. We measured the ER in two geometries with different conduction planes, the as-grown (110) and ladder (010) planes, the latter of which is the more appropriate configuration to probe the nature of ladder nematicity. In both cases, strain along the leg (z) direction most effectively changes the electronic states in the ladder plane, because strain along other directions would significantly alter the separation between ladders, making it more difficult to extract the ER response within the ladder. In tetragonal materials, the ER couples linearly to nematic fluctuations, but here the relationship between the ER and the nematic fluctuations should be modified by the orthorhombic crystal structure of BaFe2S3. Thus the Curie-Weiss analysis, which works widely for typical two-dimensional iron-based superconductors [18-20], may not be valid in this case. However, as leg distortions act as a conjugate field to the ladder nematicity, strain-induced changes in resistance still couple in some form to the nematic fluctuations even in the distorted ladder structure.

Figures 2(a) and (b) show the strain response of the resistance at low and high temperatures, respectively. The ER response is dominated by a linear slope of negative sign, in contrast to the metallization induced by hydrostatic pressure, which supports the successful control of anisotropic strain with little effect from symmetric strain. At high temperatures above T*, the magnitude of the ER slope decreases with decreasing temperature. Below T*, the ER signal turns to grow and is enhanced towards TN. The temperature dependence of the ER slope, shown in Fig. 2(d), exhibits two anomalies: a kink around TN ~ 117 K and a broad minimum around T* ~ 180 K. Three measured samples, including different experimental geometries, show similar behaviors, implying that our ER data successfully capture the essential features of the nematic fluctuations. The observed clear anomaly in the ER provides strong evidence for the existence of an electronic phase transition at T*. In iron-based superconductors with an Fe square lattice, the ER nematic signal increases from the high-temperature side towards the nematic transition, but in the present case it shows the opposite, decreasing trend. The ER taking a minimum can be explained by assuming multiple components of nematic fluctuations and a competitive relationship between them.

IV. DISCUSSION

Our measurements of the field-angle-dependent MR and the ER consistently indicate a clear electronic change to a more isotropic state below T*. Both measurements indicate that the origin of T* is distinct from the antiferromagnetism below TN and most likely involves orbital degrees of freedom, as previously speculated [8,12]. Although orbital order in iron-based superconductors has been discussed in terms of nematic order, which induces in-plane anisotropy, the T* phase in BaFe2S3 instead reduces the anisotropy of the electronic structure. Next we discuss possible orbital states above and below T*. The strong field-angle dependence of the MR and the large ER coefficient at ~ 270 K likely originate from leg-directed orbital ordering. From the viewpoint of the low-energy states, an effective two-band model with the d_{x2-y2} orbitals and the d_{xz} orbitals hybridized with d_{xy} has been proposed [21,22]. Since transport coefficients are generally sensitive to low-energy quasiparticle excitations, our transport anisotropy should be related to the imbalance between these two orbitals.
Namely, the origin of the anisotropic state along the leg direction in the high-temperature region should come from d_{xz}-like orbital order in this model. Furthermore, the reduction of the ER towards T* from room temperature indicates a suppression of these leg-directed orbital fluctuations. The resulting isotropic electronic state can naturally be explained by a cancellation of the anisotropy through the development of another orbital, d_{x2-y2}, which is elongated along the rung direction and competes with the d_{xz}-like orbital. Here, the evolution of the rung-directed order is gradual, so that it just compensates the anisotropy derived from the leg-directed orbital order in the range TN < T < T*, as summarized in the schematic phase diagram in Fig. 3. We note that if we assume d_{3z2-r2} orbital ordering in place of the d_{xz}-like orbitals, as previously suggested from XAS [12], the discussion of gradual orbital switching to d_{x2-y2} at T* still applies. These orbital decoupling behaviors emphasize the importance of orbital-selective Mott physics with Hund's coupling. On the other hand, the strong increase of the ER below T* can be understood from the contribution of magnetic nematicity associated with the stripe-type antiferromagnetism, which also coincides with the evolution of the MR. It is striking that the orbital order becomes isotropic around T* while the anisotropic antiferromagnetic order evolves. This is quite unusual, because in most iron-based superconductors the d_{xz} orbitals and the stripe-type antiferromagnetism cooperatively form nematic orders with the help of spin-orbit coupling. Another example distinct from the present case can be found in the related iron-based ladder compound BaFe2Se3, which exhibits a structural distortion that supports the magnetic ordering [23,24]. However, d_{x2-y2} orbital order hybridized along the rung direction may support antiferromagnetic correlations along the rungs, which is opposite to the actual magnetic structure, in which the magnetic moments align ferromagnetically along the rungs [1]. We should also note that in BaFe2S3 the magnetic ordering temperature TN increases under pressure with a suppression of T* [14], which is consistent with the competing nature of these two orders. Such an unusual competition of two orders in BaFe2S3 may provide hints to clarify the independent roles of the spin and orbital degrees of freedom. The T* anomaly was originally characterized by a slight improvement of the conductivity [8]. In fact, this can also be explained by the orbital competition model discussed above: the d_{xz}-like bands are strongly correlated, and the evolution of the more weakly correlated d_{x2-y2} orbitals leads to a decrease in resistivity. This resembles the incoherent-coherent crossover observed in hole-doped iron-based superconductors, originating from the coexistence of localized and itinerant electrons due to orbital-selective correlation effects [25]. A density matrix renormalization group analysis of BaFe2S3 shows orbital-selective features in the perturbative response to hole doping: the d_{xz}-like orbitals tend to localize, while the d_{x2-y2} orbitals become coherent [22]. Experimentally, the coexistence of local and itinerant electrons has been revealed by photoemission spectroscopy [26]. The orbital switching from d_{xz}-like to d_{x2-y2} orbitals proposed here on the basis of the electronic anisotropy is thus consistent with the incoherent-coherent crossover associated with orbital-selective Mottness.
V. CONCLUSION

In summary, we performed field-angle-resolved MR and ER measurements on the iron-ladder material BaFe2S3. Our results reveal strong anisotropy above T*, indicating leg-directed orbital ordering such as d_{xz} orbitals hybridized with d_{xy}, and the existence of an electronic phase transition leading to an isotropic state at T*. A plausible microscopic origin of this phase transition is the orbital switching from d_{xz}-like orbitals to d_{x2-y2}, coinciding with the incoherent-coherent crossover related to orbital-selective Mott physics. Our proposed orbital order below T* is an exotic state different from the nematic orders in iron-based superconductors. One of its intriguing features is the dichotomy between orbital and magnetic nematic instabilities, which provides a new avenue for studying the roles of orbital and magnetic fluctuations.

ACKNOWLEDGEMENTS

We thank K. Takubo, T. Mizokawa, J. C. Palmstrom, T. Worasaran, and K. Izawa for helpful comments and fruitful discussions. We also thank C. W. Hicks for technical support in designing the wire-saw, and N. Abe, Y. Tokunaga, and T. Arima for experimental support with the X-ray diffraction measurements. This work was supported by Grants-in-Aid for Scientific Research (KAKENHI) (Grant Nos. JP15H02106, JP16H4007, JP16H04019, JP16K17732, JP17H05473, JP17H05474, JP17H06137, JP17J11382, JP18H01159, JP18H04302, JP18H05227, JP19H00649, and JP19H04683) and on Innovative Areas "Quantum Liquid Crystals" (Nos. JP19H05823, JP19H05824, and JP20H05162) from the Japan Society for the Promotion of Science (JSPS). This work was partly performed at the High Field Laboratory for Superconducting Materials, Institute for Materials Research, Tohoku University (Project Nos. 17H0202 and 18H0204). X-ray diffraction measurements were partly performed using facilities of the Institute for Solid State Physics, the University of Tokyo.

APPENDIX: SAMPLE PREPARATION

High-quality single crystals of BaFe2S3 were grown by the melt-growth method. The starting materials were elemental Ba shots, Fe powder, and S powder in the ratio 1 : 2.1 : 3. The excess iron is necessary to prevent iron deficiency in the products, as reported in Ref. [8]. The starting materials in a carbon crucible were sealed in an evacuated quartz ampoule. The ampoule was slowly heated up to 1373 K, kept there for 48 hours, and slowly cooled to room temperature. We characterized the samples by measuring the temperature dependence of the resistivity and magnetic susceptibility. According to Hirata et al. [8], the sample with the best quality has the highest antiferromagnetic transition temperature (TN) and the clearest anomaly in the resistivity at the possible orbital-order transition temperature (T*). Our electrical resistivity and magnetic susceptibility data shown in Fig. 4 exhibit clear anomalies at TN = 117 K and T* = 180 K. These features are consistent with those reported in Ref. [8]. We can therefore conclude that the samples used in this study are appropriate for acquiring the intrinsic properties of BaFe2S3. In the main text, we deal with the magnetoresistance under a rotating magnetic field. Here we show the temperature dependence of the magnetoresistance with the magnetic field direction fixed along the rung direction (Fig. 5). The data were obtained from the temperature dependence of the resistivity (ρ) in constant magnetic fields of 0 T and 17.5 T.
One can clearly see the suppressed magnetoresistance with a bump-like structure around T* and the enhanced negative magnetoresistance at TN, although the T* position estimated from the ER data deviates slightly from the top of the bump structure. The former and the latter correspond to the formation of the isotropic electronic states and the enhancement of magnetic fluctuations, respectively. The details of each transition are discussed in the main text.
Do Metadata-based Deleted-File-Recovery (DFR) Tools Meet NIST Guidelines?

Digital forensics (DF) tools are used for post-mortem investigation of cyber-crimes. The CFTT (Computer Forensics Tool Testing) Program at the National Institute of Standards and Technology (NIST) has defined expectations for a DF tool's behavior. Understanding these expectations, and how DF tools work, is critical for ensuring the integrity of forensic analysis results. In this paper, we consider standardization of one class of DF tools: those for Deleted File Recovery (DFR). We design a list of canonical test file system images to evaluate a DFR tool. Via extensive experiments we find that many popular DFR tools do not satisfy some of the standards, and we compile a comparative analysis of these tools, which could help the user choose the right tool. Furthermore, one of our research questions identifies the factors that make a DFR tool fail. We also provide a critique on the applicability of the standards. Our findings are likely to trigger more research on standards compliance from the research community as well as practitioners.

Introduction

Both in corporate and government settings, digital forensic (DF) tools are used for post-mortem investigation of cyber-crimes and cyber-attacks. Nowadays it is common [1] for law enforcement to use DF tools to follow an electronic trail of evidence to track down a suspect. To maintain the quality and integrity of DF tools, the National Institute of Standards and Technology (NIST) Computer Forensics Tool Testing Program (CFTT) [2] defined expectations for these tools. Maintaining the standards of DF tools is especially critical for judicial proceedings: usage of a forensic tool that does not follow the standards may cause evidence to be thrown out in a court case, whereas incorrect results from a forensic tool can also lead to improper prosecution of an innocent defendant.

The focus of this paper is the standardization of one class of DF tools, those for Deleted File Recovery (DFR) [3]. As the name suggests, a DFR tool attempts to retrieve deleted files from the file system of a computer. As an example, given a hard disk or a USB drive (which might have been seized from a suspect computer or collected from a crime scene), a forensics professional can use a DFR tool to investigate (and potentially retrieve) files which a suspect deleted to hide important information. The success or failure of a DFR tool can decide the outcome of a case. DFR tools are typically classified as one of two varieties, corresponding to two different approaches to file recovery: metadata-based tools and file carving tools. The focus of this paper is metadata-based DFR tools, with file carving left for future work. In the rest of the paper, unless otherwise mentioned, by DFR tool we mean a metadata-based DFR tool.

Our experiments with a popular DF tool suite named Autopsy [4] show that it does not meet all NIST expectations for DFR. Furthermore, we extensively experimented with other frequently used DFR tools. We compare those tools' performance and compile a comparative analysis, which could help the user choose the right DFR tool.
Evaluating the performance of a DFR tool is complex because many elements of a forensics scenario determine the success or failure of the file recovery process. A few such factors are the type of the file system (FAT, NTFS, etc.), the presence of other active/deleted files in the file system, fragmentation of a file, a deleted file being overwritten by another file, and so on. So, a comparison of two DFR tools is scientific only if they are compared while keeping these factors the same. Via extensive analysis, we design a set of test file system images (for both FAT and NTFS) which considers each of the above factors independently. We claim that this list of test cases covers most real-life scenarios (except a few edge cases), and thus that our evaluation has broad coverage.

As there are many file systems (e.g., ext4, HFS, etc.) in addition to FAT and NTFS, one might be interested to know why we chose FAT and NTFS for the current work. Because FAT and NTFS are very widely used on external storage devices and devices running Microsoft Windows, respectively, real-life forensics investigations often involve these two file systems. While we leave other file systems for future study, our current methodology could be extended to them for a similar study.

The main contributions of the paper are listed as follows:
• We design and build a list of canonical test file system (FAT and NTFS) images to evaluate DFR tools per the NIST guidelines.
• We evaluate frequently used DFR tools (including free tools as well as proprietary ones) on the test images.
• For interesting cases of tool success or failure, we provide a logical explanation.
• We provide a critique on the applicability of some of the NIST guidelines in a practical setting.

The NIST CFTT portal currently publishes reports for only a subset of DFR tools. However, that set needs to be expanded as many new tools come to market and become popular. Also, existing DFR tools should be retested to ensure their reliability remains consistent as new patches and features come out. Adding new reports to the CFTT website will give tool developers a chance to continually improve their tools. We will submit our study reports to the CFTT portal. At the time of writing, the portal publishes reports for Autopsy [5] and FTK [6]; however, both reports are from 2014 and evaluate now-outdated versions of the software.

As a side benefit, our work leads to a few hands-on lab modules to be used in digital forensics courses at BGSU, enriching the new DF specialization program in the CS department. We will also make these modules publicly available for use by relevant instructors at other universities.

Research Questions

A DFR tool is a piece of software that can retrieve the residual data of a file that was deleted from a storage device (e.g., a computer hard disk, a flash drive, and so on). We evaluate a set of popular DFR tools on the scale of the CFTT guidelines. In particular, the following are the research questions (RQs) that we target to answer.

RQ1. Do the popular DFR tools meet the NIST CFTT expectation? If not, which tool meets which part of the expectation?
RQ2. What factors make the tools fail or succeed?
RQ3. Are the free DFR tools more effective compared to the enterprise-level (proprietary) tools?
The identification of errors, such as not recovering a deleted file or attempting to recover a file that was never there (Type I and Type II errors, respectively), is an important metric for a DFR tool. Type I and Type II errors account for the majority of the standard. Many factors impact the performance of a DFR tool, including the file system type, whether the file content is located in non-contiguous clusters, whether some part of the deleted file content is overwritten by another file, and more. We consider these variables in the design of the experiments in which we compare the tools. Note that our current study is limited to exploring the core features of the NIST guidelines [3]; we leave the optional features [3] of the NIST guidelines for future study.

Background

In this section, we present some design basics (often simplified to aid readability) of the FAT file system and the NTFS file system which are relevant to the later parts of the paper. We highlight what information remains in the file system after a file is deleted, which leads to an understanding of how metadata-based file recovery might work. Finally, we present the core features of the NIST standards for such recovery tools.

FAT File System

As in many other file systems, a file in a FAT system has metadata in addition to the actual file content. The main metadata of a file (say foo.txt) in a FAT system is called a directory entry. The change in the metadata and actual content of foo.txt after the file is deleted is illustrated in Figure 2. The first character of the file name (say "foo.txt") is flagged ("_oo.txt") to denote that the file is deleted, but the remaining part of the directory entry can still be available. In the FAT table, the deleted file's corresponding entries are zeroed, which denotes that those clusters are available to be allocated to a new file (if necessary). However, the clusters carrying the actual content (say cluster 100 to cluster 200) can still be intact until they are overwritten by another file. Furthermore, it is possible that the content of a file is not stored in contiguous clusters in a FAT file system; this phenomenon is called fragmentation. If the original file foo.txt has two fragments, it may look as illustrated in Figure 3, where the file's first fragment spans cluster 100 to cluster 101 and the second fragment spans cluster 200 to cluster 298.

NTFS File System

In an NTFS system, each file has an entry in the Master File Table (MFT), where every entry is typically 1024 bytes. If a file is short, then all of its metadata as well as the actual content can sit inside the MFT entry; otherwise, the file content can be non-resident (i.e., not in the MFT entry) and located in other clusters. As an example, the MFT entry of foo.txt and the clusters carrying the actual content are illustrated in Figure 4. The MFT entry indicates that there are two runs of clusters (100-101 and 200-298) which carry the actual file content. In this case, the example file has two fragments.

Metadata-Based Deleted File Recovery

The previous discussion implies that in many cases a deleted file's metadata (e.g., the directory entry in FAT or the MFT entry in NTFS) can still be available, and it is possible to recover the file content using this metadata. This is called metadata-based deleted file recovery.
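As a rough illustration of the residual metadata just described, the following sketch parses a single 32-byte FAT directory entry. The field offsets follow the standard on-disk FAT layout; the function name and error handling are our own illustrative choices, not code from any of the tools under test.

import struct

def parse_fat_dir_entry(entry: bytes):
    """Parse one 32-byte FAT directory entry (short-name form).

    Returns None for unused slots; otherwise a (name, deleted, start_cluster,
    size) tuple. Offsets: name at 0..10, high cluster word at 0x14 (FAT32),
    low cluster word at 0x1A, file size at 0x1C.
    """
    if len(entry) != 32 or entry[0] == 0x00:   # 0x00: entry never used
        return None
    deleted = entry[0] == 0xE5                 # 0xE5 flags a deleted file
    name = entry[0:8].decode("ascii", "replace").rstrip()
    ext = entry[8:11].decode("ascii", "replace").rstrip()
    hi, = struct.unpack_from("<H", entry, 0x14)
    lo, = struct.unpack_from("<H", entry, 0x1A)
    size, = struct.unpack_from("<I", entry, 0x1C)
    start_cluster = (hi << 16) | lo
    full = ("_" + name[1:] if deleted else name) + ("." + ext if ext else "")
    return full, deleted, start_cluster, size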
For instance, in the example of Figure 2, we can see from the directory entry of foo.txt that the deleted file starts at cluster 100 and has a size of 101 clusters; thus, we can reason that the deleted file content is in cluster 100 to cluster 200. We can recover the whole file by reading the raw content of these clusters (e.g., by using the dd command in Kali Linux). Note that in a FAT system the directory entry of a file only refers to the starting cluster; it does not carry any information about the file's fragments. That is why, in certain situations where the deleted file is fragmented, metadata-based file recovery in a FAT system encounters challenges. On the other hand, if a file is fragmented in NTFS, the corresponding MFT entry does contain information on all the runs (i.e., fragments) of the file (as shown in Figure 4). Consequently, fragmentation does not introduce any extra challenge to file recovery in NTFS.

NIST Guidelines

The NIST guidelines [3] list four core features upon which metadata-based DFR tools are to be judged. Following is the text of each core feature as well as our interpretation of each in the context of this work:

1. "The tool shall identify all deleted File System-Object entries accessible in residual metadata" [3]. We consider a tool passing this standard if it identifies to the user each file system metadata entry that is marked as deleted.

2. "The tool shall construct a Recovered Object for each deleted File System-Object entry accessible in residual metadata" [3]. We consider a tool passing this standard as long as it outputs a file for each deleted file, even if the output file is empty.

3. "Each Recovered Object shall include all nonallocated data blocks identified in a residual metadata entry" [3]. For FAT file systems, we consider a tool passing this standard if it recovers at least the first contiguous segment of unallocated sectors starting from the first sector originally allocated to the deleted file. For NTFS file systems, the tool must recover all unallocated sectors originally allocated to the deleted file.

4. "Each Recovered Object shall consist only of data blocks from the Deleted Block Pool" [3]. We consider a tool passing this standard if the recovered file consists only of data from the original deleted file, or null data to represent omitted portions.

Overview

To test the DFR tools, we first design hypothetical test scenarios to simulate the challenges of real-world file recovery. We then create each scenario in real file systems and save them as raw images. Using the images as input, we run each DFR tool and attempt to recover all deleted files. Finally, we compare the recovered files to their original versions in order to judge the tools' compliance with the NIST expectation. A high-level view of the methodology for a typical test case is illustrated in Figure 5.

Designing Recovery Scenarios

To test the DFR tools' compliance with the expectation, we designed a variety of scenarios in which a tool might have to recover a deleted file. We started with the simplest possible case: a file system containing just one deleted file. This case is ideal and trivial, but by adding more files, we can create a greater variety of scenarios.
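Before moving on to the test scenarios, the dd-style recovery described above can be sketched as follows; the cluster size, data-area offset, and image path are illustrative assumptions rather than values from our test images.

def carve_contiguous(image_path, start_cluster, file_size,
                     cluster_bytes=2048, data_area_offset=0x4000):
    """Recover a non-fragmented deleted FAT file by raw cluster reads.

    Mirrors the Figure 2 example: read file_size bytes starting at the first
    cluster named in the directory entry. data_area_offset and cluster_bytes
    are illustrative; real values come from the FAT boot sector. FAT data
    clusters are numbered from 2, hence the (start_cluster - 2) below.
    """
    offset = data_area_offset + (start_cluster - 2) * cluster_bytes
    with open(image_path, "rb") as img:
        img.seek(offset)
        return img.read(file_size)

# e.g., recovered = carve_contiguous("fat_case1.img", 100, 101 * 2048)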
The NIST guidelines limit their scope to the recovery of files which were "created and deleted in a process similar to how an end-user would create and delete files" [3], and exclude "files and file system metadata that is specifically corrupted, modified, or otherwise manipulated to appear deleted" [3]. In other words, these guidelines address situations in which files were deleted via normal file system operations as implemented by a typical operating system, as opposed to direct modification of the file system by a user. Within these constraints, there are two factors which can significantly complicate the file recovery process: fragmentation and overwriting. These factors are thus the foundation of our test scenarios, with all cases besides the first involving fragmented files, overwritten files, or a combination of both. The goal is to create test cases which are canonical; in other words, they constitute the basic elements of a file recovery scenario. We suggest such a canonical list of test cases should be considered representative of a wide variety of scenarios within the scope of the guidelines.

Following are descriptions of the test cases we designed. We have separated them into five categories: (a) a case with just a simple deleted file, (b) cases involving fragmentation, (c) cases involving overwritten files, (d) cases with a combination of fragmentation and overwriting, and (e) cases with a file fragmented "out of order." When test cases are illustrated in figures, each row represents the file system at a point in time. An arrow indicates a change in the file system, which is illustrated in the following row. Files are distinguished by unique letters, and deleted files are marked as such.

Case 1: The file system contains a single deleted file.
Case 2: Fragmented deleted file, with an active file in between the fragments (as illustrated in Figure 6).
Case 9: Deleted file fragmented from the end of the file system to after an active file.
Case 10: Deleted file fragmented from the end of the file system to after a deleted file.

Because NTFS keeps track of the locations of all parts of a file even after deletion, fragmentation is not particularly interesting there. Cases 8, 9, and 10 would be redundant with case 2, so we have excluded them for NTFS. Due to how NTFS allocates space for files, cases 4ii and 5ii cannot occur as a result of normal file operations, so they have also been excluded. No cases are excluded for the FAT tests.

Creating Test Images

All test file systems were created in partitions on a 32 GB flash drive. For each test case, the first step is to entirely overwrite the partition with zeros. This ensures all cases start from identical, reproducible conditions. A new file system is written to the partition; then files are written to the file system and deleted. The files used are simple text files containing one letter repeated (e.g., "aa1M" is 1 MiB of the letter 'a'). Files are written to the test file system by simply copying them from another drive. In some cases we also append data to a file in the test file system to create fragmentation. Once the test file system matches the intended scenario, a read-only image of the partition is created. All tests are performed on these images rather than the original drive. Note that when creating FAT test cases we use Ubuntu 18.04, and for NTFS test cases we use Windows 10.
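For the FAT cases, the process above can be scripted end to end; a minimal sketch follows, in which the device path, mount point, and file names are hypothetical placeholders.

import subprocess

DEV = "/dev/sdb1"   # hypothetical test partition (adjust before use)
MNT = "/mnt/test"

def run(cmd, check=True):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=check)

# 1. Zero the partition so every case starts from identical conditions.
#    (dd exits non-zero when it hits the end of the device, so don't check.)
run(["sudo", "dd", "if=/dev/zero", f"of={DEV}", "bs=1M"], check=False)
# 2. Create a fresh FAT32 file system
run(["sudo", "mkfs.vfat", "-F", "32", DEV])
# 3. Mount, copy a test file in, flush caches, then delete it
run(["sudo", "mount", DEV, MNT])
run(["sudo", "cp", "aa1M.txt", MNT])
run(["sync"])        # force cached writes to the disk before deleting
run(["sudo", "rm", f"{MNT}/aa1M.txt"])
run(["sync"])
run(["sudo", "umount", MNT])
# 4. Save a raw image; all tools are run against this image, not the drive
run(["sudo", "dd", f"if={DEV}", "of=fat_case1.img", "bs=1M"])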
Challenges

It is important to consider when creating test images that the low-level behavior of file operations is not always obvious. For example, when writing a file, there is no guarantee the file's data will be immediately written to the disk. The operating system may cache the operation and wait for the optimal time to perform the write, in order to maximize system performance. We observed this early on, as writing a file and subsequently deleting it would always result in the file's metadata being written, but often left no evidence of the file's data ever having existed. This behavior is obviously undesirable because it leaves nothing meaningful to be recovered. We resolved this by using the sync system call, which causes any such cached data to be immediately written to the disk, in between file writes and deletions. Unmounting the file system after writes has a similar effect.

Another type of low-level behavior relevant to the image creation process is the allocation algorithm. The operating system must have some algorithm to decide where in the data area new files should be written. Common allocation algorithms include "first available," "next available," and "best fit." Learning and understanding whatever algorithm the OS uses is very helpful for forcing a specific arrangement of files. We observed that when writing to a FAT file system, Linux uses a "next available" algorithm. After the file system is mounted, the first write will start at the first free space in the data area; the next file will be written starting from the first free space after the previous file. Meanwhile, when writing to an NTFS file system, Windows 10 appears to use a "best fit" algorithm: Windows tries to find the smallest space in which the file can fit without being fragmented, and writes it there.

Recovering Files

We selected five popular DFR tools for testing: Autopsy [4], Recuva [7], FTK Imager [8], TestDisk [9], and Magnet AXIOM [10]. Note that Autopsy uses a set of DF tools known as The Sleuth Kit (TSK) for metadata-based recovery, so TSK is also implicitly covered by this study. These tools were chosen based on popularity and availability. We also made sure to choose a combination of free and proprietary tools, in order to address one of the research questions.

The settings we used when testing each tool are as follows. For Autopsy, we performed a standard recovery with all ingest modules disabled. For Recuva, we performed a standard recovery using the free version with default settings. For FTK Imager, we performed a standard recovery using the free version with default settings. For TestDisk, we used the "file undelete" feature under "Advanced Filesystem Utils." For Magnet AXIOM, we performed a "full scan" in AXIOM Process and exported all files accessible in "Filesystem View" in AXIOM Examine.

Results

After testing each tool, we analyzed the recovered object(s) from each test case. If the recovered file is identical to the original, obviously all expectations have been met. While this is ideal, it is often impossible to perfectly recover a file (such as when it is overwritten), so the standards do not require it. In our results, the file is only ever recovered perfectly in FAT cases 1 and 2 and NTFS cases 1-3. For all other cases, the tool is judged on each core feature individually. These judgments are summarized in Figure 10 for FAT test cases and Figure 11 for NTFS test cases.
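For reference, a simplified sketch of the comparison step is shown below, assuming the original and recovered files are available as byte strings; real judgments for core feature 3 additionally require the on-disk cluster layout.

import hashlib

def judge_recovery(original: bytes, recovered: bytes):
    """Compare a recovered object against the original deleted file.

    Perfect recovery is a hash match; otherwise core feature 4 is checked by
    requiring every recovered byte to come from the original file (or be null
    padding for omitted portions).
    """
    perfect = hashlib.sha256(original).digest() == hashlib.sha256(recovered).digest()
    cf4 = all(r == o or r == 0
              for r, o in zip(recovered, original.ljust(len(recovered), b"\x00")))
    return {"recovered_object_exists": True,   # core feature 2
            "perfect_match": perfect,
            "core_feature_4": cf4}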
For cases in which a tool does not fulfill core feature 1 (in other words, it cannot find a deleted file), we make no judgment about the remaining core features.

Recovering Fragmented Files. In cases of fragmentation in FAT file systems, we found each tool generally approaches recovery in one of two ways. Recuva and Magnet AXIOM start from the beginning of the file and recover the full length of the file even if an active file exists in that space. Autopsy, FTK, and TestDisk will start from the beginning of the file and recover the full length, but skip over any active files they encounter. Autopsy, FTK, and TestDisk recover all of file A, while Recuva and Magnet AXIOM's recovered images erroneously contain data from file B, causing them to fail core feature 4. When the space in between fragments is unallocated, all tools recover the file as though it were contiguous, pulling in some erroneous data and failing core feature 4. When the fragmentation occurs at the end of the file system, Recuva, FTK, and TestDisk recover only the first fragment, while Autopsy returns a short file of null data, and Magnet AXIOM reports an error and returns an empty file. Cases with fragmentation are trivial for NTFS file systems, as more information is available from the metadata. Unsurprisingly, no tools had problems with fragmentation cases for NTFS.

Recovering Overwritten Files. In cases where a file has been overwritten by an active file, we found most tools recover the deleted file as though it is not overwritten, failing core feature 4. A few exceptions are FTK Imager, which recovers the file up to the point where it has been overwritten, and Autopsy, which generally recovers only the first cluster of an overwritten file in FAT and behaves like the other tools for NTFS. TestDisk also exhibits the same behavior as FTK, for FAT case 4ii only. Strangely, Magnet AXIOM's recovered objects for FAT cases 4i and 4ii include the overwritten sections, but nothing after them. Other Magnet AXIOM results were similar to the other tools. When the overwriting file has also been deleted, all tools recover the first file as though it is not overwritten.

Abnormal Results. A few results stand out as unusual. These are cases for which it is difficult to infer from the recovered object what approach a tool is using. For FAT cases 4ii, 6, 8, 9, and 10, Autopsy returns a 1.5 KiB file of null data; 1.5 KiB is equivalent to 3 sectors, while a FAT cluster in our cases is defined as 4 sectors (2 KiB). TestDisk fails to identify a file for NTFS cases 4iii and 4iv only. These are the only test cases in which a tool does not fulfill core feature 1. For FAT cases 4i and 4ii, Magnet AXIOM does not recover the entire length of the deleted file, but it also does not exclude the overwritten sections. In both cases, it recovers up to the end of the overwritten sections, rather than up to the beginning as FTK does.

Answering Research Questions
RQ1. Do the popular DFR tools meet the NIST CFTT expectation? If not, which tool meets which part of the expectation?
Generally, we found that the DFR tools we tested did not consistently meet the NIST CFTT expectation. TestDisk failed to fulfill core feature 1 because it did not identify deleted files in two test cases. All tools fulfilled core feature 2, as they produced a recovered object for every deleted file they identified. Autopsy and Magnet AXIOM failed to fulfill core feature 3 because in several test cases they did not recover data they had access to. All tools failed to fulfill core feature 4 because in many cases they recovered data which was not part of the original file.

RQ2. What factors make the tools fail or succeed?
The most common factor causing tools to fail is when a deleted file has been overwritten. Core feature 4 requires that a tool only recover data which was originally a part of the deleted file. A tool's success regarding this feature is thus a measure of its restraint. The only tool to consistently fulfill core feature 4 is FTK Imager. When it detects that a file has been partially or completely overwritten by another file, it omits the overwritten sections (and everything after them in FAT). However, in cases when the overwriting file has also been deleted, even FTK fails to fulfill this core feature. It is worth noting that Autopsy does appear to react to overwritten files; for some cases of overwriting in FAT, it returns only a single cluster, presumably the starting cluster of the deleted file. However, since that cluster has been overwritten, Autopsy still fails to fulfill core feature 4 in those cases.

Another factor that causes multiple failures is simulated in FAT cases 8, 9, and 10. In FAT, a file can be written starting close to the end of the file system, without enough space to fit contiguously. In such cases, the file must be fragmented, and since it is already at the end of the file system, the second fragment will appear closer to the beginning of the file system (this is illustrated in Figure 9). This scenario could realistically occur when the file system is almost full. In these cases, no tool is able to recover the second fragment of the deleted file; however, because FAT fragmentation is unpredictable, we only require the tools to recover the first fragment. Interestingly, Autopsy and Magnet AXIOM fail to recover anything at all, with Autopsy returning a short file of null data and Magnet AXIOM returning an empty file after displaying an error message.

RQ3. Are the free DFR tools more effective compared to the enterprise-level (proprietary) tools?
As can be observed in Table 1, neither type of tool clearly outperforms the other. FTK Imager, a proprietary enterprise-level tool, passes the most test cases by a large margin. However, the other enterprise-level tool, Magnet AXIOM, passes the fewest test cases. Given the available data, we cannot reach a definite conclusion for RQ3.

Related Work
Arthur et al. published an article [13] in 2004 which analyzes several DF (digital forensics) tools, including FTK Imager. While the tools are judged based in part on file recovery capabilities, the article does not present how these judgments were reached. The article also addresses DF tools' disk imaging and hashing functionalities.
James Lyle from NIST published an article [12] in 2011 which lays out a strategy to evaluate metadata-based DFR tools. To the best of our knowledge, this is one of the first works that identified some of the challenges in setting standards for the evaluation of metadata-based DFR tools. In our understanding, NIST considered these findings [12] when setting the guidelines for metadata-based DFR tools. The NIST guidelines [3] are publicly available on the NIST CFTT portal [2], which we have used in the current work.

Recently, Loja et al. [14] analyzed a variety of DF tools, including Autopsy and FTK. They discussed a wide range of DF tools, not just DFR tools, and compared them on metrics such as price and supported features. In contrast with our paper, their work does not follow a specific standards document and takes a more general approach instead. Furthermore, B. V. Prasanthi [15] presented a general review of DF tools. In particular, Prasanthi summarizes the features of several tools but does not make any claims about the standards compliance of specific tools (in contrast with our work); that review [15] also covers a variety of DF tools besides DFR tools.

Future Work
The NIST CFTT guidelines [3] include several optional features; these features could be explored using a similar methodology. We only created test images for the FAT and NTFS file systems; our process could be expanded to other common file systems such as ext4 and HFS. NIST CFTT has a separate set of guidelines for file carving DFR tools; future work could involve creating and evaluating test cases for file carving tools.

Conclusion
We designed a set of canonical test cases to determine a metadata-based DFR tool's compliance with the NIST CFTT guidelines. We tested five popular DFR tools and evaluated their results. We presented a comparison of the tools based on the number of test cases for which each tool meets the standards. We concluded that none of the tested tools consistently fulfilled the NIST guidelines, and we explained the factors which cause them to fail. We also identified potential weaknesses in the guidelines and suggested improvements.

Figure 1. File foo.txt in a FAT file system. The directory entry of this file and the actual file content clusters (shaded) are shown. The FAT table is also shown, which determines the chain of clusters (from cluster 100 to cluster 200) of foo.txt.
Figure 2. The metadata and actual content of foo.txt are shown after the file is deleted, whereas the corresponding entries (i.e., 100-200) in the FAT table are zeroed.
Figure 4. To illustrate the NTFS file system, the MFT entry of foo.txt and the actual content-carrying clusters are shown. This file has two fragments (clusters 100-101 and 200-298).
Figure 5. A file system containing deleted files is created on the external drive. The drive's raw data is then saved as a read-only file, called a disk image or file system image. The disk image is given as input to a DFR tool, which attempts to recover the deleted files. The recovered files are then analyzed to judge the DFR tool's compliance with the NIST CFTT expectation.
Figure 6. Test Case 2. Fragmented file A is deleted.
Figure 7. Test Case 4ii. Deleted file A is partially overwritten by active file B.
Figure 11. Test results on NTFS test cases for each DFR tool. Rows represent test cases, whereas columns represent NIST core features. Blue is passing, red is failing, gray is not tested.
6,833.4
2019-08-01T00:00:00.000
[ "Computer Science" ]
A Low-Dimensional Network Model for an SIS Epidemic: Analysis of the Super Compact Pairwise Model Network-based models of epidemic spread have become increasingly popular in recent decades. Despite a rich foundation of such models, few low-dimensional systems for modeling SIS-type diseases have been proposed that manage to capture the complex dynamics induced by the network structure. We analyze one recently introduced model and derive important epidemiological quantities for the system. We derive the epidemic threshold and analyze the bifurcation that occurs, and we use asymptotic techniques to derive an approximation for the endemic equilibrium when it exists. We consider the sensitivity of this approximation to network parameters, and the implications for disease control measures are found to be in line with the results of existing studies. Introduction In the past few decades, network-based models of epidemic spread have become a central topic (Kiss et al. 2017;Pastor-Satorras et al. 2015) in epidemiology. Their ability to capture mathematically the complex structure of transmission interactions makes them an invaluable theoretical paradigm. Mathematically, a network is modeled as a graph consisting of a set of nodes that are connected by a set of links (called edges). In the context of epidemiology, typically nodes represent individuals, and edges represent interactions that can transmit the infection. Used in conjunction with compartment models, the disease natural history determines the number of possible states an individual node might be in at any point in time. When disease spread is modeled as a continuous time Markov chain, the network size and disease natural history can lead to high dimensional state spaces. For example, in a network with N nodes where individual nodes can be in m possible states, the size of the state space for the network is m N . Efforts to describe this process with a system of ordinary differential equations are similarly hampered by size-the Kolmogorov equations governing this system are exact, but prohibitively large. Thus, an important goal in network-based modeling has been to find a (relatively) low-dimensional system that accurately approximates the underlying high-dimensional system. Many approaches (Pastor-Satorras and Vespignani 2001;Pastor-Satorras et al. 2015;Miller et al. 2012; Barnard et al. 2019;Karrer and Newman 2010) in recent years have sought to introduce models with systems of a manageable size. Pairwise models (Keeling 1999;Eames and Keeling 2002;House and Keeling 2011) have been a popular and fruitful approach to this question. Derived from the Kolmogorov equations and exact in their unclosed form (Taylor et al. 2012), pairwise models consider the evolution of not just the expected number of nodes in a given state, but also pairs and triples of nodes. The dynamical variables are of the form [A] (the expected number of nodes in state A), [AB] (the expected number of pairs in state A − B), and [ABC] (the expected number of triples in state A − B − C). Higher-order groupings of nodes are also considered but rarely written, as dimension-reduction efforts often focus on approximating the expected number of triples in terms of pairs and individual nodes. Pairwise models have been successful with a variety of different network types, with models developed for networks with heterogeneous degree (Eames and Keeling 2002), weighted networks (Rattana et al. 2013), directed networks (Sharkey et al. 2006), and networks with motifs (House et al. 
2009;Keeling et al. 2016) to name a few. Moreover, pairwise models have been developed for a variety of disease natural histories, with particular focus on SIR (susceptible-infectious-recovered) and SIS (susceptible-infectious-susceptible) models. In this paper, we consider an SIS pairwise model for networks with heterogeneous degree. SIS dynamics are used to model diseases where no long-term immunity is conferred upon recovery, leading to their frequent application to sexually transmitted infections such as chlamydia or gonorrhea (Eames and Keeling 2002). Contact networks for diseases of this type frequently involve heterogeneity in the number of contacts for individuals, and thus node degree becomes an essential concept. The degree of a node in a network is the number of edges to which the node is connected, and thus the number of potential infectious contacts. In this way, heterogeneous networks can capture complex disease dynamics. An essential tool when working with such networks is the degree distribution, defined by p k which is the probability a randomly selected node has degree k. The degree distribution has played an important role in dimension reduction approximations for pairwise models. For the SIR-type diseases, accurate low-dimensional models have been derived from the pairwise family using probability generating functions (Miller et al. 2012), complete with conditions for finding the final size of the epidemic. Despite the successes of the SIR case, the dimension reduction techniques in Miller et al. (2012) do not apply to the SIS case. Instead, the development of low-dimensional models of SIS-type disease spread on networks has relied on moment closure approximations. Under the assumption of a heterogeneous network with no clustering, House and Keeling (2011) introduced an approximation reducing the system size from O(N 2 ) to O(N ), where N is the number of nodes in the network. Termed the compact pairwise model (CPW), it has shown good agreement with stochastic simulations despite its considerably smaller size. However, the number of model equations still grows as the maximum degree of the network, making its application challenging for large networks with significant degree heterogeneity. Perhaps the most successful model in reducing the number of equations of the CPW for SIS-type diseases is the super compact pairwise model (SCPW) (Simon and Kiss 2016). The system consists of only four equations, with network structure being encoded to the model through the first three moments of the degree distribution. While Simon and Kiss demonstrated excellent agreement between the CPW and the SCPW, bifurcation analysis of the model and an explicit formula for the endemic steady state remains to be done. This paper sets out on that analysis of the SCPW model. A common point of investigation among models of SIS-type diseases is the disease-free equilibrium (DFE) that loses stability as a relevant parameter passes a critical value known as the epidemic threshold Vespignani 2001, 2002;Boguñá and Pastor-Satorras 2002). The epidemic threshold serves as a dividing point between two qualitatively different types of outbreaks. Below the epidemic threshold, any outbreak will die out; above the epidemic threshold, the system converges asymptotically to a stable equilibrium where the disease remains endemic in the population. 
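Because the SCPW model encodes network structure only through the first three moments of the degree distribution, those inputs are straightforward to obtain for any degree sequence. The sketch below computes them for two configuration-model examples of the kind considered later in the paper (a bimodal network with equal numbers of degree-3 and degree-5 nodes, and a Poisson network with mean degree 10); it is illustrative only and not the code used for the analysis.

```python
"""Sketch: first three raw moments of a degree distribution, the only network
inputs the super compact pairwise (SCPW) model requires."""
import numpy as np


def degree_moments(degrees):
    k = np.asarray(degrees, dtype=float)
    return k.mean(), (k**2).mean(), (k**3).mean()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Bimodal degree sequence: 5000 nodes of degree 3 and 5000 of degree 5.
    bimodal = np.array([3] * 5000 + [5] * 5000)
    # Poisson degree sequence with mean degree 10 on 10,000 nodes.
    poisson = rng.poisson(10, size=10_000)
    for name, deg in (("bimodal", bimodal), ("Poisson", poisson)):
        k1, k2, k3 = degree_moments(deg)
        print(f"{name}: <k>={k1:.2f}, <k^2>={k2:.2f}, <k^3>={k3:.2f}")
```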
Many studies follow the "next generation matrix" approach for the basic reproduction number R 0 (van den Driessche and Watmough 2002) to characterize the epidemic threshold. We follow a more conventional bifurcation analysis to derive the epidemic threshold and offer a proof that the system undergoes a transcritical bifurcation, as one might expect. Perhaps more importantly, the SCPW's small fixed number of equations presents an excellent opportunity to investigate the endemic equilibrium for SIS models on heterogeneous networks, which has been heretofore inhibited by large system size. We present a novel asymptotic approach to approximating the endemic equilibrium, leveraging the low-dimensionality of the model. The results presented further our understanding of the SCPW model specifically and suggest potential new avenues in the challenging problem of analytically determining the nontrivial steady state of pairwise models of SIS-type diseases. The paper is structured as follows: in Sect. 2, we nondimensionalize the model and reduce the number of equations to 3 to facilitate computations. In Sect. 3, we derive the epidemic threshold and show that the system undergoes a forward transcritical bifurcation. In Sect. 4, we tackle the endemic steady state that emerges through the bifurcation. We use asymptotic methods to approximate the size of the endemic steady state under two regimes-the system near the epidemic threshold and the system far away from the epidemic threshold-and give examples of the efficacy of these approximations on prototypical networks. Finally, we examine the implications of these two approximations. In line with existing studies (Eames and Keeling 2002), we find that control measures for reducing the prevalence at the endemic equilibrium may require different tactics depending on the regime. Model Pairwise models of SIS-type diseases provide a network-based analog of the classical SIS model (Diekmann and Heesterbeek 2000).The essential characteristics of pairwise models of SIS epidemics are dynamical equations for not just the expected number of nodes in each state, but also pairs and triples of nodes. At the node level, The CPW closes the system by approximating the expected number of triples as where S 1 and S 2 are the first and second moments of the distribution of susceptible nodes; that is Simon and Kiss (2016) is given aṡ where k n is the nth moment of the degree distribution, τ is the transmission rate, and γ is the recovery rate. Here, the quantity Q serves as an approximation of (S 2 − S 1 )/S 2 1 . As well, the quantities With the goal of performing bifurcation and asymptotic analyses in mind, nondimensionalizing the SCPW model is a natural first step. To do so, we will rearrange the Eqs. (3)-(5) so that the network parameters k , k 2 , k 3 are consolidated into more workable constants. First, we rewrite Q as where As well, a natural rescaling of time is T = t/γ , which prompts the defining of the transmission-recovery rate ratio δ = τ/γ. The introduction of δ consolidates the two epidemiological parameters τ and γ into a single nondimensional parameter, so any changes to epidemiology of the disease will be captured in δ alone. With these substitutions, the system (1)-(5) becomesv where the dot notation represents the derivative with respect to the nondimensional time variable d dT . The conservation Eqs. (6) and (7) become respectively. At this point, the conservation equations can be used to reduce the system to a mere 3 equations. 
However, the elimination of different equations for different analyses will be convenient. For characterizing the bifurcation undergone by the disease-free equilibrium (DFE), it is convenient to work with variables that are 0 at the DFE. For approximating the endemic steady state using asymptotic methods, the most parsimonious equations will make the algebraic manipulation required easier. Thus, we will work with slightly different (but equivalent) characterizations of (10)-(14) in the sections that follow. Epidemic Threshold To derive the epidemic threshold, we consider the stability of the DFE in terms of the epidemiological parameter δ. We will show that as δ increases through a critical value δ c , the DFE loses stability. Typically as the DFE loses stability, an asymptotically stable endemic equilibrium emerges. The SCPW is no exception, and here we derive the epidemic threshold, with a proof that the system undergoes a transcritical bifurcation (and thus an endemic equilibrium emerges) when δ = δ c included in Appendix A. First, we use the conservation Eqs. (15) and (16) to eliminate Eqs. (10) and (13). The resulting system iṡ Though ostensibly a messier choice of equation reduction, we note that at the DFE, will be convenient moving forward. To determine the stability of the DFE, we compute the Jacobian at x = 0 : A straightforward computation shows that We can write DF as a block triangular matrix as where the dimensions A and B, respectively, are 1 × 2 and 2 × 2. The properties of determinants of block matrices tell us that the eigenvalues of DF are −1 and the eigenvalues of B, which will determine the stability of the DFE. We appeal here to the trace-determinant theorem, which tells us the eigenvalues ξ of the 2 × 2 matrix B are given by First, we observe that these eigenvalues are real, as which is clearly positive. As a consequence, for the DFE to be stable we must have Tr(B) < 0 and Det(B) > 0. The determinant can be written and is thus positive if and only if δ < 1/k. Moreover, if δ < 1/k, then Therefore, we conclude that the DFE is stable for δ < 1/k and unstable for δ > 1/k. Thus, the epidemic threshold is the critical value of the bifurcation parameter δ : Notably, this threshold value is identical to that of the CPW as shown in Kiss et al. (2017). However, it remains to be shown that a bifurcation actually does occur here, and that an asymptotically stable endemic steady state emerges. To prove this, we apply a theorem of Castillo-Chavez and Song (2004) in Appendix A. We note that both the CPW and SCPW models are approximations to the true SIS dynamics on a network, so while (25) is a good approximation of the true epidemic threshold, it may not be appropriate in some cases. For instance, (25) is greater than zero for networks with a power law degree distribution ( p k ∼ k −d ) with d > 3 in the large network limit (N → ∞). However, exact results show that the true epidemic threshold is zero in the large network limit (Chatterjee and Durrett 2009). The Endemic Equilibrium With the existence of an endemic steady state established, we turn to the question of finding an approximate analytic expression for it. In general, this is a difficult proposition with epidemic models on networks owing to the frequently high-dimensional nature of the dynamical systems. An exact closed-form expression for the endemic equilibrium of the SCPW model requires solving a system of polynomial equations in multiple variables, which we do not attempt here. 
However, with asymptotic techniques, a workable approximation can be derived for two cases of δ: near the epidemic threshold (δ ≈ δ c ), and far away from it (δ >> δ c ). We do not have a good approximation in the intermediate case. Two challenges are apparent. First, how to eliminate equations to facilitate asymptotic expansions of the equilibrium and second, the choice of small nondimensional parameter in each case. Unlike in Sect. 3, the most parsimonious characterization of (10)- (14) is desirable. So we eliminate (11) and (14) with the conservation equations. To promote the finding of a small nondimensional parameter, we rewrite the resulting system using δ = δ c · δ δ c and incorporate the constants σ = k δ c , λ = αδ c / k , μ = βδ c . With these alterations, the system becomeṡ At the endemic equilibrium,v =ẋ =ẏ = 0. We can solve (26) for v and substitute into (27) and (28). With some rearrangement of terms and a little algebra (and adding (28) to (27)), we arrive at the system of polynomial equations that determines the endemic steady state: Note that in (30), we have dropped a factor of x that corresponds to the DFE. For the endemic steady state, we are interested in knowing the prevalence when the system is at equilibrium: w * . We use the following procedure to approximate the solution. 1. Express δ c /δ in terms of a small parameter. 2. Use the Implicit Function Theorem to linearize P(x, y) = 0 as around a point (x,ỹ) that is mathematically and/or biologically justified for the given regime. 3. Expand x, y, and other relevant quantities in terms of the small parameter. 4. Substitute the expansions into Q(x, y) = 0 and obtain a regular perturbation problem and find an asymptotic solution for the equilibrium value x, which approximates x * . 5. Apply the relation w * = (δ c /δ) −1 σ x * to obtain an asymptotic series for the prevalence at the endemic equilibrium. We describe the results of this procedure for each case in the remainder of this sectionthe details of the computations are included in Appendix B. Case 1: Near the Epidemic Threshold (ı ≈ ı c ) For δ ≈ δ c , we choose η = 1 − δ c /δ as a small parameter. In terms of this small parameter, (29) and (30) become: When δ ≈ δ c , an endemic steady state has just emerged, so we can view this equilibrium as a small perturbation to the steady state x = 0, y = 1. Linearizing P(x, y) = 0 about this point gives we have Substituting into (32) and equating coefficients to 0, we find an η-order expansion of the approximate equilibrium value x * as Using the relation w To demonstrate the efficacy of this approximation, we compare the approximation (38) to the actual endemic equilibrium using bifurcation diagrams (Fig. 1). We consider two example configuration model random networks (Molloy and Reed 1995) with N = 10, 000. In Fig. 1a, a bimodal network is considered with 5000 degree 3 nodes and 5000 degree 5 nodes. In Fig. 1b, a network with a Poisson degree distribution (with average degree k = 10) is considered. As is clear in both examples, the agreement between the actual and approximate endemic equilibrium is quite good near the epidemic threshold. Interestingly, the approximate value of w * is greater than the exact value for the bimodal network and less than the exact value for the Poisson network. We suspect that this is due to network structure and higher order terms in the asymptotic expansion, which we have not computed. An analogous situation is found in the δ >> δ c case. 
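To make steps 3 and 4 of the procedure concrete, the toy computation below carries out a regular perturbation expansion with sympy on a simple polynomial equation (deliberately not the paper's P and Q): the unknown is expanded in powers of the small parameter, the ansatz is substituted, and the coefficients of each order are equated to zero.

```python
"""Toy regular-perturbation expansion (illustrative; not the SCPW equations)."""
import sympy as sp

eta, a1, a2 = sp.symbols("eta a1 a2")

# Step 3: asymptotic ansatz for the root that branches off x = 0 as eta grows.
x = a1 * eta + a2 * eta**2

# A toy polynomial equation Q(x; eta) = 0 playing the role of the equilibrium condition.
Q = sp.expand(x**2 + x - eta)

# Step 4: collect powers of eta and solve order by order.
order1 = sp.Eq(Q.coeff(eta, 1), 0)   # gives a1 - 1 = 0
order2 = sp.Eq(Q.coeff(eta, 2), 0)   # gives a1**2 + a2 = 0
sol = sp.solve([order1, order2], [a1, a2], dict=True)[0]
print(sol)  # {a1: 1, a2: -1}, i.e. x ~ eta - eta**2, matching the exact root to O(eta**2)
```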
Case 2: Far Away from the Epidemic Threshold (ı >> ı c ) For δ >> δ c , our small parameter of choice is = δ c /δ. We can rewrite (29) and (30) in terms of this parameter: When δ >> δ c , the transmission rate τ is large relative to the recovery rate γ. Thus, we expect the disease to affect much of the population, and consequently there will be very few remaining [SS] links, and therefore y ≈ 0. Solving P(φ, 0) = 0 for φ yields φ( ) = 2 − λ and slope of the linearization is then Next, we seek to expand y in terms of only. The relevant expansions for φ, ψ, and x are: To ease the writing of coefficients, we let φ α and ψ α refer to the coefficients on α for the respective series. From this, it follows that Substituting into (40), and equating the coefficients to 0, we find that we need the coefficients up to order 4 in order to find a 2 order expansion of the approximate equilibrium value of x * . The result is Finally, as w * = σ −1 x * , we arrive at an −order approximation for size of the endemic steady state as As with the δ ≈ δ c case, we compare the approximation (49) to the actual endemic equilibrium in Fig. 2 for the same networks as previously described. Again, the agreement is quite good, even for relatively small values of δ. In this case, the approximation for the endemic equilibrium also provides an approximation to the epidemic threshold. Whether this approximation is an overestimate or underestimate of the exact threshold depends on network structure. If k 2 ≥ k 2 + k , the approximation is an overestimate. On the other hand, if k 2 < k 2 + k , the approximation being an overestimate or underestimate depends on the relationship between k 3 and the other two moments. Sensitivity Analysis With any model of infectious disease, its implications in preventing or mitigating spread should be considered. For network models, some pharmaceutical and nonpharmaceutical interventions can alter the contact network structure in the effort to contain or mitigate outbreaks (Salathé and Jones 2010). For an SIS-type disease, particularly when containment is impossible, one such goal may be to decrease the size of the endemic equilibrium. To that end, we examine the sensitivity of our approximations of w * to network parameters in the SCPW model. One benefit of explicit asymptotic expressions for the endemic equilibrium is that sensitivity analyses are straightforward to implement. For a fixed δ, we have a three-dimensional parameter space. To visualize these parameter combinations, we use two-dimensional heat maps taken at slices of the third network parameter. In this case, we have decided to look at several fixed values of k 3 and draw sensitivity heat maps in the variables ( k , k 2 ). Further complicating matters is the fact that moments of a distribution are subject to many inequalities which restrict the domain of the sensitivity heat maps. Two natural restrictions to include are the results of Jensen's Inequality and the Cauchy-Schwarz Inequality, respectively: For a fixed value of k 3 , these restrictions give a wedge-shaped feasible region of ( k , k 2 ). We plot the sensitivities for k 3 = 20, 100, and 400 to display a range of possible parameter combinations. In the δ ≈ δ c case, calculating the partial derivatives is straightforward. To compute the sensitivities, we evaluate the partial derivatives at the epidemic threshold: δ = δ c . Table 1 shows the expressions for these sensitivities, and Fig. 3 shows corresponding plots. 
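A minimal sketch of the feasibility masking used for such heat maps is given below. Only the two moment inequalities quoted above are imposed: Jensen's inequality, ⟨k²⟩ ≥ ⟨k⟩², and the Cauchy-Schwarz inequality, ⟨k²⟩² ≤ ⟨k⟩⟨k³⟩. The grid ranges are illustrative.

```python
"""Sketch of the wedge-shaped feasible region in the (<k>, <k^2>) plane."""
import numpy as np


def feasible_mask(k1_grid, k2_grid, k3):
    K1, K2 = np.meshgrid(k1_grid, k2_grid, indexing="ij")
    jensen = K2 >= K1**2            # <k^2> >= <k>^2
    cauchy_schwarz = K2**2 <= K1 * k3  # <k^2>^2 <= <k><k^3>
    return K1, K2, jensen & cauchy_schwarz


if __name__ == "__main__":
    k1 = np.linspace(1, 20, 200)
    k2 = np.linspace(1, 200, 200)
    for k3 in (20, 100, 400):  # the slices of <k^3> considered in the text
        _, _, mask = feasible_mask(k1, k2, k3)
        print(f"<k^3> = {k3}: {mask.mean():.1%} of the grid is feasible")
    # A heat map would then plot the sensitivity values on this grid, masking
    # everything outside the wedge, e.g. with np.where(mask, value, np.nan).
```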
Clearly ∂w * ∂ k ≤ 0 and ∂w * ∂ k 2 ≥ 0, with more extreme values near the upper-right corner of the feasible region. For the δ >> δ c case, the partial derivatives (Table 2) all depend on a factor of 1/δ, so the choice of δ for computing sensitivities does not affect the relative magnitudes of the partial derivatives. For convenience, we select δ = 1.5. The sensitivity plots in (a) (b) Fig. 3 Sensitivities a ∂w * ∂ k and b ∂w * ∂ k 2 for the δ ≈ δ c approximation. White denotes regions of the ( k , k 2 ) plane outside of the feasible region. Sensitivities are evaluated at δ = δ c Fig. 4 show that ∂w * ∂ k ≥ 0, ∂w * ∂ k 2 ≤ 0, and ∂w * ∂ k 3 ≥ 0, with the greatest sensitivity near the curve k 2 2 = k 3 k , though the large magnitude appears to be due to the partial derivatives being undefined there. A significant observation from these sensitivities is that ∂w * ∂ k and ∂w * ∂ k 2 change signs depending on the regime considered. If the goal of an intervention is to reduce the size of the endemic equilibrium, near the epidemic threshold, this can be accomplished in principle by increasing k or decreasing k 2 , which will in effect increase δ c as well. This is intuitive, as an effort to push the system below the epidemic threshold would also decrease the endemic equilibrium for a fixed δ. However, in the δ >> δ c regime, the system is far from the epidemic threshold, and reducing the size of the endemic equilibrium can be accomplished by decreasing k or increasing k 2 . This suggests that containment and mitigation strategies that depend on altering the structure of the contact network may require different goals in terms of the moments of the degree distribution. Conclusion In this paper, we have analyzed the super compact pairwise model presented in Simon and Kiss (2016). A non-dimensional version of the model was considered, and a bifurcation analysis was performed, demonstrating that the SCPW and CPW models share an epidemic threshold. Moreover, we derived approximate formulas for the endemic equilibrium in two regimes: when the transmission/recovery ratio is near the epidemic threshold, and far away from it. While the asymptotic techniques used here are ad hoc, similar techniques may prove fruitful in other low-dimensional models of infectious disease spread on networks. However, an exact expression for the endemic equilibrium remains elusive. Before explaining the advantages of our approach, we acknowledge two limitations of our approximation. First, approximations of the endemic equilibrium for diseases between the two regimes are lacking. Second, while the examples of simulated networks show good agreement between the exact and approximate prevalence, we have not quantified the approximation error generally. As such, there may be types of networks for which our approximation of the endemic equilibrium is less accurate or inappropriate. Our approximation of the endemic equilibrium is very useful in providing a more detailed look into the interactions of the moments of the degree distribution as they relate to the size of an outbreak. This has implications for disease control measures, particularly those that work by altering the contact network structure. Our results suggest that for SIS-type diseases, strategies to contain (near the epidemic threshold) or mitigate (far away from the epidemic threshold) an outbreak may require different goals. 
In the mitigation scenario where the prevalence is high, measures might be employed that decrease the first moment k of the degree distribution. In effect, this may mean initiatives aimed at reducing the number of contacts of individuals alone. On the other hand, in the containment scenario where the prevalence is low, decreasing the second moment k 2 may be efficient. When couched in degree distribution terms this goal is hard to conceptualize, but using probability generating functions (Newman et al. 2001) one can show that k 2 is the average number of first and second neighbors of nodes in the network. Thus, measures that reduce both the contacts of individuals and their partners are effective in this scenario. This suggests the importance of contact tracing. We note that the sensitivities also suggest that increasing k 2 in the high prevalence case and increasing k in the low prevalence case may lead to a reduction in the size of the endemic equilibrium, though it is not clear why from a biological perspective. Our results complement the findings of Eames and Keeling (2002), who observed that the effectiveness of two common control measures, screening and contact tracing, depend on the prevalence at the endemic equilibrium. Screening, which targets and treats individuals, is efficient when the prevalence is high. Contact tracing, which targets and treats individuals and their partners, if efficient when the prevalence is low. Unlike this paper, Eames and Keeling implement these measures through epidemiological parameters (rather than through changing network structure). In this way, our results can be viewed as a network-structure analog for their conclusions and confirm that control measures appropriate in a network setting can be found. Further work in this area may include investigating this phenomenon with alternative models of SIS diseases on networks. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Appendix A Bifurcation and Endemic Steady State We begin with Theorem 4.1 from Castillo-Chavez and Song (2004), referring to the specific conditions that will be relevant for this analysis. Consider a system of ODEs with a parameter φ : Assume that 0 is an equilibrium for all values of φ. Assume further that D is the linearization matrix of (A.1) around the equilibrium 0 and with φ = 0, and zero is a simple eigenvalue of this matrix with all other eigenvalues having negative real parts. Assume as well that this matrix has a nonnegative right eigenvector w and left eigenvector v corresponding to the zero eigenvalue. Let F k be the kth component of f and If a < 0 and b > 0, then when φ changes from negative to positive, 0 changes its stability from stable to unstable. 
Correspondingly, a negative unstable equilibrium becomes positive and locally asymptotically stable. We apply this theorem to (17)- (19), where the equilibrium occurs at w = x = z = 0. Moreover, we define a bifurcation parameter φ = δ − δ c , so φ = 0 corresponds to δ = δ c , and ∂ ∂φ = ∂ ∂δ . For consistency with previously established notation, we will treat δ as our parameter, with φ increasing through 0 as δ increases through δ c . The Jacobian given in (21) when w = 0, x = 0, z = 0, and δ = δ c is .4) and the characteristic polynomial is given by The left and right eigenvectors (v and w, respectively) corresponding to the eigenvalue To compute a and b, it is convenient to express (A.2) and (A.3) in matrix-vector form: where H 2 and H 3 are the Hessians of F 2 and F 3 , respectively, at 0. These Hessians are As k 3 > k , it follows that a < 0. The computation for b is simpler. We note that Finally, as a < 0 and b > 0, we conclude that as δ increases through δ c , a positive, asymptotically stable equilibrium emerges, which is the endemic equilibrium. Appendix B Asymptotic Approximations of the Endemic Equilibrium The full derivations of the approximations (38) and (49) are presented in this appendix.
6,851.6
2020-12-29T00:00:00.000
[ "Mathematics", "Medicine", "Environmental Science" ]
Machine learning for prediction of in-hospital mortality in lung cancer patients admitted to intensive care unit Backgrounds The in-hospital mortality in lung cancer patients admitted to intensive care unit (ICU) is extremely high. This study intended to adopt machine learning algorithm models to predict in-hospital mortality of critically ill lung cancer for providing relative information in clinical decision-making. Methods Data were extracted from the Medical Information Mart for Intensive Care-IV (MIMIC-IV) for a training cohort and data extracted from the Medical Information Mart for eICU Collaborative Research Database (eICU-CRD) database for a validation cohort. Logistic regression, random forest, decision tree, light gradient boosting machine (LightGBM), eXtreme gradient boosting (XGBoost), and an ensemble (random forest+LightGBM+XGBoost) model were used for prediction of in-hospital mortality and important feature extraction. The AUC (area under receiver operating curve), accuracy, F1 score and recall were used to evaluate the predictive performance of each model. Shapley Additive exPlanations (SHAP) values were calculated to evaluate feature importance of each feature. Results Overall, there were 653 (24.8%) in-hospital mortality in the training cohort, and 523 (21.7%) in-hospital mortality in the validation cohort. Among the six machine learning models, the ensemble model achieved the best performance. The top 5 most influential features were the sequential organ failure assessment (SOFA) score, albumin, the oxford acute severity of illness score (OASIS) score, anion gap and bilirubin in random forest and XGBoost model. The SHAP summary plot was used to illustrate the positive or negative effects of the top 15 features attributed to the XGBoost model. Conclusion The ensemble model performed best and might be applied to forecast in-hospital mortality of critically ill lung cancer patients, and the SOFA score was the most important feature in all models. These results might offer valuable and significant reference for ICU clinicians’ decision-making in advance. Introduction Lung cancer is the third most common malignancy and is reported the leading cause of cancer death in males and the second most common cancer in females, which taking up more than one-fifth of all cancer deaths worldwide [1][2][3]. Exceed 158,000 patients died from lung cancer in the United States in 2016, which accounted for 27% of all cancer deaths [4,5], the prognosis remains poor although improvement has been made in the therapy of lung cancer, the 5-year survival rate for all stages combined is only 15% [6,7]. Many lung cancer patients require admitted to intensive care unit (ICU) and respiratory failure requiring mechanical ventilation is the major reason for lung cancer patients being admitted to the ICU [8,9]. Although progressive improvement has been made to improve the prognosis in lung cancer patients admitted to the ICUs, the mortality rate remains extremely high, the mortality rate in lung cancer patients admitted to ICU was 43% and the in-hospital mortality is 60%, and the mortality rate is higher in patients with stage IV (68%) [10]. Currently, the lack of early prediction and risk stratification for in-hospital mortality is the main challenge for ICU clinicians. 
The decision regarding which groups of lung cancer patients admitted to the ICU at high-risk and would have poor prognosis is based on a complex suite of considerations including therapeutic options and the wishes of patients and their family. These critically ill lung cancer patients usually have poor long-term survival and high financial cost. Hence, it's necessary to explore risk prediction models to distinguish those at high-risk of critically ill lung cancer patients admitted to ICU. The development of artificial intelligence has led to a significant improvement in the predictive models used for estimating the risk of mortality in cancer patients. Machine learning (ML), a new type of artificial intelligence can transform measurement results into relevant predictive models, especially cancer models, based on the rapid development of large datasets and deep learning. Recently, ML have been shown to be effective in predicting lung cancer susceptibility, recurrence, and survival of malignant tumors [11][12][13]. However, there is still limited data relating to the in-hospital mortality risk prediction models using ML methods in patients with lung cancer in the ICU setting. Therefore, this study aimed to develop six ML algorithm models including logistic regression, decision tree, random forest, light gradient boosting machine (GBM), extreme gradient boosting (XGBoost), and an ensemble model to predict the in-hospital mortality among lung cancer patients admitted to ICU so that individual prevention strategies for critically ill lung cancer patients could be proposed to help clinicians to make therapeutic decisions. Moreover, we also intended to compared the six ML models and determined the best model for in-hospital mortality prediction in lung patients admitted to the ICU. Data source This retrospective study utilized information from the eICU Collaborative Research Database (eICU-CRD) [14] and the Medical Information Mart for Intensive Care-IV (MIMIC-IV version 1.0) database [15], eICU-CRD contains data of more than 200 thousand ICU admissions in 2014 and 2015 at 208 US hospitals while MIMIC-IV includes information of more than 70,000 patients admitted to the ICUs of Beth Israel Deaconess Medical Center in Boston, MA, from 2008 to 2019. Due to the data used in this study were extracted from public databases, it was exempt from the requirement for informed consent from patients and approval of the Institutional Review Board (IRB). All procedures were performed according to the ethical standards of the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. After finishing the web-based training courses (S1 Fig) and the Protecting Human Research Participants examination, we obtained permission to extract data from the eICU-CRD and MIMIC-IV database. Cohort selection Patients with one of the following conditions were excluded: (1) less than 18-year-old at first admission to ICU; (2) repeated ICU admissions; (3) more than 80% of personal data was missing. We randomly selected MIMIC-IV database as the training cohort and eICU-CRD database as the validation cohort. A total of 2,638 patients in the MIMIC-IV database assigned into the training cohort and 2,414 patients in the eICU-CRD database assigned into the validation cohort were finally included in this study, the detailed flowchart was shown in Fig 1. 
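A hedged sketch of these exclusion filters is shown below; the table layout and column names (patient_id, age, icu_intime) are hypothetical placeholders rather than the actual MIMIC-IV or eICU-CRD schema, and the thresholds simply mirror the criteria listed above.

```python
"""Sketch of the cohort-selection filters (column names are hypothetical)."""
import pandas as pd


def select_cohort(df: pd.DataFrame) -> pd.DataFrame:
    # (1) Adults only: exclude patients younger than 18 at first ICU admission.
    df = df[df["age"] >= 18]
    # (2) Exclude repeated ICU admissions: keep the first stay per patient.
    df = df.sort_values("icu_intime").drop_duplicates("patient_id", keep="first")
    # (3) Exclude records with more than 80% of the feature values missing.
    feature_cols = [c for c in df.columns if c not in ("patient_id", "icu_intime")]
    df = df[df[feature_cols].isna().mean(axis=1) <= 0.8]
    return df
```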
Date collection and outcomes Baseline characteristics and admission information: age, gender and body mass index (BMI) were calculated as described in previous studies. Comorbidities including hypertension, diabetes, chronic kidney disease, myocardial infarction, congestive heart failure, atrial fibrillation, valvular disease, chronic obstructive pulmonary disease, stroke, hyperlipidemia and liver disease were also collected for analysis based on the recorded ICD codes in the two databases. Charlson comorbidity index (CCI) was also included. In addition, severity scores including sequential organ failure assessment (SOFA) score, the oxford acute severity of illness score (OASIS), the acute physiology score III (APSII) were collected. Acute complications during ICU including acute heart failure, acute respiratory failure, acute hepatic failure and cardiac arrest based on ICD codes, acute kidney injury based on KDIGO guideline in 48 hours [16], sepsis based on sepsis 3.0 criteria [17] were also recorded. In addition, initial vital signs and laboratory results were also measured during the first 24 hours of ICU admission. The primary outcome was in-hospital mortality. Statistical analysis For all continuous covariates, the mean values and standard deviations are reported Categorical data were expressed as frequency (percentage). The Chi-square test or Fisher's test was appropriately performed to compare the differences between groups. The baseline characteristics were reported as a training cohort and validation cohort. The comparison of baseline characteristics was performed in R software (version 4.1.0). P < 0.05 was considered statistically significant. Modeling work were done using Python 3.6.4. Construction of in-hospital mortality predictive models Logistic regression, decision tree, random forest, and two gradient boosting decision trees, including LightGBM, and XGBoost, were adopted to construct prediction models. In order to improve prediction, an ensemble model was constructed, which applied staking strategy using random forest, LightGBM and XGBoost [18]. The prediction probabilities of the three models were input into a logistic regression model to produce a final prediction. Hence, six in-hospital mortality predictive models were developed using logistic regression, decision tree, random forest, LightGBM, XGBoost and ensemble models, which each used 100 full features for each time window. Furthermore, the top 10 important features derived from random forest, lightGBM, and XGBoost model were also analysis [18]. Performance evaluation To evaluate and compare the predictive accuracy of prediction by decision tree, random forest, LightGBM, XGBoost, ensemble model and logistic regression models. Each model was evaluated according to accuracy, recall, F1 score, and AUC (area under the receiver operating characteristic) curve [19]. SHAP analysis To further analyze the positive or negative effect of the important features identified for inhospital mortality prediction and investigate the relationship between, a shapely additive explanations (SHAP) analysis was performed using Python 3.7.0. The SHAP value is the assigned predicted value of each feature of the data [20]. Baseline characteristics A total of 5,052 patients were finally included in the present study, including 2,638 patients in the training cohort extracted from the MIMIC-IV database and 2,414 patients in the validation cohort extracted from the eICU-CRD database. 
There were 653 (24.8%) in-hospital death in the training cohort, and 523 (21.7%) in-hospital death in the validation cohort. Table 1 showed the baseline characteristics both in the training cohort and in the validation cohort. Model performance Six models, logistic regression, decision tree, random forest, LightGBM, XGBoost, and ensemble models were used to predict in-hospital mortality using all the features. As can been seen in Table 2, the traditional model logistic regression exhibited the worst predictive ability, followed by decision tree, random forest, XGBoost, LightGBM. And the ensemble model showed the best predictive ability with the highest accuracy (0.89), recall (0.80), F1 score (0.82) and AUC (0.92) in training cohort. And the results in the validation cohort similar to the results in the training cohort ( Table 2). In addition, we also performed ROC analysis to further confirm the in-hospital mortality predictive ability of these six models, as shown in Fig 2A and 2B, the logistic regression model depicted the worst predictive ability, followed by decision tree, random forest, XGBoost, LightGBM. And the ensemble model showed the best predictive performance both in the training cohort and in the validation cohort. Feature importance analysis To clarify the important features that impacts on model output, the feature importance analysis was conducted. The top 15 features derived from random forest, lightGBM, and XGBoost model were shown in Fig 3. In random forest model, SOFA score was the most influential feature, followed by albumin, OASIS score, anion gap, billirubin, mechanical ventilation, acute respiratory failure, APSIII score, length of hospital, BUN, WBC, respiratory rate, vasopressors usage and RDW, and these features also had important on random forest model ( Fig 3A). For lightGBM model, anion gap played the most important role in prediction in-hospital mortality, moreover, SOFA score, OASIS score, albumin, length of hospital, billirubin, WBC, platelet, BNU, heart rate, MCH, APSIII score, creatinine and MCV also plays important role in prediction ( Fig 3B). Furthermore, in terms of XGBoost model, SOFA score had the most influence on in-hospital mortality prediction, followed by anion gap, billirubin, OASIS score, albumin, white blood cell, bicarbonate, length of hospital, acute respiratory failure, RDW, temperature, creatinine, platelet, MCHC and BMI ( Fig 3C). Moreover, the feature importance analysis derived from random forest, lightGBM, and XGBoost model were also conducted in validation cohort in S2-S4 Figs. And the results were coincided with the result of the training cohort. SHAP analysis In order to manifest an overall positive or negative impact on model output, and to analyze the similarities and differences of important characteristics of critically ill lung cancer with different severities, the SHAP summary chart was used. As shown in Fig 4, SOFA score ranked the first in importance among the top 20 features of the XGBoost model, and the higher the SOFA score, the higher probability of in-hospital mortality development, indicating that SOFA score should be observed first in in-hospital mortality prediction. Taking the XGBoost model with excellent performance for predicting dead/survival using all features as an example, combined with the SHAP analysis method, a representative dead patient and a survival patient were selected to illustrate the effect of features on the prediction ability. 
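As an illustration of the SHAP workflow just described, the sketch below produces a summary plot and a single-patient force plot for a fitted XGBoost model. The variable names (xgb_model, X_train) are placeholders for objects assumed to exist, and the snippet is a sketch of the general shap library usage rather than the exact analysis code.

```python
"""Sketch of the SHAP summary and force plots for a fitted XGBoost classifier."""
import shap


def explain(xgb_model, X_train, patient_index: int) -> None:
    explainer = shap.TreeExplainer(xgb_model)
    shap_values = explainer.shap_values(X_train)
    # Global view: summary (beeswarm) plot of the most influential features.
    shap.summary_plot(shap_values, X_train, max_display=15)
    # Local view: force plot for one patient, as in the dead/survivor examples;
    # the sum of the contributions gives the patient-level SHAP prediction value.
    shap.force_plot(explainer.expected_value, shap_values[patient_index],
                    X_train.iloc[patient_index], matplotlib=True)
```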
As shown in Fig 5, for predicting dead patients, SOFA score plays a major positive role in the prediction results, the SHAP value of final model predicted for this patient is 0.96, which is beyond than 0, thus successfully predicting the patient as an in-hospital died patient. For predicting survival patients, anion gap plays a major positive role in the prediction results, SOFA score played a major negative role in predicting outcomes, the SHAP value of final model predicted for this patient is -1.23, which is less than 0, thus successfully predicting the survival patient. Discussion In this retrospective study, we developed and validated machine learning algorithms based on clinical features based on largely public database MIMIC-IV and eICU-CRD, to predict inhospital mortality of critically ill lung cancer patients. The lightGBM model exhibited the best performance for single model prediction, whereas the RF + ensemble model an ensemble model was constructed, which applied staking strategy using random forest, LightGBM and XGBoost exhibited the greatest AUC among the models we tested. Using advanced machine learning techniques, we could identify some important clinical features associated with in-hospital mortality such as SOFA score, anion gap, albumin, OASIS score and acute respiratory failure. These results have some implications and require further consideration. ICU-related in-hospital mortality for lung cancer is ranked highest among the solid tumors and the in-hospital mortality in lung cancer patients admitted to ICU is discrepancy according to the lung cancer stage. Previous studies reported that the ICU mortality of extensive or advanced lung cancer patients over 50%. Park et al. investigated patients in Korea who had been newly diagnosed with lung cancer between 2008 and 2010 and indicated that the in-hospital mortality was 58.3% in those advanced critically ill lung cancer patients [21]. In addition, Song et al. analyzed the advanced lung cancer patients, including stage IIIB or IV non-small cell lung cancer and extensive-stage small cell lung cancer, admitted to the ICU and found before and after 2011, the in-hospital mortality was 82.4% and 65.9% [22]. In this study, our result manifested a similar result to Adam et al. [23] report a 20% in-hospital mortality rate in stage I non-small cell lung cancer. This maybe due to the vast majority of the type of the lung cancer were primary but not metastatic, so the in-hospital mortality in the present study is lower than those with advanced critically ill lung cancer patients. Unfortunately, it is difficult for clinicians to identify patients at high risk of in-hospital death in the ICU. Therefore, developing and promoting reliable prediction models is particularly urgent for identifying these patients and providing them with timely and effective interventions to improve their prognosis. Currently, given the increasing applicability and effectiveness of supervised machine learning algorithms in predictive disease modeling, the breadth of research seems to progress PLOS ONE [24,25]. The well-known supervised learning classifiers, including support vector machine, random forest, convolutional neural network, and decision tree, have been gradually applied to clinical practice [26,27]. With the help of machine learning classification, it showed that the machine learning-assisted decision-support model has more advantages than the traditional linear regression model. 
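To make the modeling choices discussed here concrete, the sketch below assembles the stacking ensemble (random forest, LightGBM and XGBoost feeding their predicted probabilities into a logistic regression meta-model) and computes the four reported metrics. Hyperparameters are illustrative, the data-handling code is omitted, and the snippet should be read as a sketch rather than the study's implementation.

```python
"""Sketch of the stacking ensemble and its evaluation (illustrative settings)."""
from lightgbm import LGBMClassifier
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, recall_score, roc_auc_score
from xgboost import XGBClassifier


def build_ensemble() -> StackingClassifier:
    # Base learners whose class probabilities feed a logistic regression meta-model.
    base = [
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("lgbm", LGBMClassifier(random_state=0)),
        ("xgb", XGBClassifier(eval_metric="logloss", random_state=0)),
    ]
    return StackingClassifier(estimators=base,
                              final_estimator=LogisticRegression(max_iter=1000),
                              stack_method="predict_proba")


def evaluate(model, X_test, y_test) -> dict:
    proba = model.predict_proba(X_test)[:, 1]
    pred = (proba >= 0.5).astype(int)
    return {"AUC": roc_auc_score(y_test, proba),
            "accuracy": accuracy_score(y_test, pred),
            "recall": recall_score(y_test, pred),
            "F1": f1_score(y_test, pred)}
```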
In this study, we used six different machine learning methods (logistic regression, decision tree, random forest, LightGBM, XGBoost, and ensemble models) to build predictive models. Four popular metrics (ROC, F1 score, accuracy and recall) were used to evaluate the performance of these algorithms. There is no doubt that the results showed that the ensemble model (which combined random forest, LightGBM and XGBoost) achieved the best performance and predictive stability, which was consistent with previous reported [18]. Apart from this, lightGBM model achieved the best predictive performance. The lightGBM modeling is a novel technique that has been widely adopted in tumors survival prediction but not been widely adopted in critical care research [28,29]. Otaguro et al. evaluated data from patients who underwent intubation for respiratory failure and received mechanical ventilation in ICU and use three learning algorithms (Random Forest, XGBoost, and LightGBM) to predict successful extubation, the result demonstrated that lightGBM exhibited the best overall performance [30]. Moreover, Yang et al. adopted nine machine learning models to predict inhospital mortality in critically ill patients with hypertension and found that among nine machine learning models, the lightGBM model had the best predictive ability [31]. We employed visualization function in SHAP to find the effect of the specific value of each variable on model output. There are some factors contributing most including SOFA score, anion gap, albumin and so on. SOFA score is an useful tool to quantify the degree of organ dysfunction or failure present on ICU admission which has been widely used for in-hospital mortality prediction in the ICU settings [32][33][34][35]. And SOFA score was reported to exhibit better performance than other score systems in predicting infection-related in-hospital mortality in ICU patients, the higher the SOFA score, the higher the risk of in-hospital mortality [36]. Anion gap (AG) is commonly used to classify acid-base disorders and to diagnose various conditions. Recently, AG has been reported to associated with in-hospital mortality in ICU patients. Hu et al. indicated that AG was related to in-hospital mortality in intensive care patients with sepsis [37]. Moreover, Chen et al. demonstrated that AG could significantly predict ICU mortality for aortic aneurysm patients [38]. Hypoalbuminemia is almost associated with worse prognosis. And low albumin level was usually related to higher risk of in-hospital mortality in ICU settings [39]. Moreover, SHAP force plots of a dead and a survival patient (Fig 5) were selected to further verify the effect of features on the prediction ability and the results further confirmed the SOFA score, anion gap, albumin, etc. features have positive or negative effect on the output of these predictive models. We should acknowledge some limitations of this research. First, the retrospective and observational nature of our study may lead to inevitable selection bias. Second, the data used in this study were based on public databases MIMIC-IV and eICU-CRD, an external validation is required to prevent overfitting. Third, the data did not include any information on the pathologic and radiologic finding of lung cancer. We could not differentiate between small cell carcinoma and non-small cell carcinoma, the algorithm model is skewed because important medical information about molecular diagnosis. 
Conclusions In the present study, we applied six machine learning methods to predict in-hospital mortality in critically ill lung cancer patients. We demonstrated that the ensemble model achieved the best overall predictive performance and that the LightGBM model performed best among single models. The SOFA score, anion gap and albumin were the most important factors affecting the output of the machine learning models in predicting in-hospital mortality of critically ill patients with lung cancer. Our study provides clinical feature interpretations that give ICU clinicians reference information for prognosis prediction. Ethics approval and consent to participate The study was ethically approved by an institution affiliated with the Massachusetts Institute of Technology (No. 27653720). All patient-related information in the database is anonymized, so informed consent was not required. This study is reported in conformity with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement and was conducted in accordance with the tenets of the Declaration of Helsinki.
Study of the Influence of NanOx Parameters NanOx is a new biophysical model that aims at predicting the biological effect of ions in the context of hadron therapy. It integrates the fully-stochastic nature of ionizing radiation both at micrometric and nanometric scales and also takes into account the production and diffusion of reactive chemical species. In order to further characterize the new framework, we discuss the meaning and relevance of most of the NanOx parameters by evaluating their influence on the linear-quadratic coefficient α and on the dose deposited to achieve 10% or 1% of cell survival, D10% or D1%, as a function of LET. We perform a theoretical study in which variations in the input parameters are propagated into the model predictions for HSG, V79 and CHO-K1 cells irradiated by monoenergetic protons and carbon ions. We conclude that, in the current version of NanOx, the modeling of a specific cell line relies on five parameters, which have to be adjusted to several experimental measurements: the average cellular nuclear radius, the linear-quadratic coefficients describing photon irradiations and the α values associated with two carbon ions of intermediate and high-LET values. This may have interesting implications toward a clinical application of the new biophysical model. Introduction Hadron therapy is becoming an attractive modality for cancer treatment, as the exponential increase in the number of dedicated facilities built over the past decades shows. The favorable depth-dose profile of protons is mostly exploited to eradicate localized tumors situated close to organs at risk, while the high biological effectiveness makes carbon ion beams more adequate than the conventional radiotherapy modalities to treat radioresistant cancers. Such an efficiency in cell killing is quantified through the RBE (relative biological effectiveness), which is a complex function of multiple parameters related to the incident particles, the irradiation conditions and the intrinsic properties of the biological system. Determining the value of RBE for every scenario is a challenging task that requires modeling to comply with the demands of a clinical environment. Several solutions have already been developed (also specifically for protons, e.g., [1][2][3]), and a few are currently used in treatment planning [4][5][6][7]. However, the latter present some shortcomings that may limit their improvement: in the modified microdosimetric kinetic model, the nanometric scale is disregarded, and the Poissonian distribution relating cell survival to the total number of lethal damages is corrected in a second instance, as it is not adapted for high-LET ions; in the local effect model, the stochastic nature of the dose deposition is not taken into account at the nanoscopic level, and the use of the amorphous track structure results in conceptual incongruities [8,9]. In an attempt to overcome such drawbacks, we introduced a new biophysical model, NanOx, which allows one to calculate cell survival probability, taking into account the fluctuations in energy deposition at multiple scales and the production and diffusion of reactive chemical species. The description of the formalism, the results obtained for three cell lines, as well as the comparison with alternative models have previously been detailed in [10][11][12]. 
NanOx is based on a solid mathematical architecture. The cell survival probability for a delivered physical dose D is written as

S = Σ_K P(K, D) ⟨S^(c_K)⟩_(c_K),   (1)

where P(K, D) is the probability to achieve K impacts with a delivered physical dose D and ⟨S^(c_K)⟩_(c_K) is the average survival probability over all the configurations c_K. NanOx attributes the process of cell death induction to the separate contributions of two types of biological events taking place at different spatial scales. The probability of cell survival, hence, may be decomposed into a factor due to the action of local lethal events (S_L) and one due to that of non-local events (S_NL):

S = S_L × S_NL.   (2)

The two terms appearing in Equation (2) are assumed to be independent and associated with two sensitive volumes that are, a priori, different since they are related to different biological events. Local Lethal Events Local lethal events are lethal events induced by physico-chemical processes in a very localized volume (<100 nm), inside which the probability that two or more particle tracks deposit a significant specific energy may be neglected at clinical doses [13]. They may correspond to the formation of complex DNA lesions (e.g. unrepaired/misrepaired DNA double-strand breaks) that may, on their own, lead to cell death. Local lethal events are modeled by the inactivation of one among N nanometric targets located in the sensitive volume. Such inactivation is described as a function of an observable characterizing the radiation quality at the local scale; for the current implementation of NanOx, we opted for the restricted specific energy, whose distribution is estimated from the LQD (LiQuiD water) Monte Carlo code [14]. This quantity is computed considering the energy transfers that may lead to events relevant for the biological effects of radiation (e.g., ionizations, excitations and attachments of electrons) and disregarding the energy that simply causes the heating of the medium (e.g., molecular vibrations, interactions between electrons and water phonons, geminate recombinations). We postulate that the responses of local targets are independent and that the probability of cell survival with respect to local lethal events for a given configuration of local targets (c_N) and radiation impacts (c_K) is equal to the probability that no local target is inactivated:

S_L^(c_N, c_K) = Π_(i=1..N) [1 − f(z^(c_i, c_k))],   (3)

where f represents the inactivation function, while z^(c_i, c_k) is the restricted specific energy deposited in the local target i with configuration c_i (i.e., position and orientation) after one radiation impact with configuration c_k. The introduction of an effective lethal function (ELF), defined by

F(z) = −N ln[1 − f(z)],   (4)

allows one to reformulate Equation (3) into

S_L^(c_K) = exp[−Σ_k ⟨F(z^(c_i, c_k))⟩_(c_i)],   (5)

provided that the number of targets (N) is large and that they are uniformly distributed. ⟨F(z^(c_i, c_k))⟩_(c_i) represents the average effective lethal function over all the configurations of local targets. Non-Local Events Non-local events are harmful, but cannot induce cell death on their own. They represent the accumulation and the interaction, at the microscopic cellular scale, of sublethal damages (e.g., DNA single-strand breaks), lesions in different cellular structures (e.g., mitochondria, nuclear and cellular membranes) and oxidative stress. In the current version of NanOx, we decided to associate non-local events with global events, which account for the production of chemical reactive species in the associated sensitive volume. Indeed, it has been proven that the latter induce a significant part of DNA sublethal damages [15,16] and are directly involved in cellular oxidative stress.
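Stepping back to the overall architecture of Equation (1), its structure can be illustrated numerically, assuming Poisson-distributed impact numbers and a single hypothetical per-impact survival probability in place of the track-structure averages.

```python
# Numerical sketch of Eq. (1): S(D) = sum_K P(K, D) <S^{c_K}>_{c_K}, assuming
# Poisson-distributed impact numbers and a hypothetical per-impact survival s1
# (in NanOx, <S^{c_K}> comes from detailed track-structure simulations).
import numpy as np

def survival(dose, mean_impacts_per_gy=5.0, s1=0.8, kmax=200):
    mu = mean_impacts_per_gy * dose            # mean number of impacts K at dose D
    k = np.arange(kmax)
    # Poisson probability P(K, D) of K impacts; log(k!) built by cumulative sum
    logp = k * np.log(mu) - mu - np.cumsum(np.log(np.maximum(k, 1)))
    p = np.exp(logp)
    # independent impacts: average survival over configurations ~ s1**K
    return np.sum(p * s1 ** k)

for d in (0.5, 1.0, 2.0, 4.0):
    # closed form exp(-mu*(1-s1)) serves as a sanity check of the summation
    print(d, "Gy ->", survival(d), "vs", np.exp(-5.0 * d * 0.2))
```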
In order to properly characterize the radiation at the global scale, we introduced two new observables, which are evaluated via the LQD, PHYCHEML and CHEM Monte Carlo codes [14,17,18]. The relative chemical effectiveness (RCE) is determined as the ratio of the restricted specific energies deposited in the sensitive volume (Z) by the reference radiation and by a given ion causing the same level of oxidative stress; for the practical implementation of the model, we considered as the reference radiation low-LET photons emitted from a 60Co source, and we constructed the RCE in terms of the chemical yields of OH radicals induced by such a reference radiation (G_r) and by the ion (G). Among all the radical species, OH molecules are characterized by the highest production rate and are considered the key reactants in DNA damage [19]. The chemical specific energy (Z̃), finally, corresponds to the specific energy rescaled by the factor RCE; for a configuration of radiation impacts c_K, it is defined as

Z̃^(c_K) = RCE · Z^(c_K).   (6)

Cell survival probability with respect to global events may be modeled via the well-known linear-quadratic (LQ) expression, but in terms of the chemical specific energy:

S_G^(c_K) = C_norm exp[−α_G Z̃^(c_K) − β_G (Z̃^(c_K))²].   (7)

In Equation (7), C_norm is a normalization factor ensuring that the average cell survival probability over all irradiation configurations leads to the experimental cell survival with respect to a reference radiation characterized by the LQ coefficients α_r and β_r. The "global parameters" appearing in the same equation have been defined as follows, in the current version of the model: α_G is set to zero to allow for a separate modeling of the survival with respect to local lethal events; β_G is instead derived from the β_r parameter. It may be shown that

β_G = β_r / η²,   (8)

where the coefficient η is the mean ratio between the restricted specific energy and the specific energy, estimated around 80% using the Monte Carlo code LQD. Core and Penumbra Approximation In order to simplify the implementation of the model and accelerate computer calculations, we exploited a feature that characterizes ion tracks: the presence of an inner core and an outer penumbra in which the energy deposition patterns are totally distinct. We designed the core as a parallelepiped with a 100-nm side centered along the ion track; this choice was motivated by the fact that, at therapeutic doses, the electrons reaching the penumbra may be represented, in a good approximation, in terms of the electrons excited and ionized by low-LET photons. Such considerations allowed confining the tossing of nanometric targets to the volume V_c describing the intersection between the track core and the sensitive volume, hence reducing the computing time required for the simulation of local lethal events. It is out of the scope of this paper to prove that Equation (5) may be reformulated as follows:

S_L^(c_K) = exp[−Σ_k (α_c^(t_k) · Z̄_c^(t_k) + α_p · Z_p^(c_k))].   (9)

Here Z_p^(c_k) is the restricted specific energy calculated in the volume V_p associated with the penumbra region after one radiation impact with configuration c_k; the corresponding coefficient α_p is set as the linear parameter describing the survival with respect to a reference radiation, α_r. On the other hand, Z̄_c^(t_k) is the average restricted specific energy calculated in V_c over a large number of particles of type t_k; the coefficient α_c^(t_k) is determined for each radiation quality via the effective lethal function.
It is worth noting that the various indices appearing in Equation (9) reflect the asymmetry in the approach used to model local lethal events arising from the core and the penumbra regions. The core and penumbra approximation also allows one to express the cell survival probability with respect to global events in terms of the restricted specific energies and the chemical yields computed in the two complementary volumes, Z_(c/p)^(c_k) and G_(c/p)^(t_k) (Equation (10)). Equation (10) is of particular interest when compared to Equation (7), since it clarifies the new concepts introduced to designate the chemical specific energy and summarizes the choices related to the current version of the model, as well. Further details on the formalism and on the choice of considering average observables for a given radiation type t_k will be presented elsewhere. Model Parameters Several parameters have been introduced to estimate the cell survival probability in the description of the NanOx formalism. The modeling of local and global events is based on the definition of two critical cellular regions; in the current version of the model, both are assumed to correspond to the nucleus and are represented by the cylindrical volume V_s. The modeling of local lethal events relies on the effective lethal function F, which is represented by an error function built via a data-driven procedure:

F(z) = (h/2) [1 + erf((z − z_0)/σ)].   (11)

The three free parameters are determined through a fit to experimental α(LET) data, which are specific to a given cell line. In particular, z_0 represents the restricted specific energy threshold above which DNA damage may induce cell death, and σ is the width of the increase; h, the height of the response attesting to the function's saturation, includes the total number of local targets (see Equation (4)). The simulation of z distributions relies on the definition of convenient biological targets, which are taken as the cylindrical volumes V_t. The modeling of global events also requires the introduction of additional parameters, which appear explicitly or implicitly in Equation (7): α_G, β_G and the time T_RCE elapsed after the radiation impact. OH radicals diffuse, interact and recombine continuously with each other, so their concentration is a function of time. Materials and Methods We carried out a theoretical study on the influence of most of the parameters listed in Section 2.4 on the prediction of radiation-induced effects. In the current implementation of NanOx, we regarded as fixed both the size of local targets and the linear coefficient appearing in the description of cell survival probability with respect to global events, α_G. The variation of such parameters will be the subject of future work. Cell Lines and Standard Values of the Parameters We considered three cell lines for which NanOx predictions have already been benchmarked against experimental data for photon, proton and carbon ion irradiations over a wide energy range, going from 0.8 to 266.4 MeV u⁻¹ [10,12]. Human tumor cells from salivary glands (HSG) were chosen since head and neck cancers match the therapeutic indications for hadron therapy treatments, while normal Chinese hamster lung fibroblast (V79) and ovary (CHO-K1) cells were selected due to the large amount of data available in the literature. The analyzed cells were fairly radioresistant, the surviving fractions obtained after 2 Gy X-ray exposure being 0.42 for HSG, 0.65 for V79 and 0.58 for CHO-K1.
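Referring back to Equation (11), the error-function ELF and a generic three-parameter fit can be sketched as follows; the synthetic data stand in for the experimental α(LET)-derived response, and a least-squares routine replaces the Migrad minimizer employed in the study.

```python
# Sketch of the error-function ELF of Eq. (11) and a toy fit of its three
# parameters (threshold z0, width sigma, height h); illustrative data only.
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def elf(z, z0, sigma, h):
    """Effective lethal function: saturates at h for z >> z0."""
    return 0.5 * h * (1.0 + erf((z - z0) / sigma))

# Toy "measured" response on a restricted-specific-energy axis (Gy)
z = np.linspace(0, 40000, 200)
y = elf(z, 12000, 4000, 9e4) + np.random.default_rng(1).normal(0, 500, z.size)

# Initial conditions as quoted in the text: z0 = 10,000 Gy, sigma = 5,000 Gy, h = 100,000
popt, _ = curve_fit(elf, z, y, p0=(10000, 5000, 1e5))
print("z0 = %.0f Gy, sigma = %.0f Gy, h = %.0f" % tuple(popt))
```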
Several experimental input data were used to set up the standard configuration of the parameters required to model each cell line. First, the sensitive volume was defined as a cylinder with a radius (R_Vs) corresponding to the average nuclear size and a length (L_Vs) that was set to 1 µm. We considered that the latter represents the lowest reasonable value that can mimic the thickness of the nuclei of flattened cells and chose it for simplicity due to the scarcity of experimental measurements. Second, measured α values corresponding to radiation qualities of different types and LET allowed constructing the effective lethal function. The best combination of the z_0, σ and h parameters was determined via the Migrad minimization algorithm [20], using z_0 = 10,000 Gy, σ = 5000 Gy and h = 100,000 as initial conditions. Finally, as highlighted by Equation (8), β_G was determined via the experimental β_r coefficient. For each cell line, the choice of the reference radiation was based on the fact that both the linear and the quadratic components of a given photon irradiation were the closest to the average values over all the available publications. In the case of CHO-K1 cells, however, it was not possible to identify a measurement that corresponded to the "mean" (α_r, β_r) pair, so we applied a correction factor to the β_r coefficient to reproduce the average value. The remaining parameters were fixed for all the cell lines according to specific considerations: the local targets were defined as cylinders with radius and length equal to 10 nm to mimic approximately the extension of a DNA DSB [13,21,22] and also take the diffusion of reactive species into account [21,23]; α_G was set to 0 to allow for a separate adjustment of local and global events; T_RCE was set to 10⁻¹¹ s to represent the production of primary chemical reactive species [24]. Table 1 summarizes the standard values of the NanOx parameters chosen to characterize HSG, V79 and CHO-K1 cells. Table 1. Standard values of the parameters used to model the three cell lines with NanOx. R_Vs (resp. L_Vs) represents the sensitive volume radius (length) and, similarly, R_Vt (resp. L_Vt) the local target radius (length). Outcomes NanOx's main outputs are survival fractions, but to simplify the analysis, it is convenient to consider three complementary quantities as a function of the ions' LET: • the LQ parameter α, which is extensively used both experimentally and theoretically to assess the effect of different radiation qualities; • D_10%, the dose deposited to achieve 10% of cell survival, which is often used in clinical contexts; • D_1%, the dose deposited to achieve 1% of cell survival, which is necessarily more sensitive to the shoulder effect than the previous endpoint. The LET values selected for the simulations correspond to the measurements available in the literature for each cell line; nonetheless, we will not show any experimental data in the sequel, since the goal of the present study is to clarify the relation between NanOx parameters and predictions. Sensitive Volume, V_s In order to assess the impact of the sensitive volume on the model outcomes, we tested three different kinds of shape variations. Keeping a constant length, we let the radius vary, taking up the standard values associated with the other cell lines (4.9, 5.9, 7.0 µm).
Similarly, for a fixed radius, we tested a significant increase of the sensitive volume thickness by setting it to the value of the radius itself; in this way, we took into account one of the highest reasonable L_Vs values in the case of flattened cells. Finally, we performed a compression or a distension along the track, keeping the same V_s; practically, we chose a value for the radius and derived the length accordingly, under the constant volume constraint (see Table 2). Effective Lethal Function We decided to study the sensitivity of the effective lethal function to a given input dataset instead of testing separate variations of the parameters z_0, σ and h, which cannot be fitted independently. Time of OH Radical Diffusion, T_RCE Reactive chemical species are produced about 10⁻¹² s after the interaction between incident ions and biological matter and rapidly either are scavenged or induce some indirect DNA damage. As a preliminary step, in order to assess how NanOx predictions are influenced by the evolution of the OH radical yield with time, we estimated cell survival curves for different instants, up to 10⁻⁷ s. Figure 2, representing HSG cells irradiated by photons and carbon ions of various energies [25], shows qualitatively that the amplitude of the shoulder increases with T_RCE and that this phenomenon is particularly visible for low-LET ions at high dose values. In each graph of Figure 2, the probability of cell survival was calculated by assigning a different value to T_RCE. To quantify more precisely the role of T_RCE in cell survival modeling, we compare in the following the results obtained with the standard value of T_RCE = 10⁻¹¹ s and with T_RCE = 10⁻⁸ s; we consider, indeed, that the latter value represents a significantly different scenario regarding the spatial distribution of OH radicals. Quadratic Coefficient for the Reference Radiation, β_r To test the impact of β_r on NanOx predictions, we considered different variations for each cell line according to the experimental dispersion found in the literature. We excluded the extreme and rare values and took into account measurements that, while distant from the average, reproduced the extension of the cloud of experimental data. Table 5 illustrates the choice of the standard and of the varied β_r for the three cell lines under study. Results and Discussion We stress again that, in order to assess the effect induced by the variation of one single parameter at a time, we considered two different outcomes calculated by NanOx as a function of LET: on the one hand, the linear coefficient α, which is widely adopted both experimentally and theoretically to quantify the biological effect of ions; on the other, D_x%, the dose deposited to achieve x% of cell survival, which is more relevant to clinicians. We set x = 1 for the parameters essentially affecting the shoulder of cell survival curves, and x = 10 for all the others. Sensitive Volume, V_s The geometry of the sensitive volume is determined from experimental data, which may not exist for a specific cell line or may vary significantly from one publication to another. It is therefore useful to survey how NanOx predictions depend on the sensitive volume radius, length and shape. Figure 3 shows the evolution of α and D_10% with LET for three different V_s radii at the same standard thickness. The simulations describe the cellular responses to carbon ions and, in the case of V79 cells, also to protons.
We observe, first of all, that the influence of the V_s radius is almost independent of the specific cell line. Indeed, when considering a common LET range (30 keV µm⁻¹ < LET < 435 keV µm⁻¹), the largest variations obtained for α and D_10% by shifting the V_s radius from 4.9 to 7.0 µm are of the same order of magnitude for HSG (26.9% and 28.5%), V79 (36.5% and 33.1%) and CHO-K1 cells (32.7% and 23.4%). Moreover, we notice that the increase of the V_s radius increases the cell killing efficiency for high-LET ions, while it does not play a role in the case of low-LET ions, whose radiation impacts are more numerous and more homogeneously distributed. To analyze the behavior of the linear coefficient, let us first recall that α is obtained from the cell survival fraction in the limit of low doses (Equation (12)). Considering that D = F · LET · c, where F represents the beam fluence and c a conversion factor equal to 0.1602 Gy keV⁻¹ µm³, we may actually take the low fluence limit into account. Since the probability that an incident particle hits the sensitive volume (P) corresponds to the product of the beam fluence and the sensitive volume cross-sectional area (P = F · V_s/L_Vs), the cell surviving fraction may be expressed as

S ≈ exp[−F (V_s/L_Vs) (1 − S_L,1)],   (13)

where S_L,1 represents the survival with respect to one impact generated only by local lethal events. Equation (13) holds thanks to the null value of α_G in the current implementation of the model and in the approximation that the β_G Z̃² term is negligible. Let us develop S_L,1 in a Taylor series, referring to the general definition for a configuration of impacts c_K (Equation (9)); for very low fluence values, all the terms of second and higher order may be neglected. At this stage, we may examine the two extreme LET ranges in order to estimate the dependence of α on the sensitive volume radius. In the low-LET region, the first-order term 1 − S_L,1 is much smaller than one and the resulting expression for α (Equation (14)) is independent of the sensitive volume; this explains why the α curves obtained with different V_s radii superimpose in the low-LET region. For very high-LET values, on the other hand, the cell survival fraction to one impact is approximately null, so one may derive

α ≈ V_s / (L_Vs · c · LET) = π R_Vs² / (c · LET).   (15)

Hence, the variation of the sensitive volume radius in the overkill region is directly propagated to α. The explanation of the effect on D_10% is not straightforward, since it also involves the parameter β, which is estimated via the cell survival probability with respect to global events (Equation (10)). The non-linear term appearing in the exponential complicates the calculations, and the low fluence approximation cannot be exploited in this case. We may, however, make some general considerations to infer the variations of the non-linear component of cell survival with respect to the V_s radius in the extreme LET regions. Photon irradiations may represent the very low-LET values; in this case, the dependence of S_G on the sensitive volume is expressed uniquely via the distribution of the restricted specific energy. However, it has been shown [31] that whenever V_s has dimensions comparable to those of cell nuclei or greater, the restricted specific energy obeys a Gaussian-like distribution peaked at a value that is independent of the target volume. We can hypothesize that the same conclusion holds for low-LET ions, whose surviving fractions with respect to global events approach the ones of photons. It is not surprising, hence, that the D_10% curves obtained by varying the V_s radius from 4.9 to 7 µm are superimposed in this case.
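Before turning to the high-LET behaviour of D_10%, a quick numerical check of the overkill-limit expression (Equation (15), as reconstructed above) illustrates how the radius variation propagates directly to α; the values are purely illustrative.

```python
# Overkill-limit check: alpha ~ pi R^2 / (c * LET), c = 0.1602 Gy keV^-1 um^3.
import math

c = 0.1602                        # Gy keV^-1 um^3
LET = 400.0                       # keV/um, a high-LET carbon ion (illustrative)
for R in (4.9, 7.0):              # sensitive-volume radii (um)
    alpha = math.pi * R**2 / (c * LET)
    print(f"R = {R} um -> alpha ~ {alpha:.2f} Gy^-1")
# the ratio (7.0/4.9)^2 ~ 2 shows the radius variation propagating to alpha
```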
On the other hand, in the limit of very high-LET values, the distribution of restricted specific energy is much more heterogeneous, and in this context the influence of the sensitive volume radius becomes manifest. Figure 4 discloses the evolution of the α and D_10% curves calculated with different values of the V_s length and shows, at first sight, that the parameter under study has a negligible impact. In this case, the sensitivity study cannot be carried out simply by considering the maximum relative difference obtained for α and D_10% in a common LET range; indeed, different input variations of L_Vs were associated with the three cell lines (600% for HSG, 390% for V79 and 490% for CHO-K1). We opted therefore to plot Δα/α (resp. ΔD_10%/D_10%) as a function of ΔL_Vs/L_Vs and inferred that the points corresponding to the three cell lines were linearly correlated. We deduced, hence, that NanOx's sensitivity to the variation of the sensitive volume length is independent of the cell line and extremely low: the maximum relative difference obtained for α (D_10%) in the common LET range is 10.0% (7.3%) when L_Vs is made to vary from 1 to 7 µm. As a conclusion, we can state that the length of the cylinder representing the sensitive volume is a "second level" parameter, which may be fixed to the standard value for all the cell lines. In order to survey the effect induced by a deformation, the radius and the length were also made to vary under the constraint of a constant sensitive volume. Figure 5 shows that NanOx predictions are affected by the compression or distension of microtargets in the direction of the track. However, it is worth noting that this is almost uniquely due to the reduction or increase of the V_s radius: the α and D_10% curves superimpose with the ones obtained with the same radius but the standard thickness of 1 µm, which were drawn for completeness, confirming the previous conclusion on L_Vs. Since these variations were performed preserving the standard configuration of the effective lethal function, the total number and the density of nanometric targets in the sensitive volume is constant. Thus, for a given ion energy, the probability of inactivating a nanotarget is constant for a fixed V_s radius, whatever the V_s length. Note that switching, for instance, from a cylindrical sensitive volume to a spherical one would change the predictions; a sphere can be interpreted as a stacking of thin cylinders, the radii of which depend on the position along the ion beam. Finally, we stress that whenever the effective lethal function is optimized according to the sensitive volume geometry, the impact of such a geometry is strongly reduced. This procedure (not described in the paper) was performed for HSG cells and showed that even the prediction accuracy of the V_s radius is not critical because of the compensation issued from the ELF re-tuning. This may have important implications in view of a clinical application of the model. Figure 3. α and D_10% predicted by NanOx for the standard configuration (red curve) and for the varied sensitive volume radii (blue and light blue curves) keeping a constant thickness, as detailed in Table 2. The full symbols correspond to the LET values for which the estimates were performed, while the lines are drawn just to guide the eye and do not represent a fit. Figure 4. α and D_10% predicted by NanOx for the standard configuration (red curve) and for the varied sensitive volume length (blue curve) keeping the standard radius, as detailed in Table 2.
The full symbols correspond to the LET values for which the estimates were performed, while the lines are drawn just to guide the eye and do not represent a fit. Figure 5. α and D_10% predicted by NanOx for the standard configuration (red curve), a different sensitive volume shape (blue curve) and radius (light blue curve), as detailed in Table 2. The full symbols correspond to the LET values for which the estimates were performed, while the lines are drawn just to guide the eye and do not represent a fit. Effective Lethal Function The determination of the effective lethal function depends on the set of experimental data used to fit its parameters. It is important to know, in particular to comply with the demands of the clinical environment, whether reducing this set of experimental data would severely degrade the quality of the predictions. Figure 6 illustrates α and D_10% obtained with NanOx by optimizing the effective lethal functions of the three cell lines according to the standard dataset and to a subset of three experimental points. We observe that, globally, the determination of an alternative set of ELF parameters in a "clinician-oriented scenario" has almost no impact on NanOx predictions for intermediate and high-LET ions. This is, of course, intrinsically related to the method used to constitute the alternative dataset, which aims at achieving an accurate description of the Bragg peak, optimizing the treatment in the tumoral region. With an input dataset constituted by the experimental α value of photons and of two carbon ions with LET in the ranges 55-75 keV µm⁻¹ and 150-200 keV µm⁻¹, the modeling of the biological effects induced by low-LET ions, on the contrary, is less satisfying. Some discrepancies are visible, in particular for V79 cells, for which we proved that the proportion of local lethal events arising from the core region is greater than for the other cell lines. Let us recall that only the description of this kind of event relies on the effective lethal function; as detailed in Section 2.3, the coefficient α_c^(t_k) is determined thanks to the ELF, while α_p is approximated by α_r. Depending on the purpose, therefore, one should prefer a certain kind of input measurements (e.g., low-LET ions to estimate the radiation-induced effects in healthy tissues). In any case, these results look promising since they testify to the robustness of NanOx and underline the feasibility of a clinical application in a realistic context of scarce experimental data. Besides, this study motivates some speculations regarding the role of the ELF parameters, even if it should be carried out with more cell lines to draw firm conclusions. Table 4 shows that relative variations as big as 50% and 98% for σ do not affect the output in a significant way; the width of the increase of the error function seems therefore to be a "second level" parameter. Changing the input dataset, instead, induces relative variations lower than 5% and 8% for z_0 and h; this suggests a larger relevance of these parameters in the modeling of cell survival with respect to local lethal events. Time of OH Radical Diffusion, T_RCE The calculation of cell survival with respect to global events relies on the estimate of the chemical specific energy, which aims at representing the oxidative stress induced by the ionizing particles. In the current version of NanOx, this stress is induced by the production of OH radicals in the sensitive volume.
Due to the process of diffusion and to the chemical reactions that take place, however, the concentration of OH radicals depends on the time interval separating the impact of the incident particles and the moment at which this concentration is considered. In order to estimate the impact of T_RCE on NanOx predictions, the evolution of α and D_1% with LET was evaluated (see Figure 7) for two distant values of time: 10⁻¹¹ s and 10⁻⁸ s. We observe that the linear component of the cell survival curve is not affected by T_RCE. This implies that the study of D_1% allows one to focus essentially on the impact of time on the non-linear part of the cell survival. When switching T_RCE from 10⁻¹¹ s to 10⁻⁸ s, indeed, the increase of β becomes manifest in the low-LET range and, as a consequence, D_1% decreases. This effect is not exhibited by high-LET ions, for which the shoulder is less important. Considering a common LET range for the three cell lines, the extreme D_1% relative differences are found to be 13.1% for HSG cells, 12.4% for CHO-K1 and 15.4% for V79 cells. These results highlight two main conclusions: the impact of T_RCE on NanOx predictions is limited and almost independent of the cell line. Such arguments motivate the idea of fixing the time to a convenient value, reducing the number of free parameters. We performed, hence, a further analysis to investigate the evolution of D_1% as a function of time. Figure 8 shows this observable from T_RCE = 10⁻¹² s to T_RCE = 10⁻⁷ s: the curves associated with the various carbon ions are almost constant with respect to time, except for a slight increase, which is observable for low-LET ions and T_RCE > 10⁻¹⁰ s. As a consequence, from now on, we consider that the time T_RCE represents a "second level" parameter, which may be fixed to the standard value of 10⁻¹¹ s for all the cell lines. This time corresponds to the primary production of radicals just after the very fast reactions involving the chemical species that are almost in contact. The recombination process is more important for high-LET ions, as the concentration of ionizations and molecular excitations is higher in that case. Figure 6. α and D_10% predicted by NanOx for the standard configuration (red curve) and optimizing the effective lethal function with only three data points (blue curve). The full symbols correspond to the LET values for which the estimates were performed, while the lines are drawn just to guide the eye and do not represent a fit. Quadratic Coefficient for the Reference Radiation, β_r It is known that the measurement of the quadratic coefficient β is characterized by a high variability. Since this experimental value for the reference radiation, β_r, affects the cell survival fractions calculated by NanOx for every radiation type, it is of great interest to examine the effect of the uncertainty related to it. Figure 9 shows α and D_1% calculated as a function of LET with the standard β_r and with a value chosen in a range that reproduces the cloud of experimental data corresponding to each cell line. By comparing the curves obtained for the two observables, it is possible to deduce that this parameter almost exclusively affects the non-linear component of cell survival and is especially important for low-LET ion irradiations.
However, to evaluate NanOx's sensitivity to its variation accurately, one should take into account the very different β_r shifts associated with the three cell lines in order to reproduce the realistic dispersion of the experimental measurements (8.5% for HSG, 85% for V79 and 12.5% for CHO-K1). We plotted the output ΔD_1%/D_1% as a function of the input Δβ_r/β_r and found that the values corresponding to the three cell lines display the same linear relation. This result underlines that, while the nominal value chosen to characterize β_r is by definition cell line specific, the impact of the variation of such a parameter is almost independent of the cell line. Furthermore, D_1% shows a weak sensitivity to the dispersion of the experimental β_r measurements. Figure 9. α and D_1% predicted by NanOx for the standard and the varied β_r, as detailed in Table 5. The full symbols correspond to the LET values for which the estimates were performed, while the lines are drawn just to guide the eye and do not represent a fit. Towards a Clinical Application The possibility of fixing the values of L_Vs and β_r for any cell line without degrading the results, and the study on the minimal dataset required to fit the effective lethal function, proved that NanOx requires only a few experimental data to predict the cell survival probability under ion irradiation. This opens promising perspectives for its potential clinical application, such as the optimization of a personalized therapy. In the future, we may hypothesize determining the average nuclear radius of tumoral cells from a biopsy sample and irradiating the latter with photons and two carbon ion beams of intermediate and high LET; this would allow one to measure α_r, β_r and the two α coefficients for the carbon ions. With these five input items, NanOx could be employed to optimize the individual patient prescription, when implemented in a treatment planning system. This promising scenario incites us to perform further studies in order to test the model predictions in conditions that are closer to the clinical context. For this reason, we plan to extend our research field to spread-out Bragg peak irradiations and to other cell lines matching the therapeutic indications for hadron therapy treatment. On the other hand, we intend to consider non-tumoral cells to evaluate normal tissue damage and early responding tissues characterized by a high α/β ratio, since for the considered cell lines the latter varies from 3.2 to 5.5 Gy. Conclusions This work provides a detailed discussion of the sensitivity of NanOx predictions to most of its parameters. Each of them is made to vary independently of the others, and the effect is assessed via the analysis of three different outcomes for HSG, V79 and CHO-K1 cells in response to proton and carbon ion irradiations. This study demonstrates that, in the current version of NanOx, the prediction of the biological effect of ions over a wide LET range may be based on only five parameters characterizing a given cell line. The cellular region where the biological damage is supposed to be achieved both at local and global scales is entirely described in terms of the sensitive volume radius, R_Vs. The modeling of local lethal events taking place at a nanometric scale relies on the three parameters defining the ELF: while σ, the width of the increase, may be significantly changed without altering the considered endpoints, z_0 and h, respectively representing the function's threshold and the amplitude of the response, seem more critical since they considerably affect the NanOx output.
Finally, it is possible to reproduce the shoulder appearing in the experimental cell survival curves for a variety of ions using a single parameter, the quadratic coefficient associated with photon irradiations, β_r. The sensitivity analysis highlights that the time of OH radical diffusion and the length of the cylindrical sensitive volume may be fixed to standard values for any of the considered cell lines. This work also sheds light on the input data required to calculate the cell survival probability under ion irradiation. In particular, the experimental evolution of the linear parameter with LET may be retrieved to a good approximation thanks to an ELF optimized with only three α values, measured for photons and for carbon ions of energy between 8-12 MeV u⁻¹ and 25-40 MeV u⁻¹. This result opens interesting perspectives from a clinical point of view, since NanOx predictions rely on only a few experimental measurements: the average cellular nuclear radius, the linear-quadratic coefficients describing photon irradiations and the α values associated with two carbon ions of intermediate and high LET.
Radical enhancement of molecular thermoelectric efficiency There is a worldwide race to find materials with high thermoelectric efficiency to convert waste heat to useful energy in consumer electronics and server farms. Here, we propose a radically new method to enhance simultaneously the electrical conductance and thermopower and suppress heat transport through ultra-thin materials formed by single radical molecules. This leads to a significant enhancement of room temperature thermoelectric efficiency. The proposed strategy utilises the formation of transport resonances due to singly occupied spin orbitals in radical molecules. This enhances the electrical conductance by a couple of orders of magnitude in molecular junctions formed by nitroxide radicals compared to the non-radical counterpart. It also increases the Seebeck coefficient to high values of 200 μV K⁻¹. Consequently, the power factor increases by more than two orders of magnitude. In addition, the asymmetry and destructive phonon interference induced by the stable organic radical side group significantly decrease the phonon thermal conductance. The enhanced power factor and suppressed thermal conductance in the nitroxide radical lead to a significant enhancement of the room temperature ZT to values of ca. 0.8. Our result confirms the great potential of stable organic radicals to form ultra-thin film thermoelectric materials with unprecedented thermoelectric efficiency. Introduction By 2030, twenty percent of the world's electricity will be used by computers and the internet, much of which is lost as waste heat. 1 This waste heat could be recovered and used to generate electricity economically, provided materials with a high thermoelectric efficiency could be identified. [2][3][4] Despite several decades of development, the state-of-the-art thermoelectric materials 5 are not sufficiently efficient to deliver a viable technology platform for energy harvesting from consumer electronics or on-chip cooling of CMOS-based devices. 2,6 The efficiency of a thermoelectric device is proportional to a dimensionless figure of merit 7,8 ZT = S²GT/κ, where S is the Seebeck coefficient, G is the electrical conductance, T is the temperature and κ = κ_el + κ_ph is the thermal conductance 9 due to electrons (κ_el) and phonons (κ_ph). Therefore low-κ, high-G and high-S materials are needed. However, this is constrained by the interdependency of G, S and κ. Consequently, the world-record ZT is about unity 5,10 at room temperature in inorganic materials 11 which are toxic (e.g. PbTe 12 ) and whose global supply is limited (e.g. Te). 13 An alternative solution is to use organic molecular-scale ultra-thin film materials. In molecular-scale junctions, electrons behave phase coherently and can mediate long-range phase-coherent tunneling even at room temperature. [14][15][16][17] This creates the possibility of engineering quantum interference (QI) in these junctions for thermoelectricity. Sharp transport resonances are mediated by QI in molecular structures. 18 This could lead to huge enhancements of G and S provided the energy levels of frontier orbitals are close to the Fermi energy (E_F) of the electrodes. This is evident from the high power factor (S²G) obtained by shifting E_F close to a molecular resonance in a C60 molecular junction using electrostatic gating. 6,19
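For orientation, the arithmetic behind the figure of merit can be sketched with numbers of the order reported later in this paper for BPyNO; the values are illustrative, not the computed curves.

```python
# Sketch of ZT = S^2 G T / (k_el + k_ph) with order-of-magnitude inputs.
G0 = 7.748e-5            # conductance quantum 2e^2/h (S)
G = 3e-3 * G0            # electrical conductance (S)
S = 200e-6               # Seebeck coefficient (V/K)
T = 300.0                # temperature (K)
k_el = 1.5e-12           # electronic thermal conductance (W/K)
k_ph = 3.0e-12           # phonon thermal conductance (W/K)

ZT = S**2 * G * T / (k_el + k_ph)
print("ZT ~", round(ZT, 2))   # of order 0.6-0.8 for these inputs
```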
However, using a third gate electrode is not desirable in a thermoelectric (TE) device because a TE device is expected to generate power, not to consume it through electrostatic gating. An alternative solution would be to design molecular structures such that the energy level of the frontier orbitals is pushed toward the Fermi energy (E_F) of the electrode. In what follows, we demonstrate that this can be achieved using stable organic radicals. 20 The singly filled orbital in radicals has a tendency to gain or donate an electron and move down in energy; therefore, its energy level has to be close to the E_F of the electrode, opening the way to structures with unprecedented thermoelectric efficiency. Fig. 1 shows the molecular structures of 2,2′-bipyridine (BPy) and of 2,2′-bipyridine functionalized with a tert-butyl nitroxide radical (BPyNO); the cores are connected to two thiobenzene anchors through acetylene linkers. BPyNO radicals have been demonstrated to be stable under ambient conditions with no decomposition for several months. 21 In order to further enhance the stability of the molecular film formed by a massively parallel array of BPyNO, suitable encapsulation similar to that applied for 2D materials 22 can be used. BPy is a conjugated molecule and its highest occupied molecular orbital (HOMO) is extended over the molecule (Fig. 2a). The highest occupied spin orbital (HOSO) for majority spins of BPyNO is localized on the NO fragment and the neighbouring phenyl ring (Fig. 2a). Spin density calculations (see Methods) reveal that this is due to the localization of majority spins on the nitroxide radical (Fig. 2b). Note that the a-HOSO (highest occupied spin orbital), a-LUSO (lowest unoccupied spin orbital), b-HOSO and b-LUSO may also be referred to as spin-up HOMO, spin-up LUMO, spin-down HOMO and spin-down LUMO, respectively. To study the transport properties of junctions formed by BPy and BPyNO between gold electrodes, we obtain material-specific mean-field Hamiltonians from the optimised geometries of the junctions using density functional theory (DFT). 23 We then combine the obtained Hamiltonians with our transport code 7,24 to calculate the transmission coefficient 7 T_e(E) for electrons traversing from the hot electrode to the cold one (Fig. 1) through BPy and BPyNO (see Computational methods). T_e(E) is combined with the Landauer formula 7 to obtain the electrical conductance. At low temperatures, the conductance G = G₀ T_e(E_F), where G₀ is the quantum of conductance and E_F is the Fermi energy of the electrode. At room temperature, the electrical conductance is obtained by thermal averaging of the transmission coefficients using the Fermi function (see Computational methods). Fig. 2c shows the transmission coefficient T_e(E) for electrons with energy E traversing through the BPy and BPyNO junctions. The red curve in Fig. 2c shows T_e for BPy. The room temperature electrical conductance of the BPy junction is ca. 4 × 10⁻⁴ G₀ at the DFT Fermi energy (E = 0 eV). The electron transport is mainly through the HOMO level because of the extended HOMO state (see Table S1 of the ESI†). Furthermore, due to charge transfer between the sulphur atoms and the gold electrodes, transport in molecular junctions formed by thiol anchors occurs through the HOMO state. 25
Since the electronic structure of BPyNO is spin polarised, we compute the total transmission from the contributions of the two spin channels. Due to quantum interference between the wave transmitted through the backbone and the wave reflected by the singly occupied orbital of the pendant group, a Fano resonance forms. This is shown by the simple tight-binding model in Fig. 3b. When a pendant orbital is attached to the one-level system (Fig. 3a), two resonances are formed due to the backbone and pendant sites. The resonances are close to the energy levels of these orbitals. The resonance due to the a-HOSO is close to E_F in BPyNO (shown also with the grey region in Fig. 2c). The BPyNO radical has a tendency to gain an electron (see Table S3 of the ESI†) or share its electron (e.g. with a hydrogen atom to form -O-H) and minimize its energy. Fig. 4 shows the spin orbitals of the BPyNO molecular core and the molecular orbitals of BPyNO with a hydrogen atom attached to the oxygen to form the non-radical counterpart of BPyNO. When the hydrogen atom is detached from the core, the HOMO level of the non-radical BPyNO splits into the a-HOSO and b-LUSO states and moves up in energy. The conductance of BPyNO is ca. 3 × 10⁻³ G₀ at the DFT Fermi energy. Due to the new resonance transport through majority spins (see the spin density plots in Fig. 2b), the conductance of BPyNO is, on average, about an order of magnitude higher than that of BPy around the DFT Fermi energy. It is even higher closer to the resonance. This new resonance not only enhances the electrical conductance significantly, but also has a large effect on the room temperature Seebeck coefficient S (Fig. 2d). Note that S is proportional to the slope of the electron transmission coefficient T_e evaluated at the Fermi energy (S ∝ ∂ln T_e(E)/∂E at E = E_F). 4,7 As a consequence of the sharp slope of the a-HOSO resonance in BPyNO close to E_F, the Seebeck coefficient increases 4 times compared to that of BPy and reaches high values of ca. +200 μV K⁻¹ in BPyNO. The sign of S is positive as a consequence of HOSO-dominated transport in BPyNO. 26 Heat is transmitted by both electrons and phonons. 3 Fig. 2e shows the thermal conductance due to electrons obtained from the T_e in Fig. 2c (see Computational methods). The heat transport due to electrons is higher in BPyNO, but its absolute value, in the range of 0.6-1.5 pW K⁻¹, is very low compared to other molecular junctions. 3,18 In order to calculate the thermal conductance due to phonons, we use material-specific ab initio calculations. We calculate the transmission coefficient 7 of phonons T_p(ω) with energy ħω traversing through BPy and BPyNO from one electrode to the other. The thermal conductance due to phonons (κ_ph) can then be calculated from T_p(ω) using a Landauer-like formula (see Computational methods). Fig. 5a shows the phonon transmission coefficient T_p(ω) for the BPy and BPyNO junctions. Clearly T_p is suppressed in BPyNO compared to that of BPy for two reasons. First, the nitroxide radical makes the molecule asymmetric. Second, it reflects phonons transmitted through the BPy backbone. Consequently, the width of the resonances decreases. 7 This is also confirmed by the simple tight-binding model in Fig. 3c.
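The side-coupled tight-binding picture invoked in Fig. 3 (for electrons in Fig. 3b and, in analogous fashion, for phonons in Fig. 3c) can be sketched as a toy model: a pendant level side-coupled to a single backbone site between wide-band leads produces both a sharp antiresonance and a narrowing of the main resonance. All parameters below are illustrative.

```python
# Toy tight-binding sketch of a Fano (anti)resonance from a pendant orbital
# (energy e_p) side-coupled to one backbone site (energy e_b) between leads.
import numpy as np

e_b, e_p, t_p = 0.0, 0.3, 0.15   # backbone level, pendant level, coupling (eV)
gamma = 0.1                      # lead-induced broadening per lead (eV)

def transmission(E):
    # the pendant orbital enters as a self-energy t_p^2/(E - e_p) on the backbone
    sigma_pendant = t_p**2 / (E - e_p + 1e-12j)
    g = 1.0 / (E - e_b - sigma_pendant + 1j * gamma)   # backbone Green's function
    # with Gamma_L = Gamma_R = gamma, T = Gamma_L * Gamma_R * |g|^2
    return gamma**2 * np.abs(g)**2

E = np.linspace(-1, 1, 2001)
Te = transmission(E)                                    # two peaks + antiresonance
print("T near backbone level:", transmission(e_b))
print("T at pendant energy (antiresonance):", transmission(e_p + 1e-6))
```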
Furthermore, some of the vibrational modes are suppressed, e.g. the modes at 6 meV, 9.5 meV and 13 meV (see the movies in the ESI† that visualize these modes for both BPy and BPyNO). These two effects combined lead to a phonon thermal conductance three times lower in BPyNO (Fig. 5b). T_p is suppressed in BPyNO to the point that the electron and phonon contributions to the thermal conductance become comparable. We obtain a total room temperature thermal conductance of ca. 4.5 pW K⁻¹ in BPyNO. The thermal conductance is dominated mainly by phonons in BPy, leading to a total room temperature thermal conductance of ca. 6 pW K⁻¹. From the obtained G, S and κ, we can now compute the full thermoelectric figure of merit 7 ZT, as shown in Fig. 5c. ZT is enhanced significantly in the nitroxide-radical-functionalized junction (blue curve in Fig. 5c) compared to that of the parent BPy (red curve in Fig. 5c). A room temperature ZT of ca. 0.8 is accessible in the BPyNO radical over a wide energy range in the vicinity of E_F. This is 160 times higher than the room temperature ZT = 0.005 of BPy at E_F. Molecules are expected to show a high Seebeck coefficient because they possess sharp transport resonance features, thanks to their well-separated discrete energy levels. However, relatively small Seebeck coefficients have been measured in molecules so far. 3 Among them, C60 shows the highest Seebeck coefficient, of about −18 μV K⁻¹ to −20 μV K⁻¹. This leads to a power factor in the range of 0.03 pW K⁻² per molecule. There is no thermal conductance measurement for C60 but, using the predicted value, 27 a low room-temperature ZT of 0.1 is expected. The challenge in exploiting quantum interference in molecules for thermoelectricity lies in controlling the alignment of the molecular levels and moving quantum-interference-induced resonances close to the Fermi level of the electrodes. Resonant transport close to the Fermi level through spin orbitals, as we propose here, is a generic feature of stable organic radicals which can be utilised to overcome this challenge and enhance the thermoelectric efficiency of molecular junctions. Massively parallel arrays of BPyNO in self-assembled monolayers can then be formed to create ultra-thin molecular films with high ZT to convert waste heat to electricity. Conclusions In this paper, we demonstrated for the first time that the thermoelectric figure of merit of junctions formed by the stable nitroxide radical is enhanced significantly, from ca. 0.005 in the parent BPy to 0.8 in the daughter BPyNO. This enhancement is a generic feature of radicals because they create resonances close to the Fermi energy of the electrode. This groundbreaking strategy can be utilized to design molecular junctions and ultra-thin film thermoelectric materials for efficient conversion of waste heat to electricity or for on-chip cooling of CMOS-based technology in consumer electronic devices. Geometry optimization The geometry of each structure studied in this paper was relaxed to a force tolerance of 10 meV Å⁻¹ using the SIESTA 23 implementation of density functional theory (DFT), with a double-ζ polarized basis set (DZP) and the Generalized Gradient Approximation (GGA) functional with the Perdew-Burke-Ernzerhof (PBE) parameterization. A real-space grid was defined with an equivalent energy cut-off of 250 Ry. To calculate the molecular orbitals and spin density of the gas-phase molecules, we employed the experimentally parameterised B3LYP functional using Gaussian g09v2 (ref. 28) with a 6-311++g basis set and tight convergence criteria.
Electron transport To calculate the electronic properties of the junctions, the underlying mean-field Hamiltonian H from the converged DFT calculation was combined with our quantum transport code, Gollum. 24 This yields the transmission coefficient T_e(E) for electrons of energy E (passing from the source to the drain) via the relationship T_e(E) = Tr[Γ_L(E) G^R(E) Γ_R(E) G^R†(E)], where Γ_L,R(E) = i(Σ_L,R(E) − Σ_L,R†(E)) describes the level broadening due to the coupling between the left (L) and right (R) electrodes and the central scattering region, Σ_L,R(E) is the retarded self-energy associated with this coupling and G^R = (ES − H − Σ_L − Σ_R)⁻¹ is the retarded Green's function, where H is the Hamiltonian and S is the overlap matrix obtained from the SIESTA implementation of DFT. The DFT+Σ approach has been employed for spectral adjustment. 7 Phonon transport Following the method described in refs. 7 and 8, a set of xyz coordinates was generated by displacing each atom from the relaxed xyz geometry in the positive and negative x, y and z directions. The forces on each atom were then calculated and used to construct the dynamical matrix D_ij = K_ij^(qq′)/M_ij, where M_ij = √(M_i M_j) is the mass matrix; the force constants K_ij^(qq′) for i ≠ j were obtained from finite differences. To satisfy momentum conservation, the K for i = j (diagonal terms) is calculated from K_ii = −Σ_(j≠i) K_ij. The phonon transmission is then obtained as T_p(ω) = Tr[Γ_L^p(ω) G_p^R(ω) Γ_R^p(ω) G_p^R†(ω)], where Γ_L,R^p(ω) = i(Σ_L,R^p(ω) − Σ_L,R^p†(ω)) describes the level broadening due to the coupling to the left (L) and right (R) electrodes, Σ_L,R^p(ω) is the retarded self-frequency associated with this coupling and G_p^R = (ω²I − D − Σ_L^p − Σ_R^p)⁻¹ is the retarded Green's function, where D and I are the dynamical and the unit matrices, respectively. The phonon thermal conductance κ_ph at temperature T is then calculated from κ_ph(T) = (2π)⁻¹ ∫₀^∞ ħω T_p(ω) (∂f_BE(ω, T)/∂T) dω, where f_BE(ω, T) = (e^(ħω/k_B T) − 1)⁻¹ is the Bose-Einstein distribution function, ħ is the reduced Planck constant and k_B is Boltzmann's constant. Thermoelectric properties Using the approach explained in ref. 7, the electrical conductance G = G₀L₀, the electronic contribution to the thermal conductance κ_el = (2/h)(L₀L₂ − L₁²)/(T L₀) and the Seebeck coefficient S = −L₁/(e T L₀) are obtained from the moments L_n = ∫ dE (E − E_F)ⁿ T_e(E) (−∂f_FD/∂E) of the transmission, where f_FD = (e^((E−E_F)/k_B T) + 1)⁻¹ is the Fermi-Dirac probability distribution function, T is the temperature, E_F is the Fermi energy, G₀ = 2e²/h is the conductance quantum, e is the electron charge and h is Planck's constant. The full thermoelectric figure of merit ZT is then calculated using ZT(E_F, T) = G(E_F, T) S(E_F, T)² T/κ(E_F, T), where G(E_F, T) is the electrical conductance, S(E_F, T) is the Seebeck coefficient, and κ(E_F, T) = κ_el(E_F, T) + κ_ph(T) is the thermal conductance due to the electrons and phonons. Data availability The input files to reproduce the simulation data can be found at https://warwick.ac.uk/nanolab.
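A self-contained sketch of this transport pipeline, with simple model transmissions standing in for the DFT/Gollum output, assembles G, S, κ_el, κ_ph and ZT along the lines of the formulas above (prefactor conventions follow the Methods as reconstructed here).

```python
# Sketch: thermoelectric coefficients from Landauer moments L_n of T_e(E)
# and the phononic Landauer integral over T_p(w); illustrative inputs only.
import numpy as np

kB_eV = 8.617e-5         # Boltzmann constant (eV/K)
h_eV = 4.1357e-15        # Planck constant (eV s)
eV = 1.602e-19           # J per eV
hbar = 1.0546e-34        # J s
kB_J = 1.3807e-23        # J/K
G0 = 7.748e-5            # conductance quantum 2e^2/h (S)
T = 300.0

# --- electrons: L_n = \int dE (E - E_F)^n T_e(E) (-df_FD/dE), with E_F = 0 ---
E = np.linspace(-1.0, 1.0, 20001)                  # eV
Te = 0.01**2 / ((E + 0.05)**2 + 0.01**2)           # Lorentzian resonance below E_F
x = E / (kB_eV * T)
mdf = 1.0 / (kB_eV * T * (np.exp(x / 2) + np.exp(-x / 2))**2)   # -df_FD/dE (1/eV)
L0, L1, L2 = [np.trapz(E**n * Te * mdf, E) for n in range(3)]

G = G0 * L0                                        # electrical conductance (S)
S = -L1 / (T * L0)                                 # eV per charge per K -> V/K
k_el = (2.0 / h_eV) * (L0 * L2 - L1**2) / (T * L0) * eV          # W/K

# --- phonons: k_ph = (1/2pi) \int hbar*w * T_p(w) * (df_BE/dT) dw ---
w = np.linspace(1e11, 4e13, 4000)                  # rad/s
Tp = np.where(w < 2.5e13, 0.05, 0.0)               # toy suppressed transmission
xp = hbar * w / (kB_J * T)
dfdT = (hbar * w / (kB_J * T**2)) * np.exp(xp) / np.expm1(xp)**2
k_ph = np.trapz(hbar * w * Tp * dfdT, w) / (2 * np.pi)           # W/K

ZT = S**2 * G * T / (k_el + k_ph)
print(f"G/G0={G/G0:.3f}  S={S*1e6:.0f} uV/K  k_el={k_el*1e12:.2f} pW/K  "
      f"k_ph={k_ph*1e12:.2f} pW/K  ZT={ZT:.2f}")
```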
Transient voltage stresses in MMC–HVDC links – impulse analysis and novel proposals for synthetic laboratory generation To evaluate and optimise insulation coordination concepts for state of the art high-voltage direct current (HVDC) transmission systems, appropriate test voltage shapes are required for laboratory imitation of occurring stresses. While especially transient voltages in the monopolar modular multilevel converter (MMC)–HVDC links show an extensive deviation from commonly applied switching impulse shapes, this study focusses on the analysis of over-voltages subsequent to direct current pole to ground faults. Additionally, novel methods for synthetic laboratory test voltage generation are proposed. Based on simulated transients occurring during fault scenarios in different symmetrical monopolar ±320 kV MMC–HVDC schemes, curve fitting, and related analysis techniques are used in order to compare simulated over-voltages with standard test voltage shapes. Moreover, these techniques further allow the identification of novel relevant impulse characteristics. Subsequently, design considerations for the generation of non-standard impulses based on single-stage circuits are derived and discussed. Those synthetically generated voltages may, later on, provide the basis for future investigations on related dielectric effects caused by those non-normative over-voltages. Introduction High-voltage direct current (HVDC) transmission is currently a vital part of the area of electric energy transmission technology. While modular multilevel converters (MMCs) have been steadily established due to their superior characteristics [1], insulation coordination aspects -especially if rated voltages increase even further -require additional investigations. Until now, applicable switching impulse (SI) test voltage ratings for MMC-HVDC monopolar configurations are not yet fully standardised compared with high-voltage alternating current [2,3] and line commutated converter-HVDC schemes [4]. Nevertheless, recent research activities and provided case studies in the field of transient system simulations show a rising interest in corresponding overvoltage shapes [5][6][7][8][9]. While the latter research activities are mostly related to cable stresses, associated impacts on air clearance calculation [10] at the converter direct current (DC) busbar are still rare. Even though calculation methods (e.g. [11]) based on normative standard impulses, such as lightning impulses and SIs, provide first steps towards insulation strategies, no general standard exists for voltage source converters [12]. Under consideration of [10], major differences between standard SI and occurring over-voltages are evident. To evaluate associated consequences for insulation coordination concepts, transient voltage stresses at the converter DC busbar in MMC-HVDC schemes need to be evaluated, analysed and compared with normative impulses in greater detail. Furthermore, besides theoretical influence analysis based on accessible test data results, novel proposals to generate nonstandard laboratory test voltage waveforms need to be derived. Transient simulation of MMC-HVDC transmission schemes To cover the variety of available transmission technologies and different corridor lengths in recent MMC-HVDC projects, different monopolar schemes are investigated in this contribution as visualised in Fig. 1. 
The following paragraphs provide a brief overview of the selected technical system parameters, on the different scenarios and load flow set points as well as on implemented system protection loops. An appropriate simulation time step is selected as slow-front transient events shall be evaluated [4]. System modelling Converter and system modelling are of significant importance to derive feasible results in transient time domain investigations. According to [13], a consensus on implementation possibilities has been defined, where the selected classification (Types 1-7) is related to the underlying study purpose. Within the context of this simulation framework, either Type 3 for single sub-module related faults or Type 4 (computationally improved) Thévenin equivalent representations [14,15] for other system faults are utilised. Further in depth analysis related to insulated-gate bipolar transistor (IGBT) and diode modelling is elaborated in [16], where appropriate conformity of non-linear and simplified (on-/off-resistance) power electronic device representations is identified. As a remark it should be noted that switching actions of power electronic devices (e.g. changing diode conduction states) require appropriate software-specific electromagnetic transients (EMT) solving techniques in addition to suitable modelling approaches. Besides, frequency dependent XLPE cable and overhead line (OHL) models are considered as short circuit faults and the following sub-module blocking events are investigated. Especially an appropriate representation of a travelling wave phenomenon is inevitable to determine transient voltage overshoots with an acceptable accuracy at the DC clamps. In addition, surge arrestor columns have a severe impact on the system overvoltage level as well as on shape and need to be modelled carefully. Due to the fact that slow-front transients are in the scope of this contribution, a non-linear V-Icurve with regard to this type of stimulus has been selected and is implemented accordingly. This leads to the related SI protective level of 512 kV (1.6 pu) @ 1 kA discharge current. The detailed V-I characteristics related to a 30/60 μs current shape are attached within Table 1, while other relevant system parameters are summarised in Table 2. Transmission scenarios Besides basic design, the selected transmission technology as well as different power transfer set points influence shape and severity of over-voltages due to inherently different system conditions. To address these differences appropriately, in total eight scenarios are further evaluated. Hereby, varying fault resistances (between 1 m Ω − 30 Ω ) during a DC pole to ground fault at the positive pole of MMC 1, as depicted in Fig. 1, are investigated. The directly affected station MMC 1 is a DC current controlled converter, MMC 2 at the opposite end acts as the DC voltage regulating station. Additionally, both converters obtain a reactive power reference of +300 MVA (cap.). An overview of all scenarios is provided in Table 3. System protection To derive feasible over-voltages, appropriate system behaviour after fault inception is required. Therefore, the implementation of protection loops at both converter stations of the MMC-HVDC link is an essential aspect to be considered. 
In contrast to ideal (non-delayed) converter blocking subsequent to arbitrary DC faults, results considering current and voltage thresholds, protection delays and measurement-quantity-triggered module blocking provide more realistic results in terms of accuracy. Especially the latter issue leads to time-shifted travelling wave phenomena affecting the transmission system. This has a significant influence on transient voltage peaks, their instant and maximum level. This contribution considers both a simple converter arm overcurrent and a DC pole to ground voltage imbalance detection concept. The scheme has been initially proposed and described in [10]. While the overcurrent threshold is related to technical boundaries of state of the art IGBTs (maximum allowed peak current 2.7 kA), the voltage imbalance criterion is related to a deviation from normal operation conditions (imbalance > 50 kV). A brief overview of this scheme and relevant parameters is shown in Fig. 2. Transient simulation results Based on the underlying simulation framework introduced in Section 2, results obtained using PSCAD™-EMTDC™ with a simulation time step 5 μs for the OHL and 25 μs for the different cable schemes are presented in the following paragraphs. Besides a discussion of main characteristics like general shape, rate of rise and remaining steady state levels, results for the different transmission scenarios are additionally compared amongst each other. Scenario OHL 150 km For an OHL transmission setting with a total length of 150 km, transient over-voltages at the healthy pole of MMC 1 are shown in While especially recent offshore MMC-HVDC links for wind farm connection are relatively short regarding the transmission distance, which is reflected in the last scenario cable 50 km shown in Fig. 4c, the visible initial voltage post-fault step nearly disappears. Generally, due to reduced wave travelling times, occurring converter-individual effects become blurred and are more challenging to be differentiated based on the obtained transients. System behaviour tends towards the initial OHL response in terms of general shape, as overall system capacitance continues to decrease for a shorter cable length. Evaluation methods Insulation coordination which either follows well-established standards known from AC [2,3] or is based on guidance for line commutated HVDC converters [4] aims for the determination of withstanding voltages (U w ). These withstand voltages are related to their corresponding representative voltages and over-voltages (U rp ) determined by system analysis. During the converter design stage focusing on transient phenomena, the determination of representative voltages and over-voltages is inevitably linked with fault scenario analysis, as shown in Section 3. To evaluate and compare those results, derived and applied methods which allow a simulation data reduction (SDR) followed by different overvoltage approximations concepts and related evaluations are presented in the following. Simulation data reduction In a first step, maximum occurring over-voltages based on an extensive set of simulations for each scenario need to be determined. Therefore, data is condensed into one worst-case voltage-time curve u SDR t for t flt ≤ t ≤ t sim, max . This voltage time curve consists of the maximum absolute voltage per time step and scenario. 
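As an illustration of this data reduction step, the short sketch below condenses a set of simulated voltage traces into the worst-case curve u_SDR(t) by taking the maximum absolute voltage per time step inside the window t_flt ≤ t ≤ t_sim,max; the array shapes and the synthetic example traces are assumptions standing in for the PSCAD-EMTDC exports.

import numpy as np

def simulation_data_reduction(runs, t, t_flt, t_sim_max):
    # runs: (n_runs, n_samples) voltages of all runs belonging to one scenario
    # t:    (n_samples,) common time axis in seconds
    runs = np.asarray(runs, dtype=float)
    mask = (t >= t_flt) & (t <= t_sim_max)
    u_sdr = np.max(np.abs(runs[:, mask]), axis=0)    # maximum absolute voltage per time step
    return t[mask], u_sdr

# Synthetic example traces (stand-ins for simulated pole-to-ground fault responses)
t = np.arange(1.40, 1.85, 25e-6)                     # 25 us time step, as used for the cable schemes
runs = [320e3 + 150e3 * np.exp(-(t - 1.45) / 0.05) * (t >= 1.45),
        320e3 + 170e3 * np.exp(-(t - 1.45) / 0.04) * (t >= 1.45)]
t_red, u_sdr = simulation_data_reduction(runs, t, t_flt=1.45, t_sim_max=1.84)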
As a major difference compared with [10], different power flow configurations (sub-scenarios a and b) are not considered separately, aiming in the derivation of a more generic voltage time curve in dependence on transmission technology (IDs: O-150, C-150, C-100, C-50). Derived methods for over-voltage approximation Influences of SI voltages superimposed on a DC pre-stress are investigated in [17,18]. Here it is concluded that for practical external air insulation influences of DC pre-stress may be disregarded, if instead impulses having the combined amplitude U SI + U DC are considered. As concluded in [10], voltage shapes differ significantly from any normative SI, therefore a deeper analysis of impulse shapes is required. Estimation of SI amplitudes (SI estimation): Voltage time behaviour of normative SI is described using the solution for the differential equation of single-stage equivalent circuits for impulse voltage generation [19] where U A instead of (U 0 /K) ⋅ (1/ α 2 − α 1 ) is used. For normative SI 1/α 1 yields to 1/α 1 = 3155.0 μs and 1/α 2 = 62.48 μs, respectively. Instead of using simplified peak amplitudes only, (1) allows the determination of U A using least-squares solution techniques for non-linear curve-fitting problems. Generally, Levenberg-Marquardt algorithm may be chosen, whereas solution techniques using the trust-region-reflection (TRR) method also achieve similar results. For this purpose, the TRR method is chosen. Double exponential impulse (DEI) estimation: With the objective of improving the approximation by a DEI on simulated data, the degree of freedom in (1) is increased allowing the approximation of U A , α 1 and α 2 . Resulting exponential impulses u DEI contain fault related transients and steady state DC voltage prior to the fault. Superimposed DEI estimation: Known from HVDC cable tests [20] and motivated by previous considerations presented in [10], superimposed testing is considered as a promising approach for laboratory imitation of obtained over-voltages and for identification of relevant overvoltage impulse parameters. Therefore, DC steady-state operational voltage prior to the fault is subtracted and DEI estimation is carried out in order to obtain u SDEI . This approximation is especially meaningful as it allows the determination and discussion of relevant circuit parameters for a laboratory imitation of simulated over-voltages utilising superimposed double exponential impulses (SDEIs). Derived methods for evaluation of overvoltage approximation To evaluate the goodness of derived overvoltage approximations and to highlight major differences between considered scenarios, different criteria are used. For an assessment of the voltage curve approximation over an evaluation time window t flt ≤ t ≤ t eval the method uses the basic idea of voltage-time areas. The overall peak voltage approximation is rated using parameter q fit, max , whereas parameter q fit, A is used to quantify overall voltage approximation In the case of SI estimation u i = u SI t , in the case of DEI estimation u i = u DEI t and in the case of SDEI estimation u i = u SDEI t + U DC t flt are considered. Besides this, amplitude comparison uses q fit, max introduced as focusing on obtained maximum voltages with the same underlying restrictions as for (2). Derived methods for quantification of transmission technology influences The quantification of influences related to chosen transmission technologies is carried out based on the evaluation of suitable voltage time approximations. 
For this aim, parameters such as time to peak t p and time to half t 50 ratios are used. Parameter q p allows time to peak, q 50 time to half and factors q A, ∞ and q A, eval voltage time curve area comparison. Quantities follow each using suitable overvoltage approximations for u i, e t , t p, e , t 50, e as the enumerator and for u i, d t , t p, d , t 50, d as the denominator. Time parameters for time to half values are obtained using the numerical solution of (1). Over-voltage analysis This section is separated into four parts. First, obtained results for overvoltage approximation using different fitting horizons are shown and associated impacts are discussed. This results in the selection of a suitable evaluation time window for further analysis. Second, the evaluation of the voltage curve approximation is presented. Third, differences in chosen transmission technologies are investigated. Last, key results for overvoltage approximation and analysis are summarised. Over voltage approximation Considered curve fitting horizons are closely linked to potential post fault measures within the system. Initially, as previously shown in [10], subsequent to t flt = 1.45 s no further overvoltage reduction measures till t sim, max = 1.84 s are assumed. In contrast, assuming feasible AC circuit breaker opening delays prior to related DC de-energisation measures yield reduced overvoltage durations and therefore to a shorter curve fitting horizon. The reduced horizon is, in this case, limited to t sim, red = t flt + 40 ms, corresponding to two AC grid cycles after the fault. For the purpose of identifying influences related to additional delays, e.g. due to fault detection or AC breaker restrikes, additional cases are considered using t sim, red, 60 = t flt + 60 ms and t sim, red, 100 = t flt + 100 ms. Related impulse parameters are shown in Tables 4 and 5. Results of overvoltage approximation for 150 km OHL and 150 km cable transmission based on SDR voltage time curves using the reduced curve fitting horizon t flt ≤ t ≤ t sim, red = t flt + 40 ms are presented in Fig. 5. From the first impression, results appear similar to those presented in [10] using the increased curve fitting horizon (t flt ≤ t ≤ t sim, max = t flt + 390 ms) but show greater deviations if impulse parameters shown in Tables 4 and 5 are taken into account. In the case of cable transmission, time to half values decrease and impulse amplitudes increase when the chosen fitting horizon is shortened. Furthermore, a slight reduction of time to peak values in case of SDEI is observed, whereas a reduction in horizon length leads to a slight increase of time to peak values for DEI impulses. Considering OHL transmission (O-150) sporadic discrepancies on those observations arise which are related to the chosen curve fitting methods in the least square sense. Nevertheless, it is obvious, if no countermeasures are considered, time to half values are tremendously long. Especially in the case of cable transmission technology, this is leading to values being up to 18 times larger for DEI and SDEI estimation. Whereas, if a reduced fitting horizon of t flt ≤ t ≤ t sim, red is used, time to half values are reduced from several seconds to milliseconds. Therefore, it is considered more reasonable to focus on obtained curve parameters taking into account the reduced curve fitting horizons related to overvoltage reduction measures. Time to peak values is less affected by post fault behaviour. 
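A compact sketch of the estimation steps used above is given below: the double exponential u(t) = U_A (e^(−α1 t) − e^(−α2 t)) is fitted to the worst-case curve with SciPy's trust-region-reflective least squares, with only U_A free for the SI estimation (1/α1 = 3155.0 μs, 1/α2 = 62.48 μs) and all three parameters free for the DEI/SDEI estimation. The peak and voltage-time-area ratios at the end are written as simple ratios of the approximation to the simulated curve, which is an assumption about the exact definitions behind q_fit,max and q_fit,A.

import numpy as np
from scipy.optimize import curve_fit

def dei(t, UA, a1, a2):
    # double exponential impulse, t counted from the fault instant t_flt
    return UA * (np.exp(-a1 * t) - np.exp(-a2 * t))

def fit_si(t, u):
    # SI estimation: normative exponents fixed, only the amplitude UA is fitted
    a1, a2 = 1.0 / 3155.0e-6, 1.0 / 62.48e-6
    popt, _ = curve_fit(lambda tt, UA: dei(tt, UA, a1, a2), t, u, p0=[np.max(u)], method='trf')
    return popt[0], a1, a2

def fit_dei(t, u, p0=(2e5, 1e2, 1e4)):
    # DEI/SDEI estimation: UA, alpha1 and alpha2 are all free (TRR least squares)
    popt, _ = curve_fit(dei, t, u, p0=p0, method='trf', bounds=(0.0, np.inf))
    return popt

def q_fit(t, u_fit, u_sim):
    # assumed evaluation metrics: peak ratio and voltage-time-area ratio
    return np.max(u_fit) / np.max(u_sim), np.trapz(u_fit, t) / np.trapz(u_sim, t)

# SDEI workflow: subtract the pre-fault DC level, fit, then add it back for the evaluation
# UA, a1, a2 = fit_dei(t_rel, u_sdr - U_DC)
# q_max, q_area = q_fit(t_rel, dei(t_rel, UA, a1, a2) + U_DC, u_sdr)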
Furthermore, it is evident for SDEI impulses that time to peak decreases with a reduction of cable length, whereas the shortest time to peak is found for OHL. Besides this, SDEI impulses clearly show that time to half values is minor affected by cable length, but A quantification of the voltage curve approximation is carried out in the next section. Evaluation of over-voltage approximation To evaluate the goodness of the overvoltage approximation, methods introduced in Section 4.3 are applied leading to results shown in Table 6. The evaluation window boundary is equalised to t eval = t flt + 40 ms in order to guarantee a consistent investigation of all voltage curve approximations independent of the chosen curve fitting horizon. It is found that voltage time areas q Fit, A are for SI within the range of about q fit, A = 16.55%. Therefore, voltage time areas are significantly smaller in the case of SI approximation compared with simulated over-voltages. Furthermore, occurring SI peak voltages q fit, max = 167.7% are leading to peak stresses being roughly 1.7 times higher compared with obtained simulation results. Approximations of q fit, A and q fit, max are found unaffected for SI approximation if either extended or reduced curve fitting horizon is chosen. This is related to the aspect that only parameter U A is determined and the overall duration of normative SI compared with chosen curve fitting horizon and evaluation window is rather short, causing minor effects on related results. Changes are observed for DEI and SDEI approximation methods as the degree of freedom for the underlying optimisations is significantly increased. Voltage curve approximation is utilising curve fitting methods which are carried out in the least square sense. Therefore, from a mathematical perspective, it is a causal consequence that those overvoltage approximations using equal curve fitting horizon and evaluation window will obtain superior results in terms of voltage time areas q fit, A , whereas, this is not necessarily valid for evaluation of peak voltage approximation q fit, max . These findings are noticeable if q fit, A and q fit, max for DEI and SDEI impulses are considered. As the value chosen for t eval is identical with t sim, red , approximations using the reduced curve fitting horizon are leading to a perfect reconstruction of the voltage time area. Contrary, if the extended curve fitting horizon is considered all evaluation parameters are smaller, as discussed above. Evaluation value q fit, max indicates that representation of simulated over-voltages is less accurate if DEI approximation is chosen. Good results are obtained using SDEI approximation utilising the reduced curve fitting horizon (t flt ≤ t ≤ t sim, red ). Therefore, it can be concluded that an increased curve fitting horizon will lead to longer time to half values but in a less accurate representation of peak voltages. Besides this, for the determination of relevant impulse characteristics of over-voltages in MMC-HVDC links, suitable evaluation windows, and curve fitting horizons are required. As already mentioned in Section 5.1, it is considered more reasonable assuming suitable post fault measures leading to reduced overvoltage durations and related curve fitting horizons. With the intention of analysing and imitating over-voltages during DC pole to ground faults, SDEI impulses are capable of representing the voltage time area. 
However, for an accurate representation of peak insulation stresses the peak voltage approximation (parameter q Fit, max in Table 6) demands for additional safety margins. This especially gains importance if SDEI impulses are considered as a test voltage waveform, or are used in future investigations related to associated dielectric effects. Quantification of transmission technology influences Based on derived voltage time approximations it is now possible to address aspects caused by different transmission technologies. Therefore, parameters introduced in Section 4.4 are used in conjunction with overvoltage approximations based on SDEI. In this case, t p , t 50 , u i, e and u i, d used in (4)-(6) refer to the superimposed impulse voltages, which are applied to constant DC voltage U DC t flt . For determination of q A, eval based on (7), impulse data u i, e and u i, d are set as u i = u SDEI t + U DC t flt . The overall transmission technology comparison matrix, focussing on the reduced curve fitting horizon and SDEI approximation, is shown in Table 7. Additionally, Fig. 6 summarises SDEI voltages for all four scenarios. It is found that voltage time areas for t flt ≤ t ≤ t eval are nearly unaffected by chosen transmission technology as parameter q A, eval ≃ 1. If overall voltage time area is taken into account, voltage time areas are less affected by cable length but by transmission technology. Parameter q A, ∞ indicates that voltage stresses associated with voltage time areas are reduced in the case of OHL to ∼73-74% compared with cable transmission. Similar observations are enabled focusing on time to half. The parameter q 50 is nearly independent of cable length but tremendously affected by transmission technology. Time to half is in the case of a 150 km cable 1.54 times larger than for a 150 km long OHL. Peak voltages Û DC + I , see Table 5 (values in bold), are found within the same range with the highest values occurring for OHL systems. This value is 4.4% larger than for the shortest cable transmission (50 km). Considering only peak voltages Û I of the superimposed impulses, this effect is slightly more severe as values occurring for OHL systems are 11.1% larger. In the case of time to peak, indicator q p allows the identification of cable length and transmission technology influences. OHL has the shortest t p which is compared with a C-150 ∼31 times larger. Besides this, a reduced cable length is always accompanied by a reduced time to peak. Indicative values for extended curve fitting horizon and SDEI can be found in [10] and for the sake of closed-form representation in Table 8. Deviations are mainly due to the chosen SDR method, which combines both power flow configurations. Summary of results Based on yet presented results, several key aspects are found. Laboratory test voltage generation Under consideration of presented results in Section 5, it is considered most promising to use superimposed impulses for a synthetical approximation of simulated over-voltages within a high voltage laboratory. This entitles an evaluation of the influences on discharge and material behaviour and their related causes on the dielectric strength in the case of non-standard impulses. To discuss related influences and challenges for the design of test circuits indicative considerations based on single-stage equivalent circuits are derived. A description of considered design rules will be given in Section 6.1. 
In the subsequent section, aspects related to the realisation of superimposed test setups are presented and associated consequences are discussed. A comparison between the voltage approximations derived in Section 5 and the voltage time curves obtained using circuit simulation is presented in Section 6.3. Consideration and design rules for single-stage equivalent circuits Focusing on impulse voltage generation, circuit parameters as shown in Fig. 7 may be obtained following [19,21], leading to the simplified relations (8)-(11). If Marx generators are used, their related single-stage equivalent circuit needs to be determined in the first place. It is found that the parameters α1 and α2 identified during voltage approximation are vital for solving the equations above and are therefore presented in Section 6.2.1. Besides this, values for C1 and C2 are required. A more accurate efficiency calculation is achieved if (11) is used instead of (10). Test circuit design is usually carried out under various limitations, such as available laboratory equipment, space considerations and relevant electrical parameters of the device under test (DUT). Due to this, circuit design will be discussed within this contribution in a more general manner. For a realistic circuit parameter estimation, a capacitance of 750 nF with a maximum voltage of 100 kV per stage of the Marx generator is assumed. Besides this, the degree of simplified efficiency following (10) is set to a minimum of η = 85%. Superposition of an impulse voltage on a DC voltage This section describes the necessities for impulse voltage generation. It is followed by additional considerations which need to be taken into account for superimposed tests. Impulse voltage generation: The highest amplitude among the superimposed voltage approximations, in combination with the claimed degree of simplified efficiency, demands at least three stages of the Marx generator. This results in a maximum capacitance of C1 = 250 nF and leads to C2 = 44.2 nF. Table 9 lists the overall curve parameters and their related circuit parameters. To guarantee the validity of the equations above, the assumption R1 ≫ R2 ≫ R3 is used. Large values for R1 are required in order to minimise any further charge process of C1 and C2 during the tail of the impulse. Instead of using large values for R1, it may also be technically feasible to switch off the AC supply which feeds the diode rectifier prior to impulse triggering, as depicted in Fig. 8. Superposition of impulse voltage on DC voltage: To separate DC voltage generation from impulse voltage generation and vice versa, additional components such as blocking capacitors and protection resistors are required. These components and additional measuring devices need to be considered in circuit design as well. An overview of the overall circuit for synthetic overvoltage imitation is provided in Fig. 8. Following [22], values are set for the blocking capacitor C4 = 1000 pF and for impulse measurement using a capacitive voltage divider C3 = 500 pF. These additional components affect the circuit behaviour. Especially the blocking capacitor in combination with the DUT will act as a capacitive voltage divider limiting the maximum peak voltage. Assuming that this voltage divider reduces the single-stage circuit efficiency by a further factor of η_VD = 85%, this limits the maximum capacitance of the DUT to C5 = 176 pF.
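The stage count and capacitance values quoted above can be reproduced with a few lines, shown below under the assumption that the simplified efficiency of the single-stage equivalent circuit behaves like η ≈ C1/(C1 + C2); this assumption is not the paper's relations (8)-(11), but it returns the quoted C1 = 250 nF and a maximum load capacitance close to the quoted C2 = 44.2 nF.

def marx_single_stage(c_stage=750e-9, v_stage=100e3, n_stages=3, eta_min=0.85):
    # Single-stage equivalent of an n-stage Marx generator (sketch, assumed efficiency model)
    c1 = c_stage / n_stages                  # erected (series) generator capacitance
    v_max = n_stages * v_stage               # maximum charging voltage of the equivalent source
    c2_max = c1 * (1.0 - eta_min) / eta_min  # largest impulse/load capacitance keeping eta >= eta_min
    return c1, v_max, c2_max

c1, v_max, c2 = marx_single_stage()
print(f"C1 = {c1*1e9:.0f} nF, U_max = {v_max/1e3:.0f} kV, C2 <= {c2*1e9:.1f} nF")
# close to the C1 = 250 nF and C2 = 44.2 nF quoted in the text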
To keep the efficiency of the Marx generator and its calculated parameters, as shown in Table 9, the value for C2 needs to be corrected to a value C2* that accounts for the additional capacitive loading. The overall efficiency for the peak amplitude at the DUT may, in this case, be approximated using η_ges = η ⋅ η_VD = 72.3%. To obtain more accurate values for the efficiency and consequently for the relevant impulse capacitor loading voltage U_DC,L, the calculation follows (11) and takes the efficiency of the additional capacitive voltage divider into account, as shown in Table 10. To obtain circuit parameters for a simulation of the DC voltage generation, similar assumptions as chosen for the Marx circuit are used, leading to C6 = 187.5 nF. Based on [22], for measuring and protection devices values of R4 = 2.5 MΩ, R5 = 1100 MΩ and R6 = 500 kΩ may be chosen. If these parameters are considered for the depicted circuit design, retroactive effects on the DC generation cannot be excluded. In this case, the resulting impulse voltage at the DUT will be heavily affected by the resistive voltage divider formed by R4 and R5. Bearing the information from Table 9 in mind, the ratio (U_DC(t_flt)/Û_I) ≃ 1.85, …, 2.02 is found. Therefore, the same ratio has to be chosen for R5/R4, resulting in a realisation using values of R5 = 1100 MΩ and R4 = 550 MΩ. If instead R4 = 2.5 MΩ and R5 = 5 MΩ are chosen, an impact on time to half values will be observed, as an additional and non-negligible discharge path for C5 has to be considered. Instead of taking the ratio of the resistive voltage divider into account, whilst avoiding a further de-energisation of C5, a diode stack behind R4 provides a remedy. In this case, for a technical realisation, the impulse strength and parasitic capacitances of the used diodes need to be considered. Besides this, retroactive effects always have to be considered in order to avoid any damage to the DC generator. Results of circuit simulation and discussion Within this subsection, results based on circuit simulation for a laboratory impulse generation of SDEI voltages are presented. This is followed by a discussion of the results and capabilities of the presented circuit. Results: For all simulations, the following parameters are used: C1 = 250 nF, C2* = 43.6 nF, C3 = 500 pF, C4 = 1000 pF, C5 = 176 pF, C6 = 187.5 nF, R1 = 1 MΩ, R6 = 500 kΩ, R7 = 1 MΩ. Besides this, all circuit components are considered as ideal, including the spark gaps (SGs). Overall, three different realisations are discussed for the scenarios O-150 and C-150. The first realisation (A) considers the use of a resistive voltage divider with R4 = 550 MΩ and R5 = 1100 MΩ. For the second realisation (B), the same values are chosen for R4 and R5, but an additional diode after R4 is used. The third implementation (C) follows [22] with R4 = 2.5 MΩ and R5 = 1100 MΩ, but is also extended with an additional diode after R4. For the O-150 simulations, the impulse forming resistance values are set to R2 = 285.6232 kΩ and R3 = 0.4941 kΩ, whereas the impulse capacitor loading voltage is set to U_DC,L = 243.90 kV; the resulting voltage waveforms are presented in Fig. 9. (Table 9 lists the voltage waveform data (see Table 5), the related exponents and the calculated circuit parameters following (8) and (9) for the SDEI curve fitting horizon t_flt ≤ t ≤ t_sim,red: t_p in μs, t_50 in ms, α1 and α2 in 1/s, and Û_DC+I in kV.)
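The corrected capacitance, the overall efficiency and the resulting waveform parameters quoted in this section can be cross-checked with the sketch below. Both the form of the C2* correction (subtracting the divider capacitance C3 and the series path C4-C5 seen by the generator) and the single-stage relations α1 ≈ 1/(R2(C1 + C2)) and α2 ≈ (C1 + C2)/(R3 C1 C2) are assumptions standing in for the paper's equations (8)-(11); with the O-150 values they land close to the quoted C2* = 43.6 nF, η_ges = 72.3%, t_p ≈ 155 μs and t_50 ≈ 58 ms.

import numpy as np

def divider_correction(c2=44.2e-9, c3=500e-12, c4=1000e-12, c5=176e-12, eta=0.85, eta_vd=0.85):
    # assumed correction: measuring divider C3 plus the series path C4-C5 load the generator
    c2_star = c2 - (c3 + c4 * c5 / (c4 + c5))
    return c2_star, eta * eta_vd              # corrected C2 and overall peak-amplitude efficiency

def waveform_parameters(r2, r3, c1, c2):
    # standard single-stage approximations (assumed, not the paper's (8)-(11))
    a1 = 1.0 / (r2 * (c1 + c2))               # tail exponent
    a2 = (c1 + c2) / (r3 * c1 * c2)           # front exponent
    t_p = np.log(a2 / a1) / (a2 - a1)         # time to peak of UA*(exp(-a1 t) - exp(-a2 t))
    t_50 = np.log(2.0) / a1 + t_p             # rough, tail-dominated time to half
    return a1, a2, t_p, t_50

c2_star, eta_ges = divider_correction()
a1, a2, t_p, t_50 = waveform_parameters(r2=285.6232e3, r3=0.4941e3, c1=250e-9, c2=c2_star)
print(f"C2* = {c2_star*1e9:.1f} nF, eta_ges = {eta_ges*100:.1f} %")
print(f"t_p = {t_p*1e6:.0f} us, t_50 = {t_50*1e3:.0f} ms")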
Discussion and further opportunities for the presented circuit: It is noteworthy that overall design considerations and aspects presented in Sections 6.2.1 and 6.2.2 lead to a good realisation of voltage waveforms accompanied by the derived novel impulse parameters. Associated nominal times to peak are ranging from 155 μs ≤ t p ≤ 4775 μs and time to half values from 58.4 ms ≤ t 50 ≤ 90.1 ms. Superior results are obtained, if R 4 is followed by a diode stack as considered for realisation (B). Table 11 presents the results and related errors for each parameter. Similar results are found for implementation (C), resulting for O-150 in Δ t p − C ≃ 0.6%, Δ t 50 − C ≃ 0%, Δ Û − C ≃ − 0.1% and for C-150 in Δ t p − C ≃ 0.4%, Δ t 50 − C ≃ 0.2%, Δ Û − C ≃ − 0.4%. If realisation (A) is considered associated errors are higher, especially considering the time to half values, leading for O-150 approximation to Δ t p − A ≃ − 1.1%, Δ t 50 − A ≃ − 15.1%, As a consequence of the associated errors related to realisation (C), it is found that the use of the diode stack is beneficial in order to keep the shape of impulses as close as possible to the desired voltage waveform. If the use of this additional diode is not applicable, a different supportive measure may be provided by the impulse generator. In Section 6.2.1, a further charge process of C 1 and C 2 was stated avoidable in order to stop an enlargement of the wave tail. This identified causality provides valuable benefits. A further charge process may act as a supportive measure to enlarge the wave tail, especially if the de-energisation caused by R 4 , R 5 (Section 6.2.2) or additional parallel resistances of used capacitors or currents due to surface contaminations are taken into account. Furthermore, the use of switching concepts besides commonly used air filled SGs may be broached in order to ensure the desired trigger operation. If commonly used air filled SGs cannot guarantee reliable operation without a self-extinguishing arc, compact pressure controlled gas insulated SGs may provide a remedy. Besides this, circuits based on semiconductors, as recently presented in [23], seem promising in the future. Those may provide additional benefits if electrode erosion or trigger problems are taken into account. After a brief conclusion, an outlook motivating further research aspects is presented in the following. Conclusion Transient overvoltage stresses in MMC-HVDC links have been simulated and investigated for DC pole to ground faults. As slow front transients do occur, a feasible non-linear surge arrestor V-Icurve needs to be selected. Furthermore, varying fault resistances (between 1 mΩ and 30 Ω), different load flow scenarios and appropriate post-fault system behaviour influences -caused by the system protection setting -are considered. To determine as well changes occurring in over-voltages associated with different transmission technologies, simulations are carried out for OHL and cable schemes. The analysis of obtained simulation results shows that normative test voltage waveforms such as SIs only allow a poor approximation of the occurring over-voltages. As occurring voltage shapes differ significantly from any normative SI, further analysis of the impulse shapes is carried out. It is concluded that double exponential voltages provide benefits for analysis and imitation of occurring over-voltages in MMC-HVDC schemes. Superior results are obtained if a double exponential voltage superimposed on a constant DC offset is considered. 
The use of curve fitting techniques allows the identification of several valuable overvoltage parameters. Time to peak significantly increases with increasing cable length and is the fastest for OHL transmission, whereas time to half is the shortest for OHL transmission technology and in the case of cable transmission larger. Peak voltages are found around the same range independent of the chosen transmission technology. To propose a laboratory imitation of those impulses, circuit design based on single stage impulse equivalent circuits is presented and related aspects for superimposed testing are derived and described. Those synthetically generated voltages may, later on, provide the basis for follow-up investigations on related dielectric effects caused by those non-normative over-voltages. Outlook The field of transient voltage stresses in MMC-HVDC systems is a vital area for high-voltage research. While solely monopolar configurations have been investigated within this initial study, ongoing considerations related to bipolar topologies and their different characteristics seem essential. Additionally, refinements regarding modelling accuracy related to superposed high frequent occurrences meanwhile or subsequent to converter blocking are desirable. Concentrating on test voltage application, the use of Marx generators or their related single stage equivalent circuits is one option, especially if in the first instance consequences for air insulation as mentioned in Section 4.2 and [10,11,24] are addressed. The presented concept may result in application challenges if the derived novel waveforms are applied on DUT with an increased capacitance. In this case, charging the DUT to peak voltage and utilising a defined ohmic path for a slow discharge process is considered beneficial for the generation of the relevant time to half values. The regard of a negative impulse superimposed on this positive decaying voltage may provide the relevant dynamics. Besides this, achieving time to half values whilst using a controllable DC supply or more general an appropriate voltage converter with sufficient output current is seen promising. For this purpose, utilising high-voltage transformers, as presented for SI generation in [25] may provide benefits for a future generation of presented novel impulse shapes. Moreover, the consideration of parasitic influences, as presented in [26] during circuit simulation prior to laboratory implementation is seen valuable. Future experiments are considered precious in order to evaluate the effects caused by those non-standard impulses on HVDC insulation. As accessible research data of air discharge mechanisms for impulses under consideration of DC pre-stress are only available for SI, those investigations are of high importance besides investigations on cables and their accessories. Furthermore, if analogies to already existing normative test voltages can be concluded, a more transparent way for distance estimation during the design stage of a converter, as mentioned in [10,11,24] will be enabled.
8,306.2
2018-01-26T00:00:00.000
[ "Engineering", "Physics" ]
Dual-Mode FPGA-Based Triple-TDC With Real-Time Calibration and a Triple Modular Redundancy Scheme This paper proposes a triple time-to-digital converter (TDC) for a field-programmable gate array (FPGA) platform with dual operation modes. First, the proposed triple-TDC employs the real-time calibration circuit followed by the traditional tapped delay line architecture to improve the environmental effect for the application of multiple TDCs. Second, the triple modular redundancy scheme is used to deal with the uncertainty in the FPGA device for improving the linearity for the application of a single TDC. The proposed triple-TDC is implemented in a Xilinx Virtex-5 FPGA platform and has a time resolution of 40 ps root mean square for multi-mode operation. Moreover, the ranges of differential nonlinearity and integral nonlinearity can be improved by 56% and 37%, respectively, for single-mode operation. To improve the linearity in FPGA-based TDC design, Kalisz et al. presented the calibration circuit to implement a TDC having a resolution of 200 ps and a measurement range of 43 ns in a QuickLogic pASIC FPGA device [18,19]. In reference [20], TDCs with a resolution of 65 and 46.2 ps including time calibration were implemented in an Altera FPGA device and a Xilinx FPGA device, respectively. Wang et al. used the command LOC and RLOC to specify the location of delay cells in Xilinx ISE tools [21], which enabled P&R to be performed automatically with EDA tools. Thus, the time-consuming process of manually performing P&R could be avoided. Wave-union TDCs improve the time resolution of FPGA-based TDCs, especially when ultra-wide bins (UWBs) occur in FPGA-based TDCs [22][23][24]. An averaging multiple delay line is used to smooth out large quantization errors [25] and thus improve the time resolution. However, an FPGA-based TDC with such a delay line has a complex architecture. In general, the measured integral nonlinearity (INL) and differential nonlinearity (DNL) are important metrics influencing the time linearity. Thus, several schemes have been presented to correct INL or DNL values by using time histograms [14,[26][27][28][29]. In this paper, a dual-mode FPGA-based triple-TDC is proposed to deal with the environment effect and improve the time linearity. Three separated tapped delay lines (TDLs) and their corresponding calibration circuits are used in the Xilinx Virtex-5 FPGA device [28,29]. Therefore, the proposed triple-TDC improves the temperature variation effect. The measurement results indicate that the triple-TDC achieves a time resolution of the 40 ps root mean square (RMS) for multimode operation. The triple modular redundancy (TMR) scheme [32] is used to improve the uncertainty in the FPGA device for single-mode operation, such as the UWB effect. The TMR triple-TDC can achieve a resolution of 35.5 ps RMS and improve INL and DNL values by an average of 56% and 37%, respectively. Tapped Delay Line TDC The TDL is a popular structure for FPGA-based TDC design because it has a simple structure and is easy to design. Figure 1 illustrates the architecture of the traditional TDL-TDC, which includes N delay buffers, N registers, and an encoder module for N-bin TDC. An FPGA is a suitable platform for TDL-TDC implementation due to its regular slice structure. However, the delay buffers manufactured for FPGA-based TDCs have a nonuniform delay. Figure 2 displays the time distribution of the delay buffer for the three separated TDLs, which is measured based on the code density method. 
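The code density method referred to above admits a compact description: when the calibration hits are uniformly distributed over the measurement range, the hit count of each bin is proportional to its actual width, from which the bin widths, DNL and INL follow directly. The sketch below illustrates this; the variable names and the random stand-in data are assumptions.

import numpy as np

def code_density(codes, n_bins):
    # bin widths (in LSB), DNL and INL estimated from raw TDC output codes,
    # assuming the calibration hits are uniformly distributed over the range
    hist = np.bincount(np.asarray(codes), minlength=n_bins).astype(float)
    width_lsb = hist * n_bins / hist.sum()    # actual bin width normalized to the ideal LSB
    dnl = width_lsb - 1.0
    inl = np.cumsum(dnl)
    return width_lsb, dnl, inl

# Stand-in for measured raw codes from one 128-bin TDL
rng = np.random.default_rng(0)
codes = rng.integers(0, 128, size=2**20)
widths, dnl, inl = code_density(codes, 128)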
The TDLs have a nonuniform delay time due to their different delay buffers. The calibration circuit presented in reference [29] improves the uniformity in the delay time of the TDLs. Proposed Triple-TDC Design The proposed triple-TDC implements three separated TDLs with their corresponding calibration circuits. Figure 3 illustrates the architecture of the proposed triple-TDC, which consists of a mode-selection module, three TDLs and their corresponding calibration circuits, and a voter module. For the multiple-time conversion mode, called multi-mode, the mode-selection module passes three start and stop time signals to convert three time differences, and the voter module bypasses the three digital codes. For the single-time conversion mode, a single start and stop time signal is broadcast to the three TDLs, and the voter module converts a single time difference value according to the majority selection. In accordance with the concept of TMR, the proposed triple-TDC allocates the three TDLs every 20 vertical slice chains to obtain good diversity in a single FPGA chip. Figure 4 illustrates the layout of the three TDLs in the Xilinx Virtex-5 FPGA chip. In comparison with the previous TDL-based TDC, the proposed triple-TDC has the following key advantages and disadvantages: Advantages: • The three TDLs can be laid out at different locations of a single FPGA chip; the resulting diversity of the delay cells mitigates the uncertainty of the time delay through the TMR technology, so the time resolution can be improved. • The proposed triple-TDC provides dual operation modes for multi-channel and high-resolution applications. Disadvantages: • For the high-resolution application (single-mode), three-fold resources are used to improve the resolution. The resource overhead includes area, power, and speed. Triple-TDC Implementation The proposed triple-TDC is implemented on a Xilinx XUPV5-LX110T ML505 FPGA evaluation board. The device is a Xilinx XC5VLX110T-1FF1136 FPGA, and the EDA tool used is Xilinx ISE 14.7. Three TDLs are allocated in the X40, X60, and X80 vertical slice chains with the high-speed carry cell called CARRY4, by using the LOC command of the Xilinx ISE 14.7 tool, which is exclusively used by Xilinx FPGAs. A total of 512 delay buffers are employed for each TDL, and 128 cells are adopted after the calibration circuit, which covers the 6-ns conversion range. This is limited by the number of vertical slices and the delay time of the CARRY4 cell. Thus, the least-significant bit (LSB) value of the proposed triple-TDC is 6000/128 = 46.875 ps. Table 1 presents the resource usage of the proposed triple-TDC with TMR in the XC5VLX110T FPGA. A total of 4% and 9% of the slice resources are used for slice registers and look-up tables (LUTs), respectively. A limited amount of resources is used in the proposed design. The operation flow chart is shown in Figure 5. First, the calibration circuit runs the code density test to calibrate the linearity of the three TDLs. Then, the time conversion can be operated as the three-channel time conversion for multi-mode and as the high-resolution TMR time conversion for single-mode. Experimental Setup The calibration circuits employ the code density test for DNL calibration.
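Two small sketches tie the pieces above together: the calibrated bin widths from the code density test can be folded into a cumulative lookup table that converts a raw code into a time stamp using the 46.875 ps nominal LSB, and in single-mode the three calibrated codes are combined by majority selection. The exact tie-breaking behaviour when no two codes agree is not specified in the text, so the median fallback below is an assumption.

import numpy as np

def calibration_lut(width_lsb, lsb_ps=46.875):
    # cumulative bin edges in picoseconds: the table a real-time calibration circuit would hold
    return np.concatenate(([0.0], np.cumsum(width_lsb))) * lsb_ps

def tmr_vote(code_a, code_b, code_c, tolerance=0):
    # majority selection over the three TDL codes for single-mode operation
    codes = sorted((code_a, code_b, code_c))
    if codes[1] - codes[0] <= tolerance:
        return 0.5 * (codes[0] + codes[1])
    if codes[2] - codes[1] <= tolerance:
        return 0.5 * (codes[1] + codes[2])
    return codes[1]                            # no pair agrees: fall back to the median code

# Example: one TDL hits an ultra-wide bin and deviates, the other two agree within 1 LSB
print(tmr_vote(64, 65, 71, tolerance=1))       # -> 64.5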
Thus, the calibration cal_start and cal_stop signals are generated from two uncorrelated sources: (1) the on-board 100-MHz differential oscillator with an on-chip phase-locked loop (PLL) for generating the 166-MHz cal_start clock signal and (2) the on-board 200-MHz oscillator with another on-chip PLL for generating the 166-MHz cal_stop clock signal. These two uncorrelated cal_start and cal_stop signals are sent to calibrate the three TDLs of the proposed triple-TDC. The run-time conversion start and stop signals are generated from an Agilent 81130A instrument for multi-mode operation. Moreover, a constant-delay stop signal generated from the start signal is adopted in single-mode operation to measure the RMS resolution of the triple-TDC with the TMR function. Figure 6 illustrates the experimental setup of the measurement environment. The conversion output is connected to a host PC, and the Xilinx ChipScope Pro and Matlab tools capture and analyze the conversion times. Experimental Results To demonstrate the performance of the proposed FPGA-based triple-TDC, 2^20 samples were collected, and the measured results are summarized in Table 2. Note that the results of the TDL-TDC without calibration are measured from the same TDLs, as shown in Figure 3. Furthermore, the RMS time resolution was measured using the constant-delay input. The histogram in Figure 8 indicates that the three TDCs had good RMS values, especially the first TDC, which achieved a time resolution of 3.2 ps RMS. With the proposed TMR, the measured nonlinearity is confined to the range [−7.2, 4.3]. As shown in Table 3, the proposed triple-TDC with TMR enhances the INL values by 37% and 22% and the DNL values by 56% and 9% compared with the traditional TDL-TDC and the triple-TDC without TMR, respectively. Furthermore, a resolution of 35.5 ps RMS can be achieved with the TMR scheme, which enhances the resolution of the triple-TDC by 11.3% compared with that of the traditional TDL-TDC. Figure 9 illustrates the INL, DNL, and RMS values for the proposed triple-TDC with TMR. Conclusions A triple-TDC is implemented in the Xilinx FPGA platform with a real-time calibration circuit. The triple-TDC utilizes three TDCs with different calibration circuits to deal with the environmental effect for multi-mode operation and employs the TMR scheme to avoid the manufacturing effect in the FPGA chip for single-mode operation. The measurement results indicate that the proposed triple-TDC can achieve DNL and INL values superior to those of the traditional TDL-TDC. The proposed triple-TDC with TMR enhances the INL values by 37% and 22% and the DNL values by 56% and 9% compared with the traditional TDL-TDC and the triple-TDC without TMR, respectively. Moreover, a high RMS time resolution is achieved. Consequently, the proposed FPGA-based TDC mitigates the environmental and manufacturing effects and is recommended for multi-channel applications, such as PET and TOF-PET. Furthermore, the TMR scheme improves the time resolution for high-accuracy scientific applications.
2,100
2020-04-03T00:00:00.000
[ "Engineering", "Computer Science" ]
Cosmological stability in $f(\phi,{\cal G})$ gravity In gravitational theories where a canonical scalar field $\phi$ with a potential $V(\phi)$ is coupled to a Gauss-Bonnet (GB) term ${\cal G}$ with the Lagrangian $f(\phi,{\cal G})$, we study the cosmological stability of tensor and scalar perturbations in the presence of a perfect fluid. We show that, in decelerating cosmological epochs with a positive tensor propagation speed squared, the existence of nonlinear functions of ${\cal G}$ in $f$ always induces Laplacian instability of a dynamical scalar perturbation associated with the GB term. This is also the case for $f({\cal G})$ gravity, where the presence of nonlinear GB functions $f({\cal G})$ is not allowed during the radiation- and matter-dominated epochs. A linearly coupled GB term with $\phi$ of the form $\xi (\phi){\cal G}$ can be consistent with all the stability conditions, provided that the scalar-GB coupling is subdominant to the background cosmological dynamics. I. INTRODUCTION General Relativity (GR) is a fundamental theory of gravity whose validity has been probed in Solar System experiments [1] and submillimeter laboratory tests [2,3]. Despite the success of GR describing gravitational interactions in the Solar System, there have been long-standing cosmological problems such as the origins of inflation, dark energy, and dark matter. To address these problems, one typically introduces additional degrees of freedom (DOFs) beyond those appearing in GR [4][5][6][7][8][9][10]. One of such new DOFs is a canonical scalar field φ with a potential V (φ) [11][12][13][14][15][16][17][18][19][20][21][22]. If the scalar field evolves slowly along the potential, it is possible to realize cosmic acceleration responsible for inflation or dark energy. An oscillating scalar field around the potential minimum can be also the source for dark matter. The other way of introducing a new dynamical DOF is to modify the gravitational sector from GR. The Lagrangian in GR is given by an Einstein-Hilbert term M 2 Pl R/2, where M Pl is the reduced Planck mass and R is the Ricci scalar. If we consider theories containing nonlinear functions of R of the form f (R), there is one scalar DOF arising from the modification of gravity [23,24]. One well known example is the Starobinsky's model, in which the presence of a quadratic curvature term R 2 drives cosmic acceleration [25]. It is also possible to construct f (R) models of late-time cosmic acceleration [26][27][28][29][30][31][32], while being consistent with local gravity constraints. The Einstein tensor G µν obtained by varying the Einstein-Hilbert action satisfies the conserved relation ∇ µ G µν = 0 (∇ µ is a covariant derivative operator), with the property of second-order field equations of motion in metrics. If we demand such conserved and second-order properties for 2-rank symmetric tensors, GR is the unique theory of gravity in 4 dimensions [33]. In spacetime dimensions higher than 4, there is a particular combination known as a Gauss-Bonnet (GB) term G consistent with those demands [34]. In 4 dimensions, the GB term is a topological surface term and hence it does not contribute to the field equations of motion. In the presence of a coupling between a scalar field φ and G of the form ξ(φ)G, the spacetime dynamics is modified by the time or spatial variation of φ. Indeed, this type of scalar-GB coupling appears in the context of low energy effective string theory [35][36][37]. 
The cosmological application of the coupling ξ(φ)G has been extensively performed in the literature . Moreover, it is known that the same coupling gives rise to spherically symmetric solutions of hairy black holes and neutron stars [64][65][66][67][68][69][70][71][72][73][74][75][76][77][78][79][80]. The Lagrangian f (G) containing nonlinear functions of G also generates nontrivial contributions to the spacetime dynamics [81][82][83][84][85][86][87][88][89][90]. In Ref. [91], De Felice and Suyama studied the stability of scalar perturbations in f (R, G) gravity on a spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) background. In theories with f 2 , there is an unusual scale-dependent sound speed which propagates superluminally in the short-wavelength limit, unless the vacuum is in a de Sitter state (see also Ref. [92] for the analysis in an anisotropic cosmological background). We note that this problem does not arise for f (R) gravity or M 2 Pl R/2 + f (G) gravity. In Ref. [93], the same authors extended the analysis to a more general Lagrangian f (φ, R, G) with a canonical scalar field φ and showed that the property of the scale-dependent sound speed is not modified by the presence of φ. Taking a perfect fluid (radiation or nonrelativistic matter) into account in f (R, G) gravity, the cosmological stability and evolution of matter perturbations were studied in Refs. [94][95][96]. In Einstein-scalar-GB gravity given by the Lagrangian M 2 Pl R/2 + f (φ, G), where φ is a canonical scalar field, the problem of scale-dependent sound speeds mentioned above is not present. In this theory, the propagation of scalar perturbations on the spatially flat FLRW background was studied in Ref. [93] without taking into account matter. While the sound speed associated with the field φ is luminal for theories with f ,GG = 0, the propagation speed squared c 2 s arising from a nonlinear GB term deviates from that of light and it can be even negative. In Ref. [93], the authors discussed the possibility for satisfying the Laplacian stability condition c 2 s > 0. In the presence of matter, however, the stability conditions are subject to modifications from those in the vacuum. To understand what happens for the dynamics of cosmological perturbations during radiation-and matter-dominated epochs, we need to study their stabilities by incorporating radiation or nonrelativistic matter. In this letter, we will derive general conditions for the absence of ghosts and Laplacian instabilities in M 2 Pl R/2 + f (φ, G) gravity, where φ is a canonical scalar field with a potential V (φ). In theories where the scalar field φ is coupled to the linear GB term, i.e., f (φ, G) = ξ(φ)G, there is only one dynamical scalar DOF φ. In theories with f ,GG = 0, the Lagrangian f (φ, G) can be expressed in terms of two scalar fields φ and χ coupled to the linear GB term, where χ arises from the nonlinearity in G. Hence the latter theory has two dynamical scalar DOFs. To study the cosmological stability of f (φ, G) theories with f ,GG = 0, we take a perfect fluid into account as a form of the Schutz-Sorkin action [97][98][99]. We will show that the squared sound speed arising from nonlinear functions of G is negative during decelerating cosmological epochs including radiation and matter eras. To reach this conclusion, we exploit the fact that the propagation speed squared c 2 t of tensor perturbations must be positive to avoid Laplacian instability of gravitational waves. 
The same Laplacian instability of scalar perturbations is also present in M 2 Pl R/2 + f (G) gravity with any nonlinear function of G in f . We note that, in f (G) models of late-time cosmic acceleration, violent instabilities of matter density perturbations during the radiation and matter eras were reported in Ref. [100]. This can be regarded as the consequence of a negative sound speed squared of the scalar perturbation δG arising from the nonlinearity of G in f . Since δG is coupled to the matter perturbation δρ, the background cosmological evolution during the radiation and matter eras is spoiled by the rapid growth of δρ. Our analysis in this letter shows that similar catastrophic instabilities persist for more general scalar-GB couplings f (φ, G) with f ,GG = 0. This letter is organized as follows. In Sec. II, we revisit cosmological stability conditions in M 2 Pl R/2 + ξ(φ)G gravity with a canonical scalar field φ, which can be accommodated in a subclass of Horndeski theories with a single scalar DOF [101][102][103][104]. This is an exceptional case satisfying the condition f ,GG = 0, under which the Laplacian instability of scalar perturbations can be avoided. In Sec. III, we derive the background equations and stability conditions of tensor perturbations in M 2 Pl R/2 + f (φ, G) gravity with f ,GG = 0 by incorporating a perfect fluid. In Sec. IV, we proceed to the derivation of a second-order action of scalar perturbations and obtain conditions for the absence of ghosts and Laplacian instabilities in the scalar sector. In particular, we show that an effective cosmological equation of state w eff needs to be in the range w eff < −(2 + c 2 t )/6 to ensure Laplacian stabilities of the perturbation δG. Sec. V is devoted to conclusions. II. ξ(φ)G GRAVITY We first briefly revisit the cosmological stability in ξ(φ)G gravity given by the action where g is a determinant of the metric tensor g µν , η is a constant, X = −(1/2)g µν ∇ µ φ∇ ν φ is a kinetic term of the scalar field φ, V (φ) and ξ(φ) are functions of φ, and G is a GB term defined by with R µν and R µνρσ being the Ricci and Riemann tensors, respectively. For the matter action S m , we consider a perfect fluid minimally coupled to gravity. The action (2.1) contains one scalar DOF φ besides the matter field Ψ m . If we consider Horndeski theories [101] given by the action where we use the notations F ,X = ∂F/∂X and F ,φ = ∂F/∂φ for any arbitrary function F . Let us consider a spatially flat FLRW background given by the line element ds 2 = −dt 2 + a 2 (t)δ ij dx i dx j , where a(t) is a time-dependent scale factor. The perfect fluid has a density ρ and pressure P . The background equations as well as the perturbation equations in full Horndeski theories were derived in Refs. [103,[105][106][107]. On using the correspondence (2.4), the background equations of motion in theories given by the action (2.1) are where H =ȧ/a is the Hubble expansion rate, a dot represents the derivative with respect to t, and In the presence of tensor perturbations h ij with the perturbed line element ds 2 = −dt 2 + a 2 (t)(δ ij + h ij )dx i dx j , the second-order action of traceless and divergence-free modes of h ij was already derived in full Horndeski theories [103,106,107]. In the current theory, the conditions for the absence of ghosts and Laplacian instabilities arẽ whereq t andc 2 t are defined by Eqs. (2.9) and (2.10), respectively. 
Note thatq t determines the sign of a kinetic term of h ij , whilec 2 t corresponds to the propagation speed squared of tensor perturbations. For the scalar sector, we choose the perturbed line element ds 2 the flat gauge, where α and B are scalar metric perturbations. There is also a scalar-field perturbation δφ besides the matter perturbation δρ and the fluid velocity potential v. After deriving the quadratic-order action of scalar perturbations, we can eliminate nondynamical variables α, B, and v from the action. Then, we are left with the two dynamical perturbations δφ and δρ in the second-order action. In the short-wavelength limit, there is neither ghost nor Laplacian instability for δφ under the conditions [103,106,107] wherec s corresponds to the propagation speed of δφ, and w eff is the cosmological effective equation of state defined by The stability conditions for δρ are given by ρ + P > 0 and c 2 m > 0, where c 2 m is the matter sound speed squared. Under the stability condition (2.12) with η > 0, the scalar no-ghost condition (2.14) is satisfied. Let us consider the case in which contributions of the scalar-GB coupling are suppressed, such that In such cases, provided that η > 0, all the stability conditions are consistently satisfied. If the scalar-GB coupling contributes to the late-time cosmological dynamics, there is an observational bound onc t constrained from the GW170817 event together with the electromagnetic counterpart, i.e., −3 × 10 −15 ≤c t − 1 ≤ 7 × 10 −16 [108] for the redshift z ≤ 0.009. This translates to the limit which gives a tight constraint on the amplitude of ξ(φ). In this case, contributions of the scalar-GB coupling to the background Eqs. (2.5) and (2.6) are highly suppressed relative to the field density ρ φ = ηφ 2 /2 + V (φ) and the matter density. The bound (2.18) is not applied to early cosmological epochs including inflation, radiation, and matter eras. We note, however, that the dominance of the scalar-GB coupling to the background equations prevents the successful cosmic expansion history. This can also give rise to the violation of either of the stability conditions (2.12)-(2.15). Provided the scalar-GB coupling is suppressed in such a way that inequalities (2.17) hold, the linear stabilities are ensured for both tensor and scalar perturbations. We extend ξ(φ)G gravity to more general theories in which a canonical scalar field φ with a potential V (φ) is coupled to the GB term of the form f (φ, G). The action in such theories is given by where a matter field Ψ m is minimally coupled to gravity. It is more practical to introduce a scalar field χ and resort to the following action with the notation f ,χ = ∂f /∂χ. Varying the action (3.2) with respect to χ, it follows that So long as ξ ,χ = 0, we obtain χ = G. In this case, the action (3.2) reduces to (3.1). Thus, the equivalence of (3.2) with (3.1) holds for under which there is a new scalar DOF χ arising from the gravitational sector. Theories with f ,GG = 0 correspond to the coupling f = ξ(φ)G, in which case the cosmological stability conditions were already discussed in Sec. II. In ξ(φ)G gravity, we do not have the additional scalar DOF χ arising from G, so the term ξ ,χ in Eq. (3.4) does not have the meaning of f ,GG . Thus, the action (3.2) with the new dynamical DOF χ does not reproduce the action (2.1) in ξ(φ)G gravity. In the following, we will focus on theories with f ,GG = 0, i.e., those containing the nonlinear dependence of G in f . 
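For reference, two standard expressions help to read this section; they are assumed here in their textbook form rather than quoted from the displayed equations, and the short check at the end only uses the equation-of-state values of the standard decelerating eras.

% Four-dimensional Gauss-Bonnet invariant (standard form assumed above):
\mathcal{G} \equiv R^{2} - 4 R_{\mu\nu} R^{\mu\nu} + R_{\mu\nu\rho\sigma} R^{\mu\nu\rho\sigma},
\qquad
% effective equation of state on the flat FLRW background (standard definition):
w_{\rm eff} \equiv -1 - \frac{2\dot{H}}{3H^{2}}.

% Quick check of the bound quoted in the Introduction: Laplacian stability of \delta\mathcal{G} requires
%   w_{\rm eff} < -(2 + c_t^{2})/6 \le -1/3 \quad (\text{for } c_t^{2} > 0),
% whereas radiation domination gives w_{\rm eff} = 1/3 and matter domination w_{\rm eff} = 0,
% so the bound is violated in both decelerating eras, consistent with the statement above.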
For the matter field Ψ_m, we incorporate a perfect fluid without a dynamical vector DOF. This matter sector is described by the Schutz-Sorkin action [97-99]

  S_m = −∫ d^4x [ √(−g) ρ(n) + J^μ ∂_μ ℓ ],   (3.6)

where the fluid density ρ is a function of its number density n. The vector field J^μ is related to n according to the relation n = √(J^μ J^ν g_{μν}/g), where u^μ = J^μ/(n√(−g)) is the fluid four velocity. A scalar quantity ℓ in S_m is a Lagrange multiplier, with the notation of a partial derivative ∂_μℓ = ∂ℓ/∂x^μ. Varying the matter action (3.6) with respect to ℓ and J^μ, respectively, we obtain

  ∂_μ J^μ = 0,   (3.7)
  ∂_μ ℓ = ρ_{,n} u_μ,   (3.8)

where ρ_{,n} = dρ/dn.

A. Background equations

We derive the background equations of motion on the spatially flat FLRW background given by the line element

  ds^2 = −N^2(t) dt^2 + a^2(t) δ_ij dx^i dx^j,   (3.9)

where N(t) is a lapse function. Since the fluid four velocity in its rest frame is given by u^μ = (N^−1, 0, 0, 0), the vector field J^μ has components J^μ = (n a^3, 0, 0, 0). From Eq. (3.7), we obtain

  n a^3 = N_0 = constant,   (3.10)

which means that the total fluid number N_0 is conserved. This translates to the differential equation ṅ + 3Hn = 0, which can be expressed as a form of the continuity equation

  ρ̇ + 3H(ρ + P) = 0,   (3.11)

where P is a fluid pressure defined by P = n ρ_{,n} − ρ. On the background (3.9), the total action (3.2) is expressed in the form (3.12). Varying the action (3.12) with respect to N, a, φ, χ, respectively, and setting N = 1 at the end, we obtain the background equations of motion (3.13)-(3.17), where

  q_t = M_Pl^2 + 8H (ξ_{,φ} φ̇ + ξ_{,χ} χ̇),   (3.18)

and the tensor propagation speed squared c_t^2 is defined by Eq. (3.19). We recall that the perfect fluid obeys the continuity Eq. (3.11). We notice that Eqs. (3.14)-(3.16) are of similar forms to Eqs. (2.5)-(2.7) in ξ(φ)G gravity, but the expressions of q_t and c_t^2 differ from q̃_t and c̃_t^2, respectively, because of the appearance of χ-dependent terms. These new terms do not vanish for ξ_{,χ} ≠ 0, i.e., for f_{,GG} ≠ 0. As we will show in Sec. IV, nonlinearities of G in f are responsible for the appearance of a new scalar propagating DOF δχ.

B. Stabilities in the tensor sector

We proceed to the derivation of stability conditions for tensor perturbations in theories given by the action (3.2). The perturbed line element including the tensor perturbation h_ij is

  ds^2 = −dt^2 + a^2(t) (δ_ij + h_ij) dx^i dx^j,   (3.20)

where we impose the traceless and divergence-free gauge conditions h^i_i = 0 and ∂^i h_ij = 0. For the gravitational wave propagating along the z direction, the nonvanishing components of h_ij are expressed in the form

  h_11 = −h_22 = h_1(t, z),  h_12 = h_21 = h_2(t, z),   (3.21)

where the two polarized modes h_1 and h_2 are functions of t and z. The second-order action arising from the matter action (3.6) can be expressed in the form (3.22), where P can be eliminated by using the background Eq. (3.15). Expanding the total action (3.2) up to quadratic order in tensor perturbations and integrating it by parts, the resulting second-order action reduces to

  S_t^(2) = Σ_{i=1,2} ∫ dt d^3x (a^3/8) q_t [ ḣ_i^2 − (c_t^2/a^2)(∂h_i)^2 ],   (3.23)

where (∂h_i)^2 = (∂h_i/∂z)^2. We recall that q_t and c_t^2 are given by Eqs. (3.18) and (3.19), respectively. To avoid the ghost and Laplacian instabilities in the tensor sector, we require the two conditions q_t > 0 and c_t^2 > 0, which translate to the inequalities (3.24) and (3.25). In f(G) gravity without the scalar field φ, tensor stability conditions can be obtained by setting φ̇ = 0 and φ̈ = 0 in Eqs. (3.24) and (3.25).

We vary the action (3.23) with respect to h_i (with i = 1, 2) in Fourier space with a comoving wavenumber k. Then, we obtain the tensor perturbation equation of motion

  ḧ_i + (3H + q̇_t/q_t) ḣ_i + c_t^2 (k^2/a^2) h_i = 0,   (3.26)

where k = |k|. Since ξ = f_{,χ} = f_{,G}, the G dependence in f leads to the modified evolution of gravitational waves in comparison to GR.
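As a consistency check (our own remark, using only the definitions above): for a constant coupling, ξ_{,φ} = ξ_{,χ} = 0, Eq. (3.18) gives q_t = M_Pl^2, and with c_t^2 = 1 the tensor equation (3.26) reduces to the standard GR form

  ḧ_i + 3H ḣ_i + (k^2/a^2) h_i = 0.

The two GB-induced modifications are therefore the extra friction term q̇_t/q_t and a propagation speed c_t ≠ 1, both of which are probed by GW observations such as GW170817 discussed next.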
If the energy densities of φ and χ are relevant to the late-time cosmological dynamics after the matter dominance, the observational constraint on the tensor propagation speed c_t arising from the GW170817 event [108] (|c_t − 1| ≲ 10^−15) gives a tight bound on the scalar-GB coupling f(φ, G). Such a stringent limit does not apply to the cosmological dynamics in the early Universe, but the conditions (3.24) and (3.25) still need to be satisfied.

IV. STABILITIES OF f(φ, G) GRAVITY IN THE SCALAR SECTOR

In this section, we derive conditions for the absence of scalar ghosts and Laplacian instabilities in theories given by the action (3.2). A perturbed line element containing the scalar perturbations α, B, ζ, and E is of the form

  ds^2 = −(1 + 2α) dt^2 + 2 ∂_i B dt dx^i + a^2(t) [ (1 + 2ζ) δ_ij + 2 ∂_i ∂_j E ] dx^i dx^j.   (4.1)

For the scalar fields φ and χ, we consider perturbations δφ and δχ on the background values φ̄(t) and χ̄(t), respectively, such that

  φ = φ̄(t) + δφ,  χ = χ̄(t) + δχ,   (4.2)

where we will omit a bar from the background quantities in the following. In the matter sector, the temporal and spatial components of J^μ are decomposed into the background and perturbed parts as

  J^0 = N_0 + δJ,  J^i = (1/a^2) δ^{ik} ∂_k δj,   (4.3)

where δJ and δj are scalar perturbations. In terms of the velocity potential v, the spatial component of the fluid four velocity is expressed as u_i = −∂_i v. From Eq. (3.8), the scalar quantity ℓ has a background part obeying the relation ℓ̇ = −ρ_{,n}, besides a perturbation −ρ_{,n} v. Then, we have

  ℓ = −∫ ρ_{,n} dt − ρ_{,n} v.   (4.4)

Defining the matter density perturbation

  δρ ≡ (ρ_{,n}/a^3) [ δJ − N_0 (3ζ + ∂^2 E) ],   (4.5)

the fluid number density n acquires a perturbation of the form (4.6) up to second order [107, 109]. The matter sound speed squared is given by

  c_m^2 = n ρ_{,nn}/ρ_{,n}.   (4.7)

Expanding (3.6) up to quadratic order in perturbations, we obtain the same second-order matter action as that derived in Refs. [107, 109]. Varying this matter action with respect to δj leads to

  ∂δj = −a^3 n (∂v + ∂B),   (4.8)

a relation that will be used to eliminate δj. In the following, we choose the gauge under which a scalar quantity ξ_s associated with the spatial gauge transformation x^i → x^i + δ^{ij} ∂_j ξ_s is fixed. A scalar quantity ξ^0 associated with the temporal part of the gauge transformation t → t + ξ^0 can be fixed by choosing a flat gauge (ζ = 0) or a unitary gauge (δφ = 0). We do not specify the temporal gauge condition in deriving the second-order action, but we will do so at the end. Expanding the total action (3.2) up to quadratic order in scalar perturbations and integrating it by parts, the resulting second-order action is given by Eq. (4.10), where q_t and c_t^2 are given by Eqs. (3.18) and (3.19), respectively, and the remaining coefficients are defined in Eqs. (4.11)-(4.13).

Now, we switch to Fourier space with a comoving wavenumber k. Varying the total action (4.10) with respect to α, B, and v, respectively, we obtain the constraint equations (4.14)-(4.16). In the following, we choose the flat gauge

  ζ = 0,  E = 0,   (4.17)

to obtain stability conditions for scalar perturbations. We will discuss the two cases: (A) f(φ, G) gravity and (B) f(G) gravity in turn.

A. f(φ, G) gravity

In f(φ, G) gravity with f_{,GG} ≠ 0, we can construct the gauge-invariant scalar perturbations δφ_f = δφ − (φ̇/H)ζ, δχ_f = δχ − (χ̇/H)ζ, and δρ_f = δρ − (ρ̇/H)ζ. For the gauge choice (4.17), they reduce, respectively, to δφ, δχ, and δρ, which correspond to the dynamical scalar DOFs. Note that the perturbation δχ = δG arises from nonlinearities in the GB term. We solve Eqs. (4.14)-(4.16) for α, B, v and substitute them into Eq. (4.10). Then, the resulting quadratic-order action in Fourier space is expressed in the form

  S_s^(2) = ∫ dt d^3x a^3 [ Ẋ^t K Ẋ − (k^2/a^2) X^t G X − X^t M X − X^t B Ẋ ],   (4.18)

where K, G, M, B are 3 × 3 matrices, and

  X^t = (δφ, δχ, δρ/k).   (4.19)

The leading-order contributions to M and B are of order k^0.
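The gauge invariance of these combinations follows from the standard transformation rules (a textbook result, recalled here for completeness): under t → t + ξ^0, any background scalar q(t) induces δq → δq − q̇ ξ^0, while ζ → ζ − H ξ^0. Hence

  δφ_f = δφ − (φ̇/H) ζ → (δφ − φ̇ ξ^0) − (φ̇/H)(ζ − H ξ^0) = δφ_f,

and likewise for δχ_f and δρ_f. Taking the flat gauge (4.17) simply identifies these invariants with δφ, δχ, and δρ.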
Taking the small-scale limit k → ∞, the nonvanishing components of the symmetric matrices K and G are given by Eqs. (4.20) and (4.21). To derive these coefficients, we have absorbed the k^2-dependent terms present in B into the components of G and used the relation C_1 = 3HC_3 − ηφ̇ together with its time derivative. The scalar ghosts are absent under the following three conditions:

  K_33 = a^2/[2(ρ + P)] > 0,   (4.23)

the positivity condition (4.24) on the remaining block of K, and

  det K = 3 C_4^2 η q_t a^2 / [ 4(ρ + P)(3q_t − M_Pl^2)^2 ] > 0.   (4.25)

Under the no-ghost condition q_t > 0 of tensor perturbations, the inequalities (4.23)-(4.25) are satisfied for

  ρ + P > 0,   (4.26)
  η > 0.   (4.27)

In the limit of large k, the dominant contributions to the second-order action (4.18) arise from K and G. Then, the dispersion relation can be expressed in the form

  det( c_s^2 K − G ) = 0,   (4.28)

where c_s is the scalar propagation speed. Solving Eq. (4.28) for c_s^2, we obtain the three solutions

  c_s1^2 = 1,   (4.29)

together with c_s2^2 [Eq. (4.30)] and c_s3^2 = c_m^2 [Eq. (4.31)], which correspond to the squared propagation speeds of δφ, δχ, and δρ, respectively. The scalar perturbation δφ has a luminal propagation speed, so it satisfies the Laplacian stability condition. For c_m^2 > 0, the matter perturbation δρ is free from Laplacian instability. On using the background Eq. (3.15), the sound speed squared (4.30) can be expressed as

  c_s2^2 = −(2 + c_t^2 + 6 w_eff)/3,   (4.33)

(see footnote 1), where w_eff is the effective equation of state defined by Eq. (2.16). The Laplacian stability of δχ is ensured for c_s2^2 > 0, i.e.,

  w_eff < −(2 + c_t^2)/6.   (4.34)

Since we need the condition c_t^2 > 0 for the absence of Laplacian instability in the tensor sector, w_eff must be in the range w_eff < −1/3. This translates to the condition Ḣ + H^2 = ä/a > 0, so the Laplacian stability of δχ requires that the Universe is accelerating. In decelerating cosmological epochs, the condition (4.34) is always violated for c_t^2 > 0. During the radiation-dominated (w_eff = 1/3) and matter-dominated (w_eff = 0) eras, we have c_s2^2 = −(4 + c_t^2)/3 and c_s2^2 = −(2 + c_t^2)/3, respectively, which are both negative for c_t^2 > 0. We thus showed that, for scalar-GB couplings f(φ, G) containing nonlinear functions of G, δχ is prone to the Laplacian instability during the radiation and matter eras. Hence nonlinear functions of G should not be present in decelerating cosmological epochs. Even if c_s2^2 is positive in the inflationary epoch, c_s2^2 changes its sign during the transition to a reheating epoch (in which w_eff ≃ 0 for a standard reheating scenario). During the epoch of late-time cosmic acceleration, c_s2^2 can be positive, but it changes sign as we go back to the matter era. Since δχ is coupled to δφ and δρ, the instability of δχ leads to the growth of δφ and δρ for perturbations deep inside the Hubble radius. This violates the successful background evolution during the decelerating cosmological epochs.

The squared propagation speeds (4.29)-(4.31) have been derived by choosing the flat gauge (4.17), but they are independent of the gauge choice. Indeed, we will show in Appendix A that the same values of c_s1^2, c_s2^2, and c_s3^2 can be obtained by choosing the unitary gauge. We also note that the scalar propagation speed squared (2.15) in ξ(φ)G gravity is not equivalent to the value (4.29). As we observe in Eq. (2.15), the propagation of φ is affected by the coupling ξ(φ) with the linear GB term G. In f(φ, G) theory with f_{,GG} ≠ 0, the new scalar field χ plays the role of the dynamical DOF arising from the nonlinear GB term. In this latter case, the propagation of the other field φ does not practically acquire the effect of a coupling with the GB term, and hence c_s1 reduces to the luminal value.
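A minimal numerical check of Eq. (4.33) across cosmological epochs; this is our own illustrative sketch (Python), with c_t^2 = 1 assumed as a representative GR-like value:

```python
# Sign of the squared propagation speed of delta-chi, Eq. (4.33):
# c_s2^2 = -(2 + c_t^2 + 6*w_eff)/3, stable only if c_s2^2 > 0.

def c_s2_squared(w_eff, c_t2=1.0):
    """Squared sound speed of the GB-induced scalar DOF, Eq. (4.33)."""
    return -(2.0 + c_t2 + 6.0 * w_eff) / 3.0

epochs = {
    "radiation (w_eff = 1/3)": 1.0 / 3.0,
    "matter    (w_eff = 0)":   0.0,
    "de Sitter (w_eff = -1)": -1.0,
}

for name, w in epochs.items():
    cs2 = c_s2_squared(w)
    status = "stable" if cs2 > 0 else "Laplacian instability"
    print(f"{name}: c_s2^2 = {cs2:+.3f} -> {status}")

# Output: radiation and matter eras give c_s2^2 < 0 (instability),
# while w_eff = -1 gives c_s2^2 = +1 > 0, consistent with the text.
```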
[Footnote 1] If we eliminate c_t^2 by using Eq. (3.15), we can express Eq. (4.30) in the form (4.32). From that expression, it may seem that the existence of the last term can lead to c_s2^2 > 0 even in the decelerating Universe; in the absence of matter (ρ = 0 = P), this possibility was suggested in Ref. [93]. Eliminating q_t instead of c_t^2 from Eq. (4.30), it is clear that this possibility is forbidden even in the presence of matter.

B. f(G) gravity

Finally, we also study the stability of scalar perturbations in f(G) gravity given by the action

  S = ∫ d^4x √(−g) [ M_Pl^2 R/2 + f(G) ] + S_m(g_{μν}, Ψ_m).   (4.35)

In this case, there is no scalar field φ coupled to the GB term. The action (4.35) is equivalent to Eq. (3.2) with φ = 0, X = 0, V(φ) = 0, U = −f(χ) + χξ(χ), and ξ = f_{,χ}(χ). As shown in Ref. [103], this theory belongs to a subclass of Horndeski theories with one scalar DOF χ besides a matter fluid. In f(G) gravity, the second-order action of scalar perturbations is obtained by setting φ, δφ, and their derivatives to 0 in Eqs. (4.11) and (4.12). We choose the flat gauge (4.17) and eliminate α, B, v from the action by using Eqs. (4.14)-(4.16). Then, the second-order scalar action reduces to the form (4.18) with 2 × 2 matrices K, G, M, B and the two dynamical perturbations

  X^t = (δχ, δρ/k).   (4.36)

In the small-scale limit, the nonvanishing components of K and G are given by Eqs. (4.37) and (4.38). The no-ghost conditions correspond to K_11 > 0 and K_22 > 0, which are satisfied for q_t > 0 and ρ + P > 0. The propagation speed squared for δχ is

  c_s1^2 = G_11/K_11 = −(2 + c_t^2 + 6 w_eff)/3,   (4.39)

where, in the last equality, we used the background Eq. (3.15) with φ̇ = 0. The other, matter propagation speed squared is given by c_s2^2 = G_22/K_22 = c_m^2. Since the last expression of Eq. (4.39) is of the same form as Eq. (4.33), the Laplacian instability of δχ is present in decelerating cosmological epochs. In Ref. [100], violent growth of matter perturbations was found during the radiation and matter eras for f(G) models of late-time cosmic acceleration. This is attributed to the Laplacian instability of δχ coupled to δρ, which inevitably occurs for nonlinear functions f(G).

V. CONCLUSIONS

In this letter, we studied the stability of cosmological perturbations on the spatially flat FLRW background in scalar-GB theories given by the action (3.1). Provided that f_{,GG} ≠ 0, the action (3.1) is equivalent to (3.2) with a new scalar DOF χ arising from nonlinear GB terms. Theories with f_{,GG} = 0 correspond to a linear GB term coupled to a scalar field φ of the form ξ(φ)G, which belongs to a subclass of Horndeski theories. To make a comparison with the scalar-GB coupling f(φ, G) containing nonlinear functions of G, we first revisited stabilities of cosmological perturbations in ξ(φ)G gravity in Sec. II. In this latter theory, provided that the scalar-GB coupling is subdominant to the background equations of motion, the stability conditions of tensor and scalar perturbations can be consistently satisfied.

In Sec. III, we derived the background equations and stability conditions of tensor perturbations for the scalar-GB coupling f(φ, G) with f_{,GG} ≠ 0. Besides a canonical scalar field φ with the kinetic term ηX and the potential V(φ), we incorporated a perfect fluid given by the Schutz-Sorkin action (3.6). The absence of ghosts and Laplacian instabilities requires that the quantities q_t and c_t^2 defined by Eqs. (3.18) and (3.19) are both positive. In terms of q_t and c_t^2, the background equations of motion in the gravitational sector can be expressed in a simple manner as Eqs.
(3.14) and (3.15), where the latter is used to simplify a scalar sound speed later. In Sec. IV, we expanded the action in f(φ, G) gravity with f_{,GG} ≠ 0 up to quadratic order in scalar perturbations. After eliminating the nondynamical variables α, B, and v, the second-order action is of the form (4.18) with the three dynamical perturbations (4.19). With the no-ghost condition q_t > 0 of tensor perturbations, the scalar ghosts are absent for η > 0 and ρ + P > 0. The sound speeds of the perturbations δφ and δρ have the standard values 1 and c_m, respectively. However, the squared propagation speed of δχ, which arises from nonlinear GB functions in f, has the nontrivial value c_s2^2 = −(2 + c_t^2 + 6 w_eff)/3. Since the positivity of c_s2^2 requires that w_eff < −(2 + c_t^2)/6, we have w_eff < −1/3 under the absence of Laplacian instability in the tensor sector (c_t^2 > 0). This means that the scalar perturbation associated with nonlinearities of the GB term is subject to Laplacian instability during decelerating cosmological epochs, including the radiation and matter eras. The same property also holds for f(G) gravity with f_{,GG} ≠ 0. We thus showed that a canonical scalar field φ coupled to a nonlinear GB term does not modify the property of negative values of c_s2^2 in the decelerating Universe.

During inflation or the epoch of late-time cosmic acceleration, it is possible to avoid Laplacian instability of the perturbation δχ in f(φ, G) gravity with f_{,GG} ≠ 0. However, in the subsequent reheating period after inflation or in the preceding matter era before dark energy dominance, the Laplacian instability inevitably emerges and violates the successful background cosmological evolution. We have shown this for a canonical scalar field φ, but it may be interesting to see whether the same property persists for the scalar field φ arising in Horndeski theories and extensions such as DHOST theories [110,111]. While we focused on the analysis on the FLRW background, it will also be of interest to study whether some instabilities are present for perturbations on a static and spherically symmetric background in f(φ, G) gravity with f_{,GG} ≠ 0. The latter is important for the construction of stable hairy black hole or neutron star solutions in theories beyond the scalar-GB coupling ξ(φ)G. These issues are left for future works.
HYBRID GENE SELECTION METHOD BASED ON MUTUAL INFORMATION TECHNIQUE AND DRAGONFLY OPTIMIZATION ALGORITHM

Sarah Ghanim Mahmood, Corresponding author, Master in Mathematics Sciences, Assistant Lecturer*
Raed Sabeeh Karyakos, Master in Mathematics, Assistant Lecturer*
Ilham M. Yacoob, Master in Mathematics Application Sciences, Assistant Lecturer*
*Department of Mathematics, College of Education, University of AL-Hamdaniya, Erbil road, Al-Hamdaniya District, Nineveh, Iraq, 41006

One of the most prevalent problems with big data is that many of the features are irrelevant. Gene selection has been shown to improve the outcomes of many algorithms, but it is a difficult task in microarray data mining because most microarray datasets have only a few hundred records but thousands of variables. This type of dataset increases the chances of discovering incorrect predictions due to chance. Finding the most relevant genes is generally the most difficult part of creating a reliable classification model. Irrelevant and duplicated attributes have a negative impact on the accuracy of categorization algorithms. Many Machine Learning-based gene selection methods have been explored in the literature, with the aim of improving dimensionality-reduction precision. Gene selection is a technique for extracting the most relevant data from a series of datasets. The classification method, which can be used in machine learning, pattern recognition, and signal processing, will benefit from further developments in the gene selection technique. The goal of feature selection is to select the smallest subset of features that carries as much information about the class as possible. This paper models the gene selection approach as a binary-based optimization algorithm in discrete space, which directs the binary dragonfly optimization algorithm «BDA» and verifies it in a chosen fitness function utilizing the precision of the dataset's k-nearest neighbors classifier. The experimental results revealed that the proposed algorithm, dubbed MI-BDA, outperforms other algorithms in terms of the precision of results, as measured by the cost of calculations and classification accuracy.

Introduction

In recent years, molecular biology and genetics research has evolved away from studying individual genes and toward exploring the entire genome. The DNA microarray is one of the techniques for measuring the expression levels of thousands of genes in a single experiment, making it ideal for comparing gene expression levels in tissues under various situations, such as healthy versus sick tissues [1]. Gene selection is frequently used to preprocess the original gene set for subsequent analysis, because many genes in the original gene set are irrelevant or even redundant for a specific discriminant problem. Gene selection can improve the classifier's generalization capacity and minimize the computing complexity of the learning operation, according to discriminant analysis. Gene selection, according to biologists, results in more compact gene sets, which lowers diagnostics costs and makes it easier to comprehend the roles of linked genes [2]. In the high-dimensional space of a small number of observations, comparing gene expression profiles and picking those that are best related with the examined forms of data is a difficult issue in pattern recognition, which can be tackled utilizing specialized data mining approaches [3].
Despite the rapid advancements in this subject, there is always a need for further understanding and research development. Feature selection has also been extensively studied and applied in the fields of data mining and machine learning [4]. A feature, also known as an attribute or variable, is a process or system property that has been calculated or constructed from the original input variables [5]. The aim of feature selection is to find the best subset of k features that produces the least generalization error [6].

Literature review and problem statement

The authors of [7] discovered that filter-based gene selection, which selects the most useful features from a gene dataset for a more accurate diagnosis, produces better results; however, the chosen set is not the best subset, because the work aimed only at reducing the number of genes. We suggest that swarm algorithms could be added to find the best subset of genes [7]. The mRMR-ReliefF selection algorithm was compared to ReliefF, mRMR, and other feature selection approaches on seven different datasets, using two classifiers: Naive Bayes and SVM. The authors propose RFACO-GS, a hybrid filter-wrapper gene selection algorithm based on the ReliefF algorithm and an improved ACO process. Using multiple public gene expression datasets, the experimental results show that the suggested methodology is very successful in lowering the dimensionality of gene expression datasets and choosing the most significant genes with high classification accuracy. However, the algorithm cannot ideally balance the size of the selected gene subset and the classification accuracy in all high-dimensional gene expression datasets, and the suggested method has a severe disadvantage in providing sufficient biological interpretations of the genes picked for cancer classification. As a result, more research into the aforementioned issues would be beneficial in building a gene expression data classifier [8].

The researchers of [9] used the IGIS+ algorithm to select genes from ten microarray datasets. They compared the performance of the proposed approach to that of previous gene selection techniques in terms of classification accuracy, number of selected genes, and number of wrapper evaluations needed; however, the results were only compared with KNN, which is known to be a traditional method. In our opinion, the comparison would have been more informative if the test had been done with more modern methods, or if the traditional method had also been hybridized [9]. A research study was also presented that built a modular bioinformatics methodology leveraging publicly available human transcriptomics data to produce a score for each gene, indicating the overall relevance of each gene in representing transcriptional diversity, its correlation with other genes based on expression profiling, and known pathway annotation. If the genes contain some mutations, they may not appear in the publicly available human transcriptomics data used there; it is unclear whether a correction mechanism was used for mutated genes to return them to their original form before working on them [10]. In [11], two novel binary variants of the GOA method were developed and applied to FS problems.
The first method is based on transfer functions, whereas the second strategy uses a unique mechanism that repositions the current solution by considering the position of the best solution found so far. According to the results, discussions, and analyses, the suggested binary GOA (especially BGOA-M) has strengths among current FS algorithms and is worthy of attention for tackling tough FS problems. Three algorithms were compared, but there is another algorithm that we believe would give good results if compared with the proposed one, namely the bat algorithm, as it can also work on binary data [11]. To solve the FS difficulties, an asynchronous binary SSA technique with numerous update criteria was presented in [12]. The statistical results show that the suggested TCSSA3 is superior in dealing with feature-space exploration and exploitation for the vast majority of datasets. According to the discussions and analyses of the results, the idea of asynchronously tuning the major parameter of the SSA, with a distinct leading salp for different areas of the salp chain, was advantageous in mitigating the possible shortcomings of the conventional algorithm. If the optimal number of update rules were also determined with the same algorithm, then the algorithm could be employed in more than one direction [12]. The performance of several feature selection techniques was investigated in [13] using two different datasets. The findings revealed a considerable performance difference across feature selection algorithms when employing datasets with varied numbers of features, with accuracy percentages varying from 10 to 20 %. Furthermore, the benefits of filter feature selection strategies should not be overlooked. In order to forecast student performance, the outcome of feature selection might have been examined through a confusion matrix and, better yet, the number of mixed feature selection algorithms applied to student datasets might have been limited [13].

In our paper, we used the mutual information technique and the binary dragonfly optimization algorithm «MI-BDA» to improve the selection of genes. The «BDA» method and Mutual Information «MI» were used in this study to acquire subsets of features through two main phases: the first is to utilize the MI algorithm to define the characteristics affecting the data classification process by relying on an objective function. The BDA approach is used in the second phase to minimize the number of characteristics found by the MI approach. The proposed algorithm's findings have shown efficiency and efficacy by achieving higher classification accuracy while using fewer features than standard approaches.

The aim and objectives of the study

The aim of the study is to extract the most relevant data from a collection of datasets. Further refinements to the feature selection technique would have a positive impact on the classification process, which can be used in a variety of applications including machine learning, pattern recognition, and signal processing. To achieve this aim, the following objectives are accomplished:

- improving the method of gene selection, suggesting a new improved method based on algorithms for selecting the best genes;

- modeling the gene selection approach in discrete space as a binary-based optimization algorithm, directing BDA and using the accuracy of the k-nearest neighbors classifier on the dataset to verify it in the chosen fitness function, in which the hybrid approach between the binary dragonfly algorithm and the mutual information approach is shown.
Materials and methods

In this section, the methods used in conducting the research are presented and the work of each method is explained, as well as how the two methods are linked and hybridized. Initially, information theory was developed to find fundamental limits on data compression and efficient communication [14]. Entropy is a crucial measure of knowledge in this theory. It has been commonly used in many fields because it is capable of quantifying the uncertainty of random variables and effectively scaling the volume of data shared by them. In order to preserve continuity, we will only address finite random variables with discrete values [15]. Let X be a random variable with discrete values. Entropy H(X) can be used to calculate its uncertainty, which is characterized by:

  H(X) = −Σ_x p(x) log_2 p(x),   (1)

where p(x) = Pr(X = x) is the probability density function of X. Note that the entropy does not depend on the actual values, but rather on the probability distribution of the random variable. Similarly, the joint entropy

  H(X, Y) = −Σ_x Σ_y p(x, y) log_2 p(x, y)   (2)

is symmetric in X and Y, i.e., H(X, Y) = H(Y, X). The reduction of vector uncertainty is referred to as conditional entropy. If the variable Y is defined and the others are known, the conditional entropy H(X/Y) of X with respect to Y is:

  H(X/Y) = −Σ_y p(y) Σ_x p(x/y) log_2 p(x/y),   (3)

where p(x/y) are the probabilities of X given Y. As a result of this description, if X depends entirely on Y, then H(X/Y) is zero. This implies that when Y is understood, no more knowledge is needed to explain X. Otherwise, H(X/Y) = H(X) suggests that knowing Y would do little to observe X. Mutual information I(X; Y) is defined as a quantification of how much information is exchanged by two variables X and Y:

  I(X; Y) = Σ_x Σ_y p(x, y) log_2 [ p(x, y) / (p(x) p(y)) ].   (4)

If X and Y are closely related, the value of I(X; Y) will be very high; otherwise, the value of I(X; Y) will be zero, meaning that the two variables are totally unrelated. It is also possible to rewrite I(X; Y) as I(X; Y) = H(X) − H(X/Y). When Z is known, analogously, the mutual conditional information of X and Y, defined as I(X; Y/Z) = H(X/Z) − H(X/Y, Z), refers to the amount of knowledge that X and Y have in common beyond Z. That is, I(X; Y/Z) means that Y offers knowledge about X that is not already found in Z [14, 15].

The dragonfly optimization algorithm was inspired by dragonflies. It is a swarm intelligence technique for estimating the best (global) solution to a given optimization problem [11, 16]. The mathematical models of dragonfly swarming behavior are depicted as follows [17]. The term «separation» refers to a strategy used by individuals to prevent colliding with their neighbors. This action is built mathematically, as in (5):

  S_i = −Σ_{j=1}^{N} (X − X_j),   (5)

where X is the current position, X_j is the position of the j-th neighbor of X, and N is the neighborhood size. The alignment depicts the velocity of the individuals in relation to other individuals. This action is mathematically constructed as shown in (6):

  A_i = (Σ_{j=1}^{N} V_j) / N,   (6)

where the speed of the j-th neighbor is represented by V_j, and the size of the neighborhood by N. Individuals' propensity to congregate in the neighborhood's mass center is referred to as cohesion. This action is mathematically modeled as in (7) [18]:

  C_i = (Σ_{j=1}^{N} X_j) / N − X,   (7)

where X is the current location, X_j is the j-th neighbor's position, and N is the size of the neighborhood [19]. The model (8) is used to describe the food attraction:

  F_i = X⁺ − X,   (8)

where X⁺ is a food source's location and X is the current individual's location. Distraction from an enemy is modeled as (9):

  E_i = X⁻ + X,   (9)

where X⁻ denotes an enemy's location and X denotes the position of the actual individual.
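Before these behaviors are assembled into the update rule described next, a small self-contained sketch (ours, written in Python; the toy data are illustrative) shows how the quantities in Eqs. (1) and (4) can be computed for discrete variables:

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy H(X) in bits, Eq. (1)."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """Mutual information I(X;Y) in bits, Eq. (4)."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy check: when Y is fully determined by X, I(X;Y) equals H(Y).
x = [0, 0, 1, 1, 2, 2]
y = [a % 2 for a in x]
print(entropy(x), mutual_information(x, y))  # 1.585..., 0.918... = H(Y)
```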
To solve optimization issues, the dragonfly optimization algorithm (DA) uses two simple vectors: the step vector ΔX and the position vector X. The step vector is updated as a weighted combination of the five behaviors above plus an inertia term, and in the binary version the position is flipped with a probability given by a transfer function: X_{t+1} = ¬X_t if r < T(ΔX_{t+1}), and X_{t+1} = X_t otherwise, where r is a random number with r ∈ [0, 1] and T(ΔX_{t+1}) is determined as in (12):

  T(ΔX) = | ΔX / √(ΔX^2 + 1) |.   (12)

The following is pseudocode for the Binary Dragonfly Optimization Algorithm:

  Generate the initial population of DA, X_j & ΔX_j, j = 1, …, N.
  Generate initial values of A, a, and c.
  Find the fitness of each search agent.
  While (t < Max_iter)
    For each DA
      Calculate S, A & C by Eqs. (5) to (7).
      Update E & F by (8) & (9) and the main coefficients.
      Update the step and position vectors.
    End for
    Set t = t + 1
  End while
  Return the best position (the optimal genes).

A Proposed Hybrid Algorithm Overview

The hybrid system MI-BDA uses the mutual information dependency technique as an elementary stage to obtain a collection of genes, in which the genes of a dataset are ranked according to their value for classification accuracy (from highest to lowest). After organizing and defining the genes, the Binary Dragonfly Algorithm (BDA) is used to select a subset of the genes pre-selected by the MI technique. Candidate gene subsets are encoded as a randomly generated vector of binary values (ones and zeros) with the same length as the gene vector: a gene corresponding to a value of one is selected, and a gene corresponding to a value of zero is ignored (Fig. 1. An exemplification of gene encoding in the Binary Dragonfly Algorithm: selected and non-selected genes).

To achieve classification precision, BDA uses the KNN classifier and applies the following fitness function [20, 21]:

  Fitness = w_1 · AC + (1 − w_1) · (1 − |G_q|/|G_p|),

where AC denotes the classification accuracy, G_q denotes the selected feature subset, G_p denotes the entire dataset's feature set, and w_1 denotes the random parameter corresponding to the AC weight. The proposed MI-BDA framework follows the BDA pseudocode above, taking the MI-ranked genes as input and returning the best position, i.e., the optimal genes.

Results of the hybrid algorithm (MI-BDA)

MI-BDA has been applied to three different classification datasets (DLBCL, Prostate, and Ovarian) for verification of the proposed algorithm. All datasets used are binary and were taken from [22, 23].

1. Dataset description and average feature selection

Table 1 shows the datasets used in our research: the three datasets contain (77, 102, 253) samples, and the samples contain (7,129; 12,600; 15,154) features, respectively. After choosing the datasets, we use the suggested method to select genes, as shown in Table 2, which reports the features selected by MI-BDA and by BDA.

2. The experimental effects

Finally, we compare our suggested method (MI-BDA) with the other method in Table 3, which shows the results on the training and testing datasets for the two methods MI-BDA and BDA. Tables 2, 3 show that the hybrid algorithm MI-BDA achieved better classification accuracy and chose fewer features than the BDA algorithm, resulting in a reduction in the cost of the calculations that the algorithm needs during the implementation phase; MI-BDA achieved the preferred result on both the training and testing datasets. The accuracy on the test dataset for dataset 2 is 94.1818 % with MI-BDA, which is higher than 90.1905 % with BDA.

Discussion of the research results of the hybrid algorithm (MI-BDA)

In this paper, researchers used a hybrid algorithm that selects genes in two phases.
The MI method was used in the first stage, producing only a subset of the genes, whereas the BDA method was used in the second stage to minimize the gene set generated in the first stage. As shown in Table 3 for the training dataset, the accuracy of MI-BDA on dataset 1 is 91.619 %, which is greater than BDA's 88.1159 %; for dataset 2 it is 92.8141 % for MI-BDA but 65.4441 % for BDA. The difference between the plain method and our method shows that MI-BDA gives better, more effective results. In the fitness function, the dataset subsets were evaluated using the k-nearest neighbor (KNN) classifier. The proposed hybrid algorithm MI-BDA was compared to BDA, with MI-BDA demonstrating superior classification accuracy and performance across three datasets. A subset of the selected genes was obtained according to the improved algorithm, and the experimental results showed that the proposed algorithm, which we refer to as MI-BDA, outperforms other algorithms in terms of the accuracy of the results, represented in the cost of the calculations and the accuracy of classification. Ideas for developing and implementing a solution strategy, as well as hybrid algorithms that can be used in conjunction with a genetic algorithm and other heuristic analysis methods, can be found in the study topics above.

Conclusions

1. The dataset was separated into an 80 % training group and a 30 % test group from the datasets.

2. An improvement has been made to the gene selection method: the average number of genes obtained by MI-BDA is 8.8, while BDA selected 2,950 out of 7,129 features in dataset 1; this means that our proposed method succeeded in finding the best partial set of features. Our hybrid algorithm «MI-BDA» was compared with «BDA» using three datasets («DLBCL», «Prostate», «Ovarian»), where the algorithm showed high competence in terms of gene selection, and the accuracy on the research datasets achieved by MI-BDA is higher than that of BDA.
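To make the evaluation step concrete, here is a minimal sketch (ours, in Python with scikit-learn; the weight w1 = 0.99, the 5-neighbor setting, and the random toy data are illustrative assumptions, not values reported above) of a KNN-based fitness for a binary gene-selection vector:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y, w1=0.99, k=5):
    """Fitness of a binary selection vector `mask` (1 = gene kept).

    Combines KNN classification accuracy AC with the fraction of
    selected genes |G_q|/|G_p|, mirroring the fitness function above.
    """
    if mask.sum() == 0:            # empty subsets are invalid
        return 0.0
    X_sub = X[:, mask.astype(bool)]
    ac = cross_val_score(KNeighborsClassifier(n_neighbors=k),
                         X_sub, y, cv=5).mean()
    return w1 * ac + (1 - w1) * (1 - mask.sum() / mask.size)

# Usage on random toy data (a real run would use the microarray sets):
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))
y = rng.integers(0, 2, size=60)
mask = rng.integers(0, 2, size=200)
print(fitness(mask, X, y))
```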
Do Cells use Passwords? Do they Encrypt Information?

Living organisms must maintain proper regulation including defense and healing. Life-threatening problems may be caused by pathogens or an organism’s own cells’ deficiency or hyperactivity, in cancer or auto-immunity. Life evolved solutions to these problems that can be conceptualized through the lens of information security, which is a well-developed field in computer science. Here I argue that taking an information security view of cell biology is not merely semantics, but useful to explain features of cell signaling and regulation. It also offers a conduit for cross-fertilization of advanced ideas from computer science, and the potential for biology to inform computer science. First, I consider whether cells use passwords, i.e., precise initiation sequences that are required for subsequent signals to have any effect, by analyzing chromatin regulation and cellular reprogramming. Second, I consider whether cells use the more advanced security feature of encryption. Encryption could benefit cells by making it more difficult for pathogens to hijack cell networks. Because the ‘language’ of cell signaling is unknown, i.e., similar to an alien language detected by SETI, I use information theory to consider the general case of how non-randomness filters can be used to recognize (1) that a data stream encodes a language, rather than noise, and (2) quantitative criteria for whether an unknown language is encrypted. This leads to the result that an unknown language is encrypted if efforts at decryption produce sharp decreases in entropy and increases in mutual information. A fully decrypted language should have minimum entropy and maximum mutual information, the magnitudes of which should scale with language complexity. I demonstrate this with a simple numerical experiment on English-language text encrypted with a basic polyalphabetic cipher. I conclude with unanswered questions for future research.

Cell signaling networks display amplification, memory, modularity, feedforward and other motifs, which are reviewed by Krakauer and colleagues 2, Uda and Kuroda 3, Mousavian and colleagues 4,5, Walterman and Klipp 6, Azeloglu and Iyengar 1, and Antebi and colleagues 7. Cell networks can become dysfunctional through somatic mutation, chemical injury, infection, or other processes that achieve varying degrees of control over the network 8. Here, I begin to consider these processes through the lens of information security, which as far as I can determine is not common. This is notable for its stark contrast to human telecommunications, where cybersecurity is of paramount importance 9. In an elegant and trenchant examination of theoretical biology, Krakauer and colleagues argue "before we can look for patterns, we often need to know what kinds of patterns to look for, which requires some fragments of theory to begin with 10." Therefore, I propose fragments of theory for information security in cells for the community to begin to hunt for patterns and test predictions. By explicitly incorporating information security concepts into thinking about biological systems, several outcomes are possible in general: (1) distinctions without differences: rephrasing familiar concepts of immunity and regulation in terms of information security adds no value; (2) cross-disciplinary fertilization occurs as information security concepts are imported into biological theory; (3) new information security knowledge arises from examination of biological systems.
Recent studies on network controllability provide one framework for examining information security in biochemical networks [11-15]. In this essay, a different perspective is taken to analyze whether cells use passwords and encrypt information.

Immune Systems and biological security

The evolution of immune systems and self-defense against injury and mutation are major innovations in the history of life on earth [16-18]. By total volume, life on earth has its largest habitat in the deep ocean with an abundance of bacteriophages, suggesting that evolution leads to a proliferation of simple life forms, with consciousness as a kind of statistical accident 19. Single-celled and multi-cellular organisms evolved a wide variety of defense systems, often dichotomized into innate and adaptive systems 20. These systems can be conceptualized more generally to include protective mechanisms against both external and internal damage. The connection between external and internal injury is seen in the study of viruses, which led to insights in cancer biology and the discovery of oncogenes 17. Organisms developed the ability to recognize self from non-self and destroy xenobiotic material. However, not all foreign genetic material is completely destroyed, because it can increase fitness, e.g., antibiotic resistance plasmids 20,21. On the intracellular level, bacterial defense mechanisms include blocking receptor binding (surface modification), genome injection (superinfection exclusion), viral replication (restriction modification, CRISPR-Cas, and prokaryotic Argonaute), and abortive infection (programmed cell death) 21. Similar mechanisms exist in eukaryotic cells, including RIG-like receptor proteins that recognize RNA 16, xenophagy 22, advanced intracellular nucleic acid recognition systems and other cell-autonomous mechanisms 23. In plants, sophisticated DICERs defend against retroviruses 24. Similarly, pathogens use a variety of mechanisms to co-opt, hijack, and counteract host defenses [25-28]. Mutations leading to oncogenes reprogram signaling networks 29. All of these attacks and counter-attacks involve changes in signaling and regulatory networks, and therefore, changes in information.

Information security in computer science

Information security has been critically important for millennia, with the Caesar substitution cipher being a prominent early example 30. (The cipher works by shifting each letter of the English alphabet by 3, i.e., A->D, B->E, ..., X->A 30.) Computer viruses achieved notoriety in 1987 when the Brain, Lehigh, and April Fool viruses came to worldwide attention 31. Hackers achieved infamy and also contributed to the advancement of information technology 32. Information security depends on the use of passwords for system access and encryption to alter information so that its meaning is obfuscated 33. Development of secure encryption systems, e.g., RSA asymmetric public key cryptography, was an essential innovation in the history of the internet 33 and must constantly evolve to meet new threats 9. Steganography is an altogether different approach that conceals the existence of information, e.g., writing with invisible ink, and appears to have played a less important role in the history of information technology than cryptography 33. Attacks on encrypted systems can involve interception, modification, fabrication, or interruption of information 33.
There has been considerable work in adapting biomolecules for use in information security in human telecommunications, using biosteganography 34, where the existence of information is concealed, and molecular cryptography, where synthetic biology is used to re-engineer molecules to encode and decode information 35. Despite obvious parallels in the world of computers, less explicit attention appears to have been paid to theoretical descriptions of cells in terms of their native information security systems, prompting me to ask: Do cells use passwords? Do they encrypt information?

Information systems in cells

Individual cells have a variety of sophisticated information systems. They encode information through the genetic code, which utilizes double-stranded complementary base pairing to provide built-in error correction, which is a type of backup or repair security system. At the proteome level, cells can greatly expand on the genetic code with a few hundred different post-translational modifications in various combinations, which give rise to numerous proteoforms 36 that form components of signaling and regulatory networks. Somatic recombination in immunoglobulins and T-cell receptors can vastly increase protein variants in certain cell types 37. Interactions of these macromolecules form networks that store and transmit information 6. There is a context specificity to many signaling pathways, including TGF-beta and AKT, which means that cells respond differently to pathway activation depending on the cell type 38,39. Many intracellular signaling pathways do not match one receptor to a single ligand, but instead use multiple receptors and ligands that interact combinatorially 40, or use combinations of numerous nuclear-receptor cofactors to regulate activity 41. Therefore, genetic, epigenetic, transcriptomic and proteomic variation gives rise to a large repertoire of interacting components. These mechanisms are present in complex multicellular organisms, where advanced regulation is needed to control differentiation 42, and also in bacteria for quorum sensing 2. Cancer has been shown to involve rewiring of cellular networks by oncogenes, and therefore, in some sense, these represent alterations in information transmission and compromised security 29,43. Cells can be reprogrammed through microRNAs and gene regulatory networks in cancer to oncogenic states with distinct metabolism 44. Similarly, viruses can substantially rewire signaling and regulatory networks to hijack cellular machinery for viral benefit 45. In the early days of cancer research, similarities between the two systems caused the scientific community to think that viruses cause cancer, and studies into viral biology provided insights into cancer 17,46. Both pathogenic and pathological processes involve hijacking cellular networks. In multicellular organisms, combinations of histone modifications give rise to varying chromosomal accessibility and epigenetic states, which are read, written, and erased by chromatin modifiers 47,48. This epigenetic regulation is capable of encoding memory at the single-cell level 49. Redundancy and correlation among epigenetic marks, transcription factors, and coregulators provide a system of information compression for specifying cell state 50. For example, ligand identity can be encoded as pulsatile (DLL1-Notch1) or sustained (DLL4-Notch1) to induce opposite cell fates.
In the adult human body, several hundred distinct cell types exist in "cell states", some of which can be dynamically reprogrammed from one state to the next using sophisticated perturbations [51-53]. The language used to describe these cellular properties (code, encode, read, write, memory, erase, reprogram, compression, rewire) points to their aspects as information systems.

Do cells use passwords?

Password authorization systems allow access based upon entry of a correct code out of many possible entries. They can be viewed conceptually as an initiation sequence of signals without which the system will not respond to subsequent signals. Typically, passwords function as a logical AND operation, i.e., each character must be entered correctly to allow system access. However, a logical AND gate is not strictly required. For example, a bouncer at a nightclub may listen for the password "more cheese" but accept partial matches, such as "more these" or "Moishe's". I consider whether there is evidence for the existence of passwords, i.e., an initiation sequence of signals without which the system will not respond to subsequent signals, using the example of transcription factor-chromatin accessibility. Organization of chromatin into highly compact, inaccessible regions and open, accessible regions appears on its face to be a form of cellular information security, because some genes respond to signals only after a precise initiation sequence opens their chromatin, as exploited in cellular reprogramming.

Do cells encrypt information?

If cell signaling networks use encryption, how might we know? Put another way, if we do not know the underlying language, i.e., the unencrypted information, how can we recognize encrypted information? To explore this question, several concepts from information theory are useful. The Shannon entropy is defined as 55:

  H(X) = −Σ_x p(x) log_2 p(x),   (1)

where H is the entropy in bits, defined as the expected information of a distribution of random variables X. The entropy can be thought of as how predictable the next character in a transmitted message is. A message that is purely random characters, and therefore not meaningful language, will have the highest entropy 55. Considering only the 26 letters in the English alphabet, the maximum entropy is log_2(26) = 4.7 bits. Shannon analyzed words of size N up to 8 letters and found the entropy of the English language to be roughly 2.3 bits per letter, a 50% reduction over random 56. The English alphabet could eliminate the letter c, replacing it with either k or s, without any meaningful effect. Moreover, English text can be re-coded and stored in smaller file sizes without loss of information (lossless compression) using sophisticated algorithms 55. Entropy provides a limit on lossless compression 55. A related concept to entropy is Zipf's law, which states that a word's probability is inversely proportional to its rank,

  p(r) ∝ 1/r,   (2)

and has been found in English language phrases, and also other fields, e.g., city sizes, firm sizes, and neural activity 57. A large number of explanations has been proposed for why Zipf's law exists, which are reviewed by Piantadosi 58. Purely random texts do not follow Zipf's law 59. Salge and colleagues found that Zipf's law emerges through minimization of communication inefficiency and direct signal cost 60. Williams and colleagues found that Zipf's law held more generally for phrases in English than words, which is intriguing because phrases are "the most coherent units of meaning in language 61." Language has additional structure that can be captured through analysis of pairwise and higher-order interactions 62.
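A small sketch (ours, in Python; the sample string is illustrative, and Shannon's ~2.3 bits/letter figure requires much longer English text plus higher-order statistics) of the per-letter entropy estimate behind these numbers:

```python
import math
from collections import Counter

def letter_entropy(text):
    """Per-letter Shannon entropy (Eq. 1) over the 26-letter alphabet."""
    letters = [c for c in text.lower() if c.isalpha()]
    n = len(letters)
    return -sum((c / n) * math.log2(c / n) for c in Counter(letters).values())

print(letter_entropy("the quick brown fox jumps over the lazy dog"))
# Uniform random letters would give log2(26) = 4.7 bits; English is lower.
```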
One measure of association is mutual information 6. It can be defined between two sets of variables X and Y, e.g., adjacent letters in the English alphabet, as

  I(X; Y) = H(X) + H(Y) − H(X, Y),   (3)

where H(X, Y) is the joint entropy between X and Y, which is defined as

  H(X, Y) = −Σ_x Σ_y p(x, y) log_2 p(x, y).   (4)

When X and Y are statistically dependent, the joint entropy H(X, Y) is lowest and the mutual information is maximized. Doyle and colleagues describe the search for extraterrestrial intelligence (SETI) as fundamentally applying Zipf's law and higher-order information-entropic filters to received sources of electromagnetic radiation 63. Cell signaling and gene expression have been shown to pass both of these non-randomness filters 6,64. These non-random filters can also be applied to any sort of data stream to check if it is non-random. If a simple substitution cipher is applied to an unknown language, the frequency distributions of letters, words, and phrases do not change, and therefore, given enough text, it would be recognizable as language, although perhaps untranslatable. For a more complex cipher, e.g., a polyalphabetic cipher, the entropy will increase and frequency distributions will deviate from Zipf's law. In other words, if SETI receives a long stream of an alien communication that is encrypted by relatively simple methods, its non-randomness filters should recognize it as a language. If the alien language is encrypted with a polyalphabetic cipher, which was subsequently decrypted, the plaintext would have lower but non-trivial entropy. A quantitative test for whether a text is encrypted is whether there is a decryption, such that:

  ∃ d ∈ D : H(E_d) = min_{d′∈D} H(E_{d′}) and MI(E_d) = max_{d′∈D} MI(E_{d′}),   (5)

where d is a decryption out of the set of all possible decryptions D, E is the decrypted plaintext, MI is the mutual information in the decrypted plaintext, e.g., the mutual information in adjacent letters, and H is the entropy of the decrypted plaintext, e.g., per letter. In other words, a signal stream is encrypted if a decryption can be found such that the entropy is minimized and the mutual information is maximized.

To demonstrate this, I provide a simple numerical example. The text of Jane Austen's novel Pride and Prejudice was downloaded from the Gutenberg project 65, processed and cleaned of special characters in the R programming language using the textclean package 66, and encrypted with a simple polyalphabetic substitution cipher of 0,+1,+2. Figure 1A shows the frequency distributions of adjacent letters in the plaintext. Figure 1B shows how the frequency distributions of adjacent letters in the encrypted text result in an increase in entropy. The Entropy R package was used to compute entropy per letter and mutual information for adjacent letters 67. Figure 1C shows how applying varying levels of decryption using several different methods results in changing entropy per letter and mutual information of adjacent letters. As the text is decrypted more completely, the entropy per letter decreases and the mutual information per pair of adjacent letters increases. Complete decryption produces a maximum of this mutual information and a minimum of entropy. Therefore, we can begin to look for patterns that may involve encryption in the very rich data of cell signaling by applying this quantitative criterion.

Conclusions and open questions

Evolutionary potential is vast, and a complex interplay among environmental change, ecosystems, speciation, niche diversification, extinctions, and innovation has shaped life on earth 68,69.
Considering how rapidly passwords and encryption evolved in human telecommunications, it is natural to ask whether they are used in nature by cells. This theoretical exploration suggests that cells may use passwords to lock in cell state, which must be unlocked through the right initiation sequence of signals.
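The numerical experiment of Fig. 1 was run in R (textclean, Entropy packages); the following sketch (ours, in Python; the local file path is hypothetical) reproduces the idea on any text: apply the 0,+1,+2 polyalphabetic shift, then compare per-letter entropy and adjacent-letter mutual information before and after:

```python
import math
from collections import Counter

def poly_encrypt(text, shifts=(0, 1, 2)):
    """Polyalphabetic shift cipher over a-z; non-letters are dropped."""
    letters = [c for c in text.lower() if c.isalpha()]
    return "".join(chr((ord(c) - 97 + shifts[i % len(shifts)]) % 26 + 97)
                   for i, c in enumerate(letters))

def H(seq):
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

def MI_adjacent(s):
    """Mutual information between adjacent letters: H(X)+H(Y)-H(X,Y)."""
    pairs = list(zip(s, s[1:]))
    return H(s[:-1]) + H(s[1:]) - H(pairs)

plain = open("pride_and_prejudice.txt").read()   # hypothetical local copy
normalized = poly_encrypt(plain, (0, 0, 0))      # cleaned plaintext letters
cipher = poly_encrypt(plain)
for label, s in [("plain", normalized), ("cipher", cipher)]:
    print(label, round(H(s), 3), round(MI_adjacent(s), 3))
# Encryption raises per-letter entropy and lowers adjacent-letter MI;
# the correct decryption restores the minimum-H / maximum-MI point, Eq. (5).
```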
Bayesian Modeling for Differential Cryptanalysis of Block Ciphers: A DES Instance

Encryption algorithms based on block ciphers are among the most widely adopted solutions for providing information security. Over the years, a variety of methods have been proposed to evaluate the robustness of these algorithms to different types of security attacks. One of the most effective analysis techniques is differential cryptanalysis, whose aim is to study how variations in the input propagate on the output. In this work we address the modeling of differential attacks to block cipher algorithms by defining a Bayesian framework that allows a probabilistic estimation of the secret key. In order to prove the validity of the proposed approach, we present as a case study a differential attack to the Data Encryption Standard (DES) which, despite being one of the methods that has been most thoroughly analyzed, is still of great interest to the scientific community since its vulnerabilities may have implications on other ciphers.

I. INTRODUCTION

Among the many different encryption methods adopted by modern systems, algorithms operating on fixed-length blocks of bits are still among the most popular. The strength of these methods is constantly being studied by means of approaches that aim to assess their robustness to specific attacks, or the presence of vulnerabilities to generic threats. In this context, differential cryptanalysis is one of the most effective and relevant approaches. The idea at the basis of differential cryptanalysis is to evaluate how any change in the plaintext impacts the ciphertext. Then, the results of the analysis can be used to estimate the set of the most probable keys. In this paper we present a Bayesian framework for modelling differential attacks to block cipher algorithms; in particular, given the importance of the Data Encryption Standard (DES) in the design of many block cipher algorithms, a case study focused on the cryptanalysis of DES is addressed. The Data Encryption Standard (DES) [1] was the first symmetric cipher heavily adopted all over the world, and it was the most used cipher up to the beginning of the 2000s. Deep analyses of DES led to the definition of several cryptanalysis techniques, and many results achieved for DES are also valid for the wider class of block ciphers. Today, the limited size of the secret key adopted by DES (56 bits) and the computational power of modern computers entail that DES is not considered secure for ciphering sensitive data. Nevertheless, DES is still widely adopted in various scenarios, such as those characterized by low security requirements, when resource-constrained devices are required to implement security mechanisms, or when huge amounts of data have to be protected. The authors of [2], for instance, propose the adoption of DES to ensure privacy in a graduate project management system. Similarly, the need to protect a large amount of data while keeping the computational costs low moved the authors of [3] to choose DES for data encryption in an ERP. DES is often exploited to protect data exchanged between Internet of Things devices, which are characterized by severe resource requirements [4], [5], [6]. DES could also be employed as a tool for providing companies with proper data protection policies that represent a fair trade-off between security goals and computational costs [7].
Moreover, DES is a building block of Triple DES [8], [9], a solution adopted to overcome the limitations imposed by the DES key size. Thus, it is interesting to investigate the vulnerabilities of DES also for their possible implications for other block ciphers. Several works in the scientific literature have identified and analyzed some of the main vulnerabilities of DES through the definition of new cryptanalysis techniques. One of these approaches is differential cryptanalysis [10], a chosen-plaintext attack designed for iterated cryptosystems, which analyzes how the difference between two plaintexts propagates into the resulting ciphertexts when the same key is used. Differential cryptanalysis focuses on the S-Box, the unique non-linear component of DES, and reduces the computational cost compared with an exhaustive key search. Differential cryptanalysis has also been adopted to perform attacks on other symmetric ciphers, such as AES [11]. Moreover, several machine learning approaches have been adopted in recent years to improve differential cryptanalysis, or to provide a new perspective on it. The authors of [12], for instance, proposed the adoption of neural networks to attack DES, and evaluated the performance using different network structures. In [13], several metaheuristics, such as genetic algorithms and simulated annealing, are exploited to formulate a differential attack on DES. The experiments performed on a DES reduced to six rounds demonstrate the suitability of the approach. The authors of [14] and [15] relied on deep neural networks to design a differential distinguisher to attack different block ciphers based on the Feistel network. We propose an original formalization of differential cryptanalysis based on the adoption of Bayesian Networks (BN), a probabilistic graphical modeling framework that uses Bayesian inference to perform probability computations. We aim to describe the statistical behavior of S-Boxes when a pair of plaintexts with a given difference is provided for ciphering. The diagnostic inference enabled by BNs allows a probabilistic estimation of the secret key by considering the difference between plaintexts and the difference between the corresponding ciphertexts. Such a formalization, preliminarily described in [16], eases the definition of an algorithm, based on differential cryptanalysis, for attacking DES. The paper is organized as follows. In Section II, a brief description of DES is provided, in order to introduce the adopted notation. Section III describes some related works presented in the literature. In Section IV, the original formulation of differential cryptanalysis is introduced. Section V describes the proposed Bayesian model of DES differential cryptanalysis, and Section VI reports a performance evaluation. Finally, Section VII states our conclusions. II. THE ADOPTED DES NOTATION DES is a symmetric cipher which transforms a 64-bit plaintext P into a 64-bit ciphertext T. The mapping is parameterized by a 64-bit key, effectively reduced to 56 bits because 8 bits are used for parity. It is an iterated cipher based on the Feistel scheme, which processes the plaintext through a series of transformations named rounds, as shown in Fig. 1. The encryption process consists of 16 rounds, which are preceded by an initial permutation and followed by the corresponding inverse permutation.
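Before detailing the round structure, the iterated Feistel scheme just described can be sketched in a few lines of Python. This is a minimal, self-contained toy: toy_f and the subkeys below are placeholders of my own, not the real DES tables or key schedule, and only the swap/XOR structure is meant to be faithful.

# Minimal sketch of a 16-round Feistel network: each round swaps the halves
# and XORs the left half with F(right, subkey). Decryption is the same
# network with the subkeys reversed -- F need not be invertible.
def feistel_encrypt(block64: int, subkeys: list, f) -> int:
    left = (block64 >> 32) & 0xFFFFFFFF
    right = block64 & 0xFFFFFFFF
    for sk in subkeys:                       # one iteration per round
        left, right = right, left ^ f(right, sk)
    return (right << 32) | left              # final swap undone, as usual

def feistel_decrypt(block64: int, subkeys: list, f) -> int:
    return feistel_encrypt(block64, list(reversed(subkeys)), f)

def toy_f(z: int, subkey: int) -> int:
    # Stand-in for F(Z) = P(S(E(Z) xor S_K)): an arbitrary keyed mixer.
    x = (z ^ subkey) & 0xFFFFFFFF
    return ((x * x) | 1) % 0xFFFFFFFB

subkeys = [(0x9E3779B9 * (i + 1)) & 0xFFFFFFFF for i in range(16)]
c = feistel_encrypt(0x0123456789ABCDEF, subkeys, toy_f)
assert feistel_decrypt(c, subkeys, toy_f) == 0x0123456789ABCDEF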
Each round is parameterized by a 48-bit subkey S_KX ∈ Z_2^48, which depends on the round index X and on the initial key K. At each round X, the 64-bit input is divided into two parts, left and right, which are processed separately. The right part becomes the left part of the next round without any further processing. Both halves are processed according to the Feistel scheme in order to produce the right part of the next round, as shown in Fig. 2. In particular, for each round X = 2, . . . , 16, the following equations hold: L_X = R_(X−1) and R_X = L_(X−1) ⊕ F(R_(X−1), S_K(X−1)), where F, named the Feistel function, determines the non-linear behavior of DES. The Feistel function implemented by DES (see Fig. 3) is defined as F(Z_X, S_KX) = P(S(E(Z_X) ⊕ S_KX)), where E, S and P represent, respectively, the expansion function, the substitution performed by the S-Box and the permutation function. Since the only non-linear component of the DES F-function is the S-Box, it constitutes the main contributor to DES security. One of the design properties of the S-Box is that the probability of producing any given output is uniformly distributed. Nevertheless, the authors of [10] have shown that, given two different inputs to an S-Box characterized by a known difference, the probability distribution of the difference between the corresponding outputs is not uniform. Differential cryptanalysis [10] exploits this vulnerability in order to reduce the computational effort required to determine the secret key, and the same idea underlies the approach described in the present work. III. RELATED WORK As previously mentioned, a deeper comprehension of S-Box behavior could make the whole cipher more vulnerable. Many works in the literature analyze properties of S-Boxes in order to find DES vulnerabilities and to define design criteria for strong block ciphers. The authors of [17] analyze properties of S-Boxes with respect to the statistical distributions of the produced output and the statistical dependence of output bits given the knowledge of one or more input bits. In [18], some general criteria for designing S-Boxes are discussed. The authors analyzed both static and dynamic properties. Static properties impose that partial information about input and output does not reduce the uncertainty of the unknown input or output, and guarantee maximum output uncertainty. Dynamic properties impose that partial information about changes in input and output does not reduce the uncertainty of unknown inputs or outputs. The authors stated that the uncertainty should not be reduced when the attacker has information about the past history of S-Box processing. Other studies indicate that the latest approaches [19], [20], [21], also known as strong S-Boxes, are vulnerable due to the presence of fixed points or reverse fixed points, which can be an exploitable weakness in cryptography. The authors of [22], for example, address the exploitable weakness of fixed points and reverse fixed points contained in many S-Boxes, and design an S-Box construction algorithm based on ICQM that eliminates the weakness through backtracking. On the basis of the properties discussed so far, many cryptanalysis methods have been proposed in the literature to break S-Boxes. An algebraic approach is proposed in [23], which defines criteria to determine the set of non-linear algebraic constraints describing the input/output relationship of the S-Boxes. Exploiting this set of constraints, the whole cipher is described as a system of multivariate non-linear equations, which can be solved through the algorithm proposed in [24].
It should be noted that the equations representing the S-Boxes are exact, i.e. not approximated. On the contrary, the author of [25] proposed a linear approximation of the S-Boxes and of DES, which holds with some probability. This method is an example of a stochastic attack. Instead of focusing on the behavior of a single S-Box, the authors of [26] focus on the probabilistic behavior of pairs of adjacent S-Boxes. They found that the input bits of two adjacent S-Boxes are strictly related through some bits of the key, due to the expansion phase. Thus, the probability distribution of the output of these two adjacent S-Boxes, conditioned on the key bits, is not uniform. On the basis of this vulnerability, the authors proposed an attack with a computational complexity comparable to that of an exhaustive key search. The authors of [10], who proposed differential cryptanalysis, studied how input differences affect the resulting output differences. Their attack traces differences through the transformations, discovers where the cipher exhibits non-random behavior, and exploits such properties to recover the secret key. Another interesting work discussing differential cryptanalysis is presented in [27]. Here, the authors study the propagation of differences from round to round to find specific differences which propagate with relatively high probability. The cryptanalysis technique is applied to DES reduced to i rounds, with i ∈ [3, 8]; for each reduction, wrong and right pairs are distinguished in order to obtain the relevant key bits and retrieve the secret key. IV. ORIGINAL FORMULATION OF DIFFERENTIAL CRYPTANALYSIS The vulnerability at the basis of differential cryptanalysis [10] originates from the non-uniform distribution of the difference between two outputs, given the difference between two inputs, for different keys. Nowadays, it is a technique adopted to breach many reduced-round block ciphers, such as SPECK [28], LEA [29], GIFT [30], and Midori64 [31]. This section summarizes the original formulation of the differential attack, with the notation shown in Fig. 4. Let S_EX and S*_EX be two outputs from the expansion function at round X, S_IX and S*_IX the corresponding two inputs to the S-Box S(·), and S_OX = S(S_IX) and S*_OX = S(S*_IX) the resulting outputs from the S-Box. The differences between S-Box inputs and outputs are obtained through the bitwise XOR and are indicated as follows: S'_IX = S_IX ⊕ S*_IX and S'_OX = S_OX ⊕ S*_OX. The vulnerability exploited by the differential attack is that the probability distribution of the difference between two outputs, conditioned on the difference between the two corresponding inputs, i.e., p(S'_OX | S'_IX), is not uniform. This characteristic makes S-Boxes weak from a dynamic point of view, according to the analysis proposed in [18]. Let us consider N pairs of outputs from the expansion function characterized by the same difference. As shown in Fig. 4, the relationship between each pair of outputs from the expansion function, the subkey, and the S-Box inputs is expressed by the following equations: S_IX = S_EX ⊕ S_KX and S*_IX = S*_EX ⊕ S_KX. Consequently, the difference between S_IX and S*_IX is equal to the difference between S_EX and S*_EX: S'_IX = S_IX ⊕ S*_IX = S_EX ⊕ S*_EX. Thus, given knowledge of the expanded pair (S_EX, S*_EX), the difference between the S-Box inputs, S'_IX, is also known, even without knowing their individual values. This knowledge alone does not allow one to foresee the difference between the S-Box outputs.
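As a quick numerical sanity check of this cancellation (toy 6-bit values of my own, not real DES data), note that the subkey drops out of the input difference for every possible key:

import random

# S'_IX = S_IX xor S*_IX = S_EX xor S*_EX: the unknown subkey cancels.
s_ex, s_star_ex = 0b101100, 0b001010        # two expansion-function outputs
for _ in range(5):
    s_kx = random.randrange(64)             # any 6-bit subkey block
    s_ix, s_star_ix = s_ex ^ s_kx, s_star_ex ^ s_kx
    assert s_ix ^ s_star_ix == s_ex ^ s_star_ex
print(f"input difference = {s_ex ^ s_star_ex:06b} for every key")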
Due to the non-linear behavior of the S-Boxes, it is not obvious that two input pairs with the same difference produce the same output difference; on the contrary, many values of the output difference are possible. The critical point is that only some output differences are possible starting from a given input difference, and the probability distribution of these values is not uniform. For each pair (S_EX, S*_EX), it is possible to observe the corresponding output pair (S_OX, S*_OX), and to compute the differences between inputs and between outputs, i.e., S'_IX and S'_OX, according to Eqs. 3 and 4. Moreover, it is possible to select the set of possible keys which can produce the observed differences, by exploiting the equation S_KX = S_EX ⊕ S_IX. Thus, each pair (S_EX, S*_EX) produces a set of candidate keys, and the true secret key belongs to the intersection of these sets. Consequently, it is necessary to repeat this evaluation until the intersection is a singleton. The logic behind the differential cryptanalysis attack can thus be summarized by a simple loop: collect chosen-plaintext pairs with a fixed input difference; for each pair, compute the set of candidate keys consistent with the observed output difference; and intersect these sets until a single key remains. V. BAYESIAN NETWORK MODELS Bayesian networks (BN) [32] are a graph-based formalism capable of expressing probabilistic cause/effect relationships between random variables. This framework is adopted in machine learning for performing probabilistic inference. In this work, we model through BNs the statistical dependence, driven by the secret key, between input differences and output differences, as found in [10], and we exploit it to determine the secret key. In the graphical model adopted by BNs, nodes represent random variables and directed links represent the cause/effect dependence between two nodes. BNs make it possible to represent the joint probability distribution of several variables through a set of conditional probability distributions, each associated with a link, and a set of a priori probability distributions for nodes without antecedents. In this section, we present the BNs which model a single S-Box, the Feistel function and the whole DES; we then present the algorithms for attacking these elements through exact inference and analyze their computational complexity. We will show that exact inference for attacking the whole DES has a high computational cost, and consequently we will propose an algorithm based on approximate inference. A. SINGLE S-BOX ATTACK For the construction of the BN for attacking the S-Box, the original notation reported in [10] is adopted. It is useful to recall that the S-Box consists of a set of eight S-Boxes, indicated as Si(·) with i = 1, . . . , 8, each of which accepts 6 bits as input and produces 4 bits as output. The input to the S-Box layer can therefore be considered as divided into eight 6-bit blocks. According to the adopted notation, (Si_EX, Si*_EX) indicate the two i-th 6-bit blocks of two different outputs from the expansion function, Si_KX indicates the i-th 6-bit block of the subkey, (Si_IX, Si*_IX) represent the two inputs to the i-th S-Box Si(·), Si'_IX represents the difference between the two inputs to the i-th S-Box, and finally Si'_OX indicates the difference between the two 4-bit outputs from the i-th S-Box. Probabilistic inference exploits the known values of some random variables, named the evidence, and infers the probability distribution of a set of unknown random variables, named the target nodes. Since differential cryptanalysis is a chosen-plaintext attack, i.e.,
a circumstance where the adversary is able to trigger the encryption of arbitrary messages and to observe the corresponding plaintext–ciphertext pairs, the set (Si_EX, Si*_EX, Si'_IX, Si'_OX) constitutes the evidence and the key blocks Si_KX represent the target nodes. In order to build the BN we complied with the following assumptions: • The Si_KX, Si_EX and Si*_EX variables are not influenced by other random variables; thus they are represented as nodes without antecedents, and their a priori probability distribution is considered uniform. • The input to the i-th S-Box, Si_IX, depends only on Si_EX and Si_KX, according to Eq. 4, which are therefore the sole parents of the Si_IX node (analogously for Si*_IX). • The output difference Si'_OX depends only on the two S-Box inputs, according to Eq. 3; thus the Si_IX and Si*_IX nodes are the only parents of the Si'_OX node. The resulting BN, named SBox-BN, is shown in Fig. 5. The full definition of the BN requires the formalization of (i) the a priori probability distributions for nodes without parents and (ii) the conditional probability distributions for the other nodes. Let δ_n(X) denote the Kronecker delta applied to an n-bit string, taking value one if and only if all bits of its argument are equal to zero. Then, the probability distributions of the SBox-BN are expressed as follows: • p(Si_KX = si_kx) = p(Si_EX = si_ex) = p(Si*_EX = si*_ex) = 1/2^6, ∀ si_kx, si_ex, si*_ex ∈ Z_2^6, because of the hypothesis of uniform distribution; • p(Si_IX = si_ix | Si_EX = si_ex, Si_KX = si_kx) = δ_6(si_ix ⊕ si_ex ⊕ si_kx), ∀ si_ix, si_ex, si_kx ∈ Z_2^6 (analogously for Si*_IX), because of Eq. 4; • p(Si'_OX = si'_ox | Si_IX = si_ix, Si*_IX = si*_ix) = δ_4(si'_ox ⊕ Si(si_ix) ⊕ Si(si*_ix)), ∀ si'_ox ∈ Z_2^4 and si_ix, si*_ix ∈ Z_2^6, because of Eq. 3. The flow of the probability distributions through the Bayesian Network depicted in Fig. 5 is summarized in Fig. 6, where the plots show the most significant distributions within the SBox-BN. The probabilities of all nodes at level 0, e.g., Si_EX, are uniformly distributed (see the plot in the upper left corner); that is, all outcomes are equally likely, with probability 1/2^6. The two nodes at level 1, as well as their child Si'_IX, are characterized by a distribution in which the probability of most configurations is zero, while the remaining possible hypotheses have constant probability values. The 3D plots in Fig. 6 represent this probability; the axes refer to the variables involved in the probability distribution equation, while the points indicate where the probability assumes non-zero values. By observing the plot for Si'_IX, it can be noticed that only a subset of keys, characterized by an extremely regular pattern, is retained over all possible combinations. The bottom-right plot shows the probability distribution of the node Si'_OX, which is characterized by the lack of a regular pattern because of the non-linearity introduced by the S-Box.
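The two ingredients discussed so far — the non-uniform distribution of output differences and the intersection of candidate key sets — can be sketched end to end on a toy scale. The sketch below uses a single 4-bit substitution table (the first row of DES's S1, treated here as a self-contained 4-to-4 toy; the real DES S-Boxes map 6 bits to 4), and the helper names are mine, not those of Algorithm 1:

from collections import Counter
import random

SBOX = [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7]

# Part 1: for a fixed input difference, the output differences are far from
# uniform -- the irregularity that the bottom-right plot of Fig. 6 reflects.
d_in = 0b0011
print(Counter(SBOX[x] ^ SBOX[x ^ d_in] for x in range(16)))

# Part 2: candidate-key filtering. Each evidence (s_ex, s*_ex, s'_ox) keeps
# only the keys consistent with the observed output difference; intersecting
# the surviving sets over many evidences isolates the secret key block.
def candidate_keys(s_ex, s_star_ex, observed_out_diff):
    return {k for k in range(16)
            if SBOX[s_ex ^ k] ^ SBOX[s_star_ex ^ k] == observed_out_diff}

secret = 0b1010
survivors = set(range(16))
for _ in range(100):                         # plenty of chosen pairs for a toy
    if len(survivors) == 1:
        break
    s_ex, s_star_ex = random.randrange(16), random.randrange(16)
    out_diff = SBOX[s_ex ^ secret] ^ SBOX[s_star_ex ^ secret]
    survivors &= candidate_keys(s_ex, s_star_ex, out_diff)
print("surviving key blocks:", sorted(survivors))   # typically just the secret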
Under such a BN model, given the two outputs si_ex and si*_ex from the expansion function and the corresponding output difference si'_ox from the S-Box at round X, the most probable secret key corresponds to the greatest conditional probability among the keys that produce si'_ix = si_ex ⊕ si*_ex as input difference and si'_ox as output difference, as follows: ŝi_kx = argmax_{si_kx} p(Si_KX = si_kx | Si_EX = si_ex, Si*_EX = si*_ex, Si'_OX = si'_ox). (7) By applying the rules for manipulating probability expressions in BNs, it is possible to obtain the explicit formulation of this conditional probability: p(Si_KX = si_kx | si_ex, si*_ex, si'_ox) = η_1 Σ_{(si_ix, si*_ix) ∈ σ_1} δ_4(si'_ox ⊕ Si(si_ix) ⊕ Si(si*_ix)), (8) where η_1 is a normalization factor which makes the sum of all terms of the probability distribution equal to 1, and σ_1 is the set of all (Si_IX, Si*_IX) pairs obtained through the XOR of the possible secret key with the given input evidence: σ_1 = {(si_ex ⊕ si_kx, si*_ex ⊕ si_kx)}. (9) It is worth noting that, since Si_IX and Si*_IX are restricted to a single value, the sum in Eq. 8 reduces to a single term, as expressed by the following equation: p(Si_KX = si_kx | si_ex, si*_ex, si'_ox) = η_1 δ_4(si'_ox ⊕ Si(si_ex ⊕ si_kx) ⊕ Si(si*_ex ⊕ si_kx)). (10) For the sake of brevity, we omit the detailed proof, which can nevertheless be found in [33]. In order to narrow down the set of possible keys, it is possible to evaluate a non-normalized version of Eq. 10, by ignoring the normalizing factor η_1. Indeed, the probability distributions describing the SBox-BN are expressed through the Kronecker delta; thus, Eq. 10 can take only two values: 0 for all keys that have been excluded, and a constant value η_1 for all keys that are still possible. Such a value can be determined by imposing that the sum of all the residual probabilities is equal to 1. However, since the purpose of Eq. 10 is merely to identify the residual set of keys, computing a specific value for η_1 is irrelevant. Algorithm 1 (prob_key_SBox_attack) computes the probability that a key block is correct by attacking the i-th S-Box. Its data are i, the index of the selected S-Box, and Υ_i, a set of multiple evidences of the form (si_ex, si*_ex, si'_ix, si'_ox); its result is p, an array of 2^6 values representing the non-normalized probability distribution over the set of possible key blocks. The sets of possible keys obtained by attacking an S-Box with two different evidence sets may differ. Since the true secret key belongs to each of these sets, their intersection is never empty. With a sufficient quantity of data, by performing multiple attacks with different evidences, the repeated intersection of the obtained key sets produces the singleton containing only the secret key. The assumption of the independence of the evidence sets allows the probability distribution of the secret key conditioned on all the evidence sets to be expressed as the product of the probabilities conditioned on each single evidence set: p(Si_KX = si_kx | Υ_i) = η_2 Π_{υ ∈ Υ_i} p(Si_KX = si_kx | υ), (11) where η_2 is a normalization factor and Υ_i is the set of multiple evidences. In order to find the most probable key, it is possible to evaluate the non-normalized versions of Eq. 11 and Eq. 8, as described by Algorithm 1. B. FEISTEL FUNCTION ATTACK The same approach of the previous section can be generalized in order to attack a single instance of the Feistel function, by analyzing its output (named Y_X in Fig. 3). Under the hypothesis of a chosen-plaintext attack, it is possible to select a pair of inputs to the Feistel function, Z_X, Z*_X ∈ Z_2^32, and then observe the difference between the corresponding outputs, Y'_X, obtained according to the following equation: Y'_X = P(S(E(Z_X) ⊕ S_KX)) ⊕ P(S(E(Z*_X) ⊕ S_KX)), where P, S and E are, respectively, the permutation function, the substitution performed by the S-Box and the expansion function.
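As a toy illustration of this observable quantity (4-bit stand-ins of my own for E, S and P, not the real DES tables), Y'_X can be computed for a chosen input pair without knowing the key separately for each text:

SBOX4 = [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7]

def E(z: int) -> int:            # toy expansion: identity on 4 bits
    return z & 0xF

def P(y: int) -> int:            # toy permutation: reverse the 4 bits
    return int(f"{y:04b}"[::-1], 2)

def F(z: int, key: int) -> int:  # F(Z) = P(S(E(Z) xor key))
    return P(SBOX4[E(z) ^ key])

K = 0b0110                       # unknown to the attacker
Z, Z_star = 0b1011, 0b0010       # chosen inputs to the Feistel function
print(f"observed output difference Y'_X = {F(Z, K) ^ F(Z_star, K):04b}")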
By observing that S_IX = E(Z_X) ⊕ S_KX and S*_IX = E(Z*_X) ⊕ S_KX, and by exploiting the linearity of the permutation function, it is possible to obtain the following system of equations (Eq. 14): Z'_X = Z_X ⊕ Z*_X; S_IX = E(Z_X) ⊕ S_KX; S*_IX = E(Z*_X) ⊕ S_KX; Y'_X = P(S(S_IX) ⊕ S(S*_IX)). Each variable in this system can be considered as a random variable, and their relationships can be represented through the BN shown in Fig. 7, named Feistel-BN. Its probability distributions are expressed as follows: • p(Z_X = z_x) = p(Z*_X = z*_x) = 1/2^32, ∀ z_x, z*_x ∈ Z_2^32, because of the hypothesis of uniform distribution; • p(S_KX = s_kx) = 1/2^48, ∀ s_kx ∈ Z_2^48, because of the hypothesis of uniform distribution; • p(Z'_X = z'_x | Z_X = z_x, Z*_X = z*_x) = δ_32(z'_x ⊕ z_x ⊕ z*_x), ∀ z_x, z*_x, z'_x ∈ Z_2^32, because of the first part of Eq. 14; • p(S_IX = s_ix | Z_X = z_x, S_KX = s_kx) = δ_48(s_ix ⊕ E(z_x) ⊕ s_kx), ∀ z_x ∈ Z_2^32 and ∀ s_ix, s_kx ∈ Z_2^48, because of the second part of Eq. 14; • p(S*_IX = s*_ix | Z*_X = z*_x, S_KX = s_kx) = δ_48(s*_ix ⊕ E(z*_x) ⊕ s_kx), ∀ z*_x ∈ Z_2^32 and ∀ s*_ix, s_kx ∈ Z_2^48, because of the third part of Eq. 14; • p(Y'_X = y'_x | S_IX = s_ix, S*_IX = s*_ix) = δ_32(y'_x ⊕ P(S(s_ix) ⊕ S(s*_ix))), ∀ s_ix, s*_ix ∈ Z_2^48 and ∀ y'_x ∈ Z_2^32, because of the fourth part of Eq. 14. The goal of the attack on the Feistel function is to find the most probable set of keys, given the known evidence, obtained by maximizing the following likelihood: ŝ_kx = argmax_{s_kx} p(S_KX = s_kx | Z_X = z_x, Z*_X = z*_x, Y'_X = y'_x). (15) Although the construction of the probability distribution over a 48-bit key by expanding Eq. 15 requires 2^48 steps, it is possible to reduce the computational complexity by exploiting the linearity of the P(·) and E(·) functions. Let us recall that the XOR between the output of the expansion function E(·) and the secret key is the concatenation of the inputs to the eight S-Boxes, and that the input to the permutation function P(·) is the concatenation of the outputs from the eight S-Boxes: E(Z_X) ⊕ S_KX = S1_IX ∥ S2_IX ∥ · · · ∥ S8_IX and S(S_IX) = S1(S1_IX) ∥ S2(S2_IX) ∥ · · · ∥ S8(S8_IX), where ∥ denotes concatenation. Then, the Feistel function can be violated by attacking each single S-Box and obtaining the full 48-bit key by concatenating the partial results: S_KX = S1_KX ∥ S2_KX ∥ · · · ∥ S8_KX. Thus, the actual computational cost for attacking the whole Feistel function is eight times the cost for attacking a single S-Box, since the probability of the full key block factorizes into the product of the probabilities of the eight 6-bit key blocks: p(S_KX | evidence) = Π_{i=1}^{8} p(Si_KX | evidence). Algorithm 2 describes how to perform the attack. Its computational cost is dominated by the evaluation of the probability distribution for the key blocks: the other components, i.e., the separation of the evidences into blocks and the composition of the whole probability distribution, may be easily optimized, although in the pseudocode they are described in an extended form for the sake of readability. C. DES ATTACK In the following we describe the BN which models the attack on the whole DES. We show that, unlike the attack on the Feistel function, it is not feasible to attack the complete DES through exact inference, since the computational cost grows exponentially. We therefore propose an algorithm for attacking DES through approximate inference. In the following description we neglect the initial and final permutations, since they do not affect the probabilistic analysis. Let P and P* be two plaintexts input to DES, P' their difference, and (L', R') the left and right parts of P', each consisting of 32 bits. Let us indicate the difference between the two outputs of DES as T', and (l', r') its left and right parts. Moreover, let us assume that the two plaintexts are independently chosen.
The relationships among the variables involved in the first round of DES are described by the following equations: Z'_1 = R' and Y'_1 = F(Z_1, S_K1) ⊕ F(Z*_1, S_K1), since the right half of the plaintext difference feeds the first Feistel function unchanged. The difference between the inputs to the second round can then be obtained by considering the variables involved in the first round, as follows: Z'_2 = L' ⊕ Y'_1. The iteration of this procedure leads to the formulation of the following system of equations, which expresses the relationships among the variables involved in all the rounds of DES: Z'_(X+1) = Z'_(X−1) ⊕ Y'_X for X = 1, . . . , N − 1, with Z'_0 = L' and Z'_1 = R'. These relationships are graphically represented by the Bayesian Network, named DES-BN, shown in Fig. 8. It is worth noting that the structure of the DES-BN is based on the simplifying assumption that the subkeys are mutually independent, as also proposed in [10], since this assumption simplifies the evaluation of the BN conditional probabilities. The goal of the attack on the whole DES, given a single evidence set υ = (P, P*, T, T*), is to find the set of keys that maximizes the likelihood p(S_K1, . . . , S_KN | υ), where N = 16 is the number of rounds. It is possible to prove that this likelihood factorizes over the rounds of the DES-BN (Eq. 23). The attack proceeds round by round, as sketched by the pseudocode of Algorithm 3: for each round X = N, . . . , 4, sample Z'_(X−1) (Algorithm 4) and compute Y'_X; attack the Feistel function, p ← prob_key_attack_Feistel(Υ), and set S_K[X] ← argmax(p); then update the evidence sets: for all υ_i = (P, P*, (l, r), (l*, r*)) ∈ Υ do: temp_l ← l; temp_l* ← l*; l ← r ⊕ F(l, S_K[X]); l* ← r* ⊕ F(l*, S_K[X]); r ← temp_l; r* ← temp_l*; update υ_i ← (P, P*, (l, r), (l*, r*)). Finally, break the remaining 3-round DES through exact inference. The search for the optimal key by exploiting Eq. 23 through a backward exact inference process would require a high computational cost, which makes such an approach infeasible. Instead, it is possible to exploit forward inference in order to estimate the most probable difference propagation through the different rounds, and then to exploit a statistical sampling technique, as described in [32], to estimate the subkey for each round. In particular, for the last round N, the following relationships among the variables hold: l' = Z'_N and r' = Z'_(N−1) ⊕ Y'_N, hence Y'_N = r' ⊕ Z'_(N−1). (24) If Z'_(N−1) were known, the best way to obtain the subkey S_KN would be to compute the value of Y'_N through Eq. 24, and then use the attack on the Feistel function of the last round, with Z'_N and Y'_N as input. Unfortunately, this piece of information is not available, and its exact inference through the BN would be computationally too expensive. We propose to sample the DES-BN in order to estimate the most probable value of Z'_(N−1), by exploiting the structure of the DES-BN and the only exact information available, i.e., the given evidence. Given the estimated value of Z'_(N−1), it is possible to iterate the same procedure backwards for the remaining N − 1 rounds, until a probability distribution has been constructed for all subkeys. This attack is described by Algorithm 3; at each round, multiple evidences are exploited to attack the Feistel function, in order to find the most probable subkey. Algorithm 4 (sample_Z'_X) draws a set of samples for the Z'_X variables of all DES rounds, given a single evidence. The general algorithm to sample an objective node consists of sorting all nodes of the Bayesian Network topologically, then sampling the probability distributions of all nodes that precede the objective node, and finally sampling the objective node itself. In order to estimate the most probable value of Z'_X in a given round X, our algorithm starts from the exact knowledge of P and P* and follows all causal links on the path to Z'_X, drawing a random value for each unknown parent node. This procedure yields one possible value for Z'_X.
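A condensed sketch of this forward draw, and of the histogram-based estimate described next, is given below. A single toy round stands in for the full DES-BN descent; toy_round_diff and the other names are mine, not the paper's:

from collections import Counter
import random

SBOX4 = [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7]

def toy_round_diff(z_diff: int) -> int:
    """One forward draw: the unknown absolute input and the unknown subkey
    are sampled uniformly; only the input difference z_diff is evidence."""
    z = random.randrange(16)
    k = random.randrange(16)
    return SBOX4[z ^ k] ^ SBOX4[(z ^ z_diff) ^ k]

def estimate_most_probable(z_diff: int, n_samples: int = 5000) -> int:
    hist = Counter(toy_round_diff(z_diff) for _ in range(n_samples))
    return hist.most_common(1)[0][0]    # most frequent sample = estimated mode

print(estimate_most_probable(0b0011))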
The iteration of this procedure produces a set of samples of Z'_X, and by analyzing the resulting histogram it is possible to select the most frequent sample. With an adequate number of samples, the histogram approximates the probability distribution of Z'_X; thus the most frequent sample can be considered an approximation of the most probable value. This sampling strategy is described in Algorithm 4. VI. PERFORMANCE EVALUATION A first assessment of the performance of the proposed approach concerned the evaluation of the complexity of the four algorithms it consists of. The computational complexity of the algorithm to attack a single S-Box (Algorithm 1) is O_SBox = O(2^b · |Υ|), where |Υ| is the number of exploited evidences and b is the number of bits composing the key block accepted as input by one of the eight S-Boxes, i.e., b = 6. The evaluation of the probability distribution during the attack on the Feistel function, according to Algorithm 2, has a complexity O_Feistel = O(n_s · 2^b · |Υ|), where |Υ| is the number of evidences exploited in the attack on a single S-Box, n_s is the number of S-Boxes, and b is the number of bits of the key block used by one of the eight S-Boxes, i.e., n_s = 8 and b = 6. The complexity of the sampling procedure (Algorithm 4) depends on the number of samples required to obtain the convergence of the probability distribution, i.e., M, and on the round to be sampled, i.e., X. The upper bound of this complexity is determined by considering the last round, i.e., X = N, giving O_sample = O(M · N). It is worth noting that the samples generated during the graph descent can be reused during the backtracking, thus obtaining a more efficient procedure than the expanded Algorithm 4. The computational cost for attacking the whole DES (Algorithm 3) is expressed by the following equation: O_DES = O(M · N^2 · |Υ| + n_s · 2^b · N · |Υ|), (25) where |Υ| is the number of elements constituting the evidence set, M is the number of samples required by the sampling algorithm, N is the number of rounds of DES, and b is the number of input bits to a single S-Box. Since b = 6 and N = 16, it follows that 2^b and N^2 can be considered constants. Consequently, the computational complexity can be expressed as O_DES = O(M · |Υ|). This result is coherent with the expected complexity of a chosen-plaintext attack, which directly depends on the number of plaintext–ciphertext pairs. Another set of experiments was run in order to find the number of plaintext–ciphertext pairs needed to attack each of the 8 S-Boxes by means of Algorithm 1. Tests were executed on a multi-core server equipped with four Intel Xeon 2.00 GHz processors, reporting the time-to-succeed (in milliseconds) over 10 different random keys. The results, shown in Table 1, indicate that on average 3 plaintext–ciphertext pairs are needed to accomplish the attack on every S-Box. It can be observed that the time-to-succeed (TTS) values are in general very low, and no noticeable variations are evident as the random keys and the S-Boxes vary. This aspect was further inspected by evaluating the average TTS (see Fig. 9) and the corresponding variance values, which are about 10^−3 ms for each experiment. Finally, we extended the experimental evaluation to four distinct versions of DES, reduced to three, four, five, and six rounds, respectively. The results we obtained (see Table 2) are comparable with the performance of the original differential cryptanalysis [10], [34].
In particular, for each variant of DES, we considered the average execution time (Time), the number of chosen plaintext–ciphertext pairs (Texts), and the number of required samples obtained by the sample_Z'_X algorithm (Samples). It is worth noticing that changing the number of available plaintext–ciphertext pairs significantly impacts the number of samples required to accomplish the attack. The values reported in Table 2 are those that minimize the computational complexity of the whole attack (Eq. 25). This preliminary assessment leads us to conclude that the number of plaintext–ciphertext pairs required to attack a full 16-round DES is not lower than the threshold of 2^47 that exists for the standard differential attack approach. VII. CONCLUSION AND FUTURE WORK In this paper, we proposed a new formulation of differential cryptanalysis through Bayesian networks, a framework for performing probabilistic inference that is widely adopted in the field of machine learning. Exploiting this model, we designed an algorithm for attacking DES through approximate inference on the Bayesian Network model. Our preliminary experimental evaluation, performed on versions of DES with a reduced number of rounds, showed that the proposed method is equivalent to the original differential cryptanalysis with respect to the required input data and the convergence time. Beyond its effectiveness, the computational aspect represents the main limitation of the approach. Indeed, the Bayesian framework, in its current form, does not perform significantly better than other traditional cryptanalysis approaches. However, the formulation of the attack using Bayesian Networks offers several avenues for improvement. More specifically, we plan to evaluate more advanced forward sampling techniques, such as importance sampling, in order to verify whether the convergence time can be reduced and the number of required samples minimized. Furthermore, since multiple evidences are mutually independent, the convergence time can also be reduced by exploiting a massively parallel architecture. Finally, although the hypothesis of mutual independence of the subkeys reduces the computational cost, it introduces many contradictory hypotheses about the subkeys of different rounds. In future work we will investigate a new BN model capturing the linear relationships among the subkeys. Open Access funding provided by 'Università degli Studi di Palermo' within the CRUI CARE Agreement.
8,666
2023-01-01T00:00:00.000
[ "Computer Science", "Mathematics" ]
The coordination of hip, knee and ankle joint angles during gait in soccer players and controls Background Clinical researchers are trying to unravel the impact of different training interventions on the kinematics of human gait. However, the effect of long-term training experience on the kinematics of a healthy gait pattern remains unclear. Here we assess the effect of long-term training experience on joint angle variability during walking. Methods Hip, knee, and ankle joint angles from fourteen soccer players and sixteen controls were acquired during treadmill and overground walking. Hip-knee coupling, knee-ankle coupling and coupling angle variability (CAV) of the right leg in the sagittal plane were assessed using a vector coding technique. Results Soccer players showed reduced hip-knee CAV during the mid-stance and terminal-stance phases and reduced knee-ankle CAV during the pre-swing phase of gait compared to the control group. In addition, soccer players less often used an ankle coordination pattern, in which only the ankle joint but not the knee joint rotates. Interpretation These findings show that soccer players had more stability in the ankle joint during the stance phase of gait compared to the control group. Future studies can test whether these differences in the coordination of the ankle joint reflect the effects of long-term training on normal gait by comparing knee-ankle coupling and variability before and after exercise training interventions. Introduction It is well documented that exercise training improves cardiovascular function, stability, and strength (Sherrington et al., 2008). These training effects are particularly pronounced in older people or people with disabilities. The advantage of exercise training in improving the functional capacity of older adults has recently been demonstrated (Font-Jutglà, Gimeno, Roig, da Silva, & Villarroel, 2019; Xia et al., 2020). For instance, in multi-component exercise programs, resistance training contributes most to the overall enhancement of muscle strength during acute hospitalization (Sáez de Asteasu et al., 2020). Furthermore, exercise training has an important effect on movement rehabilitation in patients and can help them to resume their routine life. For example, virtual reality training could help to extend the balance and walking abilities of patients (Bang, Son, & Kim, 2016). Enhancement of daily life activity was one of the most important outcomes of that study, and walking is a key activity that can help all groups achieve an independent life. The human walking pattern is unique compared to primate gait (Maurice Abitbol, 1988; Rodman & McHenry, 1980). To enable this unique ability, some adaptations have taken place in the human body: during normal walking, the head and center of gravity are lowest near toe-off and highest at mid-stance (Bramble & Lieberman, 2004), so that we can walk smoothly and highly efficiently for a long time. As human walking is a complex motion, it requires coordination between multiple body segments: multiple lower limb joints and segments must move synchronously during normal gait (DeLeo, Dierks, Ferber, & Davis, 2004), and disruptions in coordination, either internal or external, may result in impaired gait. For example, weakness of one muscle or a group of muscles can create an abnormal pattern and change the kinematics of gait (Galli et al., 2012).
Likewise, trial-to-trial variability in movement patterns is associated with skill level or expertise (Wilson, Simpson, Van Emmerik, & Hamill, 2008), and optimal variability in movement patterns is a characteristic of healthy functioning (Harbourne & Stergiou, 2009). Gait variability can be quantified at several levels, such as kinematics and stride characteristics (Ulman, Ranganathan, Queen, & Srinivasan, 2019). The relative motion between the angular time series of two joints has been used to distinguish normal from disordered, or symmetrical from asymmetrical, gait patterns and has also been applied to assess how coordination in sports differs as a function of expertise (Glazier, 2006). It has been shown that coordination variability decreases when skilled athletes perform more consistently or in a better-regulated manner (Wilson et al., 2008). Vector coding is a non-linear technique to quantify coordination and variability. It quantifies the continuous dynamic interaction between segments by detecting the vector orientation of the angle-angle diagram relative to the horizontal (Hamill, Haddad, & McDermott, 2000; Sparrow, Donovan, Van Emmerik, & Barry, 1987). It is used to estimate the coupling angle variability (CAV) between body segments, and the CAV can reveal changes in coordinative state between different groups (Heiderscheit, Hamill, & van Emmerik, 2002). Although the functional role of short- and long-term motor learning in movement variability has been investigated (Bartlett, Wheat, & Robins, 2007; Bradshaw, Maulder, & Keogh, 2007; Wu, Miyamoto, Castro, Ölveczky, & Smith, 2014), the effects of long-term training on the joint variability of the lower extremities during gait remain unclear. While training programs may differ between sports, walking often constitutes a considerable part of training programs, in particular for soccer players (Krustrup, Mohr, Ellingsgaard, & Bangsbo, 2005; Rampinini, Coutts, Castagna, Sassi, & Impellizzeri, 2007). Understanding the effects of long-term training on gait is important when applying rehabilitation training in different groups with and without exercise training experience. In this study, we used the vector coding technique and CAV to assess differences in the coordination and variability of right-leg joints in the sagittal plane between soccer players and controls. Long-term soccer training may change the control of general movements such as gait. As gait is an important part of the soccer training program, we expected soccer players to show reduced lower-extremity joint variability when walking on a treadmill or overground compared to the control group. Being able to distinguish differences in joint kinematics between soccer players and controls may contribute to our understanding of the effects of long-term training on movement coordination. Participants Thirty male participants, fourteen soccer players (height: 175 ± 4 cm, mass: 70.2 ± 8.0 kg, age: 23 ± 5 years) and sixteen non-athletes (height: 175 ± 7 cm, mass: 79.2 ± 16.0 kg, age: 24.7 ± 5 years), with no history of musculoskeletal injury, gave written agreement to participate in the study. The athletes had at least seven years of continuous soccer training experience and were members of the soccer team of the University of Mazandaran. The control group reported no history of regular exercise training or of injuries that could affect their walking. All participants were students at the University of Mazandaran.
The study protocol was approved by the Office of Research Ethics at the University of Mazandaran, and prior to beginning the protocol, participants provided written informed consent. Experimental setup and procedure Participants wore comfortable walking shoes and performed treadmill walking (H/P COSMOS treadmill, Germany) and overground walking on a 10-meter walkway. Their preferred walking speed was recorded during the familiarization phase. Overground walking speed was recorded using a stopwatch and determined as the mean of three recordings while walking on the 10 m walkway. Treadmill walking speed was read directly from the treadmill monitor once a participant indicated they were walking at their preferred speed. All participants first completed overground walking and then treadmill walking (see Fig. 1). Data acquisition Kinematic data were recorded using a 3D inertial measurement unit (IMU) system consisting of magnetometers, accelerometers and gyroscopes (Noraxon MyoMotion system, USA). We used five sensors to collect 3-dimensional kinematic data during walking: three sensors on the thigh, shank, and foot of the right leg and one sensor on the lumbar spine to capture hip, knee, and ankle joint angles and footswitch data during walking (Mundt et al., 2019). A fifth sensor was placed on the lower thoracic spine (T12) to record lower spinal horizontal rotation during overground walking. All sensors were placed according to the Noraxon MyoMotion setup and sampled at 100 Hz. Data analysis Three-dimensional hip, knee, and ankle angles were processed using a low-pass Butterworth filter with a cut-off frequency of 6 Hz (Winter, Sidwall, & Hobson, 1974). Participants turned at the end of the 10-meter walkway and continued walking for two minutes. To extract the data from the straight walkway, excluding turning points, we applied the method that we used previously (Yaserifar et al., 2021). Briefly, we used the lower spinal horizontal rotation data to remove data recorded around the turning points and only analyzed data from straight-line walking: the zero-crossings indicated the middle of the turning movement, and we removed 1.5 s before and after each zero-crossing. The remaining data were used for further analysis. The gait cycles of the right (dominant) leg were segmented using footswitch data for both overground and treadmill walking. Each cycle was defined by consecutive heel strikes and was time-normalized and scaled to 100% of the gait cycle. Calculation of coupling angle and coupling angle variability Custom MATLAB scripts (version R2018a; MathWorks, USA) were developed to estimate the coupling angles and CAV. The coupling angles and CAV were assessed in the sagittal plane (Luc-Harkey et al., 2016). Reliable estimation of an individual's variability requires at least ten stride cycles (Hafer & Boyer, 2017), and we extracted data from the joint angles of fifty gait cycles during treadmill and overground walking. Angle-angle diagrams were then created with the proximal joint on the horizontal axis and the distal joint on the vertical axis. The vector coding technique was used to estimate the coupling angle and joint angle variability of fifty continuous gait cycles based on consecutive proximal and distal joint angles (Peters, Haddad, Heiderscheit, Van Emmerik, & Hamill, 2003).
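Before turning to the coupling-angle equations, the preprocessing described above can be sketched as follows (Python/NumPy rather than the authors' MATLAB; the fourth-order, zero-phase filter is an assumption on my part, as only the 6 Hz cut-off is stated):

import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0                                   # IMU sampling rate (Hz)
b, a = butter(4, 6.0, btype="low", fs=FS)    # 6 Hz low-pass Butterworth

def preprocess_cycle(angle, heel_strikes):
    """Zero-phase filter a joint-angle series and time-normalize one gait
    cycle (between consecutive heel strikes) to 101 points (0-100%)."""
    smooth = filtfilt(b, a, angle)
    i0, i1 = heel_strikes
    cycle = smooth[i0:i1]
    t_old = np.linspace(0.0, 100.0, len(cycle))
    t_new = np.linspace(0.0, 100.0, 101)
    return np.interp(t_new, t_old, cycle)

# toy usage: a noisy sinusoidal "knee angle" with one cycle between samples 20-140
t = np.arange(0, 3, 1 / FS)
knee = 30 * np.sin(2 * np.pi * t) + np.random.normal(0, 1, t.size)
print(preprocess_cycle(knee, (20, 140)).shape)   # (101,)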
The coupling angle (γ) was calculated according to equations (1) and (2) for each time point i within the gait cycle, from the vector orientation between two adjacent data points in time on the angle-angle diagram relative to the right horizontal (Needham, Naemi, & Chockalingam, 2014): γ_i = tan⁻¹(Δθ_D,i / Δθ_P,i), with Δθ_P,i = θ_P,i+1 − θ_P,i and Δθ_D,i = θ_D,i+1 − θ_D,i, where θ_P and θ_D denote the proximal and distal joint angles. Quadrant-correction conditions (equation 3) were applied, and the coupling angle was then corrected to obtain a value between 0° and 360° (Chang, Van Emmerik, & Hamill, 2008; Sparrow et al., 1987). As these angles are obtained from a polar distribution, the mean coupling angles (γ̄_i) must be extracted using circular statistics. The mean horizontal and vertical components (x̄_i and ȳ_i) were therefore calculated by averaging the cosines and sines of the coupling angles across cycles (Batschelet, 1981; Hamill et al., 2000). We then corrected the average coupling angle to obtain a value between 0° and 360° (equation 7) (Needham et al., 2014). The mean coupling angles γ̄_i were then categorized into one of four coordination patterns: in-phase, anti-phase, proximal-phase and distal-phase (Chang et al., 2008). When the coupling angle is 45° or 225°, the coupling is in-phase and both joints rotate in the same direction, whereas if the two joints rotate in opposite directions, i.e. at 135° or 315°, this is considered anti-phase coupling. When coupling angles parallel the horizontal (γ̄_i = 0° or 180°), there is rotation of the proximal joint but not of the distal joint, and this is considered the proximal-phase pattern. Finally, vertically directed coupling angles (γ̄_i = 90° or 270°) indicate a distal-phase pattern, in which only the distal joint rotates (Chang et al., 2008). We determined the frequency with which each coordination pattern occurred in the different phases of the gait cycle. The length of the mean vector was then calculated according to equation (8), r_i = sqrt(x̄_i² + ȳ_i²), and the coupling angle variability (CAV_i) was determined according to equation (9), CAV_i = sqrt(2(1 − r_i)) (Batschelet, 1981). Statistical analysis A univariate analysis of covariance (ANCOVA) was used to compare the coupling angle, CAV, and coordination pattern frequency across groups and surfaces. In the model we included group (soccer players and control group) and surface (treadmill and overground) as fixed-effect factors. As participants walked at their own preferred speed, walking speed was added as a covariate. The coupling angle, CAV, and coordination frequency were assessed at seven sub-phases of the gait cycle, and we used the Benjamini-Hochberg procedure to adjust the p-values for multiple testing (Benjamini & Hochberg, 1995). The level of statistical significance was set at α = 0.05. Results Walking speed was higher in soccer players compared to controls and during overground walking compared to treadmill walking (Fig. 2; error bars in the figure show the standard deviation between participants), as reflected by the significant main effects of group (F1,56 = 10.7, Padj = 0.03) and surface (F1,56 = 159.3, Padj < 0.005). The interaction effect was not significant (P > 0.9). Mean joint angles for both groups are shown in Fig. 3. Although the pattern is very similar in both groups and on both surfaces, the ankle angle reveals some differences between conditions (Fig. 3C). In particular, the pattern of angular variation in the ankle joint of soccer players on the treadmill appears to show more plantarflexion in the propulsion and swing phases. Inter-segmental coordination was assessed based on the angle-angle plots of the hip-knee and knee-ankle pairs (Fig. 4A and 4E). The coupling angle and CAV were computed from the vector orientation between two adjacent data points in time in the angle-angle diagram.
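A compact sketch of these computations follows (the array shapes, function names and 45°-sector binning below are my own; the CAV is expressed as the circular standard deviation sqrt(2(1 − r)), consistent with equations (8) and (9)):

import numpy as np

def coupling_angles(prox, dist):
    """Coupling angle (deg, 0-360) from consecutive points of the
    angle-angle diagram; inputs are (n_cycles, 101) arrays."""
    gamma = np.degrees(np.arctan2(np.diff(dist, axis=1), np.diff(prox, axis=1)))
    return gamma % 360.0                     # (n_cycles, 100)

def mean_and_cav(gamma_deg):
    """Circular mean coupling angle and CAV across cycles, per time point."""
    g = np.radians(gamma_deg)
    x, y = np.cos(g).mean(axis=0), np.sin(g).mean(axis=0)
    mean_angle = np.degrees(np.arctan2(y, x)) % 360.0
    r = np.sqrt(x**2 + y**2)                 # mean vector length (Eq. 8)
    return mean_angle, np.degrees(np.sqrt(2.0 * (1.0 - r)))   # CAV (Eq. 9)

PATTERNS = ["proximal-phase", "in-phase", "distal-phase", "anti-phase"]

def classify(mean_angle_deg):
    """Bin a mean coupling angle into one of the four coordination patterns
    using the 45-degree sectors centred on 0/45/90/135 (mod 180)."""
    sector = int(((mean_angle_deg + 22.5) % 360.0) // 45.0)
    return PATTERNS[sector % 4]

# toy usage: 50 noisy cycles of a hip-knee-like pair
t = np.linspace(0, 2 * np.pi, 101)
hip = 20 * np.cos(t) + np.random.normal(0, 0.5, (50, 101))
knee = 30 * np.sin(t) + np.random.normal(0, 0.5, (50, 101))
mean_angle, cav = mean_and_cav(coupling_angles(hip, knee))
print(classify(mean_angle[10]), round(float(cav[10]), 1))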
At the beginning of the gait cycle, the coupling angle of the hip-knee pair starts in anti-phase coordination, with knee flexion and hip extension during the loading-response phase (hip and knee angles around 20 and 5 degrees, respectively). The coordination pattern then changed to a proximal-phase (hip) pattern in the control group during the mid-stance and terminal-stance phases, whereas soccer players kept an in-phase coordination pattern during these phases (Fig. 4B, sections b and c). Both groups showed nearly the same trajectory of hip-knee coupling for the following phases of the gait cycle, finishing with in-phase coordination (Fig. 4B, sections e and g). The coupling angle variability slowly decreased during the gait cycle, with short increases at mid-swing and before heel strike (Fig. 4C). Both groups showed similar coordination patterns during the gait cycle (Fig. 4D). The knee-ankle coupling revealed a different trajectory during the gait cycle (Fig. 4E): there was knee flexion versus ankle plantar flexion at the beginning of the gait cycle (knee and ankle angles around 5 and −4 degrees, respectively). Soccer players initiated the gait cycle with an anti-phase coordination pattern (292.5° ≤ γ̄_i < 337.5°), whereas controls initiated it with a distal-phase (ankle) pattern (247.5° ≤ γ̄_i < 292.5°; Fig. 4F, a). The coupling pattern differed across the second part of mid-stance and the first part of terminal stance, but both groups shared a similar anti-phase coordination pattern (112.5° ≤ γ̄_i < 157.5°; Fig. 4F, b and c). Furthermore, soccer players showed more scattered knee-ankle coupling during the pre-swing phase (112.5° ≤ γ̄_i < 337.5°, versus 157.5° ≤ γ̄_i < 292.5° in controls; Fig. 4F, d) and used the ankle coordination pattern less often than controls (Fig. 4H). Figure 5 shows the mean coupling angles and CAV of soccer players and controls during the different phases of the gait cycle. When comparing the CAV of both groups, soccer players showed decreased hip-knee CAV during the mid-stance and terminal-stance phases (F1,55 = 14.7, Padj = 0.008 and F1,55 = 18.2, Padj = 0.003, respectively) and decreased knee-ankle CAV during the pre-swing phase (F1,55 = 12.1, Padj = 0.02) compared to controls (Fig. 4C, sections b and c, and Fig. 4G, section d). Furthermore, the knee-ankle coupling in soccer players revealed a distal-phase (ankle) coordination pattern less often (F1,55 = 14.4, Padj = 0.008) during the gait cycle than in controls (Fig. 4H). There were no significant differences for the other sub-phases between groups or between surfaces. All statistical results for coupling and CAV, and the descriptive and statistical results for coordination pattern frequency, are presented in Tables S1, S2 and S3, respectively. (Fig. 5 caption) Rows A and B: hip-knee coupling angle and CAV, respectively. Rows C and D: knee-ankle coupling angle and CAV, respectively. Each row is divided into the seven sub-phases of the gait cycle. The blue dashed line indicates toe-off and separates the stance phase (left side) from the swing phase (right side). Each subplot shows the mean and SD of coupling/CAV of athletes and non-athletes during treadmill (TR) or overground (OG) walking. Discussion We used a vector coding technique to compare the coordination of hip, knee and ankle joint angles of the right leg in the sagittal plane between soccer players and controls when walking overground or on a treadmill. We observed differences in CAV between the groups in different phases of the gait cycle.
Soccer players showed decreased hip-knee CAV during the mid-stance and terminal-stance phases and decreased knee-ankle CAV during the pre-swing phase compared to the control group. The mean coupling angle was used to distinguish between four coordination patterns: in-phase, anti-phase, proximal-phase and distal-phase. This revealed that the control group more often showed a distal-phase (ankle) coordination pattern for the knee-ankle pair compared to soccer players. These findings show that the vector coding technique can detect differences in joint coordination and variability during normal gait between soccer players and controls. Soccer players had lower hip-knee CAV during mid-stance and lower knee-ankle CAV during the pre-swing phase compared to the control group. Recently, the functional role of variability in system dynamics has been discussed, and the traditional view that disordered movement shows more variability has been questioned (Hamill, van Emmerik, Heiderscheit, & Li, 1999). These authors suggested a functional role for variability in lower-extremity segment coupling, in which symptomatic individuals applied joint actions within a very narrow range, leading to less variability compared to healthy subjects. This raises the question of why variability is lower both in pathological conditions, as shown in previous studies, and in trained healthy individuals (the soccer players) in our study. A lower CAV in symptomatic individuals may help them to minimize pain during movement, but both groups in our study were functionally able to walk without pathological symptoms. While we are not aware of any study that has investigated the CAV of the lower extremity in the gait pattern of soccer players, some studies have assessed the variability of kinematic variables during sports movements. For example, the least skilled, intermediate and most skilled triple jumpers exhibited a U-shaped curve of coordination variability during the hop-step phase as skill increased. The authors explained that high coordination variability may not be beneficial in the least skilled jumpers and suggested that variability decreases when jumpers are able to demonstrate more consistent or better-regulated performance (Wilson et al., 2008). Overall, less experienced individuals show more variability on a given task (Jarvis, Smith, & Kulig, 2014), the extent of which decreases as they learn and approach expert performance. Therefore, the decreased CAV in soccer players may reflect an adaptation towards optimal variability (Stergiou, Harbourne, & Cavanaugh, 2006) that improves gait stability (Kang & Dingwell, 2008a, 2008b; Owings & Grabiner, 2004) as a result of soccer training. However, we should be aware that the most experienced individuals may exhibit a degree of variability that allows them flexibility in dealing with perturbations and in controlling balance, timing, and any other applicable factor (Wilson et al., 2008). Our findings also showed that soccer players used the ankle (distal-phase) coordination pattern less often than the control group and that this difference was most evident in the stance phase (Fig. 4). Knee flexion is essential for energy absorption and is relevant to joint degeneration during the loading phase of gait (Childs, Sparto, Fitzgerald, Bizzini, & Irrgang, 2004). In addition, the ankle joint moments revealed that the greatest demands on the controlling muscles occur during the stance phase of gait (Hunt, Smith, & Torode, 2001).
Indeed, the ankle joint muscles have a crucial function in stabilizing the foot when weight transfers onto the toes (Hunt & Smith, 2004). Therefore, the ability to coordinate the movement of the knee and ankle joints is important during the stance phase of gait. Soccer players need to position their feet properly during soccer training and competition (Hawrylak, Brzeźna, & Chromik, 2021); therefore, the gait motor control of soccer players may be unconsciously trained towards better foot positioning. According to the present results, controls relied more on the ankle (distal-phase) coordination pattern to transfer the body weight during the stance phase. Soccer players, in contrast, may show good coordination (anti-phase coupling) of the knee-ankle pair, where knee flexion versus ankle extension leads to better load absorption and weight control during the stance phase (Fig. 4F). It should be noted that this study was conducted with a small sample size and assessed many outcome variables. We used the Benjamini-Hochberg procedure to control the false discovery rate, but our findings should be replicated in future studies involving a larger sample to confirm that the stance phase is an important part of the gait cycle that is affected by long-term training (Kiriyama, Warabi, Kato, Yoshida, & Kokayashi, 2005). In addition, this study was conducted with only male participants, while previous studies have demonstrated gender-related differences in lower-limb variability (Barrett, Noordegraaf, & Morrison, 2008; Kerrigan, Todd, & Della Croce, 1998). Females showed less variability in ankle transverse-plane rotations compared to males at different speeds of treadmill running, and females also had greater hip flexion or knee extension before initial contact. Lastly, we did not assess kinetics and neuromuscular control mechanisms, which might reveal further insights into the mechanisms underlying the observed differences in joint coordination. In addition to lower-extremity kinematics, future studies should attend to other aspects of gait motor control, such as kinetics and electromyography, to reveal more detailed information about gait motor control. Therefore, extrapolating the conclusions of our study to other gender groups (females) or to larger samples must be done with caution. Conclusions In summary, we found that lower-extremity joint variability and coordination during walking differ between soccer players and controls and may thus vary even among healthy young adults. These differences in coordination could be related to the physical fitness background of the participants. Soccer players showed lower hip-knee CAV during the mid-stance and terminal-stance phases and lower knee-ankle CAV during the pre-swing phase of gait, and also used the ankle coordination pattern less often than the control group. We hypothesized that long-term soccer training could be one of the reasons for these differences, which may reflect adaptations to exercise training through reduced variability during the stance phase of gait. Future efforts should attempt to evaluate this hypothesis by examining changes in the joint coordination of the lower extremity before and after an exercise training intervention. Declaration of competing interest The authors declare no competing interests.
5,094.6
2021-09-24T00:00:00.000
[ "Medicine", "Engineering" ]
Outlier quantification for multibeam data This paper discusses the challenges of applying a data analytics pipeline to a large volume of data as can be found in the natural and life sciences. To address this challenge, we attempt to elaborate an approach for an improved detection of outliers. We discuss an approach for outlier quantification for bathymetric data. As a use case, we selected ocean science (multibeam) data to calculate the outlierness for each data point. The benefit of outlier quantification is a more accurate estimation of which outliers should be removed or further analyzed. To shed light on the subject, this paper is structured as follows: first, a summary of related works on outlier detection is provided. The usefulness of a structured approach to outlier quantification is then discussed using multibeam data. This is followed by a presentation of the challenges for a suitable solution, and the paper concludes with a summary. Introduction Data analytics techniques such as data mining and machine learning can give valuable insights into the data. They allow rules that describe specific patterns within the data to be identified or can reveal hidden knowledge. Based on the analysis results, informed decisions can be made. The most time-consuming step in the analytics pipeline from processing raw data to discovering knowledge is data pre-processing. This step includes activities for data integration, data enhancement, data transformation, data reduction, data discretization and data cleaning. The reason for the time-consuming nature of this activity is usually the quality of the data (i.e. missing or incomplete entries). Some approaches to improving quality can be found in the literature [18]. These approaches are usually based on the detection and filtering of outliers. In statistics, outliers are defined as "high measurements where the value is some standard deviation above the average" [5]. In data engineering, outliers, commonly referred to as "anomalies", refer to "something that is out of range". This can, on the one hand, point to insignificant data or, on the other hand, to interesting and useful information about the underlying system. Hence, distinguishing the essence of outliers in terms of undesired or unwanted behavior versus surprisingly correct and informative data is of particular interest for the quality of data analysis. The purpose of our work is to develop an outlier quantification framework making the analysis results explainable. As a use case, we selected ocean science (multibeam) data to calculate the outlierness for each data point. The benefit of outlier quantification is a more accurate estimation of which outliers should be removed or further analyzed. Fig. 1 shows, on the left, the conventional process of outlier detection. The data is pre-processed and outlier-detection techniques are interwoven in this step, resulting in analysis results such as clusters or patterns. The right-hand side of Fig. 1 shows a new approach to outlier detection. Outlier information is propagated through each step of the process from raw data to the analysis results in terms of meta-data annotations. Although plenty of approaches exist that classify, filter and remove outliers, the number of approaches for explainable outlier quantification is limited. To shed light on the subject, this paper is structured as follows: the next section summarizes related works on outlier detection.
The usefulness of a structured approach to outlier quantification is then discussed in a use-case scenario using multibeam data. Finally, the last section sketches challenges for a suitable solution and concludes the paper with a summary.
Fig. 1 Left-hand side: the process from raw data to clustering without outlier quantification. Right-hand side: the process with outlier quantification; outliers are continuously annotated within the analytics pipeline.
Related work Existing outlier detection methods differ in the way they model and find the outliers and, thus, in the assumptions they rely on, implicitly or explicitly. In statistics, outlier detection is usually addressed by modelling the generating mechanism(s) of the normal data instances using a single or a mixture of multivariate Gaussian distribution(s) and measuring the Mahalanobis distance to the mean(s) of this (these) distribution(s). Barnett and Lewis [1] discuss numerous tests for different distributions in their classical textbook. As a rule of thumb, objects that have a distance of more than 3 · σ to the mean of a given distribution (σ denotes the standard deviation of this distribution) are considered as outliers with respect to that distribution (a minimal numeric sketch of this rule is given at the end of this section). However, we are not aware of any approach that continuously tracks the outlier scores and updates the values within the analytics pipeline. An obvious problem with these classical approaches is the required assumption of a specific distribution in order to apply a specific test. Depending on the data distribution, there are tests for univariate as well as multivariate data, but all tests assume a single, known data distribution to determine an outlier. A classical approach is to fit a Gaussian distribution to a data set, or, equivalently, to use the Mahalanobis distance as a measure of outlierness. Sometimes, the data are assumed to consist of k Gaussian distributions and the means and standard deviations are computed in a data-driven way. However, mean and standard deviation are rather sensitive to outliers, and the potential outliers themselves still enter this computation. Alongside these outlier detection techniques, many approaches exist that model outliers in a less statistically and more spatially oriented way, particularly using distances between data objects. These models consider the number of nearby objects, the distances to nearby objects and/or the density around objects as an indication of the "outlierness" of an object [2,10,12,13,15]. However, all these approaches rely implicitly on the assumption that a globally fixed set of features (usually all available attributes) is equally relevant for the outlier detection process. Outlier detection addresses the problem of discovering patterns in data that do not conform to the expected behavior. Although many approaches for outlier detection using supervised machine learning [6,8] or signal-processing-based methods [9,11] exist, there is a risk of unintentionally eliminating meaningful signals when the clean data are unknown, and a holistic approach is missing that combines different techniques, data distributions and tests and aims to provide a quantification. This analysis of related work identifies apparent trends in outlier detection and the different techniques used to find and filter outliers. The next section discusses a suitable use case for outlier quantification. In particular, we discuss multibeam (bathymetric) data for seafloor classification.
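As a minimal numeric sketch of the 3 · σ rule of thumb discussed above (synthetic data; note that, as pointed out, the mean and covariance are estimated from data that still contain the potential outliers):

import numpy as np

def outlier_scores(X):
    # Mahalanobis distance of every point to the sample mean;
    # for a single Gaussian, a score above 3 corresponds to the 3-sigma rule of thumb
    mu = X.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(X, rowvar=False))
    d = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, inv_cov, d))

X = np.random.default_rng(0).normal(size=(1000, 3))  # synthetic "normal" measurements
scores = outlier_scores(X)                           # a continuous outlierness per point
print((scores > 3.0).sum(), "candidate outliers")

Keeping the continuous score, rather than only a binary flag, is what allows the outlierness to be propagated through the pipeline as a meta-data annotation.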
Use-case scenario Pre-processing of bathymetric data is a time-consuming task. Due to new technologies for data acquisition, in which a fan-shaped bundle of acoustic beams ("multibeams") is repeatedly transmitted (each transmission being called a "ping") from the ship perpendicular to the direction of travel (see the first image in Fig. 2), a huge amount of data is collected. Not only does the amount of data increase, but the data are noisy and contain many outliers. Although the amount of data continues to grow, data processing steps, like outlier detection and filtering, are carried out manually by domain experts. This task is repetitive and subjective, so there is a need to ensure objectivity and a cleaning procedure that ensures traceability for outlier detection. In order to meet these goals, supervised machine learning (ML) methods, in particular artificial neural networks (ANN), can be applied to reduce processing time and ensure objectivity and traceability. Figure 2 shows the pipeline for outlier quantification in multibeam data. As a suitable use case for outlier quantification, multibeam bathymetry raw data recorded by RV MARIA S. MERIAN during cruise MSM88 [19] in the Atlantic between 2019-12-19 and 2020-01-14 can be used. The data were collected using the Kongsberg EM 122 system and cover an area of 153,121 square kilometres. The following analytics pipeline can then be applied to this data set. Multibeam data is saved as .all files. The depths range from 5244 to 5840 m. Figure 3 shows the location of the survey in the Atlantic. Firstly, the multibeam data is transformed into a generally readable comma-separated values (csv) format containing latitude, longitude and depth values. Additionally, the backscattering strength (BS) is calculated and added to the csv file. BS data is a measure of the intensity of the acoustic return and is used to detect and quantify the bottom echoes, so that several seabed types like coral reefs, seagrass, salt or mud can be taken into account. A prerequisite for supervised learning is the need for labelled data. So, for outlier detection, a domain expert manually labelled all outliers in the collected data set. Each sounding thus receives an additional attribute, and a flag is saved. The data set is 59.5 GB in size, so the usual data processing steps cause high computational costs and the runtime for processing the data is very high. This challenge is described further in the next section. A moving-window data pattern can be applied to the data for data selection. Moving-window algorithms are data-centric, because the moving window changes position iteratively while being centered on one sounding. The red cross in Fig. 2 for data selection is the centre of the moving window and the yellow plus signs are the points being selected to calculate the local neighbourhood. Only one parameter, the search radius around the sounding, is needed. Although the method is time-consuming, the local neighbourhood calculation is representative and is suitable for detecting the local neighbourhood for each sounding. Local neighbourhoods are saved in an additional file. In order to train ML algorithms to automatically detect outliers in multibeam data, a proper description of the soundings is needed (Fig. 4 shows the definition of a beam and a ping in the ping/beam view [16]; Fig. 5 shows the georeferential (spatial) view of multibeam data [7]). We use the local neighbourhood to calculate features for each sounding.
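A minimal sketch of such a radius-based neighbourhood feature computation (SciPy's k-d tree; the projected x/y coordinates in metres and the two chosen features are assumptions for illustration, not the study's exact feature set):

import numpy as np
from scipy.spatial import cKDTree

def neighbourhood_features(soundings, radius):
    # soundings: N x 3 array of (x, y, depth); the search radius is the only parameter
    tree = cKDTree(soundings[:, :2])
    feats = np.zeros((len(soundings), 2))
    for i, nbrs in enumerate(tree.query_ball_point(soundings[:, :2], r=radius)):
        depths = soundings[nbrs, 2]
        # local spread of depth, and the sounding's deviation from its neighbourhood mean
        feats[i] = depths.std(), soundings[i, 2] - depths.mean()
    return feats

pts = np.random.default_rng(1).uniform(0, 1000, size=(500, 3))  # synthetic soundings
print(neighbourhood_features(pts, radius=50.0)[:3])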
These features are used for ANN training, so that a trained network is generated which is able to detect and flag outliers in multibeam data. Depending on the attributes to be calculated for each local neighbourhood, the raw view, spatial view or sequential view is suitable. Multibeam data can be handled with a dual representation: a ping/beam view as a time series, where the data is stored in a matrix (see Fig. 4), or an absolute georeferential view, where each sounding is represented as a triplet containing latitude, longitude and depth values (see Fig. 5). Raw features are based on the raw data set collected on the MSM88 expedition, like the BS or the depth values. Spatial features include attributes like a local outlier factor or the standard deviation for each local neighbourhood. The sequential view is suitable for bad-ping detection. All calculated attributes are added to the csv file to provide metadata and a proper description of each sounding that can be utilized to train ML algorithms. These data and their associated description are used by ML algorithms for training, so in this use case these data are the basis for deciding whether a sounding is an outlier or not. To evaluate this approach, MB-System can be applied to the dataset to automatically detect outliers with its implemented outlier detection methods. MB-System is an open-source software package for the processing and display of bathymetry and backscatter imagery data derived from multibeam, interferometry and sidescan sonars. MB-System detects outliers with simple interpolation methods or by the adoption of alternate values. Finally, all outliers detected by MB-System can be contrasted with the outliers detected by the presented supervised ML approach to verify the accuracy. Conclusion and research challenges This paper discusses the challenges of applying a data analytics pipeline to a large volume of data as can be found in the natural and life sciences. To address this challenge, we attempt to elaborate an approach for the improved detection of outliers. We discuss an approach for outlier quantification for bathymetric data. The approach presented in this paper contributes to the concept of cross domain fusion (CDF) as follows. The data-driven pipeline presented in this paper aims to replace or complement the predominately used model-driven approach in the domain of seafloor classification. We are convinced that a data-driven approach can give more insights than traditional approaches do. For this, however, several challenges must be addressed in order to provide a solution. Challenge 1 Disciplines like the natural and life sciences have a large volume of data. This calls first for techniques to efficiently pre-process the data. We found that conventional pre-processing must be fine-tuned and adjusted to run algorithms for data integration and transformation. Even then it is difficult to calculate and summarize all features in one data set needed for training to enable the ANN to detect outliers. Moreover, the resulting csv file will be very large, so that the training of the ANN, depending on the method used, is also challenging. For example, linear regression to predict a binary target is simple to implement, but there is a risk of underfitting. Challenge 2 The number of approaches to accurately recognize objects is limited. While these techniques have been deeply studied for shallow water, for instance [4,14,17], they fail for the deep sea.
Seafloor classification tasks should satisfy the precondition that the area covered by several consecutive pings belongs to the same seafloor type [3]. This precondition is easily met in shallow water, but it is difficult to ensure in the deep sea because, due to the fan-shaped nature of the beam bundle, the width of seafloor insonified by one ping is proportional to depth, and so consecutive pings cover a much larger area. This shows how challenging object recognition is in large data sources with certain properties like depth. Challenge 3 Due to the complex pre-processing of the data, there is a great range of uncertainty in the analysis result. The analysis result should be interpreted more as a fuzzy value with a certain range. In addition, ML methods like gradient boosting are very fast and powerful, but the results are not easily interpretable. Once again, this hampers the transparency and explainability of the analysis result. Funding Open Access funding enabled and organized by Projekt DEAL. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
3,431.4
2022-08-01T00:00:00.000
[ "Environmental Science", "Computer Science" ]
A Comparison of Artificial Neural Network (ANN) and Long Short-Term Memory (LSTM) in River Water Quality Prediction: River water is a crucial natural resource utilized for various purposes, including agriculture and drinking. Human activities such as mining, industrial discharge, and improper waste management contribute to river water pollution, affecting its quality and posing risks to human health. Monitoring and predicting river water quality are essential for effective management and pollution control. This research focuses on Dissolved Oxygen (DO) and compares Artificial Neural Network (ANN) and Long Short-Term Memory (LSTM) approaches to developing prediction models. Evaluation of the models' performance shows that the ANN model outperforms LSTM in predicting Dissolved Oxygen (DO) concentrations, achieving lower Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE). Although LSTM exhibits lower Mean Squared Error (MSE), the ANN model demonstrates better accuracy in minimizing the average distance between predicted and actual values. The findings suggest that ANN-based models offer good performance in river water quality prediction, with potential for further enhancement through additional variables or model architecture adjustments. INTRODUCTION River water is one form of water that is frequently used and consumed. Water is a natural resource that is essential to human life as well as the sustainability of ecosystems. River water is frequently used for irrigation in agriculture, as drinking water, and as a habitat for a variety of organisms. Human activities, including mining, industry, and domestic waste, are the main causes of pollution in river water [1] [2]. Domestic waste that has not been properly managed may contain pathogens and bacteria that can spread disease via river water. River water pollution has a negative impact, since it lowers the quality of water used for human consumption and endangers the lives of aquatic species. The indicator used for this study is dissolved oxygen (DO), a component of river water quality that is affected by pollution.
Predicting river water quality can help minimize these issues. To monitor river water quality, an algorithmic method is required to analyze time series data. This study aims to provide effective prediction models that can be evaluated more rigorously by comparing the Artificial Neural Network (ANN) and Long Short-Term Memory (LSTM) algorithms. Multi-task deep learning can be used to analyze water quality model predictions [3]. The research object there is Chemical Oxygen Demand (COD) from the Yellow River environment in China. The data used is a complete time series of water quality data, processed using Multi-Task Learning-CNN-LSTM. The MTL-CNN-LSTM model can forecast numerous parts of the Lanzhou section of the Yellow River at the same time, with higher prediction accuracy than a single-section model prediction and the ability to properly accommodate the sequentially complicated COD variations in the Yellow River water. Other methods used in the literature are Artificial Neural Network (ANN), Discrete Wavelet Transform (DWT), and Long Short-Term Memory (LSTM) [4] [5]. The prediction model developed in that study is utilized to monitor water quality and cleanliness management in the Jinjiang River. It applies artificial neural networks (ANNs) to fill in missing data using time series data from water quality samples. DWT is used to reconstruct the water quality time series, remove the impact of short-term random noise, improve the accuracy of model predictions on out-of-sample data, and predict future dynamic trends, allowing it to predict short-term and long-term trends in quality time series data more effectively. Neural network models can be useful for forecasting water quality indices [6]. A neural network is used to describe the link between input data (physicochemical data from water parameters: TDS, chloride, TH, nitrate, and manganese) and output data, the water quality index. A qualitative or quantitative technique might be used for forecasting. The study in [7] used a CNN network to extract local features from preprocessed water quality data and transfer time series with better expressive power than the original water quality information to LSTM layers for prediction [10] [11]. The selection of optimal parameters is done by adjusting the number of neurons in the LSTM network and the size and number of convolution kernels in the CNN network. LSTM and the proposed model are both applied to the water quality data. That experiment shows that the proposed model is more accurate than conventional LSTM in predicting peak fitting effects. Compared with the conventional LSTM model, the root mean square error, Pearson correlation coefficient, mean absolute error and mean square error are improved by 5.99%, 2.80%, 2.24% and 11.63%, respectively. Long Short-Term Memory (LSTM) algorithms are known to be able to overcome several typical obstacles in hydrological model applications [12]. That research examined the ability of the LSTM model to predict complex and nonlinear water quality parameters at the Schwingbach Environmental Observatory (SEO), Germany, using weekly nitrate-nitrogen concentration, weekly stable water isotope concentration (δ18O) and daily water temperature in six streams and six groundwater sources with different land use and hillslope conditions. The RMSE evaluation of LSTM performance ranges from 0.27 to 3.38 mg/l for nitrate-nitrogen, from 0.069 to 0.27 ‰ for δ18O, and from 1.3 to 2.1 °C for water temperature, comparing RMSE with the statistical parameters of the data. The results confirm that LSTM can be used for initial risk assessment of water quality and obtains robust results.
A water quality prediction method based on a Long Short-Term Memory Neural Network (LSTM NN) has also been proposed [13]. Training data and water quality indicator data taken from Lake Taihu, measured every month from 2000 to 2006, were used to train the model. The proposed method was compared with two other methods, one based on a back propagation neural network and one based on an online sequential extreme learning machine. The result of the comparison with the back propagation neural network (BPNN) and the online sequential extreme learning machine (OSELM) is that the prediction accuracy of the LSTM NN is higher. In addition, the LSTM NN generalizes better. Managing pollution levels through water quality predictions is one of the most effective ways to speed up problem discovery. An Artificial Neural Network (ANN) is a computer system that imitates the way the brain analyzes data, developing algorithms for modelling complex patterns [14] [15]. It can be concluded that the ANN architecture can predict water quality, although to varying degrees according to the efficiency, performance and time required. The RNN-based LSTM model achieved the best results, with measurement accuracy of 96% to 98%. THEORETICAL FRAMEWORK Artificial Neural Networks (ANN) are a system of parallel processors connected to each other in the form of a directed graph, in which each neuron of the network is represented as a node. These connections provide a hierarchical structure that attempts to mimic brain physiology, seeking new models of processing to solve specific problems in the real world. An ANN can be used to represent a nonlinear mapping between input and output vectors and also serves as a signal processing technology [16]. An ANN functions as a pattern classifier and as a nonlinear adaptive filter. Artificial neural networks consist of three layers: an input layer, a hidden layer and an output layer. Each layer is responsible for performing its own function in completing the system. [17] The Multilayer Perceptron (MLP) is an example of an artificial neural network that is widely used to solve a number of different problems, including pattern recognition and interpolation. Each layer consists of neurons that are interconnected with weights. In each neuron, a mathematical function called an activation function receives input from the previous layer and produces output for the next layer [18]. In the experiment, the activation function used is the hyperbolic tangent sigmoid transfer function. Long Short-Term Memory (LSTM) was introduced by Hochreiter and Schmidhuber in 1997 to solve the gradient diffusion problem of the Recurrent Neural Network (RNN). LSTM is a variation of the RNN which was created to avoid the problem of remembering long-term information in RNNs. LSTM consists of three gate structures, namely an input gate, an output gate and a forget gate, as shown in Figure I. LSTM works by modifying the RNN with memory cells that can store information for a long period of time. Memory cells are used to overcome the vanishing gradients that occur in RNNs when processing long sequential data [19] [20]. f. Verify and validate the models. g. Compare the actual values with the predicted values. When the evaluation process for both the ANN and LSTM methods is complete, testing is carried out using the testing data. The process carried out is making predictions. The prediction results based on the ANN and LSTM models are then compared with the actual values.
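As a concrete illustration of the two architectures compared here, the following is a minimal Keras sketch using the hyperparameters reported in Table I (128 hidden units, learning rate 0.001, ReLU, MSE loss, Adam, batch sizes 32 and 48, early stopping for the LSTM); the input window length and feature count are assumptions for illustration, not the authors' exact setup.

from tensorflow import keras
from tensorflow.keras import layers

window, n_features = 7, 1  # assumed: 7 past time steps of one variable (DO)

# Feed-forward ANN: one hidden layer of 128 ReLU units and a linear output
ann = keras.Sequential([
    keras.Input(shape=(window * n_features,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1),
])
ann.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")

# LSTM: 128 units over the same window, trained with early stopping
lstm = keras.Sequential([
    keras.Input(shape=(window, n_features)),
    layers.LSTM(128),
    layers.Dense(1),
])
lstm.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")
stop = keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)

# training calls, where X_flat/X_seq/y are the prepared training arrays:
# ann.fit(X_flat, y, batch_size=32, epochs=100, validation_split=0.2)
# lstm.fit(X_seq, y, batch_size=48, epochs=100, validation_split=0.2, callbacks=[stop])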
RESULT AND DISCUSSION The training results of the prediction model were obtained using 1247 training samples and 312 test samples, in order to find out how accurate the designed model used in this research is. This process is carried out before testing the prediction model. Based on this data, this research uses Python to run the prediction training with the training data. The aim is to produce a model to be used in the testing process, whose prediction results are matched against actual or real data from the training data. Table I provides information about the configurations of the two different models, ANN and LSTM, used for this machine learning task. Hidden Neurons is the number of neurons in the hidden layers of the model. The ANN and LSTM models are compared in this table according to the way ANN and LSTM predicted a certain parameter, Dissolved Oxygen (DO). The MSE measures the average squared difference between the predicted and actual values. The lower the value, the better the model's performance. Based on Table II, the LSTM model has a lower MSE (0.000109) compared to the ANN model (1.90465), which means that, by this metric, the LSTM model is more accurate in predicting the Dissolved Oxygen (DO) parameter. The RMSE is the square root of the MSE and represents the average distance between the predicted and actual values. The lower the value, the better the model's performance. Based on Table II, the ANN model has a lower RMSE (0.00436) compared to the LSTM model (0.01045), which means that the ANN model is better at minimizing the average distance between the predicted and actual values. The MAPE measures the average absolute difference between the predicted and actual values as a percentage of the actual values. The lower the value, the better the model's performance. In this case, the ANN model has a lower MAPE (1.85%) compared to the LSTM model (4.27%), which means that the ANN model has a better performance in predicting the DO parameter. Based on these evaluations, the LSTM model has a lower MSE, but the ANN model has a lower RMSE and MAPE, and the evaluation indicates that the ANN model forecasts the DO parameter with greater accuracy. Therefore, the correct interpretation of the table is that the ANN model performs better than the LSTM model in predicting the DO parameter, according to the evaluation metrics. The LSTM model has a lower MSE, indicating that it is better at reducing the average squared difference between the predicted and actual values. However, the ANN model is better at minimizing the average distance between the predicted and actual values and has a lower percentage difference between the predicted and actual values.
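The three evaluation metrics can be computed as follows (a minimal NumPy sketch with made-up DO values in mg/l, for illustration only):

import numpy as np

def mse(y, p):
    return float(np.mean((y - p) ** 2))        # average squared difference

def rmse(y, p):
    return float(np.sqrt(mse(y, p)))           # average distance, in the same units as DO

def mape(y, p):
    return float(np.mean(np.abs((y - p) / y)) * 100.0)  # percentage error; assumes no zero targets

y_true = np.array([7.1, 6.8, 7.4, 7.0])  # hypothetical observed DO (mg/l)
y_pred = np.array([7.0, 6.9, 7.2, 7.1])  # hypothetical model predictions
print(mse(y_true, y_pred), rmse(y_true, y_pred), mape(y_true, y_pred))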
CONCLUSION In conclusion, the models demonstrate the performance of ANN and LSTM in predicting the DO concentration in river water. Both models perform well in predicting DO concentrations, although there is a slight deviation in the predicted values for some observations. The ANN model performs better than the LSTM model in predicting the DO parameter, according to the evaluation metrics. The LSTM model has a lower MSE, 0.000109, indicating that it is better at reducing the average squared difference between the predicted and actual values. However, the ANN model is better at minimizing the average distance between the predicted and actual values and has a lower percentage difference between the predicted and actual values, with an RMSE of 0.00436 and a MAPE of 1.85%. Further research can be done to improve the performance of the model by incorporating additional variables or changing the model architecture. Predicting economic trends, and the impact of the environment on these trends, are the goals of forecasting [8] [9]. The research's prediction results are expected to aid in the creation of a mathematical model for river water pollution, which will be tested and trained on relevant datasets through the use of Artificial Neural Network (ANN) and Long Short-Term Memory (LSTM) techniques, and its effectiveness will be assessed.
Figure I. LSTM Architecture
ANN has two different layer configurations, with 128 and 1 neurons, whereas LSTM has only one configuration with 128 neurons. The learning rate, the step size with which the model updates its weights during training, is 0.001 for both models. The batch size is the number of training samples used for each update of the model's weights during training: ANN has a batch size of 32, while LSTM has a batch size of 48. The activation function introduces non-linearity in the model's output; both models use the Rectified Linear Unit (ReLU). The loss function measures the error between the predicted and actual outputs of the model; both models use the Mean Squared Error (MSE) loss function. The optimization algorithm adjusts the weights during training; both models use the Adam optimizer. As for the training strategy, ANN uses a regular training strategy, while LSTM uses early stopping to prevent overfitting. Based on the given configurations, it can be observed that the ANN model has more layer configurations with different numbers of hidden neurons, while the LSTM model has a fixed configuration of 128 neurons. The learning rate is the same for both models, as are the activation function, loss function and training optimizer, while the batch sizes differ. The training strategy also differs between the two models: ANN uses a regular training strategy, while LSTM uses early stopping.
Figure II. The Result of ANN
3,178
2024-04-01T00:00:00.000
[ "Environmental Science", "Computer Science" ]
Data-Driven Technology Roadmaps to Identify Potential Technology Opportunities for Hyperuricemia Drugs Hyperuricemia is a metabolic disease with an increasing incidence in recent years. It is critical to identify potential technology opportunities for hyperuricemia drugs to assist drug innovation. A technology roadmap (TRM) can efficiently integrate data analysis tools to track recent technology trends and identify potential technology opportunities. Therefore, this paper proposes a systematic data-driven TRM approach to identify potential technology opportunities for hyperuricemia drugs. This data-driven TRM includes the following three aspects: layer mapping, content mapping and opportunity finding. First, we deal with layer mapping. The BERT model is used to map the collected literature, patent and commercial hyperuricemia drug data into the technology layer and market layer of the TRM. The SAO model is then used to analyze the semantics of the technology and market layers for hyperuricemia drugs. We then deal with content mapping. The BTM model is used to identify the core SAO component topics of hyperuricemia in the technology and market dimensions. Finally, we consider opportunity finding. The link prediction model is used to identify potential technological opportunities for hyperuricemia drugs. This data-driven TRM effectively identifies potential technology opportunities for hyperuricemia drugs and suggests pathways to realize these opportunities. The results indicate that resurrecting the pseudogene of human uric acid oxidase and reducing the toxicity of small molecule drugs will be potential opportunities for hyperuricemia drugs. Based on the identified potential opportunities, comparing the DNA sequences from different sources and discovering the critical amino acid sites that affect enzyme activity will be helpful in realizing these opportunities. Therefore, this research provides an attractive option for technology opportunity analysis for hyperuricemia drugs. Introduction The amount of uric acid in the body needs to be kept at a stable level. When the synthesis of uric acid increases or the amount of uric acid excreted from the body decreases, the concentration of uric acid in the blood rises [1][2][3]. A person is considered to have hyperuricemia when the uric acid level in the blood exceeds normal levels [4][5][6][7]. The increased intake of high-fat, high-protein, and high-sugar food leads to metabolic disorders and a rising risk of hyperuricemia [8][9][10]. It is estimated that there are currently about 17.7 million hyperuricemia patients worldwide. Hyperuricemia is positively correlated with many other potential diseases, such as obesity, hypertension, diabetes, cardiovascular disease, and chronic kidney disease [11]. The primary approach in treating hyperuricemia is to rebalance uric acid synthesis and excretion. Drugs for treating hyperuricemia are divided into three categories: xanthine oxidase inhibitors, urate anion transporter 1 (URAT1) inhibitors and urate oxidase. However, most of these chemical drugs for hyperuricemia have specific side effects, and the oxidase-based drugs induce antibodies with long-term use. Neither of them can dissolve the urate stones deposited in the joints [12,13]. Technological innovation for hyperuricemia drugs is imperative. TRM is a comprehensive approach to capturing changes in technology and markets over time in an integrated manner.
It is not only a flexible approach to analyzing technologies and market requirements [34,35], but it can also create a more effective way to track and analyze the latest technology trends by integrating data analysis tools [36][37][38]. Pharmaceutical technology innovations are influenced by various factors, such as changing customer expectations, uncertain intellectual property (IP) procedures, unconsidered technology changes, and resource requirements [23,[39][40][41]. It is therefore necessary to use TRM to analyze technology opportunity trends from both the technological and market dimensions. In addition, TRM is a valuable tool for shortening the technology development cycle, discovering drug targets, and optimizing resource consumption. There are three major categories of research on TRMs: theory-based, case study-focused, and data/methodology-specific [15]. Among the numerous extensions of TRM, data integration is a notable trend, and some researchers have employed data-driven approaches in TRM. For example, Yu developed a patent roadmap for competitive market and patent layout planning analysis [42]. Zhou traced the innovation path of solid lipid nanoparticles [37]. Despite the contributions of previous research using TRM to analyze TOA, data-driven TRMs have some limitations in three areas: processing data for the data-driven TRM, identifying potential opportunities, and selecting the data source. From the data processing standpoint, most previous research has adopted keyword-based network analysis approaches for data-driven TRM. However, traditional keyword-based analysis approaches neglect the relationships between technologies and the market and inadequately reflect the contexts. Regarding identifying potential technology opportunities, most researchers have relied on current trends from the perspective of technology forecasting. The technology opportunity analysis (TOA) process contains six main stages: data acquisition, description, potential relationship extraction, visualization, analysis of results, and identification of potential technical opportunities. Instead of identifying potential technology opportunities, current TRM work focuses on the first five steps of TOA. Identified technology opportunities are primarily based on technology hotspots, whereas technology opportunities often exist in potential connections. From the perspective of data source selection, technology and the market have become complex. Some critical technical information exists not only in patents, but also in the literature and market reports. Similarly, market information is hidden not only in market reports but is also embedded in patents and the literature. Therefore, it is necessary to construct a comprehensive data-driven TRM to automatically, quickly, and accurately extract technology and market data from a large amount of literature, patent, and commercial data. Therefore, this article proposes a systematic method for developing a data-driven TRM to identify potential technology opportunities for hyperuricemia drugs, which contains three stages. The first stage is layer mapping: we map the literature, patent, and commercial data into the technology and market layers based on BERT, and perform semantic analysis of the technology layer and market layer based on SAO. The second stage is content mapping: we identify topics of SAO components for the technology and market layers based on BTM. The last stage is opportunity finding: we identify possible links between unconnected nodes based on link prediction.
This data-driven TRM effectively identifies potential technology opportunities for hyperuricemia drugs and suggests pathways to realize these opportunities. The rest of this paper is organized as follows. Section 2 describes the relevant theoretical background on TOA, TRM, and data-driven TRM regarding BTM, SAO, and link prediction. Section 3 outlines our proposed approach for a data-driven TRM, explaining how BTM, BERT, SAO, and link prediction are integrated to analyze technological opportunities for hypouricemic drugs. A case study of technology prediction related to hyperuricemia is then presented in Section 4. Section 5 discusses our discoveries and extensions of the study. Section 6 summarizes the paper, looks at possible future research, and notes some limitations of our study. Technology Opportunity Analysis Technology opportunity analysis (TOA) helps researchers and organizations explore potential technological opportunities. It also enables a better understanding of scientific and technological developments by deeply mining valid information in publications, patents, and the literature [43]. Many researchers have developed effective methods to identify and predict technology opportunities. Early research used qualitative analysis methods that relied on expert experience, such as Delphi studies and workshops. In specialized fields, specialist opinion can provide creative foresight for analyzing technology opportunities. However, the amount of information has increased steeply, and it is impossible to consistently identify technology trends based on expert knowledge alone, which is time-consuming and costly. In addition, specialist judgment is often limited by personal expertise and bias, and sometimes consensus cannot be reached. Bibliometrics, which is used to evaluate R&D activities by counting the numbers of authors and literature citation relationships [45,46], was first introduced to analyze technological opportunities for emerging technologies [44]. This method has been widely used for the analysis of technology evolution trajectories in energy [47], conductive polymer nanocomposites [48], and robotics [49]. Bibliometrics provides quantitative data and objective evidence to evaluate technical opportunities and helps experts reach a consensus. However, bibliometrics cannot extract the meaning of documents in depth; it can only reflect information such as the flow of knowledge and the citations of literature and patents, and it suffers from a time lag. Text mining was then introduced into TOA; it is suitable for unstructured text data analysis and can extract text features in depth [50]. Many data analysis techniques, such as machine learning and natural language processing, have since arisen, and some research has focused on developing automated and semi-automated data analysis methods in TOA [51][52][53]. Among them, principal component analysis (PCA) and text clustering are often used to extract topic information [37]. Similarity is used to measure connections between technical topics [54]. For example, Wu predicted evolutionary relationships between stem cell themes based on LDA, HMM, and co-occurrence theory [55]. Du used topic models to predict potential topics for new drugs [22]. Zheng presented text mining tools to reveal possible innovation pathways and commercial applications of solid lipid nanoparticle drugs [56]. Zheng also reviewed the importance of machine learning in facilitating the translation of bioenergy and biofuel innovations [57].
TOA has evolved over a long period and has been enriched by many scholars. To analyze potential technology opportunities efficiently, various data analysis tools have been explored. These tools integrate mathematics, statistics, computer science, and operations research into TOA. However, rather than working independently, these methods are often combined into new, more efficient analysis paths. Technology Roadmap and Data-Driven Technology Roadmap TRM is a time-based multi-layer chart that can be integrated with various data analysis tools to form a more efficient analysis path in TOA. TRM-based technology opportunity analysis can better identify the dynamic distribution of technology. It can also predict technology development trends and identify potential technology opportunities in a time series [34,40,58]. It usually consists of three layers: the market, product, and technology layers [39,59], as shown in Figure 1. The existing research on TRM in TOA is mainly categorized into the following streams: theory-based, case study-focused, and data/methodology-specific. Regarding theory-based TRM, some previous studies have focused on the concept and process of TRM [60,61]. To support market-pull and technology-driven innovation, new frameworks for TRM have been proposed, such as the T-Plan TRM [62], learning-based TRM [63], and umbrella-based TRM [61]. In terms of case study-focused TRM, some previous studies have focused on applying TRM in different industries or sectors [59,[64][65][66][67]. To accommodate domain-specific and case-specific needs in different areas, customizations of TRMs have been developed. There are TRMs in the aeronautical and aerospace sectors [68] and for robotics technologies in the power sector [69]. There are also TRMs in agile hardware development [67] and pharmaceutical technology landscape development [70]. Regarding data/methodology-specific TRM, some previous studies have focused on integrating data analysis tools to develop an efficient TRM [58,[71][72][73]. The TRM has excellent flexibility in its structure and development process. Various tools can be flexibly selected to build a TRM according to the different purposes of TOA. For example, researchers have integrated various tools into TRM to accommodate the complex business environment and the rise of big data. Means such as technology mining (TM), the analytic hierarchy process (AHP) [74], the business model canvas (BMC) [75], cross impact analysis (CIA) [76], and fuzzy set theory [58] have been employed. Some studies used technology mining-based patent analysis for technology roadmaps to explore AI-healthcare innovation [77]. Furthermore, some studies employ tools such as Bayesian networks and topic modeling to develop a risk-adaptive technology roadmap under deep uncertainty [78]. With the rise of big data analytics and the rapid change in the business environment, TRM integrating data and analysis methodology for TOA is becoming increasingly popular. More and more researchers concentrate on the importance of data to TRM [79], and the data-driven technology roadmap has gradually been proposed [80]. The data sources for data-driven TRM are increasingly diversified, mainly including patents, literature, and commercial data. Various data analysis tasks can be used for data-driven TRM, such as text classification, summarizing, key information extraction, topic clustering, semantic analysis, navigation, topic visualization, and node linking [15,71,81].
To put the selected data sources into the proper layers of the TRM, some studies choose data analysis tasks such as text classification models [82]. Some studies employ data analysis tasks to identify potential technology topics for TRM, such as text clustering tools [57,83]. Some studies use semantic analysis tools, such as SAO, to extract critical technology information [84,85]. And to identify potential technology opportunities, some studies adopt link prediction [15]. Data Analysis Techniques and Data-Driven Technology Roadmap Data analysis techniques have been increasingly employed to support quantitative and intelligent data-driven TRM. Bidirectional Encoder Representations from Transformers with the Data-Driven Technology Roadmap Data analysis tools, such as text classification models, can be used to put the selected data sources into the proper layers of a data-driven TRM. Classification models such as support vector machines (SVM) [86], k-nearest neighbor (KNN) [87,88], Hidden Markov models [89], and Bayesian models [44,90] can be employed. To achieve good classification accuracy, these text classification models must be trained on a massive manually labeled training dataset [86], which is time-consuming, labor-intensive, and costly. Low accuracy of a classification model is often caused by a small sample size, inefficient model computation, and high reliance on domain experts. To improve the accuracy of classification with small-sample training sets, the BERT model was proposed in 2018 and has been widely used for its excellent performance in text classification [91]. When dealing with domain-specific classification tasks such as pharmaceutical technology, it is necessary to construct small domain sample datasets and pre-train the model [17,92,93]. This paper intends to create a small domain-specific training set, followed by training the BERT model based on fine-tuning [94] to accurately classify pharmaceutical data into the proper layers of the data-driven TRM. Subject-Action-Object Analysis with the Data-Driven Technology Roadmap Data analysis tools, such as SAO, can extract critical technology information for the layer mapping of a data-driven TRM [95]. Initially, the SAO structure was widely used to analyze technical documents such as patents, to present valid technical information and critical technical findings, and to represent the relationships between technical elements [96][97][98][99][100][101][102]. Guo constructed SAO chains to identify future directions of technology [103]. Wang identified technology opportunities based on SAO and the morphological matrix [104,105]. Natural language processing techniques have enabled SAO structures to express rich semantic information compared with topics. Therefore, SAO is considered an effective tool for identifying critical relationships between technical elements in a corpus [106]. Subsequently, the SAO structure has been extended to many other fields, such as patent similarity analysis [85,107] and patent network analysis [108]. It can also be applied to technology tree analysis [96,109], technology trend analysis [110], online review demand extraction [27], and M&A target selection [101]. Although SAO is widely used in TRM with TOA, there are still some limitations. When constructing the data-driven TRM, if the SAO structures are adopted directly without refining, the TRM is likely to contain a large amount of redundancy. This is inappropriate for efficient analysis and needs further refinement [111].
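For concreteness, SAO triples of this kind can be pulled from a dependency parse; the following is a minimal sketch using spaCy as a stand-in parser (the studies above, and the methodology below, rely on the Stanford Parser), with an illustrative sentence:

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def extract_sao(text):
    # keep only complete subject-action-object triples; incomplete fragments are discarded,
    # mirroring the filtering step described later in the methodology
    triples = []
    for tok in nlp(text):
        if tok.pos_ == "VERB":
            subjects = [c for c in tok.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in tok.children if c.dep_ == "dobj"]
            if subjects and objects:
                triples.append((subjects[0].text, tok.lemma_, objects[0].text))
    return triples

print(extract_sao("Xanthine oxidase inhibitors reduce uric acid synthesis."))
# expected output along the lines of: [('inhibitors', 'reduce', 'synthesis')]

Raw triples of this kind accumulate quickly across thousands of abstracts, which is exactly where the redundancy noted above arises.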
Therefore, it is necessary to identify topics of SAO components for the different layers of the data-driven TRM based on topic modeling tools [27,105]. Biterm Topic Model with the Data-Driven Technology Roadmap Data analysis tools, such as topic models, can identify potential technology topics for the data-driven TRM [112,113]. Identifying topics of SAO components for the different layers via a topic modeling tool can help researchers understand the target domain effectively. It can help to extract which areas a technical solution focuses on, how the solution works, and which parts of the solution are the targets [44,52]. The most popular topic modeling technique is LDA [81]. The LDA model has been widely used in various fields, such as text mining, bioinformatics, and image processing. The model has proven effective in extracting topics from large amounts of text data and analyzing changes in technical topics [114,115]. However, data have become increasingly varied, and short text data have emerged and exploded in volume. Short texts are characterized by sparsity and imbalance, and the accuracy of extracting topics from them using the same topic modeling algorithms as for long texts, such as LDA, is low. There was therefore an urgent need for a topic model suitable for short texts [24]. Yan proposed a topic model algorithm, BTM, that is more suitable for short text clustering [116]. The model enhances the learning efficiency of the topic model by modeling unordered co-occurring word pairs, effectively solving the semantic sparsity problem of short texts [117]. BTM can automatically extract hot and potential technical topics from large amounts of short text data, even for domain-specific datasets, and has become one of the most widely used short text modeling technologies [118]. However, BTM analyzes technology and market topics based on keywords, which cannot reflect the contexts. To reflect the contextual semantics of the topics, BTM is more effective when combined with SAO [108]. The SAO structure consists of phrases, and BTM is suitable for identifying topics of SAO components, which helps to effectively reduce the redundancy of SAO. Some limitations exist in using BTM and SAO for technical opportunity analysis: they only consider the existing relationships and links, and pay less attention to automatically identifying potential technology opportunities [102]. However, technology opportunities often exist in potential connections [111]. SAO and BTM therefore need to be combined with predictive tools, such as link prediction, for better performance in TOA. Link Prediction with the Data-Driven Technology Roadmap Data analysis tools, such as link prediction, can identify potential connections for the data-driven TRM. Link prediction is a technique for discovering nodes or links in a network that are currently unknown but may be connected in the future. It has been well developed and applied in social network analysis and TOA [119,120]. There are three significant categories of link prediction: link prediction based on similarity, on maximum likelihood estimation, and on probabilistic models. Link prediction based on maximum likelihood estimation is unsuitable for massive amounts of data and has low prediction accuracy. Link prediction based on probabilistic models often relies on external attributes of nodes, which are often difficult to obtain. In contrast, similarity-based link prediction is more accurate and is widely used [121].
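As a minimal illustration of similarity-based link prediction (a toy NetworkX example with hypothetical topic nodes; the Jaccard index over common neighbours stands in for whichever similarity index is chosen):

import networkx as nx

# toy network of topic nodes; edges are the links observed so far
G = nx.Graph([("t1", "t2"), ("t2", "t3"), ("t1", "t4"), ("t3", "t4"), ("t4", "t5")])

# score every currently unlinked pair; higher scores suggest likelier future links
candidates = sorted(nx.jaccard_coefficient(G), key=lambda uvp: -uvp[2])
for u, v, p in candidates:
    print(f"{u} -- {v}: {p:.2f}")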
With the development of classification models, Hansen found that link prediction can also be constructed based on them [122]. Supervised classification models based on Bayesian methods, neural networks, and support vector machines (SVM) [123] can be employed to improve model accuracy. Subsequently, scholars began to compare the performance of link prediction based on classification models in processing domain datasets. In TOA, Yoon compared similarity-based and SVM-based link prediction performance. One can also try to identify potential technology opportunities based on further classification models, such as LightGBM with link prediction [123]. Link prediction is increasingly becoming a research hotspot in technology prediction. It has been widely used in the technical analysis of biological and medical patent data [124]. Shibata performed link prediction analysis in five large citation networks [125]. Xiao combined SAO with link prediction to identify technical opportunities in skin melanoma [111]. Ma proposed a link-prediction-based technical knowledge network framework to predict potential technical opportunities in Alzheimer's disease [126]. Methodology This article proposes a systematic method for developing a data-driven TRM to identify potential technology opportunities for hyperuricemia drugs. It contains three stages. The first is layer mapping: we classify the literature, patent, and commercial data into layers based on BERT, and perform semantic analysis of the technology layer and market layer based on SAO. The second stage is content mapping: we identify topics of SAO components for the technology and market layers based on BTM. The last stage is opportunity finding: we identify possible links between unconnected nodes based on link prediction. The data-driven TRM benefits technology needs assessment and technology response development in the technology roadmapping process. The proposed model consists of three modules, as in Figure 2. The numerous databases of patents, academic papers, journals, and business reports contain voluminous technology and market information. Their abstracts are also stored in a structured database format, making them a beneficial source for data analysis. Since this study calls for technical and market-related data in developing a data-driven technology roadmap, we employ the Medline, Derwent Innovations Index (DII), and Abstracts of Business Information (ABI) databases for data source collection. We then use different search queries related to the research topic to download relevant scientific papers, patents, and business journals and reports. Setting the Timeframe of the Data-Driven TRM Considering the complex and dynamic technology replacement and market changes, one core task of the TRM is setting the time frame. The technology opportunity analysis can then be done within each time frame. Different rules, such as the S-curve, can set the time frame. According to the S-curve, the generation and development of technology have their own pattern and trajectory, and the stages of the technology life cycle are predictable and iterative. Building a time frame based on the S-curve helps to identify technology opportunities in a forward-looking manner. Therefore, this study uses the S-curve-based model to determine the development stages of R&D activities of technologies, and to identify technology and market trends within each allotted time frame [127].
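A minimal sketch of fitting such an S-curve to cumulative record counts (SciPy; the counts below are synthetic and for illustration only — the fitted inflection point and saturation level are what delimit the development stages):

import numpy as np
from scipy.optimize import curve_fit

def s_curve(t, K, r, t0):
    # logistic growth: cumulative output saturating at capacity K,
    # with growth rate r and inflection point t0
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(2000, 2022, dtype=float)
cum_records = s_curve(years, 12000, 0.35, 2012.0)  # synthetic cumulative publication counts

(K, r, t0), _ = curve_fit(s_curve, years, cum_records, p0=(cum_records[-1], 0.1, 2010.0))
print(f"capacity={K:.0f}, rate={r:.2f}, inflection year={t0:.1f}")
# stage boundaries around t0 (emerging / growth / maturity) define the TRM timeframes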
Classifying the Data into Layers Based on BERT In the second module, BERT, a text classification algorithm based on fine-tuning, is used to classify the tech-related and market-related data into layers for the data-driven TRM. Previous research on TRM using data-driven approaches tends to use abstracts. The abstract is an overview of the full text and facilitates the rapid discovery of high-value information in data with low value density. This section consists of the following steps. First, the abstracts are separately extracted from the database within a timeframe. After that, we pre-process the tech-related and market-related data in text format within the timeframe. This article uses the sent_tokenize module of the Natural Language Toolkit (NLTK) Python package to divide the abstracts into sentences. We then apply a BERT model to classify technical and market data. Even though BERT performs well in classification tasks based on the Google corpus, organizing technical and market data requires domain training sets, such as pharmaceutical data. To construct the domain training set, domain experts will be invited, and 30 percent of the sentences will be extracted randomly from the entire data set. The extracted training set will then be manually labeled as technology-related, market-related, or irrelevant data. After that, we will pre-train the BERT model. Only the technical-related and market-related data will be kept. Finally, the whole dataset will be divided into several subsets and classified into the technology and market layers of the data-driven TRM within the timeframe. Semantic Analysis for the Technology Layer and Market Layer Based on SAO SAO structure extraction is a machine learning technique. It is employed to obtain objective, structural, and effect information from text and convert that information into structured text data. SAO structures consider contextual meaning, which is superior to keyword-based analysis. Thus, we chose the SAO technique to extract technical and market semantic structures that reflect the contexts. This section consists of the following steps. First, we extract the SAO structures. SAOs cannot be extracted without the help of a parser, which can analyze textual data through regular syntax rules. In this study, we employ the Stanford Parser (version 3.9.2 models), available as an open-source package, for sentence parsing and for extracting SAO structures [128]. Next, we extract the SAOs from each sentence in the timeframe with a series of linguistic algorithms. After that, we filter, clean, and combine the SAO structures. The data used for technical analysis is likely to be very large with a low value density: it contains all of the SAO structures, which is not appropriate for highly efficient analysis and needs to be filtered, cleaned, and combined. Therefore, we delete the duplicated technology and market SAO structures; only the unique SAOs are left. However, on the basis of the dependency parser, the complete SAO (subject + action + object) structures as well as incomplete ones, such as SO (subject + action) and AO (action + object), are all collected [129]. We remove the technology and market SAOs without subjects, actions, or objects, such as SO and AO. Pre-Processing the SAO Components There will be vast redundancy if the content mapping is based solely on SAO semantic analysis to identify potential technology opportunities.
Pre-Processing the SAO Components
There will be vast redundancy if the content mapping is based solely on SAO semantic analysis to identify potential technology opportunities. Hence, in this module, in addition to filtering the SAO structures as above, we employ a text clustering approach for the dimensionality reduction of SAOs. Short text data are sparse and imbalanced, and text clustering algorithms such as LDA cannot reliably extract topics from short texts. Most of the SAO structures are phrases. We therefore selected BTM, which is more appropriate for short text clustering, to extract the topics of the technology and market SAO components. This section consists of the following steps: first, we divide the remaining technology and market SAOs into several subsets. We then remove data noise and pre-process the sub-datasets group by group, with steps such as word tokenization, stemming, lemmatization, and stop-word removal.

Identify Topics of SAO Components for Technology and Market Layers Based on BTM
We use a BTM-based topic model to identify meaningful core and potential technology and market topics automatically. Perplexity is a crucial index for evaluating the clustering effect of topic models. Nonetheless, perplexity cannot capture the semantic coherence of the words within each topic for a non-probabilistic evaluation, whereas topic coherence can. Therefore, we chose coherence as the metric to evaluate the BTM model's effectiveness. This section consists of the following steps: first, we evaluate the coherence value while varying the number of topics to determine the optimal number. Second, we train the BTM model. Lastly, we identify the topics of the SAO components for the technology and market layers based on BTM.

Identify Potential Connections Based on Link Prediction
The core of building a data-driven roadmap is to predict possible technology opportunities. However, content mapping only analyzes past data and ignores potential future technological opportunities. Therefore, in this module, we chose link prediction to predict the potential links between unlinked nodes. This section consists of the following steps: first, we select the results of the SAO pre-processing in Section 3.3.1 as the training set. We then train the link prediction model and construct the overall network. Finally, we estimate the probability of potential links for all unlinked topic nodes based on the trained link prediction model, as sketched below.
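The link prediction step can be illustrated with classical topological similarity indices over the topic network. The toy co-occurrence graph, the choice of indices (Jaccard, Adamic-Adar, preferential attachment), and the ranking rule below are assumptions for illustration; the study does not state which scoring model it trains.

# Minimal sketch of link prediction over a topic co-occurrence network.
# The toy graph and the choice of similarity indices are illustrative; the
# study's actual network is built from the pre-processed SAO topic data.
import itertools
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("urate_oxidase", "pegylation"),
    ("urate_oxidase", "immunogenicity"),
    ("xanthine_oxidase_inhibitor", "hepatotoxicity"),
    ("xanthine_oxidase_inhibitor", "small_molecule"),
    ("small_molecule", "hepatotoxicity"),
])

# Candidate pairs are all currently unlinked node pairs.
unlinked = [
    (u, v) for u, v in itertools.combinations(G.nodes, 2) if not G.has_edge(u, v)
]

# Score each candidate pair with classical topological indices.
scores = {}
for u, v, p in nx.jaccard_coefficient(G, unlinked):
    scores[(u, v)] = {"jaccard": p}
for u, v, p in nx.adamic_adar_index(G, unlinked):
    scores[(u, v)]["adamic_adar"] = p
for u, v, p in nx.preferential_attachment(G, unlinked):
    scores[(u, v)]["pref_attach"] = p

# Rank candidate links by one of the scores; highly ranked pairs are treated as
# potential technology opportunities to be drawn in the roadmap.
ranked = sorted(scores.items(), key=lambda kv: kv[1]["jaccard"], reverse=True)
for pair, s in ranked[:3]:
    print(pair, s)

In practice, such index values can also serve as input features for a supervised classifier (e.g., SVM or LightGBM, as mentioned earlier), which then outputs the probability of a potential link for each unlinked pair.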
Integrating TRM and Analyzing Technology Opportunities
It is challenging to analyze such a large-scale network, so we only keep and interpret the technology and market subnetworks. Subnetwork nodes are selected from the topics extracted by BTM. This section consists of the following steps: first, we map the technology and market subnetworks in a time series and layer series in the two-dimensional data-driven TRM. Figure 3 shows an example of the final visualization. As shown in the figure, the technology roadmap is divided into two layers, technology and market. For example, the technical layer is divided into three sub-layers, from sub-layer S and sub-layer A to sub-layer O, along the vertical axis. The horizontal axis is the time series arranged by timeframe; the same applies to the market layer. Figure 3. The data-driven TRM for the technology layer. The technical layer was divided into six communities represented by C1, C2, C3, C4, C5, and C6. Dashed circles with different colors highlight the different communities. The width of the edges and arrows represents the probability of a potential link between unconnected nodes: the wider the edge or arrow, the higher the likelihood of a potential link. Arrows and edges with different colors represent different community themes.
Second, we select the nodes and edges in the technology and market subnetworks where potential links exist and visualize them in the technical and market layers. If there is a potential link between two unlinked topics, the two nodes are connected with an arrowed line. The width of the edges and arrows represents the probability of a potential link between unconnected nodes: the wider the edge or arrow, the higher the likelihood of a potential link. We use these connections as possible directions for future technology and market development. We then divide the link prediction visualization results into different communities in the technical and market layers [130,131]. Arrows and edges with different colors represent different community themes, and dashed circles with different colors highlight the different communities. Lastly, we analyze possible opportunities for future technology and market development based on the final technology roadmap.

Data Collection
The dataset of this study was derived from three distinct databases: Medline, Derwent, and ABI. We used the MeSH term 'MH = (gout OR hyperuricemia)' as the search query for Medline, the International Patent Classification number 'IP = (A61P-019/06)' as the search query for Derwent, and the keyword "hyperuricemia" as the search query for ABI. The cutoff date was 31 December 2021; any data beyond that date are not part of this study. In total, 6124 hyperuricemia-related publications, 5158 hyperuricemia-related patent records, and 4582 hyperuricemia-related commercial records were extracted, as shown in Table 1. The study keeps only the records with abstracts, which include 5066 literature abstracts, 5158 patent abstracts, and 1447 commercial records. The databases used in this study are presented in Table 1. The literature abstracts, patent abstracts, and commercial data were analyzed for technology opportunities. This study divides the part of the dataset with abstracts related to hyperuricemia drugs into three sub-periods according to the S-curve.

Classifying the Data into Layers Based on BERT
In the second module, the abstracts are separately extracted from the tech-related and market-related data of TS1, TS2, and TS3 for text analysis. After that, we pre-processed the data extracted from TS1, TS2, and TS3 by dividing the 11,671 abstracts into 85,656 individual sentences with Python's NLTK package. We then applied a BERT model to classify technical and market data. Even though BERT is well suited to classifying technical and market data, a training set cannot be built without the help of experts. Three experts who have worked on hyperuricemia drugs for more than ten years were invited to construct the training set. With the help of the domain experts, we reviewed the data sentence by sentence and extracted 30 percent of the sentences randomly from the entire data set as the training set. The extracted training set was then manually labeled as technology-related, market-related, or irrelevant, denoted by C1, C2, and C0, respectively. The BERT model was fine-tuned, and each subset (TS1, TS2, TS3) was divided into these three categories. This study calls for technical and market-related data in developing a data-driven technology roadmap, and the irrelevant data in C0 are of no use to our research [15]. Thus, after classification, we only kept the 54,026 tech-related sentences in C1 and the 31,630 market-related sentences in C2. Finally, the whole dataset was divided into six subsets and classified into the different layers of the data-driven TRM.
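A minimal sketch of the fine-tuning step is given below, assuming the Hugging Face Transformers library and the generic bert-base-uncased checkpoint; the in-line sentences, label mapping, and hyperparameters are placeholders rather than the study's actual settings.

# Minimal sketch of fine-tuning BERT for the three-way sentence classification
# (technology-related C1, market-related C2, irrelevant C0). The checkpoint,
# hyperparameters, and the tiny in-line dataset are illustrative placeholders.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
import torch

texts = ["A novel xanthine oxidase inhibitor reduces serum urate.",      # tech
         "The global gout drug market is expected to grow steadily.",    # market
         "The authors declare no conflict of interest."]                 # irrelevant
labels = [1, 2, 0]   # C1 = 1, C2 = 2, C0 = 0

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=3)

class SentenceDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=64)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

args = TrainingArguments(output_dir="bert_trm", num_train_epochs=3,
                         per_device_train_batch_size=16, logging_steps=10)
trainer = Trainer(model=model, args=args,
                  train_dataset=SentenceDataset(texts, labels))
trainer.train()

# After training, unlabeled sentences are scored and routed to the technology
# or market layer; sentences predicted as C0 are discarded.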
In the third module, we extract the technology and market SAO component topics in each timeframe. We divided the 6343 technology SAOs and 1076 market SAOs into 19 subsets, as shown in Table 4. For example, T-S-TS1 denotes the technology S-component data in 2010-2013, and M-S-TS1 denotes the market S-component data in 2010-2013. To remove data noise, we pre-processed the 18 sub-datasets group by group, with steps such as word tokenization, stemming, lemmatization, and stop-word removal. In addition to the 972 basic stop-words, we designated 2188 domain-specific stop-words, such as 'uric acid', 'hyperuricemia', and 'treatment', and excluded them from the analysis.

Identify Topics of SAO Components for Technology and Market Layers Based on BTM
We applied a BTM-based topic model to identify meaningful core and potential technology and market topics automatically. To determine the optimal number of topics, we evaluated the coherence value while varying the number of topics; the number of topics that maximizes the coherence value is taken as optimal. We then calculated the coherence value subset by subset, as shown in Table 5, Figures 5 and 6, and Supplementary Tables S1-S6. Take Supplementary Table S6 as an example.
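The coherence-driven choice of the topic number can be sketched as follows. Because BTM implementations vary, a gensim LDA model stands in for BTM here and the tokenized SAO phrases are invented; only the selection logic (train for several candidate topic numbers and keep the one with the highest c_v coherence) mirrors the procedure described above.

# Minimal sketch of choosing the number of topics by maximizing coherence.
# gensim's CoherenceModel only needs the fitted topic-word distributions, so it
# can score a BTM solution as well; an LDA model is used here purely as an
# illustrative stand-in, and the tokenized SAO phrases are placeholders.
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

tokenized_saos = [
    ["urate", "oxidase", "reduce", "uric", "acid"],
    ["xanthine", "oxidase", "inhibitor", "lower", "urate"],
    ["protein", "drug", "immunogenicity", "antibody"],
    ["small", "molecule", "drug", "hepatotoxicity"],
]
dictionary = Dictionary(tokenized_saos)
corpus = [dictionary.doc2bow(doc) for doc in tokenized_saos]

best_k, best_score = None, float("-inf")
for k in range(2, 5):                          # candidate numbers of topics
    model = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                     random_state=0, passes=10)
    cm = CoherenceModel(model=model, texts=tokenized_saos,
                        dictionary=dictionary, coherence="c_v")
    score = cm.get_coherence()
    if score > best_score:
        best_k, best_score = k, score
print(f"Selected {best_k} topics (c_v coherence = {best_score:.3f})")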
Identify Potential Connections Based on Link Prediction
The last module forecasts possible technical opportunities for hyperuricemia. We constructed the entire network based on link prediction. To build the whole network, we selected the data set pre-processed in Section 4.3.1 to train the link prediction model in the technology and market layers. The entire technology network consists of 7693 nodes and 17,190 edges, and the whole market network consists of 1868 nodes and 2907 edges.

Integrating TRM and Analyzing Opportunities
We select some nodes and edges in the technology and market sub-networks for visualization. Only the nodes and edges that are potentially connected are selected for visualization from the 61 technology topics and 42 market topics extracted in Section 4.3.2. Next, within each timeframe, we arrange the visualizations in the technology and market layers of the data-driven TRM from sub-layer S to sub-layer O, as shown in Figures 3 and 7. Finally, we divided the communities. The technical layer was divided into six communities represented by C1, C2, C3, C4, C5, and C6; the market layer was divided into five communities represented by C7, C8, C9, C10, and C11. Figure 7. The data-driven TRM for the market layer. The market layer was divided into five communities represented by C7, C8, C9, C10, and C11. Dashed circles with different colors highlight the different communities. The width of the edges and arrows represents the probability of a potential link between unconnected nodes: the wider the edge or arrow, the higher the likelihood of a potential link. Arrows and edges with different colors represent different community themes.
Figures 3 and 7 integrate the semantic analysis, topic modeling, and link prediction results and show the final technology roadmap. We analyzed the 11 communities in the technology and market layers, and several technological opportunities can be identified. The technical opportunities in the C1, C4, C7, and C10 communities are mostly related to chemical-based drugs such as small molecule drugs, while the technological opportunities in the C2, C3, C6, C8, and C11 communities are primarily associated with biological medications such as protein drugs.

Take the protein-drug-related communities as an example. The technology opportunities for protein-based hyperuricemia drugs are related to the C2, C3, C6, and C8 communities. From 2019 to 2021, while trying to restore human urate oxidase activity, researchers continued to study how to reduce the half-life of existing protein drugs. Urate oxidase can be derived from Aspergillus flavus (T-S-TS3-T1 in C2, T-S-TS3-T4, and T-S-TS3-T5 in C2). How to eliminate or reduce the immunogenicity of existing urate oxidase and obtain active, non-immunogenic human urate oxidase drugs will be the future direction of protein drug development. Across these three stages, developers' attention to restoring human uric acid oxidase activity is greater than the market demand. In the future, a comparison of the DNA sequences of uric acid oxidase from different sources could be considered to discover the critical amino acid sites that affect enzyme activity (T-S-TS2-T1 in C2). The essential amino acid sites would then be mutated (M-O-TS2-T2 in C2). After each completed mutation, the expression of human uric acid oxidase (T-A-TS2-T2 in C4) would be induced, and the activity of uric acid oxidase (T-S-TS3-T1 in C2) assayed after affinity purification. Through this pathway, the human uric acid oxidase pseudogene could be resurrected. This would also overcome the disadvantage that existing oxidase drugs are immunogenic and produce antibodies when used for a long time, yielding human uric acid oxidase with high drug activity but low immunogenicity to improve the treatment of hyperuricemia and gout.

The technology opportunities for small-molecule hyperuricemia drugs are related to the C1, C4, C7, and C10 communities. Similarly, how to reduce the side effects of small molecule drugs, such as lowering hepatotoxicity and nephrotoxicity (M-O-TS1-T2), will be the future direction of small molecule drug R&D. Across these three stages, developers' attention to reducing the side effects of small molecule drugs is lower than the market demand, so reducing these side effects is critical in the future. We can try to discover new structures from Chinese medicine or use it in combination, such as combining heat-clearing, dampness-relieving herbs with small molecule drugs (M-S-TS3-T10 in C7, M-S-TS3-T10 in C3). In addition, it would be worthwhile to combine changes in diet patterns with small molecule drug therapy, reducing the intake of high-purine foods and avoiding an overly rich diet (T-S-TS2-T6 in C1, T-O-TS3-T2 in C5, M-S-TS2-T9 in C7, M-S-TS3-T7 in C8).

Discussion
This paper presents a systematic approach to developing a data-driven TRM to identify potential technology opportunities for hyperuricemia drugs. Despite the contributions of previous research, we extend the existing data-driven TRM in three aspects: identifying potential opportunities with the data-driven TRM, the data process, and the data source of the data-driven TRM. Compared with current approaches, we chose link prediction, from the perspective of technology forecasting, for the opportunity-finding stage of the data-driven TRM. SAO considers existing technology connections, while technology opportunities are often hidden in potential relationships.
SAO therefore needs to be combined with link prediction to better predict technology opportunities, which is why we identify possible links between unconnected nodes based on link prediction. Based on the link prediction results, future work should focus on resurrecting the pseudogene of human uric acid oxidase and reducing the toxicity of small molecule drugs. From the perspective of the data process, compared with keyword-based analysis, we chose SAO to extract critical technology information for the layer mapping of the data-driven TRM while reflecting the contexts. The SAO structure has been widely used to analyze documents such as patents, online reviews for demand extraction, and papers. Although SAO is commonly used in TOA, the SAO structures must be refined for efficient analysis. Therefore, this article identifies the topics of SAO components for the different layers of the data-driven TRM based on BTM, which effectively reduces the redundancy of the SAOs. Besides that, it also helps to extract which areas a technical solution focuses on, how the solution works, and which parts of the solution are its targets. Based on the potential opportunities identified by link prediction, the realization path of each opportunity is inferred from the SAO structure. For example, it is critical to resurrect the pseudogene of human uric acid oxidase. We could compare the DNA sequences of uric acid oxidase from different sources to discover the critical amino acid sites that affect enzyme activity (T-S-TS2-T1 in C2). We would then mutate the essential amino acid sites (M-O-TS2-T2 in C2), induce the expression of uric acid oxidase (T-A-TS2-T2 in C4), and analyze the enzyme activity after affinity purification (T-S-TS3-T1 in C2). From the perspective of data source selection, compared with selecting only patents as technical data and commercial reports as market data, we choose patent, literature, and commercial reports as the technical and market data of the data-driven TRM. We automatically distinguish technology and market data from the vast amount of literature, patent, and commercial data based on BERT, combined with a small domain training set to train the BERT model to classify hyperuricemia drug data with high accuracy. To identify potential technology opportunities in concert with market demand, it is necessary to analyze technology opportunities from both the market and technology perspectives.

Conclusions
It is essential to assist hyperuricemia drug developers via a data-driven TRM for TOA. However, less attention has been paid to integrating multiple analytical tools, such as SAO, BTM, and link prediction, within a data-driven TRM to identify potential technology opportunities automatically. This study extends the existing data-driven TRM in several aspects to fill this gap: the data process, technology forecasting, and data source selection. We try to answer the following questions in this study. First, we illustrate how to build a semantic-based data-driven TRM and identify topics of SAO components based on BTM. Second, we show how to identify potential technology opportunity points through link prediction. Last, we demonstrate how to extract technical and market information automatically from various patents, literature, and business data based on text classification tools. Therefore, this research provides an attractive option for the TOA of hyperuricemia drugs.
It can narrow down the technology research topics, reduce R&D risks, and support the decision-making of hyperuricemia drug research. Despite the promise of the data-driven TRM for the TOA of hyperuricemia drugs, several challenges remain that can be addressed by future research. First, the SAO structure is complex, and it is challenging for machine learning models to identify semantic relations in complex sentences. In future research, we will focus on exploring how to extract SAO structures from complex sentences. Second, patent, literature, and commercial data are not real-time data because of the publication time lag, so it is difficult to obtain the latest technology and market information for TOA. In the future, we will consider developing a dynamic data-driven TRM, such as by adopting dynamic topic models for the topic analysis of the SAO structures. Finally, this paper selects patent, literature, and commercial data as data sources. In the future, more diversified data sources, such as online reviews and drug instructions, will be selected to complement the existing analysis. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ph15111357/s1, Table S1. Topic result of S components for technology layer in the time-based framework; Table S2. Topic result of A components for technology layer in the time-based framework; Table S3. Topic result of O components for technology layer in the time-based framework; Table S4. Topic result of S components for market layer in the time-based framework; Table S5. Topic result of A components for market layer in the time-based framework; Table S6. Topic result of O components for market layer in the time-based framework.
Comparison of ANM and Predictor-Corrector Method to Continue Solutions of Harmonic Balance Equations
In this work we apply and compare two numerical path continuation algorithms for solving the algebraic equations arising when applying the Harmonic Balance Method to compute periodic regimes of nonlinear dynamical systems. The first algorithm relies on a predictor-corrector scheme and an Alternating Frequency-Time approach. This algorithm can be applied directly also to non-analytic nonlinearities. The second algorithm relies on a high-order Taylor series expansion of the solution path (the so-called Asymptotic Numerical Method) and can be formulated entirely in the frequency domain. The series expansion can be viewed as a high-order predictor equipped with inherent error estimation capabilities, which makes it possible to avoid correction steps. The second algorithm is limited to analytic nonlinearities, and typically additional variables need to be introduced to cast the equation system into a form that permits the efficient computation of the required high-order derivatives. We apply the algorithms to selected vibration problems involving mechanical systems with polynomial stiffness, dry friction and unilateral contact nonlinearities. We assess the influence of the algorithmic parameters of both methods to draw a picture of their differences and similarities. We analyze the computational performance in detail, to identify bottlenecks of the two methods.

Introduction
Harmonic Balance (HB) permits the efficient approximation of periodic solutions of nonlinear ordinary differential equations. Often, we want to determine the solution as a function of a free parameter. To this end, numerical path continuation is commonly applied. Continuation provides higher robustness and efficiency as compared to simply computing the solution for a sequence of equidistant parameter values. Perhaps more importantly, continuation allows us to overcome turning points and therefore to capture multiple solutions for a single parameter value. The purpose of this work is to highlight the strengths and weaknesses of two popular methods for continuing solutions of the HB equations. HB uses a truncated Fourier series as the approximation ansatz. Substitution into the ordinary differential equation system gives a residual term, which is then made orthogonal to the Fourier basis functions (Fourier-Galerkin projection). This corresponds to requiring that the Fourier coefficients of the residual are zero for those harmonics retained in the ansatz. When we apply this method to the equation of motion of a mechanical system, with generalized coordinates q, subjected to periodic forcing, we obtain the algebraic equation system

r(q̂, Ω) = S(Ω) q̂ + f̂_nl(q̂) − f̂_ex = 0.  (1)

Herein, Ω is the angular excitation frequency, q̂ is the vector of Fourier coefficients of the approximation for q(t), and f̂_nl and f̂_ex are the Fourier coefficients of the nonlinear forces and the external forces, respectively. S is the dynamic stiffness matrix representing the linear internal forces proportional to q, q̇, and q̈, where an overdot denotes the derivative with respect to time. S is block diagonal (different harmonics are decoupled in the linear case).
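As a small illustration of the linear part of Eq. (1), the sketch below assembles the block-diagonal dynamic stiffness of a single-DOF system in a complex-exponential Fourier basis; the mass, damping, and stiffness values are arbitrary placeholders.

# Minimal sketch: block-diagonal dynamic stiffness S(Omega) of Eq. (1) for a
# single-DOF linear system m*q'' + c*q' + k*q, using complex Fourier
# coefficients q_h for harmonics h = 0..H. Parameter values are illustrative.
import numpy as np

def dynamic_stiffness(Omega, H, m=1.0, c=0.05, k=1.0):
    """Return S(Omega) as a diagonal matrix over harmonics 0..H (complex basis)."""
    h = np.arange(H + 1)
    # For q(t) ~ sum_h q_h * exp(i*h*Omega*t): q' -> i*h*Omega*q_h, q'' -> -(h*Omega)^2*q_h
    diag = -(h * Omega) ** 2 * m + 1j * h * Omega * c + k
    return np.diag(diag)

S = dynamic_stiffness(Omega=1.2, H=5)
print(S.shape)        # (6, 6): one (decoupled) entry per retained harmonic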
Alternating-Frequency-Time scheme with Predictor-Corrector Continuation (AFT-PreCo)
The AFT scheme approximates the Fourier coefficients f̂_nl by sampling q and q̇ at a certain number N of equidistant time instants along one period, evaluating f_nl(q, q̇) in the time domain, and applying the discrete Fourier transform to the samples of the nonlinear force. Hence, the nonlinear forces are represented by N samples. N affects the accuracy of the procedure and has to be selected properly. For polynomial forces, the sampling procedure is exact beyond a certain N. Otherwise, f̂_nl is an approximation and contains aliasing errors. The great advantage of the AFT procedure is that it can be easily applied to smooth as well as non-smooth nonlinear forces. The downside of the AFT scheme is that it is not straightforward to efficiently compute derivatives of order higher than one. The AFT scheme is commonly combined with PreCo continuation. Suppose we want to determine q̂ for a range of the free parameter Ω. Starting from an initial solution X_0 with X := [q̂^T, Ω]^T, we can do a prediction, X^pre_PreCo = X_0 + α X_1, where X_1 is the unit tangent to the solution path at the point X_0 and α is the step length of the prediction. X^pre_PreCo will usually not satisfy Eq. (1). Hence, correction steps are necessary, commonly in the form of Newton iterations, to improve the estimate until ||r(X)|| < ε for a given tolerance ε. Note that the equation system is under-determined. This is often resolved by considering an additional equation that ensures that the new solution point lies at a certain distance α from X_0 (arc-length continuation). In PreCo, the step length is a crucial parameter: a too small step length leads to spurious computation effort, while a too large step length may lead to many and costly correction iterations or even divergence. The step length is commonly adjusted automatically with the intent to achieve a desired number of correction iterations. However, it is advisable to choose reasonable upper and lower bounds, to avoid overlooking important features of the solution path and getting stuck near branching points, respectively.

Classical Harmonic Balance with Asymptotic Numerical Method (cHB-ANM)
The ANM can be interpreted as a high-order predictor (order p), which is a Taylor series expansion of the solution path, around X_0, in the arc length α: X^pre_ANM = X_0 + α X_1 + ... + α^p X_p. The step size α is determined such that the estimated error, based on the range of utility of the Taylor series, remains below the tolerance ε, with the intent to completely avoid correction iterations. For the ANM, the algebraic equation system is commonly recast into quadratic form, i.e., with only constant, linear and bilinear (quadratic) terms in the unknowns. The Fourier coefficients of the nonlinear terms can in this case be expressed analytically without having to resort to sampling, which corresponds to classical HB. Note that all variables, including the nonlinear forces, are represented by 2H + 1 Fourier coefficients. The quadratic form permits the successive computation of X_k up to high order. Only a single Jacobian matrix has to be factorized to determine all X_k, k > 0, per expansion point X_0 [2]. This contrasts with the PreCo, where several such factorizations are required, one per Newton iteration, to get to the next point on the solution branch.
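The following sketch combines the two ingredients just described for a Duffing oscillator: the AFT evaluation of the nonlinear force coefficients and a Newton correction of a predicted point. For brevity it uses the previous solution as the predictor (sequential continuation at fixed Ω rather than the arc-length scheme described above) and a finite-difference Jacobian; all parameter values are illustrative.

# Compact sketch of the AFT evaluation and one predictor-corrector step for a
# Duffing oscillator m*q'' + c*q' + k*q + gamma*q^3 = F*cos(Omega*t).
# Real cosine/sine Fourier coefficients are used; the finite-difference Jacobian
# and all parameter values are purely illustrative.
import numpy as np

m, c, k, gamma, F = 1.0, 0.05, 1.0, 0.1, 0.2
H = 5                       # retained harmonics
N = 4 * H + 1               # sampling rule that avoids aliasing for a cubic force
tau = 2.0 * np.pi * np.arange(N) / N          # normalized time instants Omega*t

def synthesize(x):
    """Time samples of q and dq/d(tau) from coefficients x = [a0, a1, b1, a2, b2, ...]."""
    q = np.full(N, x[0])
    dq = np.zeros(N)        # derivative w.r.t. tau; multiply by Omega for dq/dt
    for h in range(1, H + 1):
        a, b = x[2 * h - 1], x[2 * h]
        q += a * np.cos(h * tau) + b * np.sin(h * tau)
        dq += -a * h * np.sin(h * tau) + b * h * np.cos(h * tau)
    return q, dq

def aft_coefficients(f):
    """Project N time samples of a force onto the retained Fourier basis."""
    coeffs = [f.mean()]
    for h in range(1, H + 1):
        coeffs.append(2.0 / N * np.sum(f * np.cos(h * tau)))
        coeffs.append(2.0 / N * np.sum(f * np.sin(h * tau)))
    return np.array(coeffs)

def residual(x, Omega):
    """HB residual r = S(Omega)*x + f_nl_hat(x) - f_ex_hat, cf. Eq. (1)."""
    q, _ = synthesize(x)
    f_nl_hat = aft_coefficients(gamma * q ** 3)     # AFT: sample, evaluate, transform back
    r = np.zeros_like(x)
    r[0] = k * x[0] + f_nl_hat[0]
    for h in range(1, H + 1):
        a, b = x[2 * h - 1], x[2 * h]
        r[2 * h - 1] = (k - (h * Omega) ** 2 * m) * a + h * Omega * c * b + f_nl_hat[2 * h - 1]
        r[2 * h] = -h * Omega * c * a + (k - (h * Omega) ** 2 * m) * b + f_nl_hat[2 * h]
    r[1] -= F                                        # cosine coefficient of the excitation
    return r

def newton_correct(x, Omega, tol=1e-10, max_iter=20):
    """Correct a predicted point at fixed Omega with plain Newton iterations."""
    for _ in range(max_iter):
        r = residual(x, Omega)
        if np.linalg.norm(r) < tol:
            break
        J = np.zeros((x.size, x.size))               # finite-difference Jacobian (illustrative)
        eps = 1e-7
        for j in range(x.size):
            xp = x.copy(); xp[j] += eps
            J[:, j] = (residual(xp, Omega) - r) / eps
        x = x - np.linalg.solve(J, r)
    return x

# Simplest possible "continuation": use the previous solution as the predictor.
x = np.zeros(2 * H + 1)
for Omega in np.linspace(0.5, 1.5, 21):
    x = newton_correct(x, Omega)
    print(f"Omega = {Omega:.2f}, first-harmonic amplitude = {np.hypot(x[1], x[2]):.4f}")

A full PreCo implementation would replace the fixed-Omega correction by the tangent predictor and an arc-length constraint appended to the residual, so that turning points of the frequency response can be traversed.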
The cHB-ANM cannot directly deal with non-smooth nonlinearities. These have to be approximated by appropriate analytic functions (regularization). Moreover, the equations have to be brought into quadratic form, which generally requires introducing auxiliary variables and equations. For details on how the cHB-ANM can be implemented in a computationally efficient and user-friendly way, we refer to [3]. Fig. 1 (a) depicts the frequency response of the Duffing oscillator. For the same H, both methods have the same number of unknowns (Fourier coefficients of q up to order H). However, the results differ as the nonlinear terms are represented in a different way. For the cubic term of the Duffing oscillator, it can be shown that the sampling procedure yields the Fourier coefficients of the nonlinear forces up to order H without aliasing error for N ≥ 4H + 1. For Fig. 1 (a), N was selected accordingly. For 2 H_cHB + 1 = N_AFT, the nonlinear terms are represented with the same number of Fourier coefficients in cHB as samples in the AFT, so that a similar accuracy can be expected. Hence, AFT with N_AFT = 5 agrees well with cHB with H_ANM = 2, N_AFT = 13 with H_ANM = 6, and so on. Fig. 1 (b) shows the results for a single degree of freedom (SDOF) oscillator with elastic dry friction nonlinearity. Without regularization, only AFT can be applied, and a reasonable approximation is achieved with H = 1, N = 45. To regularize, an auxiliary variable describing the force had to be introduced, which is not well represented with H = 1. Again, for H = 22 = (45 − 1)/2, both methods yield similar results for the regularized model. The remaining deviation from the reference is due to the regularization. To compute the results depicted in Fig. 1 (b), AFT-PreCo with H = 1, N = 45 (non-smooth) was about 25 times faster than cHB-ANM with H = 22. We observed that as the regularization becomes steeper, the number of harmonics in the cHB-ANM has to be increased to continue the whole branch. Sometimes this value seems too high to be useful. In Fig. 1 (c), the computational effort is shown for a modal model (n modes) of a beam with geometric nonlinearity. The results suggest that cHB-ANM becomes more efficient for larger numbers of DOFs. The number of solution points was larger in the case of the cHB-ANM, which overall led to approximately the same total number of Jacobian factorizations for the entire solution path. Apparently, the AFT calculation of the nonlinear forces and their derivatives is less efficient than the evaluation of the nonlinear terms and the calculation of higher-order Taylor series coefficients within the cHB-ANM. Further investigations are in progress to gain a better understanding of this. Fig. 1: Representative results: a) Duffing oscillator; b) SDOF system with elastic dry friction; c) geometrically nonlinear beam with n coupled modal coordinates. A_RMS is the root-mean-square value of the generalized coordinate. More results on different benchmark systems and further analyses of the opportunities and limitations of both methods will be published in a journal article.

Conclusions
The results indicate that the AFT-PreCo is better suited for non-smooth nonlinearities such as stick-slip friction or unilateral constraints. The cHB-ANM is highly efficient for low-order polynomial nonlinearities and problems with a large number of DOFs, as in the case of geometrically nonlinear finite element models. It is expected that the cHB-ANM would greatly benefit from selecting a separate, higher truncation order for the auxiliary variables.
Quantum de Sitter geometry
Quantum de Sitter geometry is discussed using elementary field operator algebras in Krein space quantization from an observer-independent point of view, i.e., the ambient space formalism. In quantum geometry, the conformal sector of the metric becomes a dynamical degree of freedom, which can be written in terms of a massless minimally coupled scalar field. The elementary fields necessary for the construction of quantum geometry are introduced and classified. A complete Krein-Fock space structure for elementary fields is presented using field operator algebras. We conclude that since quantum de Sitter geometry can be constructed by elementary field operators, the geometry quantum state is immersed in the Krein-Fock space and evolves in it. The total number of accessible quantum states in the universe is chosen as the parameter of quantum state evolution, which has a relationship with the universe's entropy. Inspired by the Wheeler-DeWitt constraint equation in cosmology, the evolution equation of the geometry quantum state is formulated in terms of the Lagrangian density of the interaction fields in the ambient space formalism.
I
The quantum de Sitter geometry or quantum gravity is a subject fraught with enigmas that has garnered attention for over four decades. These enigmas encompass the absence of an S-matrix, challenges in defining observer-independent gauge-invariant quantities, the issue of infrared divergences and renormalizability, and the construction of a complete space of quantum states, among others. In a previous article, we delved into the study of asymptotic states and the S-matrix operator, based on the construction of a complete Hilbert-Fock space for massive scalar fields in the de Sitter ambient space formalism [ ]. The formulation of an observer-independent non-abelian gauge theory is also achievable using the ambient space formalism [ , ]. Krein space quantization leads to the disappearance of the infrared divergence [ , ]. In this work, we explore the construction of a complete space of quantum states for quantum geometry and delve into quantum state evolution. Recently, Morris discussed that full quantum gravity may be perturbatively renormalizable in terms of Newton's constant, but non-perturbative in ℏ [ ].
Morris's interesting idea is to use the renormalization group properties of the conformal sector of gravity. It is well known that in quantum theory, the conformal sector of the spacetime metric becomes a dynamical degree of freedom as a result of the trace anomaly [ , ]. Then the metric-compatible condition is no longer valid, and the simplest geometry to choose in this situation is Weyl or conformal geometry. Weyl geometry can be described with the tensor metric field and its conformal sector, which can be expressed as a scalar field [ ]. In the Landau gauge of the gravitational field in de Sitter (dS) space, the conformal sector is described by a massless minimally coupled (mmc) scalar field [ ]. Its quantization with positive norm states breaks the dS invariance [ ]. For its covariant quantization, Krein space quantization is needed [ ]. Using the interaction between the gluon field and the conformal sector of the metric in Krein space quantization, the axiomatic dS quantum Yang-Mills theory with color confinement and the mass gap can be constructed [ , ]. We showed that the mmc scalar field can be considered as a gauge potential and that the dS metric field and its conformal sector are not elementary fields à la Wigner [ ]. However, they can be written in terms of elementary fields, in which the mmc scalar field plays a central role. We presented two different perspectives on quantum geometry, namely the classical and quantum state perspectives. The first is observer-dependent and the second is observer-independent. We discussed that it is essential to use an observer-independent formalism when considering quantum geometry. Therefore, we must use the algebraic method in the ambient space formalism for studying quantum geometry and define the quantum state of geometry |G⟩, which will be addressed in this paper. In recent years, some authors have also used the idea of the algebraic approach to consider quantum gravity. This approach takes into account an algebra of observables, a Hilbert space structure, and the geometry quantum state [ , ]. By using the algebraic method, in the previous paper the complete Hilbert-Fock space was constructed for the massive elementary scalar field in the dS ambient space formalism [ ]. Here we generalize it to construct a Hilbert-Fock space structure for fields of any spin in subsection . . This space is a complete space under the action of all elementary field operators in dS space except linear gravity and the mmc scalar field. To obtain a complete space for these two fields, we need Krein space quantization, which is discussed in subsection . . We know that QFT in Krein space quantization combined with light-cone fluctuation is renormalizable [ ]. Therefore, the two problems of renormalizability and constructing the complete space of quantum states for quantum dS geometry can be solved using Krein space quantization and the ambient space formalism.
In the next section, we briefly review the necessary notions of general relativity and QFT for our discussion.All possible fundamental fields necessary for quantum geometry are introduced and classified in Section .In section ., Krein-Fock space as a complete space for quantum geometry is presented.The quantum state of geometry | G is considered in section , which can be formally written in terms of orthonormal bases of Krein-Fock space.It is immersed and evolves in the Krein space K instead of the Hilbert space H.Quantum state evolution is characterized by the total number of accessible quantum states in the universe, which has a relationship with the total entropy of the universe.In Section , using the Wheeler-DeWitt equation, the constraint equation for the quantum state of geometry is formulated in terms of the Lagrangian density of interaction fields. .B Spacetime structure and observation are challenging concepts in quantum theory.Riemannian geometry is usually employed in general relativity.In Riemannian geometry, spacetime can be described by the metric ( ì , ) (with the metric-compatible condition) and curved spacetime can be visualized as a 4-dimensional hypersurface immersed in a flat spacetime of dimensions greater than 4.Although the 4-dimensional classical spacetime hypersurface is unique and observer-independent, the choice of metric is completely observer-dependent, which is a manifestation of the general relativity principle, all observers are equivalent (i.e.diffeomorphism covariance).However, spacetime hypersurfaces are no longer unique in quantum geometry.In the classical perspective, quantum spacetime is described by a sum of different spacetime hypersurfaces [ , ].But in the quantum perspective, the quantum spacetime is modelled by a quantum state | G , which is presented in Section . . In QFT, the physical system can be described by a quantum state vector |Ψ( , ) , where and are the set of continuous and discrete quantum numbers respectively.They are labeled the eigenvector of the set of commutative operator algebras of the physical system and determine the Hilbert space, for dS space with more details see [ ].Although the particle and tensor(-spinor) fields, Φ( ì , ), are immersed in a spacetime manifold , the quantum state vector is immersed in a Hilbert space H.The field operator Φ( ì , ) plays a significant role in the connection between these two different spaces: a spacetime manifold and a Hilbert space H. 
On the one hand, it is immersed in spacetime, and on the other hand, it acts in Hilbert's space, which is defined at any point in a fixed classical spacetime background (of course in the distribution sense).Hilbert space can be thought of as the "fiber" of a bundle over the spacetime manifold, where each point of the manifold corresponds to a different fiber, H × .The bundle is typically referred to as a "Fock space bundle".For a better understanding of this idea, see [ ] and noncommutative geometry [ ].The Wightman two-point function, W( , ′ ) = Ω|Φ( )Φ( ′ )|Ω , provides a correlation function between two different points in spacetime and their corresponding Hilbert spaces.|Ω is the vacuum state.Historically, time played a central role in quantum theory, since the time parameter describes the evolution of the quantum state.Time, however, is an observer-dependent quantity in special and general relativity, and for quantum geometry to be observer-independent, the time evolution of quantum states must be replaced by another concept, which is discussed in Section . It is useful to recall that in contrast to all massive and massless elementary fields, the mmc scalar field disappears at the null curvature limit [ ].Its quantization with positive norm states also breaks the dS invariant [ ] and its behavior is very similar to the gauge fields [ ]. Since the collection of all these properties is the same as the properties of curved space-time geometric fields, the mmc scalar field can be considered as part of space-time geometry.This idea was previously applied to explain the confinement and mass gap problems in dS quantum Yang-Mills theory, by using the interaction between the vector field and the scalar gauge field, as a part of the spacetime gauge potential [ , ]. . E In the background field method, = + ℎ , the linear gravity ℎ propagate on the fixed background .The tensor field ℎ can be divided into two parts: the tracelessdivergencelessness part ℎ , which can be associated with an elementary massless spin-2 field and the pure trace part, ℎ = 1 4 .The pure trace part can be transferred to the conformal sector of the background metric: It is also called the conformal sector of the metric, which becomes a dynamical variable in quantum theory [ , ].Quantum geometry is equal to the quantization of the tensor field or equivalently , ℎ , ℎ and .In quantum geometry, the choice of the curved metric background is not critical since we have simultaneous fluctuations in and ℎ and it can also be considered as an integral over all possible spacetime hypersurfaces [ , ].For a covariant quantization of ℎ , the background must be curved [ ], and from the cosmological experimental data, the dS metric is selected as a curved spacetime background. 
For an observer-independent point of view, we use the dS ambient space formalism [ , ].In this formalism, the dS spacetime can be identified with the 4-dimensional hyperboloid embedded in the 5-dimensional Minkowski spacetime as: and is like Hubble's constant parameter.The metric is: , where the 's ( = 0, 1, 2, 3) form a set of 4-space-time intrinsic coordinates on the dS hyperboloid, and the 's are the ambient space coordinates.In this coordinate, the transverse projector on the dS hyperboloid, = + 2 , plays the same role as the dS metric .In this formalism, quantum geometry is described by the quantization of the tensor fields , K and K = 1 4 Although the tensor field K and scalar field are elementary fields, the background metric and conformal sector of the metric, K = 1 4 , are not elementary fields à la Wigner sense, since [ ]: The transverse-covariant derivative acting on a tensor field of rank-2 is defined by: ( . ) where • is tangential derivative.The tensor fields K and can be written in terms of elementary fields: the massive rank-2 symmetric tensor field ( 2 = 15 4 ), mmc scalar gauge field and massless vector field = ⊤ [ ].The tensor field is discussed as massive gravity in literature, which was studied in the previous paper [ ].The massless vector field quantization was presented in [ ].The constant pure trace part evokes the famous zero-mode problem in linear quantum gravity and the quantization of the mmc scalar field.The classical structure of our universe may be constructed by the following fundamental fields, which can be divided into three categories: • A: Massive elementary fields with different spins, which transform by the unitary irreducible representation (UIR) of the principal series of the dS group. • B: Massless elementary fields with the spin ≤ 2, which includes the gravitational waves K , and mmc scalar fields .They transform by the indecomposable representation of the dS group where the central part is the discrete series representation of the dS group.They play an important role in defining the interaction between different fields as the gauge potential [ ]. • C: The spacetime geometrical fields and conformal sector of the metric K , which are not elementary fields, but they can be written in terms of the elementary fields of categories A and B. Although in classical field theory, they preserve the dS invariant, their quantization breaks the dS spacetime symmetry [ ].The quantization of the elementary massive and massless fields with the spin ≤ 2 in dS ambient space formalism has been previously constructed for principle, complementary, and discrete series representations of the dS group; for a review, see [ ].The mmc scalar field and linear gravity K can be quantized in a covariant way in Krein space quantization [ , ].We know that the QFT in curved spacetime suffers from renormalizability, and for solving this problem, Krein space quantization must be used; see [ ] and the references therein.Due to the quantum fluctuation of tensor field K , the dS invariant is broken [ ], which is reminiscent of the quantum instability of dS spacetime [ ]. . Q Before discussing the quantum geometry space of states, Hilbert-Fock space and Krein-Fock space constructions are briefly introduced using dS group algebra and field operators algebra.We discuss that the Krein-Fock space is a complete space for all elementary fields and quantum geometry. . 
.Hilbert-Fock space.Fro the dS spacetime, one can construct a one-particle Hilbert space H (1) from dS group algebra for the principal, complementary, and discrete series UIR of the dS group [ , , ]: where are the generators of the de Sitter group ( , = 1, 2, • • • , 10), are the structure constants, 1 and 2 are two numbers, labeling the UIR's of the maximal compact subgroup SO(4), picked in the sequence 0, 1 2 , 1, The 's are sets of parameters numbering the columns and rows of the (generalized) matrices, assuming continuous or discrete values [ ]. The UIRs of the principal and complementary series are classified by the two parameters and , whereas is continuous and the sum is replaced with an integral [ , , ]: ( . ) where ( ) is a positive weight in the dS background [ ]. refers to the mass parameter and is equivalent to the spin.They determine the eigenvalues of the Casimir operators of the dS group.H ≡ ∫ ∞ 0 d ( ) H ; is one-particle Hilbert space for a specific spin .A quantum state in this Hilbert space may be represented with | , ; ∈ H (1) , where is a set of quantum numbers that concern the maximal set of commuting operators with the Casimir operators, which represent the dS enveloping algebra [ , ].It is critical to note that the Hilbert space H ; is not a complete space under the action of the dS group generator , but the Hilbert space H (1) is complete space [ ].The notation H (1) means it is the "one particle state" (first quantization).Since dS group generators and field operators do not modify the spin of the state, for a fixed spin , the Hilbert space H is also a complete space.In this study, we do not consider supersymmetry and supergravity, otherwise, the sum over the index should be taken into account for obtaining a complete space. There are different realizations for the bases of the one-particle Hilbert space H (1) where some of them are presented for the scalar field ( = 0) in [ ]. Formally, we define a UIR of the de Sitter group by ( ; ) ( ( )) ≡ ( , , , ), which is a regular function of dS group generators and acts on the Hilbert space as: where the 10 's are the group parameters.These parameters make up a 10-dimensional topological space T( ).By using the expressions of the matrix elements (∼ coefficients) of this representation, one can construct a space of square-integrable functions over some subspaces S of the topological space T, i.e. 2 (S) where S ⊂ T( ).Takahashi discusses different subspaces and defines the relations between some of them by the Plancherel formula [ ].The classical tensor(-spinor) field can be identified with some coefficients of the UIR of the dS group in dS ambient space coordinates under certain conditions: Φ( ) ≈ | ( ; ) ( ( ))| ′ , where ∈ .In QFT, these classical fields are assumed to be the operators, which act on a space with the Fock structure, i.e. like the harmonic oscillators.The well-defined operators are defined in a tempered-distributional sense on an open subset O of spacetime [ ]: where is a test function and d ( ) is dS invariant measure element.As usual in Fock structure, the field operator can be written in terms of its creation part, Φ + , and its annihilation parts Φ − : Φ( ) = Φ − ( ) + Φ + ( ), where Φ + ( ) creates a state and Φ − ( ) annihilates a state in the Fock space.By defining a "number" operator ( , ) ≡ Φ + ( )Φ − ( ), one can prove the following algebra, which results in the construction of the Hilbert-Fock space [ ]: ( . 
) , and here Φ is the tensor field.For the fermion field, the anti-commutation relation must be used.W( , ′ ) = Ω|Φ( )Φ( ′ )|Ω is the Wightman two-point function and |Ω is the vacuum state, which can be fixed in the null curvature limit [ ]. Now using the infinite-dimensional closed local algebra ( .), one can construct the Hilbert-Fock space in a distributional sense on an open subset O of the dS spacetime [ , ]: where ℂ is vacuum state, H (1) is one-particle states and H ( ) is n-particle states.The n- particle states are constructed by the tensor product of one-particle states (for bosons, a symmetry product, H (2) = H (1) ⊗ H (1) and for fermions anti-symmetric products, H (2) = H (1) ⊗ H (1) ).We used the Hilbert-Fock space name to emphasize that the structure of our QFT Hilbert space is in the form of the equation ( .).An overview of axiomatic quantum fields, observable algebraic nets, and the algebraic setting of second quantization can be found in [ ]. Considering the interaction fields, it does not add any supplementary operators but reduces the number of commuting operators.Then we have a new algebra, resulting in a new Hilbert space H .This space H can be immersed in the original space, which means H ⊂ H. Therefore one can use the Hilbert space ( . ) for the interaction fields, for the scalar field see [ ]. . .Krein-Fock space.The above Hilbert-Fock space structure cannot be used for the mmc scalar field operator, and then for K , , and K .The one-particle Hilbert space of mmc scalar field is not a complete space under the action of the dS group generators .Their action results in the negative norm state [ ].This problem appeared as a dS-invariant breaking and the appearance of infrared divergence in the two-point function W( , ′ ) [ ]. Then the field operator algebra ( . ) breaks the dS invariant and the dS invariant Hilbert-Fock space structure cannot be constructed.That means the effect of the field operator over some states results in states out of the Hilbert space, i.e. states with the negative norm.These states are necessary to obtain a complete space. In this case, the two-point function is the imaginary part of the two-point function of the positive mode solutions [ , ]: is the two-point function of the negative norm states.If we replace the two-point function in the field operator algebra ( . ) with the Krein two-point function W ( , ′ ), we can construct the following dS invariant Krein- Fock space structure: ( . ) It is pertinent to note that the Krein-Fock space is a complete space for all massive and massless elementary field operators in the dS spacetime.The Krein space can be considered the "fiber" of a bundle over the dS base manifold, K × .In this complete space, we can define (in the distribution sense) the identity operator formally as ½ ≡ M |M M|. . .Quantum geometry space of states.In quantum geometry, the biggest challenge appears in the quantization of .Its quantum fluctuation breaks the dS invariant and the concept of spacelike separation points cannot be defined.Therefore one cannot define the field operator algebra ( . ) for the construction of the Krein-Fock space structure.This problem has a long history and we do not want to discuss it here, see [ ].We ignore this problem for now since the Krein-Fock space ( . ) is a complete space for all elementary fields in dS space, and the geometrical fields and K can be written in terms of elemen- tary fields.Therefore we can use the Krein-Fock space ( . 
) for quantum field operators and K .We can assume that quantum geometry is described by a quantum state | G , which is immersed in the Krein-Fock space ( .), | G ∈ F(K (1) ).It can be formally written by a superposition on the Krein-Fock space bases in the following form: The action of the field operators on | G results in a new state | G ′ , which is in the Krein-Fock space: ). Krein-Fock space in quantum geometry plays the same role as all parts of the dS spacetime hyperboloid in classical theory.Hilbert space H may be considered the observable part of space for an observer.When we discuss Hilbert space this means we have only positive norm states.First, let's review some facts about dS spacetime, in which particles and fields are immersed and evolve within.The basis vectors of dS spacetime have negative, positive and null norms, where the spacetime interval is given by the metric signature (1, −1, −1, −1).When we move from Euclidean geometry to Minkowskian geometry, negative norm vectors appear.However, this norm's meaning is completely different from the Euclidean norm.There are three types of vectors in spacetime based on their norms: lightlike vectors, space-like vectors, and time-like vectors.Some regions of the dS hyperboloid are also not observable to an observer due to spacetime curvature and the event horizon.However, these regions are necessary to construct a covariant formalism of the physical system. Similarly, in discussions of quantum geometry, we must employ the quantum state with a negative norm for covariant quantization.Consequently, the Krein-Fock space constitutes a complete space under the influence of geometrical field operators.However, the physical significance of this negative norm state in quantum geometry remains an open question.In classical dS geometry, certain spacetime regions are beyond the observation of an observer.Analogously, in dS quantum geometry, negative norm states are necessary to achieve a complete space, yet they remain unobservable to a local observer.By implementing the observer reality principle [ ], these states can be excluded from the observer's physical space.It can be argued that the absence of interaction beyond the event horizon parallels the lack of interaction between negative and positive norm states in Krein's space for a local observer.Similar to the negative values in the Wigner quasi-probability distribution function in quantum optics, which signify non-classical states and quantum interference effects, the negative norm states in the Krein quantization of geometry might represent the non-classical and pure quantum interference phenomena, devoid of a direct classical counterpart. At the null curvature limit, negative norm states have negative energy [ ].For a free particle state in flat spacetime, they have no physical interpretation and can be considered auxiliary states for the local observer.If we assume that the gravity state contains negative energy, the matter-radiation state carries positive energy, and their sum is zero, this hypothesis is compatible with the theory of the creation of everything from nothing in cosmology, see the similar discussion after equation ( .). Different quantum gravity models are constructed in Hilbert space rather than Krein space.One of them, which is very close to our model is noncommutative geometry [ ], where in the previous paper some similarities and differences were discussed [ ].The other is higher-dimensional spacetime with > 4. In this case, the field operator algebra ( . 
) can be defined concerning the space-like separation point in , which can be imagined as a fixed background space.The quantum fluctuation of 4 may be considered as a sum over different 4-dimensional manifolds in .However higher-dimensional spacetime is used in some theoretical models. . Q As time is an observer-dependent quantity, time evolution does not make sense in quantum geometry from an observer-independent point of view.We see that the Kerin-Fock space is constructed from the free field operators algebra, which explains the kinematics of the physical system.Since all matter-radiation fields and geometrical fields are entangled and the change of one has a consequence for the other, therefore the dynamics of a physical system may be extracted from the algebra of interaction fields.But here for simplicity, we use the Lagrangian density of the interaction field for defining the evolution equation of the geometry quantum state. Assuming the universe's evolution begins from the vacuum state, i.e. a quantum state without any quanta of the elementary and geometrical fields, | G ≡ |Ω .Our universe is also assumed to be an isolated system.By these assumptions, the universe began with zero entropy.Due to quantum vacuum fluctuations in all elementary fields, and the interaction between some of them in the creation situation, the universe leaves the vacuum state and enters the inflationary phase.This means its entropy increases because isolated systems spontaneously evolve toward thermodynamic equilibrium, which is a state of maximum entropy.In the inflationary phase, which is explained by dS spacetime, we have an infinitedimensional Hilbert space.But due to the compact subgroup SO(4) of the dS group and the uncertainty principle, the total number of quantum one-particle states becomes finite [ ].The finiteness hypothesis of energy results in the finiteness of the total number of quantum states N in Fock space, which results in a finite entropy for the universe [ ]. Since the universe is an isolated system and its entropy is increasing, N increases with the evolution of the universe.Therefore the total number of accessible quantum states in the universe, N, may play the role of the time parameter and is used as the parameter of quantum state evolution.We assume that the evolution of the quantum state can be written by an operator U as follows: ) , which satisfies the following conditions: ( . ) Due to the principle of increasing entropy, we always have N 3 ≥ N 2 ≥ N 1 .For obtaining the evolution operator (N ′ , N), we need a constraint equation for the quantum state. The quantum state of the universe is a function of the configuration of all the fundamental fields in the universe, Section .Previously, we obtained these fields' classical action or Lagrangian density in the ambient space formalism.It can be formally written in the following form: For free field Lagrangian density L see [ ], and for interaction case L see [ , ].Since in dS spacetime 0 plays the same role as the time variable in Minkowski space, see section 4 in [ ], we define the conjugate field variable by Π ≡ ∇ ⊤ 0 Φ.The Legendre transformation of the Lagrangian density L(Φ, ∇ ⊤ Φ) with respect to the variable ∇ ⊤ 0 Φ can be rewritten in the following form: The explicit calculation of this function in the dS ambient space formalism for elementary fields is possible.Its physical meaning is unclear but at the null curvature limit it can be identified with the Hamiltonian density in Minkowski spacetime. 
From this fact, and inspired by the Wheeler-DeWitt equation, we define the constraint equation of the geometry quantum state as follows: The first part is the free field theory, which includes the dS massive gravity, the linear gravitational wave, and the mmc scalar field. The second part concerns the interaction of the various fields. Using equations ( . ) and ( . ), we obtain HU|G; N⟩ = 0 = UH|G; N⟩. Therefore the simple form of U which satisfies the conditions ( . ) is: Although the physical meaning of H is unclear, it remains constant throughout the universe's evolution. Dividing it into geometrical and non-geometrical parts, H = H_g + H_m, there is a fluctuation between these two parts under the evolution of the universe, with neither part constant individually. This may be interpreted as an "energy" exchange between the geometrical and non-geometrical parts of our universe. While the geometry quantum state evolves in Krein-Fock space, the fluctuation of these parts breaks dS invariance. The explicit calculation of equation ( . ) is beyond the scope of this paper and will be discussed elsewhere.

In summary, to construct the quantum geometry in this article we have used four essential key ideas, which we briefly recall: (1) utilising the ambient space formalism to attain an observer-independent perspective, which is essential for quantum geometry; (2) substituting Riemannian geometry with Weyl geometry, so that the spacetime geometry is described by the metric tensor and the mmc scalar field, since the latter is also a geometrical field; (3) replacing the Hilbert space with the Krein space to achieve a complete space and a covariant quantization; (4) replacing the time parameter of quantum state evolution with the total number of quantum states, to obtain an observer-independent formalism.

Conclusion

In quantum dS geometry, the Hilbert space H is no longer a complete space. Instead, it is a subspace of a complete Krein space, H ⊂ K, in which the requirement of positive definiteness is abandoned. Replacing Hilbert space with Krein space is essential in our quantum geometry model. Krein space quantization (including quantum light-cone fluctuation) permits us to construct a renormalizable QFT in curved space and quantum geometry. The ambient space formalism permits us to formulate quantum geometry from an observer-independent point of view and to visualize the many-worlds interpretation. It should be noted that although the metric quantization breaks dS invariance, the Krein-Fock space is a complete space for quantum geometry. The dS geometry quantum state is introduced as a superposition of the Krein-Fock space bases, and its evolution is parametrized in terms of the total number of quantum states. Using the idea of the Wheeler-DeWitt constraint equation in cosmology, the evolution equation of the geometry quantum state can be written in terms of the Lagrangian density of the interaction fields.
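Collecting the relations whose displayed equations were lost in extraction, the evolution scheme of the model can plausibly be summarized as follows (the free/interaction subscript labels are ours; the composition conditions are the standard ones for any evolution operator):

```latex
\left(H_{\mathrm{free}} + H_{\mathrm{int}}\right)\lvert G;N\rangle = 0,
\qquad
\lvert G;N'\rangle = U(N',N)\,\lvert G;N\rangle,
```

```latex
U(N,N) = \mathbb{1},
\qquad
U(N_3,N_2)\,U(N_2,N_1) = U(N_3,N_1),
\qquad
N_3 \ge N_2 \ge N_1 .
```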
7,119
2023-04-12T00:00:00.000
[ "Physics" ]
An Application-Based Review of Haptics Technology: Recent technological development has led to the invention of different designs of haptic devices, electromechanical devices that mediate communication between the user and the computer and allow users to manipulate objects in a virtual environment while receiving tactile feedback. The main criteria behind providing an interactive interface are to generate kinesthetic feedback and relay information actively from the haptic device. Sensors and feedback control apparatus are of paramount importance in designing and manufacturing a haptic device. In general, haptic technology can be implemented in different applications such as gaming, teleoperation, medical surgeries, augmented reality (AR), and virtual reality (VR) devices. This paper classifies the application of haptic devices based on their construction and functionality in various fields, followed by addressing major limitations related to haptics technology and discussing prospects of this technology.

Introduction

Haptics or haptic technology is defined as the technology of applying touch sensation while interacting with a physical or virtual environment [1]. Physical interaction may be performed at a distance (called teleoperation), and interaction with a virtual environment may be conducted through a computer-based program. Over recent years, the development of haptic devices has grown exponentially, thanks to the rapid development in technology [2]. With many kinds of information still unexploited and the necessity to respond quickly to actions, the importance of haptic devices has risen sharply [3,4]. Designing, testing, and manufacturing haptic devices require multidisciplinary knowledge, from computer science, programming, and coding to electromechanical design, human factors, and ergonomics [5]. Haptic devices enable a user to interact with computer-generated environments and create a sense of realness and tangibility [1]. This type of interaction is made possible by the actuators of the haptic device mechanism and is called haptic feedback [6]. On the application front, haptic interfaces are employed in different areas, from the entertainment industry to specialized medical devices, wearable gloves, and surgical procedures [2]. Telemanipulators, exoskeletal devices, advanced prosthetics, physical rehabilitation, intelligent assistance devices, and near-field robotics are other applications that benefit from haptic technology [7]. Recently, haptic devices have also been implemented in computerized forensics, such as 3D facial reconstruction, radiological cross sections, and analysis procedures [8]. For example, giant automobile manufacturer BMW has implemented volume control using gestures [7]. A realistic manufacturing plan for a 5-axis CNC milling process in a multi-sensory virtual environment with visual, haptic, and aural feedback was proposed in [9]. Haptic devices operate based on haptic feedback provided in the form of force and/or torque from objects in a real, teleoperated, or computer-generated environment through a Human-Machine Interface (HMI) [10,11]. Owing to the bidirectional and symmetrical interaction capabilities of haptic devices, the use of haptic apparatus has been productive when collaborating with computer systems to provide real-time feedback on a remote environment [5,12]. Human haptics, machine haptics, and computer haptics are three different areas of haptics technology [3].
When an object is touched by an operator, interaction forces are imposed on their skin; consequently, the sensory systems convey information to the brain and haptic perception is generated. In response, the brain provides commands that activate the muscles, resulting in hand or arm movement. This principle is called the human haptic system [3,13]. Specifically, human haptics relies on kinesthetic information and tactile information [3,14]. Machine haptics is defined as the use of machines to replace human touch, autonomously or through telerobotics or haptic interfaces [3,13]. Measurement of positions or contact forces from any part of the human body, computation of information, and display of positions and forces to the user are the basic operations performed in machine haptics [3,6,13]. Computer haptics has become prominent over the years; it is related to creating and rendering a sense of touch and feel of virtual objects to the user with the help of algorithms and software architectures [3]. Computer haptics deals with the creation of forces, torques, and the sense of touch through haptic rendering, much as computer graphics deals with visual rendering [13]. Computer graphics alone are incapable of providing manual feedback [6]. The interaction with the virtual environment is constructed utilizing joysticks, robotic arms, and actuation systems [2,6]. There exist various types of feedback, including force, vibrotactile, and electrotactile feedback systems. The majority of haptic devices operate based on force feedback and vibrotactile feedback [2,6,15]. Research related to the haptic interface may be classified as (i) studies carried out on technologies providing haptic stimuli and (ii) studies on how users perceive the haptic stimuli. The significance of human perception of haptic stimuli and the effect of emotional and psychological aspects on haptic feedback were addressed in [16]. For example, to create a sense of tactile realness, the joysticks used in the gaming industry provide vibrations in highly tense circumstances [6]. The efficiency, performance, and advancement of haptic interfaces depend on the type of feedback, the maneuverability and manipulability of the end-effector, the haptic stimulation, and the actuator technology [16]. One of the drawbacks in the design of haptic interfaces is limited workspace and low dexterity. For instance, in Minimally Invasive Surgery (MIS), disturbed hand-eye coordination, reduced perception of depth, and substandard haptic feedback are common limitations faced by surgeons [17]. Factors such as how the device responds, feels, or interacts should also be considered in the design phase for efficient operation [18]. Novel forms of communication, cooperation, and integration between humans and robots have become possible because of the wearability of robotic devices. Wearable haptics is gaining popularity in the robotics world [15], enabling companies like Apple and Nintendo to improve user experience (UX) with the help of high-fidelity inertial actuators [19]. However, the major challenge of commercializing wearable haptic devices for AR, medical procedures, and rehabilitation purposes remains open [15]. Various applications of haptic devices have been studied, including some works focusing on the use of haptic devices in a single particular application. The main contribution of this review paper is in categorizing haptic devices used in research, industry, and medical fields based on their applications.
This paper provides information on how the application in which a haptic device might be used can change the design of the haptic device and its features to match the requirements of that particular application. For example, while a wearable haptic device might be very useful and important in one application, an anchored desktop haptic device might be much more useful in another. These facts are shown based on the evidence in the literature and on how popular each type of haptic device is in each application. Various types of haptic devices used in different applications are discussed in detail, and the importance of haptic devices in each discipline is signified. Existing challenges in the process of designing, customizing, and fabricating haptic devices are also dissected across different applications. The paper also uses several examples to highlight the importance of wearable haptic devices and the compromises that should be made in the process of manufacturing. Last, the paper addresses multiple challenges of implementing haptic devices in real-time applications, followed by highlighting their scope of application.

Working Principles of Haptic Devices

Sensing and manipulation of objects through touch can be defined as haptics. Haptic devices are used to give feedback on the movements/forces generated by users [19]. With this feedback, many operations in which vision is impaired or limited can be carried out [18,19]. Haptic devices are also called input-output devices because they provide feedback to the system [5]. The construction of haptic devices involves the combination of various concepts from different streams such as engineering, computer science, human perception, physics, and information technology [18]. Figure 1 depicts a number of commercial haptic devices with different types of mechanisms in the design. Basically, haptic devices comprise actuators (such as electric motors), interface devices, and sensors. Haptic devices can provide the sense of touch to operators while they manipulate a virtual object, or an actual object remotely through a teleoperation system. Haptic devices provide tactile and/or kinesthetic feedback, and in some cases thermal feedback as well [20]. Tactile feedback, also known as cutaneous feedback, can be defined as feedback obtained from various mechanoreceptors attached to the outer surface of the user's body, generally on the skin [20,21]. Kinesthetic feedback is a different kind of feedback, which also plays a significant role in the field of haptics. Kinesthetic feedback can also be described as proprioception, and it refers to the awareness or sense of touch created from muscle tensions with the help of sensory receptors. Unlike tactile feedback, the sensory receptors are implemented in muscles and joints, not on the surface of the user's body [15,20,22]. Interaction with a haptic device is usually a two-way interaction by the operator. The operator moves the haptic device, and this motion is sensed by the sensors (e.g., motor encoders) and used to provide motion commands (e.g., velocity commands for a teleoperated robot). The interaction force between the robotic arm and its environment can be sensed by force/torque sensors and sent back to the haptic device, to be regenerated by the actuators of the haptic device and applied to the operator's hand. This force feedback is an additional link connecting the user to the task environment and can potentially improve telepresence.
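The two-way flow just described, position forward and force back, is the core of any bilateral haptic loop. As a rough illustration, a minimal Python sketch of one servo cycle is given below; the device and robot interfaces (read_position, command_position, read_force, apply_force) and the scaling constants are hypothetical stand-ins, not the API of any specific product mentioned in this review.

```python
import numpy as np

class HapticMaster:
    """Hypothetical stand-in for a haptic device driver."""
    def read_position(self):           # read operator motion from encoders
        return np.array([0.10, 0.02, 0.05])
    def apply_force(self, f):          # regenerate force with the actuators
        print("force on operator's hand [N]:", f)

class SlaveRobot:
    """Hypothetical stand-in for a teleoperated robot arm."""
    def command_position(self, p):     # motion command to the remote arm
        print("commanded slave position [m]:", p)
    def read_force(self):              # contact force from a F/T sensor
        return np.array([0.0, 0.0, 1.5])

POS_SCALE = 0.5    # master motion scaled down at the slave side
FORCE_SCALE = 1.0  # measured contact force reflected 1:1

def teleop_cycle(master, slave):
    # Forward path: operator motion becomes a scaled slave motion command.
    slave.command_position(POS_SCALE * master.read_position())
    # Return path: sensed contact force is replayed on the operator's hand.
    master.apply_force(FORCE_SCALE * slave.read_force())

teleop_cycle(HapticMaster(), SlaveRobot())  # real systems run this at ~1 kHz
```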
For many years, the development of haptic devices has focused on acquiring information and manipulating objects with the help of touch, by machines and humans. The interaction between machine and human in real, teleoperated, virtual, or artificial environments has also been the subject of research [22]. Table 1 shows popular haptic devices.

Applications of Haptic Devices

Haptic devices find various applications in different fields such as medical training, rehabilitation, and teleoperated robotic surgery. The following section discusses different types of haptic devices used for micromanipulation (such as medical procedures and surgery), for wearable technology, and for tasks conducted through teleoperation. Figure 2 depicts a number of high-end commercial haptic devices with more degrees of freedom of force feedback than the haptic devices in Figure 1, used for different purposes including medicine and surgery.

Haptic Devices for Micromanipulation

Various fields such as electronics, microscopy, surgery, biology, and material sciences require micromanipulation systems that can perform tasks like sensing, processing, and stiffness and conductivity testing [23]. A 4-degree-of-freedom (DoF) hybrid parallel flexure mechanism-based device was designed for micromanipulation, in which a master-slave manipulator is controlled using multi-DoF piezo actuators with the capability of providing haptic feedback. The platform consists of a planar 3-PRR manipulator (three rotary joints) and a 1-DoF bridge mechanism. The system can be employed for the assembly of microelectromechanical systems (MEMS) or for micro-teleoperated contact with biological cells [23]. Inadequate performance of the system presented for small-scale tasks in [23] prompted the creation of a new manual, bilateral cell-injection device, which uses a null-displacement active force sensor coupled with a haptic interface of negligible effective inertia to carry out manual injection in biological samples [24]. A haptic teleoperation control scheme to carry out micromanipulation tasks was proposed in [25], in which a particular mechanical design was considered for the haptic device architecture that enables an operator to perform a range of micromanipulation tasks. The study emphasized the importance of haptic devices in teleoperation, micromanipulation, and the nanoworld. The design of a haptic teleoperation control scheme, which is capable of controlling the actions of the human user and enables monitoring of items at the microscale level, was also discussed [26]. A bilateral telemanipulation system for controlling paramagnetic microparticles was also developed. The platform consists of a pantograph haptic interface and an electromagnetic system with four electromagnetic coils that enables a user to control the position of magnetic beads [27]. The design and control of a teleoperated robotic system, consisting of a 3-DoF robotic wrist and a spherical five-bar mechanism, was explained in [28]. The design of the system was based on motion data gathered using a simulated microanastomosis operation. The platform can be used for microsurgical operations and dextrous micromanipulation tasks [28]. A number of teleoperated industrial robots have also been used for surgical applications. Nowadays, most technologies used for healthcare training are haptic simulators [33].
Healthcare practice and education have begun to utilize computer-based simulation systems for training students, novice doctors, and technicians, because of their capability of providing real-time visualizations [34]. These systems enable the users to interact with a virtual environment that is similar to the real world. Training healthcare providers, including surgeons and physicians [35] and dentists [36], is of importance and requires state-of-the-art innovations such as the implementation of haptic devices. Haptic devices add a sense of touch when the user interacts with the virtual environment [37]. A haptic dental procedure simulator called HapTEL was designed to allow dental students to learn dental drilling and cavity preparation for tooth restoration [38]. DenTeach is a portable and compact vibrotactile platform that was developed to facilitate fully remote and physical-distancing-aware teaching and learning in dentistry [36]. This platform helps dental schools adapt to the COVID-19 pandemic by allowing dental students and instructors to learn and teach practical dental tasks from home or a remote location and, in turn, helps to limit the spread and transmission of the novel coronavirus. The DenTeach platform consists of an instructor workstation (DT-Performer), a student workstation (DT-Student), advanced wireless networking technology, and cloud-based data storage and retrieval [36]. The platform procedurally synchronizes the instructor and the student with real-time video, audio, feel (haptics), and posture (VAFP). DenTeach offers three modes: teaching, shadowing, and practice. Teaching mode provides each student with haptic feedback from the instructor workstation (inside the lab or at a remote place), and shadowing mode enables the student to download augmented videos and start watching, feeling, and repeating the tasks before entering the practice mode. In the practice mode, students use DenTeach to conduct delicate dental tasks, and their performance is automatically evaluated in terms of key performance indices. An extensive review of the simulation of palpation procedures, the different techniques used for palpation, and the various approaches used in medical systems with the help of haptic feedback is given in [39]. In a comparison of different palpation approaches, multi-finger palpation was commonly preferred over single-finger palpation because it covers multiple contacts at the same time [39]. Figure 3 presents the list of simulators presently used in surgery, medicine, and dentistry training.

Medical and Surgical Procedures - Examples of Micromanipulation Tasks

The application of haptic devices in surgical operations like stitching, palpation, dental procedures, endoscopy, laparoscopy, and orthopedics was explained in [37]. An external suture environment called SutureHap was developed, a simulator replicating the sensations of medical rooms and offices; an Omni haptic device was used for suture training [21]. The authors also discussed a popular medical simulation system for dental training, as well as the development of an oral implantation simulator that is able to store data collected from trainees and may be used for rehearsal and medical education [21]. Haptic technology has a growing importance in surgical operations, especially MIS, where it is commonly used in conjunction with a robotic manipulator in a bilateral teleoperation fashion [40,41].
A teleoperation system was proposed for use in MIS: a modified 6-degree-of-freedom (DoF) Denso VP-6242G with a serial mechanism and a PHANToM Premium 1.0 kinesthetic device (see Figure 1). In this system, the serial manipulator was employed at the slave site and the haptic device at the master site [42]. The authors also suggested that grippers can be added for grasping surgical items. An adjustable, immersive, and configurable platform for optometry training simulation was proposed, involving head-mounted displays, AR interfaces, and a multi-point haptic device [43]. In another platform, a preoperative planning and virtual training system was developed based on force feedback; it involves an Omega 6 haptic device, an immersive workbench, and the CHAI3D software toolkit. Using this system, preoperative planning data are transferred and surgical simulation is carried out, allowing a surgeon to perform an osteotomy procedure, learn, and improve their surgical skills [44]. The construction of Pneumatic Artificial Muscles (PAMs), which have flexible, inflatable membranes and exhibit orthotropic material behavior, was proposed in [45]. PAMs can be formed conveniently and are light; therefore, in another application, they are of interest for rehabilitation purposes because they can function as locomotion devices [45]. The importance of haptic detection between the surgical instrument and the human organ and tissue in virtual surgery was presented in [46]. Navigation in surgery was made possible with the help of tactile and force feedback between the surgical instrument and the human organ [46]. An application of Omega 7 haptic devices in the neuroArm surgical system is depicted in Figure 4. The platform uses two haptic devices to transfer the sense of touch to both hands of a surgeon. A haptic intracorporeal palpation system using a cable-driven robotic platform with a remote sensing strategy was proposed [47]. The platform employs a teleoperated cable-driven parallel manipulator, a new, simple, and cost-effective approach for restoring haptic sensation during the performance of intracorporeal palpation. The conducted tests showed evidence of reasonable accuracy in estimating the amount of force. In another work, the authors integrated a 7-DoF master device into the da Vinci Research Kit and conducted tissue grasping, palpation, and incision tasks using robot-assisted surgery performed by experienced surgeons, surgical residents, and non-surgeons [48]. Statistical analysis showed that haptic feedback improves key surgical outcomes for tasks imposing a pronounced cognitive burden on the surgeon; however, possibly longer task completion times were observed.

Wearable Haptic Devices

Over the years, many industries have started to design and develop haptic devices with portability and wearability in mind. These wearable haptic devices enable better communication, cooperation, and integration between humans and interfaces [15]. In this section, a number of wearable haptic devices are presented. Note that the majority of the listed devices are still in the research phase and need further improvement to be commercialized and adopted by clinicians in healthcare. Many industries incorporate pick-and-place operations, and the main actions to be considered are grasping and manipulation, for efficient manufacturing and profitable production rates.
The shape and weight of the objects to be held are perceived using cutaneous feedback derived from the fingertip contact pressure and kinesthetic feedback of finger positions. Currently used VR systems cannot provide a realistic haptic experience; they are normally large and mechanically complex, and their workspace is limited. Grabity is a wearable haptic device designed to simulate kinesthetic pad-opposition grip forces and weight for grasping virtual objects in VR. Its mechanism enables precision grasping, with strong grasping-force feedback provided by means of a brake. In addition, two tiny actuators aid in creating virtual forces tangential to each finger pad. Grabity provides vibrotactile feedback during contact, high-stiffness force feedback during grasping, and weight force feedback during lifting [49]. LinkTouch is a wearable haptic device that can represent the force vector sensation at the fingerpad. This device is distinct because it consists of an inverted five-bar linkage. The device contains two DC motors that drive the cranks of the five-bar linkage mechanism; the two motors are mounted at the sides of the distal phalanx. The direction of rotation of the motors determines the perceived outcome: when the motors rotate in the same direction, the contact point is repositioned, whereas when they rotate in opposite directions, pressure is generated. In this way, 2-DoF force feedback is produced at the fingerpad. Besides, the device can also represent the transitional state from a non-contact condition to a contact condition [50]. HapThimble is a device that can produce tactile, pseudo-force, and vibrotactile feedback at the user's fingertips. It mimics physical buttons, and the applied force is measured. With the help of force-depth curves, all kinds of rendered haptic feedback were analyzed and used for efficient operation [51]. A novel wearable haptic device called MagTics was introduced and tested with positive results. Unlike conventional devices, MagTics avoids the problem of large power consumption for producing sufficient force. MagTics allows for a thin form factor and supreme flexibility in the haptic interface. The interface works based on magnetically actuated bidirectional tactile pixels, known as taxels. Hence, rich haptic feedback is achieved in a mobile setting using an interface of this kind [52]. Hapballoon is a novel device that can be worn on the fingertips. This device can present three types of sensations: force, warmth, and vibration. Feedback is generated when the inflated balloon presses against the fingertip [53]. FinGar stands for "Finger Glove for Augmented Reality", and it can be described as a wearable haptic device that combines electrical and mechanical stimulation. The skin's sensory mechanoreceptors are stimulated, and tactile feedback is generated for virtual objects in AR. The device is mounted on the fingers, and with the help of a DC motor, high-frequency vibration and shear deformation are produced on the whole finger. The device is usually attached to the middle finger, index finger, and thumb. Unlike other conventional devices, it is lightweight, has a simple mechanism, and in no way hinders the natural movement of the hand. The above-mentioned characteristics are desirable for any wearable haptic device used in a VR system.
The principle behind FinGar is the type of stimulation it employs based on the application in need. Electrical stimulation is used to provide pressure and low-frequency vibration, whereas mechanical stimulation is used to provide high-frequency vibration and skin deformation. The stimulations can be differentiated based on the type of activities the mechanoreceptors in our skin carry out [54]. The proposed device is capable of providing kinesthetic force feedback, which is paramount for a haptic feedback interface. Touchscreens bridge the spatial and cognitive gap between input and visual display. Conventional touchscreens provide a visual response that is triggered by physical interactions. Many works have begun to provide physical feedback and not disconnect the user from the virtual space. This has been possible because of kinesthetic haptic feedback and physical feedback. Several works have focused on controlling the magnitude of the friction force between the user and the device. One proposed device uses a mechatronic design to implement a haptic interface with the help of a steering wheel. This steering wheel provides kinesthetic force feedback on a large-format touchscreen. The device serves the purpose of presenting haptic constraints such as area-of-effect fields or paths. It can be used in various applications such as wall and maze applications, gaming, touchscreen accessibility, and upper-limb stroke rehabilitation therapy or physiotherapy, and it focuses on enforcing constraint elements consistent with the virtual display. The sensing elements used in the system are nylon flexures, which provide uniform stiffness and haptic feedback [55]. A novel wearable device was designed with three pairs of micromotors and belts attached to each pair of motors. The device can be worn on the user's fingers. When the motors rotate in opposite directions, the device sends feedback to the user based on the tension and vibration created by the belts. This device can be used to sense force or vibration on a texture and also aids in edge detection of a surface [56]. Multi-modal haptic feedback is becoming an attraction in the haptic world, and a novel wearable device called PATCH was developed. PATCH stands for Pump-Actuated Thermal Compression Haptic device, and it renders thermal and compression cues on the user's skin. Water at different temperatures is utilized by the device to provide the thermal or compression cues [57]. The PATCH system can be rated highly in terms of wearability. Cybergrasp is a novel 7-DoF device that is wearable on the human arm. The device is structured using mechanical links and human joints, unlike conventional haptic devices that contain only mechanical links. Such a biomechanical design allows the device to follow the user's actions quite easily. The 7-DoF module consists of a 3-DoF wrist arrangement, a 3-DoF shoulder arrangement, and a 1-DoF elbow arrangement. These arrangements adapt easily to the motions of the human arm and can be operated individually. The device can be operated in three modes, active, passive, and restrained, and finds application in teleoperation, VR, and medical rehabilitation [58].

Haptic Rendering

Haptic rendering and visual rendering are fundamental components of developing a virtual haptic system. Haptic rendering refers to determining the haptic force, while visual rendering enables us to visually display the interaction of a virtual object [59].
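To make "determining the haptic force" concrete, below is a minimal penalty-based rendering sketch in Python. This is a generic textbook scheme, not the method of any of the works cited above; the virtual sphere, the stiffness value, and the function name are hypothetical.

```python
import numpy as np

SPHERE_CENTER = np.array([0.0, 0.0, 0.0])   # virtual object position [m]
SPHERE_RADIUS = 0.05                        # virtual object radius [m]
STIFFNESS = 800.0                           # virtual wall stiffness [N/m]

def render_force(probe_pos: np.ndarray) -> np.ndarray:
    """Return the feedback force for the current haptic probe position."""
    offset = probe_pos - SPHERE_CENTER
    dist = np.linalg.norm(offset)
    penetration = SPHERE_RADIUS - dist
    if penetration <= 0.0 or dist == 0.0:
        return np.zeros(3)                   # free space: no force
    normal = offset / dist                   # outward surface normal
    return STIFFNESS * penetration * normal  # Hooke-like penalty force

# One servo-loop iteration (real devices evaluate this at ~1 kHz):
print(render_force(np.array([0.0, 0.0, 0.045])))  # ~[0, 0, 4] N outward
```

The spring-like penalty force is the simplest rendering law; practical systems add damping and proxy (god-object) tracking so that thin objects and corners are rendered stably.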
A VR or AR system is normally constructed with head-mounted displays such as goggles, accelerometers, and loudspeakers, and relies on these elements to generate a computer-based environment for the operator to interact with [60,61]. Besides audio and video, skin-based sensory systems remain underdeveloped in VR and AR technology [59,60,62], even though they have been shown to provide an enhanced experience in several applications such as entertainment and medicine. Wireless modes of operation have become possible with the help of small, lightweight epidermal VR systems. These systems are thin, soft architectures that can be mounted onto the skin and programmed for haptic operations [60,62]. Three VR devices were proposed: LeapMotion, which tracks the motion of the fingertips; a hand avatar, which mimics the motion of the hand; and a wearable fingertip tactile device, which provides pressure stimuli. The interaction between the operator and a virtual environment may be enabled by rendering based on textures and pressure differences [63]. A novel interactive VR device called SlingDrone was proposed [64] that can provide force feedback with visualized trajectory planning. This device employs a micro-quadrotor for control and interaction and finds application in 3D printing technology [64]. A wearable VR device called ThirdHand can be attached to the wrist of the user and act as additional support; the device provides constant force feedback [65]. In wearable technologies, the haptic modality is discussed in terms of (i) tactile feedback, addressing the tactile perception from the skin, such as vibrations, and (ii) kinesthetic feedback, addressing the kinesthetic perception of our muscular effort. This helps clinicians and therapists to assess the performance of a patient during treatment sessions using the received data, based on two main factors: skin and muscle. Commercial cutaneous wearable devices such as the Apple Watch and Samsung Gear Live were reviewed in [66]. The hRing is a novel wearable haptic device that uses cutaneous feedback and can be used on the proximal finger phalanx. The device consists of two servo motors and a belt that is placed in contact with the user's finger [67]. The design and development of a 3-RRS fingertip device, a wearable skin-stretch device, were also discussed. The device consists of a static upper body and a movable end-effector, along with three small servo motors supported by the upper body, which is located on the nail side of the finger. The end-effector is assembled so as to contact the finger pulp; it makes contact with surfaces by enabling movement and rotation towards the user's fingertip [68]. Commodity electromyography (EMG) armbands are popular in the gaming industry; they provide a new platform for human-computer interfaces [67]. The user interface of EMG with the help of kinesthetic haptic sensory feedback was reviewed in detail [69]. The Novint Falcon was characterised as a robot manipulator to provide feedback and to create viable kinematic and dynamic models [70]. A new haptic technology called Po2 was developed. The device utilizes gesture-based illusive tactile sensations in gaming platforms. It consists of two vibrating actuators and provides tactile motion; the device is able to sense movements and vibrations between hands [71].
Haptics in Teleoperated Robotic Systems

Surgical robotic and teleoperation systems have been developed in recent years to overcome workspace constraints and to provide enhanced dexterity, improved accuracy, and better ergonomics [72]. A master-slave teleoperation system was developed to carry out surgical operations and manipulations; the master device consists of a pair of haptic devices, and the slave consists of multiple arms [73]. Two Franka Emika robot arms were presented to serve as a twin master-slave system. The system was designed to carry out haptic-guided teleoperation, with the objective of studying the interaction forces between the master and the slave [72]. A Cable-Driven Parallel Robot (CDPR) teleoperation system, consisting of a master CDPR and a slave CDPR, was proposed. The master and slave CDPRs were connected through a wireless channel, and the components of the haptic force were realized using admittance control. With this system, gait training was carried out, and a reduction of stress on the body and legs was achieved [74]. The design and development of various teleoperation systems, with two stylus arrangements for the CombX haptic device, were proposed to provide force outputs. This stylus arrangement can be implemented in telesurgical systems. Examples of haptic devices include the Cobotic handcontroller, DELTA-R, the CU parallel haptic device, and VirtuaPower [73]. The significance of implementing fingertip devices for pick-and-place operations was explained in [75]. The work stressed the importance of multi-point, multi-contact haptic feedback. Fingertip devices realize forces on human fingertips, showing that wearable devices are promising for robotic manipulation systems in bilateral teleoperation [75]. An fMRI-compatible haptic device was proposed to study and investigate the neural mechanisms of precision grasp control; an electromagnetic actuation system was used to control the haptic interface [76]. A haptic teleoperation system using soft micro-fingers was also designed and developed. In this system, the micro-fingers act as the slave and are maneuvered using a haptic interface. These micro-fingers, integrated with artificial muscles, are used to transmit tactile information. The proposed teleoperation system can be used for rehabilitation purposes [77]. The design and fabrication of a teleoperation device called MiniOct were reported; MiniOct was designed particularly for the continuum teleoperation of manipulators. The prototype was proposed and tested for haptic response and kinesthetic feedback quality [78]. A pick-and-place teleoperation system was designed, and experiments were conducted in which participants micro-manipulated cotton strips to mimic microsurgical operations. The experiment was conducted with three haptic devices: sigma 7, neuroARMPLUSHD, and a master manipulator [79]. The evaluation of haptic devices and end users in telerobotic microsurgery was also discussed [79]. The design and operation of teleoperated mobile robots were presented: the mobile robots use a 6-DoF haptic device and an electromyography (EMG) signal sensor to receive force feedback. Using this hybrid mechanism, mobile robots are operated synchronously, and obstacle avoidance is achieved [80].

Challenges of Haptic Technology

The development of haptic devices has been exponential; however, the implementation of haptic technology in various fields has faced numerous challenges.
Some of these challenges are discussed in this section.

Challenges in Industrial Applications

Design complexity, quality of feedback, and safety of operation are some of the factors that require improvement [14]. For instance, teleoperation systems incorporating haptic devices are still unable to handle nuclear waste, owing to the complexity of the task and the unpredictable nature of hazardous materials; as discussed earlier, these remain areas needing enhancement. Furthermore, the regulatory standards in the nuclear industry are stringent and discourage the incorporation of autonomous robotic systems [72]. The design of a haptic interface is a multidisciplinary task that is very complex due to its multi-criteria and often overlapping functional and performance specifications [10]. The design and development of a haptic device that is acceptable for industrial applications and provides a high-quality sense of touch, in terms of force and tactile feedback, is still an open research area.

Challenges in Health Sciences Applications

In medical applications, teleoperation is considered impractical by some researchers [17], while other researchers have found haptic technology very useful [79]. Patient safety, reproduction of realistic haptic feedback, affordability, probe control, and feedback training are some of the challenges that need to be addressed before haptic devices can be implemented in medical procedures. Virtual simulators play a significant role in medical training, and ongoing research has shown evidence of many possible challenges. A virtual simulator used in medical training comprises a haptic device, medical tools, and a virtual environment with a virtual patient (the task environment) [37]. Presently, most simulators employ haptic devices with 3 or 6 DoF. Most training simulators are equipped with 3-DoF haptic devices such as the Falcon, as they are affordable. The Phantom Omni (Touch) offers 6-DoF positional and rotational sensing with 3-DoF force rendering. Other devices with high-quality haptic feedback, such as the Phantom Desktop (Touch X) or Phantom Premium, are more expensive, which can be considered one of the main challenges in the acceptance of such systems [40]. In robotic surgery, many research studies have focused on the implementation of haptic feedback in telesurgical robotic systems [35] and have shown its potential and benefits [81]. Challenges in medical applications include the safety of the system in terms of stability, the quality of the force feedback and transparency, regulatory approvals, economic considerations, and challenges related to the environment in which the haptic device is going to be used, such as MRI compatibility [82]. Haptic devices used in surgical environments have provided insufficient realism, and future doctors were not well trained for complex surgeries. The haptic devices did not provide enough force feedback, and the benefits of these devices are still not documented properly [19]. Haptic devices are still far from routine use in medical communities. The implementation of haptic devices in the medical field remains a subject of controversy, and absent or limited haptic feedback is one of the reasons inhibiting their growth. Cutaneous-based haptic devices have large variations in design, frequency response, spatial field, and tactile feedback. Unlike kinesthetic interfaces, these variations make them unsuitable for commercial applications [1].
Limitations of the Haptic Technology

The important aspects of VR systems are immersion, interaction, and imagination. Currently available VR systems provide visual realism and auditory feedback to some extent. However, they provide insufficient haptic feedback, through which humans could understand the virtual world. Lack of high-quality haptic feedback is one of the limitations of haptic devices in VR applications [83]. Human-Computer Interaction (HCI) is a key element that defines the performance of haptic devices in VR applications. The human user, the interface device, and the virtual environment synthesized by the computer are the three factors that contribute to HCI [84]. Over the years, the major challenges for haptic devices have been the simulation and sensing of interactive objects in the computer-synthesized world. In free space, the haptic device must allow free motion without exerting large resistance on the operator's hand. In a constrained virtual world, the range of the device inhibits sufficient contact with objects in the virtual world [62,83,84]. A good haptic device must satisfy the following criteria: (1) stiff solid virtual objects, (2) unsaturated virtual constraints, and (3) ample free space. Promising advancements have been made to overcome these challenges by designing haptic devices with low inertia, an adjustable impedance range, high sensing resolution for tracking, and adequate workspace for task simulation [83]. Two factors are required for a better teleoperated robotic system: transparency and stability. Transparency can be defined as the extent to which the remote environment and telepresence can be recreated. Stability can be defined in terms of the amount of haptic information the sensors can relay back to the user without destabilizing the loop. To balance both stability and transparency, tactile feedback and force feedback can be used in multi-modal platforms. However, different kinds of feedback are still under research, and most of the prototypes still require in vivo validation. Therefore, the research community is still working to commercialize teleoperated robotic systems with haptic feedback [85]. Teleoperated robotic systems are completely devoid of physical contact between the surgeon and the patient. Therefore, surgeons rely on the sensory information they receive from the workstation. Visual feedback is provided using high-definition (HD) cameras in 2D or 3D, and there have been tremendous advancements in recent years in the development of high-resolution cameras. Auditory feedback is provided by microphones and speakers with high fidelity. However, providing the sense of touch for surgeons lags behind the other two, owing to the complexity of providing haptic feedback for surgeons. Although many research studies have focused on providing realistic haptic feedback in teleoperated surgical robotic systems, the quality of the haptic feedback is still not accepted by the medical community, which in part relates to the stability of the operation and the complexity of the medical procedures [85]. Haptic feedback must relay information to the surgeon to avoid tissue injury. Until now, the control of interactions between the robot, the master, and the remote environment has been insufficient [77,85]. Cutaneous feedback is stable but less transparent. Force feedback is more transparent but less stable and can cause possible tissue damage.
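For reference, one classical result from the haptics literature makes this stability-transparency trade-off quantitative: Colgate and Schenkel's passivity condition for a sampled-data virtual wall (quoted here as general background, not drawn from the works cited above). A device with physical damping b and servo period T can passively render a virtual wall of stiffness K and virtual damping B only if

```latex
b \;\ge\; \frac{K T}{2} + \lvert B \rvert
```

Raising K for a more transparent, stiffer-feeling contact thus demands either more physical damping or a faster servo rate, which is one reason high-fidelity force feedback is hard to achieve in practice.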
There have been significant efforts in recent years to improve the quality of haptic feedback for better differentiation between hard and soft contacts with tissue [86]. Without improved haptic sensations, teleoperated robotic surgeries will continue to be only a subject of research [76,85].

Reasons for Delayed Acceptance of Haptic Technology

The reproduction of realistic workstations can be considered another challenge. A high level of visual realism is an aspect to be considered in the development of simulators. Many factors, such as the forces applied to the objects, detection of the probe in the virtual environment, and the scale of the virtual environment, should be accounted for to produce realistic workstations. In many cases, the force applied by the user is not equal to the output force incident on the virtual objects. Calibration of force feedback is one of the major challenges when virtual simulators incorporate haptic devices [40]. Furthermore, in some cases, operators have found it difficult to detect the probe in the virtual environment. Research has been carried out to overcome these challenges. Researchers have combined 3-DoF haptic devices with external modules to provide additional DoF for carrying out medical procedures. Moreover, computer graphics have been developed to improve simulations; recently developed simulation engines include PhysX, Havok, and Bullet [37,40,42]. Surgeons practice a particular kind of surgery more than 100 times to minimize error. Patient safety is one of the significant aspects to be considered for the implementation of haptic devices in medical training; it mainly depends on error minimization and minimal bleeding. Furthermore, other factors, such as financial, psychological, technical, and organizational ones, should be considered. Until now, only a mere representation of the real workstation has been achieved. Haptic simulators that can record sessions and provide in-depth feedback must be developed to increase their usage in medical practice [40].

Future Directions

The design, modeling, and fabrication of tactile displays and cutaneous receptors have been challenging, resulting in the widespread use of kinesthetic interfaces [87]. The following challenges may be considered as potential future work in the field: • Improper sensory feedback is recognized as one of the reasons for prosthesis rejection; the performance of the system suffers if noises and disturbances are not removed properly. • A common disadvantage of the implementation of haptic devices is the limitation in workspace and space constraints [31], which has been particularly investigated during the performance of surgical operations [70]. The significance of the workspace and the idea of multiple contact points in a haptic interface, which require more research and development in the future, may lead to increased manipulability and dexterity for the operator and may increase the performance of the operation [75]. Due to the kinematic structure of robotic arms, unlike exoskeletal devices, their workspace is restricted. Exoskeletons are wearable and hence provide a larger workspace. There exist some solutions, such as cutaneous haptic devices, that are compact and wearable but are not as precise as kinesthetic devices. Kinesthetic devices are preferred over cutaneous devices although they have overall stability issues; however, more research is required to prove this [75].
• In addition, the application of collaborative mechanisms in a teleoperation fashion could be of importance when dextrous motion is required. The solution of using collaborative robots was studied in [72], and the lack of force feedback at the master side was recognized as one of the main issues in using collaborative robots, which needs to be addressed by more research. • The discrepancies occurring due to improper feedback, high contact speeds, and stiff environment setups in cable-driven teleoperation systems require more enhancement [74]. • Some haptic devices are heavy, and operators find them difficult to operate [75][76][77][78]. The disadvantages of different kinds of haptic devices highlight the need for more research and development to provide high-fidelity haptic feedback for users [15]. Table 1 shows some haptic devices developed by different companies, but many more are emerging every year.

Conclusions

Innovative ideas and inventions are constantly evolving, and this can be owed to the technological advancements happening in the engineering world. Haptic devices have become imperative not just in engineering but also in many other disciplines. The introduction of haptic devices has enabled noteworthy interaction between hardware devices and users. Many applications, especially medical training, have attracted a lot of interest in implementing haptic technology. The medical field has benefited from the handful of haptic devices available. However, the medical community is still skeptical about the usage of these devices in surgeries and training. From this paper, it can be inferred that haptic devices are beginning to replace conventional devices. The ease of operation of haptic devices has been highlighted in this paper. However, lack of awareness and the expenses associated with the installation of haptic devices prove to be a limitation for their use in different applications. In conclusion, more research on the technical aspects of haptic devices is required, and awareness of these devices needs to increase in order to raise the rate of adoption of this technology. To investigate the technical aspects, our future work will focus on classifying haptic devices in terms of their linkage configuration, the actuation and sensory systems used, and their mechanisms, as well as solutions to address kinematic challenges such as redundancy. Besides, we are working on manipulability and dexterity analysis of the studied haptic devices and are categorizing the devices based on manipulability indices such as the isotropy index. Conflicts of Interest: The authors declare no conflict of interest.
9,851.8
2021-02-05T00:00:00.000
[ "Computer Science" ]
DIMENSIONAL ACCURACY OF PARTS MANUFACTURED BY 3D PRINTING FOR INTERACTION IN VIRTUAL REALITY Realism of a Virtual Reality simulation can be improved by the use of physical objects tracked in real time. The paper presents the possibility of using a low-cost FDM process, realized by a MakerBot Replicator 2X machine, in comparison with a professional one, to build tooling for a simulation of the ultrasound examination procedure. The objects were manufactured on two different machines out of ABS material and 3D scanned for accuracy testing, and finally the possibilities of their use in a VR system were evaluated.

INTRODUCTION

Virtual Reality (VR) technology uses digitally built worlds to create a sense of presence in a user and to help realize certain tasks faster and more effectively. As the technology develops and hardware prices drop, VR becomes more and more widely applied in entertainment, professional training [8] (especially regarding medicine [9]), some industry branches [11], and specialized learning and language teaching [1], among other things.

Early VR-focused studies [10] prove that entering an artificially prepared reality allows competences in many fields to be obtained subconsciously. The state of being inside VR is known as immersion. This phenomenon is used, among other things, for curing phobias [3], as well as in training. To obtain immersion, it is important to engage as many senses of a user as possible in interaction with a virtual environment. This can be achieved in many ways, for instance by representing the movement of real objects, manipulated by a user, in the application. The more similar the real, physical object is to its virtual representation (in terms of both shape and movement accuracy), the easier it is to achieve a full immersion state, which translates into increased training effectiveness [4]. The physical objects which can be tracked for manipulation in virtual environments are often manufactured using 3D printing techniques [7]. As many low-cost 3D printers have been introduced to the market in recent years (usually working in Fused Deposition Modeling - FDM - technology), it has become more and more feasible to produce additional objects tracked inside a virtual simulation, due to greatly reduced costs and acceptable material strength [5]. The paper addresses the problem of effective manufacturing of objects aiding VR educational simulations using low-cost 3D printers.

Case and problem definition - medical VR application

The simulation of ultrasound examination in VR is an expansion of an application created for Poznan University of Medical Sciences. According to the initial assumptions, it was a virtual 3D human anatomy atlas, with the following functions:

Fig. 1. Markers of the PST-55 system and their recognition in the system software [2]

• Lecture - a scenario regarding a specific anatomical or physiological problem presented interactively by a teacher, • Exercise - an application used directly by students for manipulation of the virtual human model, • Immersive presentations - single-person use, with application of a Head-Mounted Display (HMD).
It was decided to expand the atlas with the possibility of training certain diagnostic procedures. The ultrasound examination was selected for the initial studies. The aim was to make the examination simulation realistic and inexpensive at the same time. Therefore, it was decided to build an additional device - a physical representation of the manual head for the ultrasound examination (in real life operated by the doctor performing the procedure) - along with a physical phantom of a patient (in the form of a mannequin). The device was manufactured using two different 3D printers: a low-cost machine, the MakerBot Replicator 2X, and a professional machine, the Dimension BST 1200. The aim of the studies presented in this paper was to compare both processes and evaluate whether the device manufactured using the low-cost machine can be successfully used in professional training of the ultrasound examination procedure in Virtual Reality.

MANIPULATION IDEA - A TRACKING SYSTEM

A tracking system is a device which allows real-time measurement of the position and/or orientation of a given object. Usually the tracked objects are special markers. There are certain devices which allow tracking of objects of any shape, by placing patterns of markers on them. This concept is utilized in the described ultrasound examination procedure, where a special object - a head for manipulation - is covered in markers recognized by a tracking system. The system used in the presented studies is the PST-55; it can track objects at distances between 40 cm and several meters from the device, by means of infrared light detection. The device sends a wave of IR light, which is reflected back by the retroreflective markers. If their pattern is recognized as forming a previously recorded object (shape), its position is calculated by analyzing the images from two cameras built into the device. It is important for the device to see at least 4 markers simultaneously, so the more markers on the surface of a given object, the better [2]. Figure 1 presents the general concept of markers and their recognition by the software. The initial shape of the ultrasound examination device (see Fig. 1) was not properly recognized in the authors' preliminary studies. That is why it was decided to change its geometry and expand it with additional elements providing a larger area for markers.

ULTRASOUND HEAD MODELS PREPARATION

The studied objects were models of heads used for ultrasound examination of the human abdominal cavity. These models were prepared on the basis of commercial ultrasound examination systems. To make them more realistic, the physical objects were 3D scanned using the ATOS I optical scanner by the GOM company, with a measurement field of 125 x 125 mm. Three-dimensional scanning ensures rapid measurement, and its result is a point cloud [6]. The initial head was re-created from the 3D scan, and the modified models were prepared directly in the CATIA v5 CAD system on the basis of the 3D scans. The two models (Fig. 2) are based on real ultrasound devices, applied to examine different internal body areas (see Table 1 for details).
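As an illustration of the two-camera position calculation described above (the tracker's actual algorithm is proprietary; this is the generic direct linear transformation method, with hypothetical function names and toy calibration data):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares 3D point from one marker's pixel coordinates (uv1, uv2)
    seen by two calibrated cameras with 3x4 projection matrices P1, P2."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # homogeneous least-squares solution
    X = Vt[-1]
    return X[:3] / X[3]             # homogeneous -> Euclidean coordinates

# Toy setup: two identical cameras 20 cm apart along x, both looking down +z.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X_true = np.array([0.1, 0.05, 1.0, 1.0])                  # marker at 1 m
project = lambda P: (P @ X_true)[:2] / (P @ X_true)[2]    # pixel coords
print(triangulate(P1, P2, project(P1), project(P2)))      # ~[0.1, 0.05, 1.0]
```

With at least 4 markers triangulated this way, the rigid pattern can be matched against a previously recorded arrangement to recover the full pose of the object.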
To use the re-created head in a VR simulation, a contact sensor was built into the head tip and the models were modified to allow its assembly. A contact sensor is normally not present in a real ultrasound examination head. In a real examination, the image of the patient's body internals is visible only when the head touches the skin. To obtain the same effect in the computer simulation in VR, it is necessary to detect the moment when the user touches the physical phantom. The contact sensor is the easiest and least expensive method to ensure an acceptable level of realism of the simulation. The signal from the sensor can be easily accessed in available software, e.g. by emulating a mouse click. The main part of the head was divided into two parts, which can be assembled using specially designed snap fasteners. The cable channel for the contact sensor was shaped so as not to cause discomfort to the operator. Dividing the model directly along the channel axis made it possible to avoid using support material in the Fused Deposition Modelling process. Figure 2 presents the location of the exit of the cable channel in both shapes of heads for the examination. MANUFACTURING OF ULTRASOUND HEADS In the previous work by the authors, modified head models manufactured by the 3D Printing process were used. The 3D Printing was realized using a ZPrinter 310 machine (3D Systems company), out of a gypsum-based powder joined by a binder based on methyl alcohol (old head visible in Fig. 1). The new models of examination heads were manufactured using the Fused Deposition Modelling technology. The FDM technology consists of the linear deposition of plasticized thermoplastic material, extruded by a special head through a nozzle of small diameter. The extrusion head can move in two axes (XY) and the table (on which the model is made) can move in the vertical (Z) axis. After each layer is made, the table goes down in the Z axis, leaving a space equal to the desired layer thickness between itself and the nozzle. To support the geometry of the manufactured object when a subsequent layer contour extends significantly beyond the previous layer, it is necessary to build special support structures. They are also made of thermoplastic material, but with different mechanical properties, which makes it possible to mechanically or chemically separate them from the actual object after the layer deposition is finished [11]. The head models were manufactured both using the Dimension BST 1200 - a professional machine - and the MakerBot Replicator 2X - a low-cost machine. Manufacturing parameters are presented in Table 1. After manufacturing the heads, it was necessary to remove the supports (Fig. 3). The relatively large difference between heads manufactured on the two machines is a result of applying two different materials. The HIPS (polystyrene) material used in the MakerBot Replicator 2X machine is removed much more easily than the ABS material used in the Dimension BST 1200 machine. Moreover, it can be removed chemically (using citric acid) if the support is located in places difficult to reach with manual tools - this was not the case here, and all the supports were removed mechanically. To ensure proper fitting of the parts of the head, appropriate assembly clearances were assumed while designing the parts (presented in Fig. 4). The size of the clearance was assumed to be 0.4 mm in total (0.2 mm per side). Fig. 2. Models of ultrasound examination heads with visible exit of the cable channel
Elements manufactured using the BST 1200 professional machine could be assembled directly after support removal. After assembly, there was a minimal clearance between the parts - the assumed clearance could have been smaller, to ensure better snap fitting. In the case of the Replicator 2X machine, it was not possible to assemble the two halves of the main head frame - there was no clearance. A layer of ABS material had to be removed on both sides of the snap fasteners to enable joining the parts together (0.5 mm in total). Removal of the material did not produce a clearance - the joint was tight, although the two halves were not matched precisely, with a visible displacement. RESULTS AND DISCUSSION The visual evaluation was performed first. In the case of objects manufactured using the BST 1200 machine, the threads of material are visibly more parallel to each other, with no "waving" effect on the side walls. The only visible defect is a "seam" perpendicular to the layer division plane, caused by breaking of the material during layer contour deposition - it always occurs at the same point on the layer contour. The objects manufactured using the MakerBot Replicator 2X machine do not have this "seam", as it can be eliminated by software means (the start and end point of the layer contour is shifted by a small distance in each consecutive layer). Still, the visual quality of the deposited threads from the low-cost machine was visibly worse than from the professional machine. Layers in contact with the support material were slightly displaced. After the initial visual evaluation, the parts were assembled together to be used in the VR simulation. As the assembly process had a different course for objects from each machine, as mentioned above, it was decided to investigate the obtained accuracy by 3D scanning. The manufactured wide heads were scanned using the GOM Atos Compact Scan 5M optical 3D scanner, with a measurement field of 150 x 110 x 110 mm. Then, data analysis (matching the scan with the nominal solid CAD model) was performed and colour-coded deviation maps were prepared. Selected maps are presented in Figs. 5 and 6. Accuracy, approximated by the average deviation of the "best fit" matching between the CAD model and the scans, was 0.098 mm for the wide head manufactured by the BST 1200 machine and 0.147 mm for the Replicator 2X machine. For the BST 1200 machine, 100% of the measured points were within the +/-0.5 mm tolerance field and 99.4% within the +/-0.38 mm field. For the Replicator 2X machine, the percentages for the same tolerance fields were 95.3% and 89.7%, respectively. It can therefore be assumed that the accuracy of the low-cost FDM process is noticeably worse than that of the professional one, despite roughly the same manufacturing parameters (leaving aside construction differences between the two machines). The parts manufactured by the Replicator 2X machine require larger assembly clearances or manual processing after manufacturing to assure proper assembly. After the accuracy study, the heads were covered with markers of the PST-55 tracking system. Their arrangements were then introduced to the tracker's memory. Markers on the devices and their visibility in the tracker software are presented in Fig. 7. The practical tests proved that, for both ultrasound examination heads, the method of gripping has a significant influence on the detection of the marker arrangement in the tracker software. The authors decided to solve this problem by adding extra geometrical elements on top of each head
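To make the reported tolerance-field statistics easier to reproduce from raw scan data, the short sketch below shows how the share of scanned points falling inside a given tolerance band can be computed from per-point deviations. The deviation values, tolerance limits, and function name are illustrative, not taken from the paper; the GOM software computes these figures internally, so this is only a minimal stand-alone approximation.

```python
import numpy as np

def tolerance_statistics(deviations_mm, tolerance_mm):
    """Percentage of scan points whose absolute deviation from the nominal
    CAD model lies within +/- tolerance_mm (hypothetical helper)."""
    deviations_mm = np.asarray(deviations_mm, dtype=float)
    within = np.abs(deviations_mm) <= tolerance_mm
    return within.mean() * 100.0

# Illustrative deviation sample (mm); real values come from the scan/CAD comparison.
rng = np.random.default_rng(0)
deviations = rng.normal(loc=0.0, scale=0.15, size=100_000)

mean_abs_dev = np.abs(deviations).mean()           # analogue of the "best fit" average deviation
pct_05 = tolerance_statistics(deviations, 0.50)    # +/- 0.5 mm tolerance field
pct_038 = tolerance_statistics(deviations, 0.38)   # +/- 0.38 mm tolerance field
print(f"mean |deviation| = {mean_abs_dev:.3f} mm, "
      f"{pct_05:.1f}% within +/-0.5 mm, {pct_038:.1f}% within +/-0.38 mm")
```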
(Fig. 8), with additional markers that are impossible to cover while gripping the head with one hand. The additional elements are universal and are assembled by snap fitting, but their addition requires recalibration of the tracked marker arrangement. After expanding the heads with additional markers, recognition of the devices by the tracker in the assumed tracking space increased significantly, to an acceptable level (objects visible and tracked for more than 95% of the manipulation time). During the tests, the following observations were made regarding the influence of manufacturing quality on the use of objects in the virtual simulation: • The worse surface quality of objects from the Replicator 2X machine had a negative influence on the placement of the tracking system markers. They are placed more easily and recognized more effectively when the surface is planar. No such problems were observed for heads manufactured by the professional machine. • The ergonomic quality of heads manufactured by the low-cost machine was slightly lower. SUMMARY The performed processes and tests proved that it is possible to effectively use low-cost 3D printers in professional VR education simulations. There are a number of problems with the low-cost process itself - the Replicator 2X machine must be supervised constantly, as the nozzle is prone to clogging and breakage of the material thread. Both situations require immediate action from the machine operator. The Replicator 2X (similarly to other low-cost FDM devices) also requires a longer machine preparation time, as it is not equipped with replaceable trays. The obtained quality and accuracy are also lower. Despite these shortcomings, it is fully possible to obtain a working manipulator of dedicated, individualized shape for a simulation in Virtual Reality. The cost is significantly lower, so it is a recommended approach, although it requires higher qualifications of the machine operator. It is noteworthy that VR simulations can be successfully expanded with 3D printed accessories to further increase the realism of simulations, which translates into better educational results. Both VR and 3D printing technologies have become more accessible in recent years due to the emergence of low-cost processes, so the hybrid approach (joining virtual and physical objects) will, in the authors' opinion, become more and more widespread. Future work will consist of completing the VR simulation and conducting more tests with a group of students. Fig. 8. Heads with additional 3D printed geometry, allowing better recognition by the PST-55 tracker. Table 1. Comparison of manufacturing parameters for heads produced using the BST 1200 and MakerBot Replicator 2X machines.
Influence of Aging, Sterilization, and Composition on the Degradation of Polyurethane Foams. Shape memory polymers (SMPs) are highly attractive materials for medical devices. Specifically, SMP foams are currently being used as embolic devices in peripheral and cerebral vascular applications. To ensure the proper function and safety of these materials in their intended applications, it is important to understand how processing treatments, such as aging, sterilization, and foam composition, can influence their degradation. Here, SMP foams were treated with industry-relevant processing parameters, and the influence on degradation was observed via gravimetric, chemical, and morphological studies. Accelerated thermal aging was shown to have an influence on material degradation rate in real-time oxidative studies. Sterilization was performed via electron beam (E-beam) irradiation at the high and low dosages commonly used in industry and had no significant influence on foam degradation profiles. These findings help inform appropriate treatment of SMP foam embolic devices. Introduction Degradation is a vital consideration in the design of any implantable medical device. If the material degrades too quickly, it can fail to provide required mechanical support or produce high concentrations of cytotoxic by-products [1,2]. If it degrades too slowly, it may inhibit replacement or ingrowth by native tissues. For this reason, much work goes into understanding the degradation profile of biomaterials used in medical devices. Many factors can influence a material's degradation profile in ways that can alter in vivo performance, including processing conditions, sterilization, and handling and storage. Understanding the influence of these factors on the degradation profile is required to achieve clinical success of implanted materials [3]. Polyurethanes have several beneficial characteristics for biomedical applications, such as biocompatibility, strength, and processability [4]. They have been used in devices ranging from catheters to pacemaker leads and continue to be studied for use in novel devices. Additionally, certain polyurethanes can show a shape memory effect [5]. Shape-memory polymers (SMPs) can be fabricated in a primary shape that can be deformed to a secondary shape, typically while heated above a transition temperature. If cooled while deformed, the material will hold this secondary shape until it is again heated above the transition temperature, at which time a thermally induced shape recovery takes place [6]. This shape memory ability can be beneficial for a number of applications ranging from biomaterials [7] to aerospace [8]. Recently, advances have been made in increasing the Tg of SMPs with high recovery stress [9] and in using shape memory composites for electromagnetic shielding [10]. Among SMP scaffolds, foams are of particular interest due to their high compressibility and volume recovery. Namely, SMP foams are particularly useful for minimally invasive procedures, wherein they can be compressed to a low-profile delivery shape, guided to the desired area in the body, and then expanded to their primary, clinically relevant shape once in place. Recently, SMP foams were approved for use as embolic devices in a medical application (IMPEDE Embolization Plug, Shape Memory Medical, Inc.). These porous poly(amino urethane) foams form a tortuous pathway for blood, initiating the clotting cascade and serving as a scaffold for stable clot formation [11].
Previous studies have investigated the in vitro and in vivo degradation profiles of SMP foams. These studies showed that the materials are resistant to hydrolytic degradation (though variations have been created to allow for hydrolysis) but are susceptible to oxidative degradation [11][12][13]. This degradation is believed to occur via oxidation of the tertiary amines in foaming monomers. None of these previous studies have investigated the influence of common processing parameters, such as sterilization via electron beam (E-beam) or aging, on SMP foam, both of which are important considerations for biomaterial commercialization. One consideration with sterilizing SMP foams is their sensitivity to heat and moisture. For embolic applications, it is requisite that the materials are sterilized in the crimped, secondary state and that they are not actuated by the sterilization process. Thus, techniques that use high temperature or moisture, such as autoclaving and traditional ethylene oxide, are not viable. Previous studies with SMP foams compared the impact of a modified ethylene oxide (EtO) gas treatment and E-beam irradiation sterilization on thermomechanical properties [14]. They showed that even the lower temperatures and humidity of the modified EtO gas treatment, as compared to traditional EtO, influenced the glass transition temperature of the materials and caused premature expansion in the packaging. E-beam sterilization had minimal influence on the thermomechanical properties. Gamma radiation was not used for this study due to concerns regarding oxidative degradation [15]. E-beam irradiation is a popular technique for sterilizing medical devices, and ideal for SMP foams, because it can be performed in ambient temperatures and does not add any potentially cytotoxic chemicals. E-beam sterilization works by producing a concentrated beam of electrons that disrupts chemical bonds and DNA, disabling replication in cells and microbes [16]. This technique has a short penetration depth but is ideal for low density materials such as SMP foams. The dosage used for sterilization is generally between 20-30 kGy with the industry standard set at 25 kGy (ISO 11137). Ebeam is also commonly used to form crosslinks in various polymers as it can alter chemical bonds in the polymer backbone, though the required dosages are generally much higher than those used for sterilization [17]. Even at low dosages, E-beam irradiation can influence the backbone structure of a polymer [18]. Murray et al. investigated the effect of E-beam irradiation ranging from 5 kGy to 200 kGy on a commercial polyurethane (Pellethane 2363 90A) and found that even at dosages as low as 25 kGy, observable changes occurred in chemical structure as measured by Fourier transform infrared (FTIR) spectroscopy [19]. Kang et al. investigated the influence of E-beam sterilization on the in vivo degradation of β-tricalcium phosphate/polycaprolactone using volumetric microcomputed tomography measurements and found that sterilized samples degraded faster than nonsterilized samples [20]. However, no studies have been performed with SMP foams to determine if E-beam sterilization changes the material in a way that impacts the degradation profile. Accelerated aging is a commonly used technique to determine the appropriate shelf life of a material. 
Often, physical changes in the material are measured throughout aging, such as colorimetric changes [21][22][23][24], changes in mechanical strength or integrity [25][26][27][28][29][30][31][32], or thermomechanical changes in the glass transition or melting temperatures [27,33]. Additionally, chemical changes can be measured using FTIR and/or mass spectroscopy. Accelerated aging is commonly achieved by storing the material at an elevated temperature. This process can be correlated to real-time aging based on the Arrhenius equation, which states that the chemical reaction rate increases with temperature [34]. Many industry standards are based on this equation, including AAMI TIR 17 and ASTM F1980, which use a reaction rate factor of Q10. Previous studies with SMP foams have investigated the effect of aging on the shape memory properties (strain fixity and recovery) [35]. However, gravimetric degradation studies have not been performed to ensure appropriate function of the device after aging. In this study we investigate the influence of E-beam sterilization and aging on the degradation profile of SMP foams intended for occlusion applications. These studies aim to ensure the success and safety of these devices after sterilization and storage. Foam Synthesis and Treatment SMP foams were synthesized using the multi-step protocol previously described by Hasan et al. [36]. Briefly, an isocyanate (NCO) pre-polymer was synthesized using the molar ratios of 60% HPED, 40%TEA, and 100% TMHDI (H60) or 70% HPED, 30%TEA, and 100% TMHDI (H70), with a 43 wt% hydroxyl (OH) component. An OH mixture was prepared with the remaining weight percentage of HPED and TEA, along with catalysts (DABCO T-131 and DABCO BL-22), surfactants (Vorasurf DC198 and Vorasurf DC5943), cell opener (Ortegol 500), and DI water. DI water was used as a chemical blowing agent to generate carbon dioxide bubbles via a reaction with free isocyanates. The NCO premix and the OH component was mixed using a speedmixer (FlakTek Inc., Landrum, SC, USA) and the foam mixture was poured into a tray. The resulting foam was cured in a vacuum oven (Cascade Tek, Hillsboro, Oregon) at 90 °C for 10 minutes. The SMP foam was cooled to room temperature (21 ± 1 °C) followed by a 24-hour cold cure (21 ± 1 °C) before further processing. Table 1 shows the weight percent of each component used for foam synthesis and Figure 1 shows the chemical structure of the monomers used to fabricate the SMP network and a hypothetical schematic of the amorphous network. Foam cubes (1 cm x 1 cm x 1 cm) were cut out of the bulk foam and cleaned in 1 litre glass jars using one 30-minute sonication wash in DI water, two 30 minutes sonication washes in 70% isopropyl alcohol (IPA), and four 30 minutes sonication washes in IPA. After each wash, the solvent was discarded, and the jars were replenished with fresh solvent. Prior to testing, foam cylinders were dried at 100 °C under vacuum for at least 12 hours after which they were stored in a plastic storage container with desiccant. After cleaning, all foam cubes were packaged and sent to SteriTek (Fremont, CA, USA) and sterilized via electron beam at 21, 25, or 31 kGy. Foams to be aged were then sent to Westpak Inc. (San Jose, CA, USA) for 1-year accelerated aging. This accelerated aging process followed ASTM F 1980-07 with an aging temperature of 55 °C for 40 days. 
The accelerated aging time (AAT) was calculated using the following formula, which is derived from the Arrhenius equation: AAT = RT / Q10^((T_AA − T_RT)/10), where RT is the desired real time, T_AA is the accelerated aging temperature in °C, T_RT is the ambient temperature in °C, and Q10 is the aging factor. An aging temperature of 55 °C was selected as this is the highest temperature to which the foams can be exposed before the onset of the material's glassy-to-rubbery state transition. Degradation and Gravimetric Analysis Oxidative degradation solutions were made by diluting 50% H2O2 in reverse osmosis (RO) water to the desired concentration (20% H2O2 for accelerated analysis and 3% H2O2 for real-time analysis). For the real-time hydrolytic degradation solutions, PBS was prepared in RO water according to the manufacturer's instructions. Accelerated hydrolytic solutions (0.1 M NaOH) were made by dissolving NaOH pellets in RO water. All sterilized foam cubes (aged and non-aged groups) were weighed upon arrival to obtain an initial weight. They were then placed in a labelled glass vial, submerged in 20 mL of the appropriate degradation solution, and incubated at 37 °C. Solution levels were checked daily and solutions were changed every three days. Mass measurements were taken every 6 days for accelerated oxidative solutions, every 20 days for real-time oxidative solutions, and every 14 days (28 days after day 70) for all hydrolytic samples. When taking mass measurements, the degradation solution was decanted while retaining the sample in the vial. 20 mL of ethanol was added to each vial and samples were allowed to sit in the ethanol for 3 minutes before the ethanol was decanted. Sample vials were covered with a laboratory wipe and dried in a vacuum oven at 50 °C for a minimum of 12 hours. Once dry, samples were removed from the oven and the mass was measured on a precision scale (1 mg resolution). They were then returned to their respective vials and re-submerged in the degradation solution. For the oxidative degradation samples, weighing was stopped when the samples broke apart and could no longer be removed from the vial without causing significant mechanical degradation or losing material. Morphological and Chemical Characterization Morphological changes were observed using scanning electron microscopy (SEM) images captured throughout the degradation process. For SEM images, a thin slice (~1 mm) was taken from a sacrificial sample and dried in an oven at 50 °C. It was then seated on a metal stub with carbon black tape and sputter coated for 60 seconds using a Ted Pella Cressington 108 gold sputter coater (Ted Pella Inc., Redding, CA). Images were captured using a JEOL JCM-5000 Neoscope benchtop SEM (JEOL USA Inc., Peabody, MA) to observe any structural degradation of the materials. Chemical changes in the material were observed with attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectrometry. At day 0 and day 30, a thin slice (2-3 mm) was taken from a sacrificial sample and spectra were collected using a Bruker ALPHA infrared spectrometer (Bruker, Billerica, MA). For each sample, 64 background scans were taken, followed by 32 scans of the sample. Spectra were collected in absorption mode with a resolution of 1 cm⁻¹. OPUS software was used to perform baseline and atmospheric corrections. The intensities of sample peaks over time were compared by using the carbon skeletal peak at 1243 cm⁻¹ as a baseline. Statistical Analysis All data are expressed as the mean ± standard deviation of the mean.
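As a quick numerical illustration of the aging calculation above, the sketch below evaluates the ASTM F1980-style relationship for the conditions reported in this study. The ambient temperature of 22 °C and the aging factor Q10 = 2 are assumptions chosen for the example (the conventional default), not values stated explicitly in the text, so the resulting duration is only indicative.

```python
def accelerated_aging_time(real_time_days, t_aa_c, t_ambient_c, q10=2.0):
    """Accelerated aging time (days) needed to simulate `real_time_days`
    of real-time aging, following AAT = RT / Q10**((T_AA - T_RT) / 10)."""
    aging_factor = q10 ** ((t_aa_c - t_ambient_c) / 10.0)
    return real_time_days / aging_factor

# One year of simulated shelf life, aged at 55 degC, assuming 22 degC ambient and Q10 = 2.
aat = accelerated_aging_time(real_time_days=365, t_aa_c=55.0, t_ambient_c=22.0)
print(f"accelerated aging time: {aat:.1f} days")  # ~37 days; the study used 40 days at 55 degC
```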
Statistical analysis was performed in JMP using unpaired Student's t-tests, with p < 0.05 accepted as statistically significant. For gravimetric studies on the influence of aging on degradation, N = 8 throughout all time points because sacrificial samples for SEM and FTIR were not included in weighing. For gravimetric studies on the influence of sterilization and composition on degradation, sacrificial samples were included in weighing until sacrifice at day 30 and day 60. Thus, for these studies, N = 8 until day 30, N = 7 from days 36-60, and N = 6 from day 66 until termination. Foam Characterization Prior to aging or sterilization treatment, pore size and density data were collected for all foams. For H60 foams, the average pore size was 217.7 ± 99.8 µm in the axial direction and 221.4 ± 95.9 µm in the transverse direction, with an average density of 0.0378 ± 0.0047 g cm⁻³. For H70 foams, the average pore size was 207.7 ± 86.3 µm in the axial direction and 187.5 ± 73.7 µm in the transverse direction, with an average density of 0.0447 ± 0.0067 g cm⁻³. Influence of Aging on Degradation We first investigated the influence of material aging on the degradation profile of H60 foams sterilized at 25 kGy when degraded in oxidative and hydrolytic conditions. Figure 2 shows the gravimetric degradation profile of aged and unaged foams in real-time and accelerated oxidative and hydrolytic media. These profiles align with previously reported results [11], showing significant degradation in an oxidative environment (Figure 2a) and negligible degradation in hydrolytic conditions (Figure 2b). Furthermore, a significant change in degradation profile is observed between aged and non-aged samples when degraded in a real-time (3% H2O2) oxidative solution, but not when samples were degraded in an accelerated (20% H2O2) solution. FTIR spectra of the foams were collected throughout the degradation process to investigate chemical changes. Figure 3 shows the FTIR spectra of the aged and unaged foams when degraded oxidatively and hydrolytically. Figure 3a shows a shift in the urethane peak at 1692 cm⁻¹ and a loss of the tertiary amine peak at 1052 cm⁻¹ when samples are degraded in oxidative solutions. These changes in the FTIR spectra match previously observed phenomena [11], indicating scission at the C-N bond in the tertiary amines. Figure 3b demonstrates a lack of chemical degradation in hydrolytic conditions, with no observable changes in the FTIR spectra between days 0 and 30. There were no notable differences in FTIR spectra between the aged and unaged samples for either oxidative or hydrolytic degradation. SEM images of the foams were also gathered during the degradation process to show morphological changes. Figure 4 provides SEM images of the foams at days 0 and 30 of the degradation process. In the oxidatively degraded samples, collapse of the porous structure can be observed at day 30, while no observable structure loss is seen in the hydrolytically degraded samples, further confirming the lack of degradation indicated by the gravimetric and chemical studies. Similar morphological changes were observed in aged and non-aged samples. Figure 4. SEM images of oxidatively (3% H2O2) and hydrolytically (0.1 M NaOH) degraded H60 foams at day 0 and day 30. Loss of strut integrity is observed for both aged and non-aged foams after oxidative degradation, while strut integrity is preserved in hydrolytically degraded foams.
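Where the FTIR comparison described in the methods is concerned, normalising peak intensities to the carbon skeletal reference peak can be expressed in a few lines. The wavenumber grid, spectrum shape, and peak positions below are synthetic placeholders; only the normalisation step (dividing by the intensity at 1243 cm⁻¹) reflects the procedure described in the text.

```python
import numpy as np

def normalised_peak_intensity(wavenumbers, absorbance, peak_cm1, reference_cm1=1243.0):
    """Intensity at the point nearest `peak_cm1`, normalised to the
    carbon skeletal reference peak at `reference_cm1`."""
    wavenumbers = np.asarray(wavenumbers, dtype=float)
    absorbance = np.asarray(absorbance, dtype=float)
    peak_idx = np.argmin(np.abs(wavenumbers - peak_cm1))
    ref_idx = np.argmin(np.abs(wavenumbers - reference_cm1))
    return absorbance[peak_idx] / absorbance[ref_idx]

# Synthetic spectrum for illustration only (Gaussian peaks on a flat baseline).
wn = np.linspace(600, 4000, 3401)
spectrum = (0.02
            + 0.5 * np.exp(-((wn - 1692) / 15) ** 2)
            + 0.3 * np.exp(-((wn - 1052) / 15) ** 2)
            + 0.4 * np.exp(-((wn - 1243) / 15) ** 2))

urethane_ratio = normalised_peak_intensity(wn, spectrum, peak_cm1=1692)
amine_ratio = normalised_peak_intensity(wn, spectrum, peak_cm1=1052)
print(f"urethane/skeletal: {urethane_ratio:.2f}, tertiary amine/skeletal: {amine_ratio:.2f}")
```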
Influence of Sterilization and Composition on Degradation The next study investigated the influence of high and low E-beam irradiation dosage levels on the degradation profile of two foam compositions (H60 and H70). All degradation was performed in an accelerated oxidative solution (20% H2O2). Figure 5 plots the gravimetric degradation profile of the foams, where no significant differences were observed between electron beam dosage or foam composition groups. Figure 6 shows the FTIR spectra for these foams, demonstrating the same left shift of the urethane peak and loss of the tertiary amine peak when degraded oxidatively. However, there are no observable differences between samples treated with different E-beam dosages, supporting the observations of the gravimetric degradation. Discussion The aim of this study was to investigate how aging, sterilization, and chemical composition may influence the degradation of SMP foams. Gravimetric studies showed the overall mass loss of materials in various solutions, while ATR-FTIR readings showed chemical changes to the material backbone when degraded oxidatively. Finally, SEM images captured the morphological changes of the material during various stages of degradation. Together, these studies further inform appropriate biomedical applications of SMP foams while accounting for shelf life and sterilization parameters. Aging studies showed that oxidative degradation does appear to occur more quickly in real-time solutions when the materials are aged. Previous studies have demonstrated that thermal aging can decrease the amount of hydrogen bonding in polyurethane foams, leading to increased susceptibility to water penetration [37]. This may explain why the aged samples degraded faster than the non-aged samples in the real-time oxidative studies. Notably, this was not seen in the accelerated studies, where the rate of degradation was likely less dependent on aqueous penetration into the polymer system, indicating the importance of real-time degradation studies in biomaterial characterization. A more thorough understanding of the cause of this change in degradation rate will be important for future testing of devices, and it may inform which tests require aging to provide accurate and reliable results. Sterilization studies investigated the high and low ends of commonly used E-beam irradiation dosages and found no significant influence on the accelerated oxidative degradation profile for either foam composition tested. This suggests that these dosages do not initiate chemical changes or alter the degradation rate of these materials. It is likely that such changes could occur if significantly higher dosages were used [19], but such dosages would be outside those commonly used for sterilizing medical devices. This is an important finding for the commercial application of SMP foams because it provides flexibility in the sterilization process without concern for potential influences on the degradation profile. It also shows that sterilization does not influence various foam compositions differently, indicating they can safely be tuned to allow for appropriate working times or stiffness. It was observed that the H70 foams degraded slightly faster than H60 foams. Previous work with these foams found that the tertiary amines in TEA and HPED are susceptible to oxidative degradation and laid out a proposed degradation pathway for these compounds [11].
Because H70 foams have a higher molar concentration of HPED, which has two susceptible tertiary amines, it is expected that they would undergo slightly faster oxidative degradation. Future studies will focus on real-time degradation analysis to expand upon these data. However, the degradation studies correlate with our previous work, which shows that high dosages of E-beam irradiation (40 kGy) do not significantly impact SMP foam expansion, thermal, or mechanical properties [14]. Of note, in all gravimetric studies foams had recorded weights above 100% at the early time points. This is due to residual solvent (water) that is not completely removed during the drying process. It was observed that all solvent could be removed by drying the foams at 100 °C; however, this is above the Tg of these materials and thus was not performed, because it would have caused additional thermal degradation. Additionally, the degradation rate for H60 foams did differ significantly between the studies presented in Figures 2 and 5. While the same protocols were followed for both, other work with these foams has shown that slight differences in handling (collection of particles during drying, removal of foams from the vial, foam compression), foam synthesis (reticulation of the foams, starting foam weights), oven temperatures, and H2O2 concentration can culminate in varying degradation rates. However, the goal of these studies was to determine the relative influence of various treatments (aging, sterilization, composition) on the degradation rate, not to determine the in vivo degradation rate of these materials, as that has been previously studied [11]. For this reason, conclusions for this paper were drawn only from data within each study, and we caution against drawing any conclusions between studies. In addition to the understanding provided by these studies, further studies investigating the mechanical properties of foams after aging or sterilization and during degradation would be informative. Previous work investigating the mechanical properties of similar foams [38] showed that, while the tensile strength decreases over time, samples were robust enough to test after 2.5 months of accelerated degradation. Conclusions In summary, these studies demonstrated an influence, or lack thereof, of aging, sterilization, and foam composition on the degradation profile of shape memory polymers intended for biomedical applications. Of note, we demonstrated that E-beam sterilization dosages between 21 and 31 kGy do not influence the degradation profile of these polyurethane foams. The influence of accelerated aging was not observed in accelerated oxidative degradation studies but could be seen in real-time oxidative studies. Slight differences in degradation rate for H60 and H70 foams were observed, as expected, but neither was affected by E-beam sterilization. These studies will inform the processing and application of these materials in medical devices.
Channel-independent recreation of artefactual signals in chronically recorded local field potentials using machine learning Acquisition of neuronal signals involves a wide range of devices with specific electrical properties. Combined with other physiological sources within the body, the signals sensed by the devices are often distorted. Sometimes these distortions are visually identifiable, other times, they overlay with the signal characteristics making them very difficult to detect. To remove these distortions, the recordings are visually inspected and manually processed. However, this manual annotation process is time-consuming and automatic computational methods are needed to identify and remove these artefacts. Most of the existing artefact removal approaches rely on additional information from other recorded channels and fail when global artefacts are present or the affected channels constitute the majority of the recording system. Addressing this issue, this paper reports a novel channel-independent machine learning model to accurately identify and replace the artefactual segments present in the signals. Discarding these artifactual segments by the existing approaches causes discontinuities in the reproduced signals which may introduce errors in subsequent analyses. To avoid this, the proposed method predicts multiple values of the artefactual region using long–short term memory network to recreate the temporal and spectral properties of the recorded signal. The method has been tested on two open-access data sets and incorporated into the open-access SANTIA (SigMate Advanced: a Novel Tool for Identification of Artefacts in Neuronal Signals) toolbox for community use. Introduction When recording neural signals, other electrical sources either instrumental or physiological may distort the process. They are commonly known as artefacts, and their identification and removal are of importance to further analyse and infer insights from them. They produce longer review times [5], misdiagnosis of diseases or brain conditions (as in the diagnosis of Schizophrenia, sleep disorders and Alzheimer's disease [32]) or produce false alarms (as in generating false alarms for brain seizures [49]). One of the most common approaches is to discard the affected epochs; however, it causes information loss and sharp discontinuities in the signal. This can impact the use of a brain-computer interfaces as the system cannot obtain the decoding results during the corresponding time. Another case would be where the signal is not meant to be evaluated by a physician but instead processed by an algorithm, causing distortions in the output. As an alternative to keeping or discarding the corrupted segments, there are techniques that allow for their removal, such as filtering, template subtraction, or advanced computational techniques. Invasively recorded signals are less susceptible to external artefacts, but must be processed nonetheless. Local Field Potentials (LFP) are low-pass filtered signals of the extracellular electrical potential recorded in deeper layers of the brain through micro-electrodes [28,34]. In case of LFP, several signal analysis and processing toolboxes offer a range of computational techniques for artefact removal including signal filtering for unwanted components, removal of power line noise, rejection of channel with incompatible interference, automated removal of noisy signal components etc. [18,26,27,38]. 
Most of these techniques involve the removal of segments that have been corrupted by noise/artefacts, and this often distorts the overall integrity of the signal, which is undesirable in cases where further processing relies on the completeness of the signal. To recover the original signal with the aim of preserving the information, machine learning (ML) techniques have been applied to this task. These techniques gather the information presented to them to construct a model which can be used to make inferences about unseen data, and have been widely used in diverse fields, for example: outlier detection [11,14,57], data mining of biological data [29,30], detection of diseases [35,39,47,50,59], elderly monitoring [2,22,37], financial forecasting [40], image processing [3,45], natural language processing [44,56] and monitoring patients [1,52]. Among the many ML methods, deep neural networks stand out. Their design was inspired by their biological counterpart, and they allow for non-linear processing of information. Within the ML-based solutions, the research found in the literature commonly employs multi-channel approaches. This creates a shortcoming, as such approaches are invalidated if the number of affected channels is greater than the number of unaffected ones, or if a global artefact appears. Therefore, channel-independent solutions are needed, which can be used in low-channel applications and expanded as needed. This work extends the conference contribution presented at the 14th International Conference on Brain Informatics [12]. In that work, a deep learning-based approach was proposed as an artefact removal module for the SANTIA (SigMate Advanced: a Novel Tool for Identification of Artefacts in Neuronal Signals) open-source artefact removal toolbox [13]. SANTIA allows the detection and subsequent removal of artefacts in LFPs by replacing the artefactual segments with signals generated using a single-layer Long Short-Term Memory (LSTM) network. The current work extends the conference work by validating and testing it using a second data set. It also reports the robustness of the method by expanding the methodology to a more complex network architecture as well as a non-ML method for comparison. Overall, this extended version provides an in-depth description of the methodology and describes the implementation of the improvements. The remainder of this paper is organised as follows: Section 2 describes the state of the art for artefact detection and removal, Section 3 presents the methods proposed in the current work, Section 4 shows the usage of the proposed artefact removal methods after their incorporation into the SANTIA toolbox, Section 5 reports the results obtained on publicly available open-access data sets, and finally, Section 6 provides the conclusion of the work. Related work When attempting to remove artefacts, there are several computational approaches that are typically used. For illustration's sake, Fig. 1 displays signal segments with and without artefacts, alongside their frequency components, for the two data sets used in this paper (1a represents the data set in section 3.2 and 1b shows a representative artefact from the data set described in section 3.3). Brief discussions of these existing approaches are provided in the following paragraphs. Regression A regression method begins by defining the amplitude relationship between a reference channel and a neural signal using transmission factors, then removing the estimated artefacts from the signal [55].
In a single-channel approach without a reference channel, this method is not applicable. Adaptive Filtering To apply adaptive filtering, a reference channel is given as one of the inputs to the filter, so the degree of artefactual contamination in the neural signal is measured by iteratively changing the weights according to the optimisation method, and the contamination is then removed [23]. As with regression, the lack of a reference channel invalidates this approach. Template subtraction When artefacts have a unique shape, as they come from a specific source, the shape can be approximated and subtracted to restore the neural signal [36]. Because the artefacts in the data sets vary in shape and can come from different, unidentified sources, it is impossible to accurately subtract a template without introducing further error. Inter-channel interpolation When a channel in an array is impacted locally by an artefact, that segment can be replaced using the average or other methods that take into consideration the surrounding channels, which is not possible in a single-channel approach [4]. Decomposition One major drawback of decomposition methods (e.g., wavelet, empirical mode) is that they cannot remove artefacts completely if the spectral properties of the measured signal overlap with the spectral properties of the artefacts [20]. In the data sets, artefactual segments manifest in the same bands as the physiological signal. Blind source separation Blind source separation is a popular method for removing artefacts in neuronal signals and includes methods such as independent component analysis, canonical correlation analysis and principal component analysis [21]. However, these methods assume that the number of artefact sources should at least be equal to the number of channels, limiting single-channel applications. It is clear from the above discussion that most traditional methods fail to recreate the artefactual region of the signal. To this end, we propose an alternative to discarding the segment, which is to replace it with a model-generated sequence of "normal" behaviour of the signal. This way, subsequent analyses of the signal are not hampered by the absence of segments. To demonstrate the accuracy of the model-generated replacement segments, we applied the method to two completely different publicly available data sets (see sections 3.2 and 3.3). From the perspective of restoration of missing values in neuronal signals, there have been both ML and non-ML approaches for electroencephalogram (EEG) signals. From the first group, Svantesson et al. [53] trained a convolutional neural network (CNN) with 4, 14 and 21 EEG channel inputs to up-sample to 17, 7 and 21 channels, respectively. A visual evaluation by board-certified clinical neurophysiologists was conducted, and the generated data was not distinguishable from real data. Using a similar approach, Saba-Sadiya et al. [46] employed a convolutional autoencoder, which takes as input a padded EEG electrode map over 16 ms (an 8x8x8 tensor) with 1 occluded channel, which is expected as the output. They compared it to spherical spline, Euclidean distance and geodesic length methods, outperforming them and showing that the method is able to restore the missing channel with high fidelity to the original signal. Finally, Thi et al.
[54] utilised a linear dynamical system (Kalman filter) to model multiple EEG signals and reconstruct the missing values. This method showed 49% and 67% improvements over singular value decomposition and interpolation approaches, respectively. In the second group, there are published papers such as de Cheveigne and Arzounian [8] and Chang et al. [6]. In [8], the authors detected EEG and magnetoencephalography artefacts by their low correlation to other channels and replaced them with a weighted sum of normal channels, a method called 'Inpainting'. On the other hand, Chang et al. employed artefact subspace reconstruction on twenty EEG recordings taken during simulated driving experiments, in which large-variance components were rejected and channel data were reconstructed from the remaining components, improving the quality of a subsequent independent component analysis decomposition. Sole-Casals et al. [51] evaluated the performance of four tensor completion algorithms and average interpolation across trials on missing brain-computer interface data (across 6 channels and segments), and evaluated the reconstruction by the performance of machine-learning-based motor imagery classifiers. Overall, these approaches rely on information from other channels of the arrays, which fails when a global artefact is present, the number of affected channels is greater than the number of unaffected ones, or the other channels have poor quality. For those situations, we propose instead to use the surrounding information of the affected channel to accurately replace the segments affected by artefacts via the use of deep learning. Methods In this section, the ML methods as well as the data sets used are described. Machine learning model We hypothesise that an LSTM network trained to reliably forecast artefact-free data may be successfully utilised to substitute artefactual sections of signals when information from other channels has been corrupted and cannot be used to approximate the signal's real behaviour. Figure 2 shows how an LSTM network was trained to predict typical behaviour using a sliding window method. The sliding window approach consists of employing data up to a time t to predict the value at t + 1, and then using the new predicted value when forecasting the value at t + 2. The neural network architecture was chosen due to the known capabilities of Recurrent Neural Networks (RNNs), specifically LSTMs, in recognising patterns in sequential data. Kim et al. [25] have shown that it is possible to predict the behaviour of local field potentials from 10 to 100 ms forward in time via the use of a regressive LSTM network. A similar approach was established by Paul [42], who used a stacked LSTM to forecast a single point of an EEG signal by feeding in the previous 70 ms. Their test data comprised 9 subjects, and they achieved correlation coefficients above 0.8 for all of them. In addition, there have been recently reported applications of LSTMs in artefact detection [15,19,24,31] as well as RNNs in artefact removal [10,41,43,48]. The LSTM cells include a forget gate which decides what information is kept and what information is discarded from the cell state. If the value of the forget gate f_t (or f(t)) is 1, the relevant information is saved, but if the value of the forget gate is 0, it is forgotten. Equation 1 shows the mathematical expression of this specific LSTM cell: f_t = σ(W_f x_t + U_f h_(t−1) + b_f), i_t = σ(W_i x_t + U_i h_(t−1) + b_i), c̃_t = tanh(W_c x_t + U_c h_(t−1) + b_c), c_t = f_t ⊙ c_(t−1) + i_t ⊙ c̃_t, o_t = σ(W_o x_t + U_o h_(t−1) + b_o), h_t = o_t ⊙ tanh(c_t), (1) where the variable x_t is the input vector, the W and U matrices hold the weights, b is the bias and σ is the sigmoid function. In addition, f_t is the forget gate, i_t is the update gate, c̃_t is the cell input, c_t is the cell state, o_t is the output gate and h_t is the hidden state or output vector of the cell at time t.
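To make the sliding-window replacement idea concrete, the following sketch shows how a one-step-ahead predictor can be applied recursively, so that each newly forecast sample becomes part of the input used to forecast the next one, which is the mechanism described above. The predictor here is a trivial placeholder (a linear autoregressive fit via least squares) rather than the trained LSTM, and all array names and sizes are illustrative.

```python
import numpy as np

def fit_ar_predictor(signal, order):
    """Fit a simple autoregressive one-step predictor by least squares
    (a stand-in for the trained LSTM forecaster)."""
    X = np.column_stack([signal[i:len(signal) - order + i] for i in range(order)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def replace_artefact(signal, start, length, coeffs):
    """Overwrite signal[start:start+length] with recursive one-step forecasts,
    feeding each prediction back into the input window."""
    order = len(coeffs)
    out = signal.copy()
    for i in range(start, start + length):
        window = out[i - order:i]      # most recent samples, including prior forecasts
        out[i] = window @ coeffs
    return out

# Illustrative use: a noisy sine with an artefactual burst between samples 600 and 650.
rng = np.random.default_rng(1)
t = np.arange(2000)
lfp = np.sin(2 * np.pi * t / 100) + 0.05 * rng.standard_normal(t.size)
lfp_corrupted = lfp.copy()
lfp_corrupted[600:650] += 5.0          # injected artefact

coeffs = fit_ar_predictor(lfp_corrupted[:600], order=20)   # train on artefact-free history
cleaned = replace_artefact(lfp_corrupted, start=600, length=50, coeffs=coeffs)
```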
The testing set was used to calculate the root mean squared error (RMSE), as defined in Eq. 2, of the output over an unseen segment: RMSE = sqrt((1/(N·S)) Σ_{i=1}^{N} Σ_{j=1}^{S} (x̂_ij − x_ij)²), (2) where x̂_ij is a forecasted data point, x_ij the real value of the LFP at that data point, S is the output sequence length and N the number of examples in the test set. This was chosen over the mean absolute percentage error (MAPE) because the signal was zero-centred during the pre-processing, so segments contain a significant number of zero crossings, which distorts the MAPE, as it takes an undefined value at those points and they must be removed. Matlab's Deep Learning Toolbox [33] was used to build and train the network of LSTM cells. The LSTM models were made up of the following layers: an input layer, a hidden layer equal to one-tenth of the input, and an output layer equal to the number of predicted points. For comparison, we trained a more complex architecture composed of convolutional and recurrent layers (CNN-LSTM), described in Table 1. The optimisation algorithm used was Adam, with an initial learning rate of 0.0001, a momentum of 0.9 and a batch size of 516 for the first data set and 128 for the second data set, the latter due to its smaller sample size. The loss function of the regression layer was the half-mean-squared-error of the predicted responses for each time step, not normalised by N: loss = (1/(2S)) Σ_{i=1}^{S} (x̂_i − x_i)², (3) where x̂_i is a forecasted data point, x_i the real value of the LFP at that data point, S is the output sequence length and N the number of examples in the training or validation set. To have a performance reference, the linear approximator autoregressive moving average with extra input (ARMAX) was applied to the same testing and model evaluation data. Following the description by Yan et al. [58], given an LFP time series (X_t, y_t) for t = 1 to N, where X_t = (x_t1, x_t2, ..., x_tk)^T is the input vector at time t with k elements and y_t is the corresponding neuronal activity voltage at time t, this model approximates a polynomial equation, written as: A(q) y(t) = B(q) X(t) + C(q) e(t), (4) where A(q), B(q) and C(q) are the polynomials expressed with the time-shift term q⁻¹ shown in Eq. 5, and e(t) is the white-noise disturbance value: A(q) = 1 + a_1 q⁻¹ + ... + a_{n_a} q^{−n_a}, B_k(q) = b_1k + b_2k q⁻¹ + ... + b_{n_b k} q^{−(n_b−1)}, C(q) = 1 + c_1 q⁻¹ + ... + c_{n_c} q^{−n_c}. (5) Here, the hyper-parameters n_a, n_b, n_c denote the orders of the ARMAX model's auto-regressive part, external input vector with k elements, and moving average, respectively. Finally, a_i, b_ik and c_i are the polynomial coefficients determined using polynomial curve fitting. Having described the methodology, we proceed to describe the data sets used to evaluate it. Data set 1 Open-access data was utilised to evaluate the toolbox [16]. The data set is linked to an article that provides a detailed report on the recordings and trials [17]. Male Long Evans rats (280 to 300 g) trained to walk on a circular treadmill were used to generate the recordings. The obtained LFPs were sampled at a rate of 2 kHz and then pre-processed, first by low-pass filtering, second by amplification by a factor of one thousand, and finally by applying a bandpass filter from 0.7 to 150 Hz.
Artefact-free intervals of 100 s in treadmill-on epochs and 40-100 s periods in treadmill-off epochs were classified using visual inspection and recorded motor activity, and are detailed in Table 2. Table 2. Guide to determine the best channels and epochs to use from the baseline walk and rest recordings in the medial prefrontal cortex (mPFC) and the mediodorsal (MD) thalamus, as mentioned in the file named "Coherence Phase Plot Guide". Column 1 denotes the animal id, columns 2 and 3 show two good channels of the mPFC recordings and columns 4 and 5 of the MD recordings. Finally, columns 6 and 7 show the range of artefact-free epochs during walking and at rest, respectively. The threshold power value for each channel was calculated using these labelled artefact-free epochs, defined as the maximum power of windows of 50 ms duration within them, where the window length was chosen based on prior classification findings. One-second artefact-free windows were extracted for each of the rodents and then aggregated into a cross-subject data set, which was divided into training (80%), validation (10%) and testing (10%) sets. Out of the training and validation sets, 54 data sets were constructed based on the length of the input, from 0.1 to 0.9 s in 0.1 s increments, and the prediction of the posterior 1, 5, 10, 25, 50, and 100 data points. To be able to compare the different forecasting output sizes, the test set was used to evaluate the performance over 0.1 s (i.e., 200 points at 2 kHz) of unseen data. Data set 2 A second open-source data set [9] was used to test the methodology. We selected this data set based on the amplitude of the artefacts, which ranged between 0.15% and 13.48% of the recordings, as highlighted by the authors in the related work. The open-access data set is composed of uninterrupted baseline recording days for sleep research, where local field potentials were recorded from 9 male Sprague-Dawley rats (3-4 months old). The data set contains LFPs that were acquired from the prefrontal cortex and the parietal cortex, sampled at 250 Hz. Recordings were cut into 4-s long epochs and labelled depending on the state of the animal (awake, rapid eye movement sleep, or non-rapid eye movement sleep). It is worth noting that the data set has intra-subject variability, as these recordings range from 3 to 8 consecutive days (out of 40 that are not shared), as well as inter-subject variability, since it has twice the number of subjects as the first data set. Furthermore, there are differences between states, such as high-frequency components, which may distort the detection and removal of artefacts. Therefore, to reduce the variability, we extracted the longest awake period of each day (see Table 3) and chose the rodent with the longest consistent awake recordings (i.e., rodent 'MuensterMonty'). The final data set is composed of the recordings of one rodent during the awake state across five recording sessions, for a total of 26956 s. Afterwards, we measured the signal's power with a 1-s moving window, and if it exceeded the threshold manually defined in the toolbox, the segment was classified as an artefact. Due to the low sampling rate, we extracted 4-s non-artefact segments from each of the recordings. These were divided into training (80%), validation (10%) and testing (10%) sets. Out of the training and validation sets, 15 data sets were constructed based on the length of the input, 1, 2, or 3 s, and the prediction of the posterior 1, 25, 50, 125, and 250 data points.
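The windowed power-threshold labelling used for both data sets can be summarised in a few lines of code. The sketch below computes the mean power of consecutive fixed-length windows and flags those exceeding a per-channel threshold; the window length, threshold value, and signal are placeholders, and the actual thresholds in the study were derived from labelled artefact-free epochs or set manually in the toolbox.

```python
import numpy as np

def label_artefact_windows(signal, fs, window_s, power_threshold):
    """Split `signal` into non-overlapping windows of `window_s` seconds and
    flag each window whose mean power exceeds `power_threshold`."""
    win = int(round(window_s * fs))
    n_windows = len(signal) // win
    trimmed = np.asarray(signal[: n_windows * win], dtype=float)
    windows = trimmed.reshape(n_windows, win)
    power = np.mean(windows ** 2, axis=1)   # mean power per window
    return power > power_threshold           # True = labelled as artefact

# Illustrative use: 1-s windows of a 250 Hz recording with an assumed threshold.
rng = np.random.default_rng(3)
lfp = rng.standard_normal(250 * 60)          # one minute of synthetic signal
lfp[250 * 10 : 250 * 12] *= 8.0              # inject a high-power artefact
labels = label_artefact_windows(lfp, fs=250, window_s=1.0, power_threshold=2.0)
print(f"{labels.sum()} of {labels.size} windows flagged as artefactual")
```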
To be able to compare the different forecasting output sizes, the test set was used to evaluate the performance over 1 s (i.e., 250 points) of unseen data. Implementation The SANTIA toolbox is composed of four units that carry out different tasks on the neural recording files, these are: data labelling, neural network training, classifying new unlabelled data, and artefact removal. While the first three are used for artefact detection, the first and fourth units are used for artefact removal. The labelling unit performs the following tasks:data loading, scaling, reshaping, channel selection, labelling, saving and 2D display. On the other hand, the fourth unit is composed of: data loading, normal segments extraction, hyper-parameter setting, network selection, network train, test set visualisation, replace segments, plot replaced channels, and saving. The toolbox is available for download directly from the Github repository 1 . The GUI allows quick access to all modules when the toolbox has been launched. We highlight that SANTIA is not a library of functions with a GUI added to make access easier but instead is a generic environment built on a single interface with individual features implemented. Interactions with the GUI are made by selecting functions, settings, and keyboard inputs, which are processed in the back-end. A check procedure runs before each function to ensure that the user hasn't skipped a step or failed to include all of the needed inputs or parameter selections. This is done to minimise both the risk of human mistakes and the amount of time consumed. If the user has a question, tool-tips with a brief explanation display when the pointer is held over a component of the GUI. We now proceed to describe the aforementioned units relevant to the task as well as the outputs produced. Data labelling The first step is loading the neural recordings, which is done with the import wizard launched by the 'Load Signals' button of the first unit, as a matrix with m number of channels and n number of data points for each channel. ASCII-based text (e.g., .txt, .dat, .out, .csv), spreadsheet files (e.g., .xls, .xlsx, .xlsm) , and Matab files (e.g., .set, .mat) are the formats that are compatible with the toolbox. To structure the data, the user must provide the sampling frequency in Hz and the window duration in seconds. The options for data scaling are available to avoid the common incorrect magnitude annotations. A function to structure the data is called via the 'Generate Analysis Matrix' button, which takes in the aforementioned inputs. The following step consists of labelling the data, carried out by giving segments whose power exceeds a user-defined threshold a binary label. The toolbox allows for three options, either of which the user can use to their preference. The first is table that hold the segment power in the first column and the values of the signal in the subsequent columns. The user may sort any column to define a value which divides both classes in the optimal way, and visualise any segment they select. The second option is the 'histogram threshold' , where a histogram of the segments' power shows the distribution, and the user can select with a slider the cutoff value, or visualise a segment. As an alternative, the threshold values can be typed into the table displayed on the module. 
Once all channels have been filled, the signals are labelled and saved as a standardised struct, which includes the original filename, the structured data with its labels, the sampling frequency, window length, the scale, and the threshold values. The purpose of the format is to allow users to select and contrast the various data sets they build with different window lengths or threshold values. Users can see when each stage has been finished with the help of text in the 'Progress' banner, which is duplicated across each unit. Artefact removal The initial step of this unit is to load the structured file mentioned above. Once complete, the user must input the duration of the artefact-free segments they wish to extract from the file to train the model. A progress bar indicates the progress of the extraction, followed by a notification of the number of segments extracted upon its completion. The following step is the configuration of the input and output of the model, with the option of selecting either data points or milliseconds as units. The user must also specify how to split the data into training, validation, and test sets, which is crucial to avoid over-fitting. For the third step, a new option has been incorporated which allows users to make use of the CNN-LSTM architecture presented in this work, the previously reported LSTM, or a custom set of layers loaded by the user, as shown in Fig. 3. The file must contain a Layer-type variable, in other words, layers that define the architecture of neural networks for deep learning without the pretrained weights. These can be modified via the console or the Deep Network Designer toolbox; for more information, we direct the reader to the Mathworks page 2 . A side panel allows the customisation of training hyper-parameters, such as the validation frequency, max epochs, verbose, mini-batch size, and others. These intentionally mirror the ones available in the Deep Network Designer, making it easier for users to familiarise themselves with it. The training process is run by clicking on the 'Train Network' button, which loads all the user-defined inputs so far and generates a training plot for the user to evaluate the process and perform early stopping if required. A pop-up notification alerts the user of the root mean square error of the test set, and the user can visualise examples of the test set in contrast to their forecasts. The user can adjust the network and training parameters until a desirable result is obtained, and then proceed to the last step. This consists of swapping the windows labelled as artefacts for the network's forecast, where a progress bar is displayed to show the advancement. The newly obtained signals can be visualised by first selecting which channel to display and then clicking the 'Plot Channel' button. The last step is to save all the obtained information in the form of a struct containing the data's filename, the trained network, the training information, the test set's RMSE, the original and replaced test-set segments, and the data with the artefacts removed, where the user sets the file name and directory to store it. Performance evaluation In order for the user to compare the different models and adapt the network size, type, or hyper-parameters, the toolbox creates several windows. These are showcased in Fig. 4, which displays examples of the outputs of 'View Test Results' (A) and 'Plot Channel' (B).
In the upper subfigure, we showcase an element of the test set in red in contrast to the forecast of the CNN-LSTM network in blue. In this particular example, while the forecast of the first peak is nearly identical to the signal, the following peaks have slightly lower amplitude, which can be attributed to the fact that they take the previous forecasts of the network as input. The sub-figure below showcases a channel before (red) and after removal (blue). The high-amplitude artefacts, which spanned 2 mV peak-to-peak, have been removed and replaced by 50 ms windows, and now the channel shows a uniform range of ±0.05 mV, indicating the success of the methodology. Data set 1 Figure 5 shows the performance of the 54 LSTM models in the form of validation loss and test set RMSE over 100 ms. With regard to the output of the network, the test performance improves from single-value predictions to the 50-point one and then remains constant. With regard to the time input, sequences longer than 0.6 s do not present any major performance improvements. The best-performing LSTM model is the 600-ms-input, 10-point-prediction model, with an RMSE of 0.1538. On the other hand, out of the 54 CNN-LSTM models, the best performance is achieved with an output of 20 data points across all inputs, while the worst performances are achieved with 50 or 100 output points. Overall, the performance of the CNN-LSTM is better than that of the LSTM models, with the best score being 0.1463 for the 200-ms-input, 20-point-prediction model. To confidently prove the effectiveness of this method, it has been compared to ARMAX. The ARMAX model was given the same 200-ms examples to define the model and the same 100 ms to calculate the RMSE, achieving a performance of 0.1449. This indicates a slightly better performance than the neural networks; however, we must factor in that the signals have been heavily low-pass filtered and have a near-sinusoidal shape. If used on a different set that retains higher-frequency components, the performance of the ARMAX model would be challenged, as we will show with the next data set. Moreover, as Table 4 shows, the neural network method significantly outperforms ARMAX in computational time. The time difference is mainly due to the fact that ARMAX needs to re-estimate the model for every new sequence to remain accurate, unlike the CNN-LSTM, which is able to forecast very rapidly once it has been trained. All models were tested on a general-purpose Alienware m17 r4 laptop with 32 gigabytes of RAM and an Intel Core i9-10980HK CPU @ 2.40 GHz. Considering both metrics, i.e., RMSE and computational time, we chose the CNN-LSTM as the best compromise between the two. Having defined the best model, a total of 7275 1-s artefactual segments were extracted from the data of the rodents, with the condition that the first 200 ms had to be artefact-free. The forecast produced by the network replaced every 50 ms window labelled 'artefact' in each segment, which in turn was used as part of the input if the following window also shared the same label. The first comparison of the results is done through visual inspection. Examples of normal, artefactual, and replaced-segments signals alongside their periodograms are illustrated in Fig. 6. After processing, the high-amplitude artefact had been removed from the new signal, demonstrating the method's success.
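The window-by-window replacement procedure just described can be sketched as follows; `naive_forecast` is only a placeholder for the trained CNN-LSTM's prediction, and the segment, labels, and window sizes are illustrative rather than taken from the toolbox.

```python
import numpy as np

def replace_artefacts(segment, labels, win_len, seed_len, forecast):
    """Walk through a segment whose windows are labelled 0 (clean) or 1 (artefact) and
    replace each artefactual window with a forecast; consecutive artefact windows are
    forecast from the already-replaced signal, mirroring the procedure in the text."""
    out = segment.copy()
    for k, flag in enumerate(labels):
        if flag:                                            # window k is an artefact
            start = k * win_len
            # The first seed_len samples are required to be artefact-free, so the
            # history used as model input is never empty.
            history = out[max(0, start - seed_len):start]
            out[start:start + win_len] = forecast(history, win_len)
    return out

def naive_forecast(history, n):                             # stand-in for net.predict(...)
    return np.full(n, history[-50:].mean())

seg = np.random.default_rng(2).standard_normal(2000)        # 1 s at 2 kHz
labels = [0, 0, 1, 1, 0] * 8                                # 40 windows of 50 ms each
cleaned = replace_artefacts(seg, labels, win_len=50, seed_len=400, forecast=naive_forecast)
```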
This can also be observed in the periodogram, where the artefactual example possesses a low-frequency component that exceeds −20 dB, whereas the physiological as well as the processed signal have a power of approximately −40 dB. (Fig. 6 Examples of normal (blue), artefactual (red) and replaced-segments signals (green) alongside their periodograms for data set 1. The method has been able to recreate the normal signal, both in amplitude and in spectral properties.) With regard to the segments' power, Fig. 7 shows the violin-plot 3 distribution of the three groups: the normal segments, the artefactual segments, and the segments after replacement. The method has been successful in replacing the high-power artefactual segments with ones that resemble normal activity. While the median is higher than that of the artefact-free segments, the distribution has shifted considerably to lower power levels. The presence of high-power segments indicates a shortcoming of the method, where the surrounding signal has high power but only one or two windows exceed the defined threshold, so the total power of the processed segment still has a high value. Data set 2 The results of the different models are compiled in Fig. 8, where the validation loss and the RMSE over 1 s of the test set are shown. For the 15 LSTM models, the performance improves with longer output sequences and is best with 2 s of input. Thus, the best-performing model is the 2-s-input, 1-s-output model, with an RMSE of 0.7418. With regard to the CNN-LSTM models, performance does not vary significantly across input or output length; however, the best model is obtained with 1-s input and 1-s output, which has an RMSE of 0.7341. Across all combinations, the CNN-LSTM outperforms the LSTM, as it can extract richer features. Subsequently, the comparison to the ARMAX model was carried out. The ARMAX was given 1 s of recording to define the model and asked to forecast the subsequent second to calculate the RMSE, achieving a score of 3.1813. The difference in the performance of the ARMAX between the two data sets can be attributed to the fact that the one being evaluated has not been heavily filtered and retains high-frequency components, making it more difficult to fit a model. When looking at the overall performance in terms of RMSE and computational time in Table 5, the CNN-LSTM stands out as the best-performing method. With these results, we proceeded to extract 4-s (i.e., 1000 data points at 250 Hz) artefactual segments with the condition that the first second had to be artefact-free, for a total of 3826 examples. The forecast produced by the network replaced every 1-s window labelled "artefact" in each segment, which in turn was used as part of the input if the following window also shared the same label. To evaluate the results, examples of the three signals (i.e., normal, artefactual, and replaced-segments signals) with their corresponding periodograms are shown in Fig. 9. Compared to normal segments, artefacts have higher amplitude and frequency, in other words, a non-physiological waveform. We observe this in the periodogram as repeated rounded peaks and as higher frequencies whose power does not decay as much. By replacing the segment, the smooth decay of the spectral power is restored. Finally, the violin plot of the power of the 4-s segments of the three signals is displayed in Fig. 10.
Despite the fact that the distribution has lowered significantly to values resembling normal activity, the previously mentioned shortcoming is still present, as cases with high surrounding power are not replaced because they have not exceeded the threshold. Conclusion This paper has presented an artefact replacement algorithm for in-vivo neural recordings in the form of local field potentials. This is particularly useful where signal segments contaminated with artefacts cannot be reconstructed with information from other channels, because a global artefact is present, the majority of the channels are affected, or the signals are of poor quality (i.e., very low signal-to-noise ratio). This paper introduces a prediction method with the use of a sliding window technique. Two neural network architectures with recurrent and convolutional layers, along with ARMAX, were compared. The best performance was achieved by the CNN-LSTM model. Comparisons were carried out on two data sets of LFP signals recorded during different tasks. This revealed that the forecasted data may be used to replace artefactual parts of LFP recordings successfully. The model was incorporated into the artefact removal module of SANTIA, a simple and effective toolbox for researchers who want to automatically detect and remove artefacts. Fig. 7 Violin plot of power in the normal (blue) 1-s segments, artefactual segments before (red) and after (green) processing from data set 1. The method has reduced the power of the artefactual segments to values similar to the artefact-free segments. Fig. 10 Violin plot of power in the normal (blue) 1-s segments, artefactual segments before (red) and after (green) processing from data set 2. The method has reduced the power of the artefactual segments to similar values to the artefact-free segments.
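As a rough illustration of the power- and periodogram-based evaluation used for both data sets, the sketch below computes per-segment power for the three groups (the quantity shown in the violin plots) and an example periodogram per group; the random arrays merely stand in for real normal, artefactual, and replaced segments.

```python
import numpy as np
from scipy.signal import periodogram

def segment_power(x):
    # Mean power of one segment, the quantity compared in the violin plots
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2) / x.size)

def group_powers(groups):
    # groups: dict name -> array of segments (n_segments x n_samples)
    return {name: [segment_power(s) for s in segs] for name, segs in groups.items()}

def example_periodograms(groups, fs):
    # Periodogram (in dB) of the first segment of each group, for visual comparison
    out = {}
    for name, segs in groups.items():
        f, pxx = periodogram(segs[0], fs=fs)
        out[name] = (f, 10 * np.log10(pxx + 1e-12))
    return out

rng = np.random.default_rng(0)
groups = {"normal": rng.standard_normal((20, 2000)),
          "artefactual": 5 * rng.standard_normal((20, 2000)),
          "replaced": rng.standard_normal((20, 2000))}
powers = group_powers(groups)                   # feed these distributions to a violin plot
spectra = example_periodograms(groups, fs=2000)
```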
8,229.2
2022-01-07T00:00:00.000
[ "Computer Science" ]
Noble Metal Nanoparticles in Pectin Matrix. Preparation, Film Formation, Property Analysis, and Application in Electrocatalysis Stable polymeric materials with embedded nano-objects, retaining their specific properties, are indispensable for the development of nanotechnology. Here, a method to obtain Pt, Pd, Au, and Ag nanoparticles (ca. 10 nm, independent of the metal) by the reduction of their ions in pectin, in the absence of additional reducing agents, is described. Specific interactions between the pectin functional groups and nanoparticles were detected, and they depend on the metal. Bundles and protruding nanoparticles are present on the surface of nanoparticles/pectin films. These films, deposited on the electrode surface, exhibit an electrochemical response characteristic for a given metal. Their electrocatalytic activity toward the oxidation of a few exemplary organic molecules was demonstrated. In particular, a synergetic effect of simultaneously prepared Au and Pt nanoparticles in pectin films on glucose electro-oxidation was found. ■ INTRODUCTION Nowadays, researchers focus on matrices allowing for the stable immobilization of noble metal nanoparticles (NMNPs) because of their optical, 1 (electro)catalytic, 2 and antibacterial 3 properties. In the bottom-up approach, NMNPs are prepared by the reduction of metal salts or metal-organic compounds to metal atoms 4 in the presence of a stabilizing agent. 4 Polymers are better stabilizing agents than small ions 5,6 because their charged functionalities are attached to flexible chains, preventing coalescence of nanoparticles formed within the polymer matrix. 5 If a polymer has functionalities capable of reducing metal ions, 6−8 an excess of a reductant, which may be difficult to remove, can be eliminated. 9,10 Biopolymers were applied for the stabilization of nanoparticles 200 years ago. 11,12 They are produced by living organisms such as plants, trees, and bacteria 13 and decomposed through the enzymatic action of microorganisms without the emission of toxic waste. 14 Pectins 15 are biodegradable, nontoxic, flexible, cheap, and are very popular thickeners or stabilizing agents in the food industry and households. They are present in all primary cell walls of plants, and the degree of their methylation depends on the source and the way they are extracted. Most importantly, pectins were applied as reductants and stabilizers of NMNPs. 16−21 Pectins are structural heteropolysaccharides consisting mainly of D-galacturonic acid units, connected through α-(1−4) glycosidic linkages. Some of the carboxylic groups of the pectin backbones are methyl-esterified, and the degree of their methylation depends on the source. The availability of carboxyl or hydroxyl groups affects their chemistry 22,23 because nonesterified carboxyl groups may coordinate to the metal ions and reduce them to metals in the absence of other reductants. 16,19−21,24 The pectin route was applied to prepare Au, Ag, Au/Ag, and Pd nanoparticles. As a result, NMNPs/pectin materials 17,19,21,24−33 or pectin-stabilized NMNPs 18,20,24,27,34−39 were prepared. Prospective medical applications of NMNPs/pectin materials as controlled release of nanoparticles, 40 targeted drug delivery, 34 singlet oxygen generation for photodynamic therapy, 38 theranostics, 18 and antibacterial/healing treatment 29,41 were reported. Other areas include surface-enhanced Raman spectroscopy (SERS) 42 and catalysis.
19,27 The studies of the electrocatalytic properties of films formed by these materials were limited to AuNPs/pectin and explored for sensing. 25,31,41,43,44 Here, the goal was to synthesize NMNPs/pectin materials, estimate how pectin affects the nanoparticles' size and their film topography, and detect the interactions of pectin with their surface. We also wanted to know whether the electrocatalytic properties of NMNPs embedded in the pectin film are maintained and how efficient is electronic communication between them and the electrode surface. For this purpose, we demonstrated a simple and effective method of synthesis of selected NMNPs, that is, AuNPs, PtNPs, PdNPs, and AgNPs, from the metal precursor within the pectin matrix in the absence of an additional reductant. It is already known that pure pectin gelation is enhanced by an excess of sucrose, which influences hydrophobic interactions. 45 Such material was used in our synthesis. Moreover, our preliminary experiments demonstrated that only films prepared from such material on a solid support were stable contrary to those prepared from pure pectin or gelled in the presence of Ca 2+ cations or/and excess of hydrated protons. Nuclear magnetic resonance (NMR), UV−vis, infrared (IR), and Raman spectroscopies were employed for the characterization of pectin and/or hybrid materials, in particular the interactions of NMNPs with pectin. X-ray photoelectron spectroscopy (XPS) was used to identify their metallic components, whereas scanning transmission electron microscopy (STEM) allowed for the determination of the size of nanoparticles and its distribution. Atomic force microscopy (AFM) was employed to determine the MeNP/pectin film topography. Voltammetry was performed to study electrochemical surface reactions and the electrocatalysis at NMNPs/pectin films. ■ RESULTS AND DISCUSSION NMR Spectroscopy. The 13 C NMR spectrum of the studied material reveals signals characteristic for the amidated pectin ( Figure S1A) (see Supporting Information S1 for more details). Its characterization is based on the ratio of sucrose molecules to rhamnose units of pectin. Its value 1:12 was estimated on the basis of the 1 H NMR spectrum ( Figure S1B). The NMR spectra of NMNPs/pectin materials are nearly identical. Synthesis. In a typical run, 0.01−0.1 g of pectin was carefully mixed with 2 mL of water until the traces of pectin disappeared. Then, an aqueous solution (0.001−0.015 M) of a metal precursor or a mixture of Au and Pt precursors (1:1) was quickly injected and stirred for 1 h. The cloudy solution became transparent and changed from yellow to pale yellow (Ag), amber (Pd), dark brown (Pt), or purple (Au) ( Figure S2). The time of the color change ranged from 3 min at 90°C to 20 min at room temperature. After cooling down, a gelled MeNPs/pectin matrix was formed. The concentration of pectin and the metal precursor was optimized to avoid too fast gelation or generation of large metal particles. In some runs, 1 mL of 1 mM NaBH 4 aqueous solution was added to compare the efficiency of the reaction with an additional reductant. UV−Vis Spectroscopy. Plasmon resonance bands at 413 and 547 nm, respectively, characteristic for AgNPs and AuNPs, 46 are identified on the UV−vis spectra of the sol prepared from the Ag or Au precursor ( Figure 1). The absorbance recorded with water as a reference is higher than that recorded versus pectin solution because of the significant turbidity of the pectin solution. It almost disappears when NMNPs are formed. 
The band position is not affected by the type of the reference sample ( Figure 1). An increase of the pectin concentration affects the color ( Figure S3) and UV−vis spectra of AuNPs/pectin sol ( Figure S4). It changes from purple to pink, and the band maximum is red-shifted by ca. 10 nm, indicating that a lower concentration of pectin promotes the formation of larger nanoparticles, as indicated by the appearance of a new band at longer wavelengths. 47 The apparent negative absorbance ( Figure 1a) results from the turbidity of the pectin solution used as a reference sample (see above). IR and Raman Spectroscopies. The comparison of the IR spectra of the amidated and nonamidated pectin ( Figure S5) allows for determining the signals characteristic for the amidated structure and sucrose (see Supporting Information S4 for details). These results indicate that both pectins are almost high methoxy pectins (ca 45% esterified). Liao et al. 48 and Tao et al. 49 reported that carboxylic groups interact with the silver surface covered by the silver oxide layer formed at atmospheric conditions. This is because basic silver oxide interacts with the COOH groups and COO − groups coordinated to the surface. As an effect, the relative intensities of the COOH and COO − bands on the IR spectra change. 49 This effect may be difficult to notice because of the large number of carboxylic groups in the polymeric pectin chain. Here, both the COOH and COO − bands are visible in the spectra of pectin without nanoparticles (Figure 2 curve a). The similarity of the spectra of AgNPs/pectin and pectin without nanoparticles (Figure 2 curves a, b) may indicate that only a small fraction of carboxylic groups interacts with AgNPs. Noticeable differences in the region characteristic for the carboxyl groups are seen in the PdNPs/pectin and PtNP/ pectin spectra. In the case of PdNPs/pectin, the width of the COO − stretching band (ca. 1600 cm −1 ) is larger than that of pectin, and the COO − band overlaps with the amide I band (1670 cm −1 ). On the PtNPs/pectin spectrum, this band is also very broad. The changes of the COO − bands induced by PdNPs and PtNPs are accompanied by the increase of the intensity of the OH stretching band (3400 cm −1 ), suggesting that these samples contain more water than pectin, AuNPs/ pectin, and AgNPs/pectin. One may conclude that PdNPs and PtNPs interact stronger with the COO − groups of pectin, increasing its hydration (see Figure S6 for more details). Also, the Raman spectrum exhibits features characteristic for the pectin chemical character ( Figure 3, curve a) (See Supporting Information S4 for more details). Moreover, some features of the spectra are different when nanoparticles are produced ( Figure 3, curves b−e). There are minor differences in the relative intensities of bands between 1000 and 1500 cm −1 of the Raman spectra of PdNPs/pectin, PtNPs/pectin, and AuNPs/pectin ( Figure 3, curves b−d). This suggests that NMNPs induce changes in the conformation of pectin without changing its chemical structure. These differences are significant for AgNPs/pectin ( Figure 3, curve e), and they can be rationalized by the SERS enhancement. The 532 nm excitation line falls in the typical SERS enhancement range for silver nanostructures. 50 Unlike the spectra of other NMNPs/pectin samples, the Raman spectrum of AgNPs/pectin shows strong bands at 240, 1360, and 1600 cm −1 (Figure 3, curve d). 
These bands are assigned to the Ag−O stretching mode and the symmetric and antisymmetric motions of COO − groups, respectively, and suggest strong interactions between AgNPs and these functionalities. This interaction was not identified on the basis of the IR spectra. Such inconsistency can be rationalized by the short range of the SERS enhancement. Only modes of functionalities adjacent to the nanoparticle are visible in the spectrum. Therefore, only the COO − groups directly interacting with AgNPs contribute to the spectrum. On the contrary, all carboxylic groups contribute to the IR spectra, and the local effect is difficult to distinguish. X-ray Photoelectron Spectroscopy. To identify metallic components and the degree of their reduction in the absence and presence of an additional reductant, NaBH 4 , XPS spectra of the MeNPs/pectin films prepared in the absence and presence of NaBH 4 were recorded and compared. Deconvolution of the high-resolution (HR) XPS spectra of Au 4f, Ag 3d, Pt 4f, and Pd 3d ( Figure S7) reveals the chemical character of Au, Ag, Pt, and Pd, respectively (Table S1). The higher efficiency of reduction by NaBH 4 is seen as an increase of XPS signals related to the metal states as compared to those obtained in the absence of the reducing agent ( Figure S7). This effect is significant for XPS signals related to the metal states in Au 4f, Ag 3d, and Pt 4f ( Figure S7a−c). For Pd-based materials ( Figure S7d), this effect is smaller, probably because the Pd surface is covered again by oxygen from the air, prior to the XPS measurement, faster than the Au-, Pt-, and Ag-reduced sample surfaces. On the basis of the comparison of the metal/carbon atomic ratios (Table S1), one may note that the elemental surface contents of Au, Ag, and Pd are larger when NaBH 4 was used. This indicates the segregation of these elements at the surface region of the samples during reduction. Materials formed with an additional reductant were not studied further because foam was formed during the reaction and gel formation was stopped. These products did not form a stable film on a flat surface. Scanning Transmission Electron Microscopy. The STEM images of NMNPs/pectin films revealed nanometer-sized objects (Figure 4). AuNPs and PdNPs are globular and evenly distributed throughout the inspected region. A less regular shape of AgNPs and PtNPs may indicate nanoparticle agglomerates. Cracks seen on the STEM images ( Figure 4) result from the destruction of the carbon support film on the copper grid because of the shrinking of the deposited pectin following dehydration under low pressure. The pectin film is not visible because it is transparent to the electron beam. A Gaussian-type behavior with maxima at 10.9 ± 5.2, 9.1 ± 4.7, 11.2 ± 4.7, and 9.7 ± 3.2 nm is seen for AuNPs, AgNPs, PtNPs, and PdNPs, respectively ( Figure 5). The lowest polydispersity was obtained for PdNPs, whereas the largest one for AuNPs. The average diameter of AgNPs is the smallest, whereas that of PtNPs is the largest. In general, the size and polydispersity of NMNPs are not significantly affected by the type of metal. Atomic Force Microscopy. AFM experiments were performed under atmospheric pressure to avoid dehydration of the samples. The height image of the pure pectin film shows features ( Figure S8) identified as fragments of the pectin network. 51 The height images of NMNPs/pectin films are much better resolved and quite different ( Figure 6). They clearly show elongated bundles and globular particles identified as encapsulated NPs. Their size and distribution depend on the metal.
AuNPs and PtNPs of diameter 5−10 nm are homogeneously distributed (Figure 6a,c). The image of the AuNPs/pectin film (Figure 6a) is slightly blurred, suggesting the effect of viscosity. PtNPs (Figure 6c) cover both the bundles of pectin and the space in between. AgNPs ( Figure 6b) and PdNPs (Figure 6d) are visible and similar in terms of size (diameter of 10−16 nm, estimated from at least 10 objects found in the given AFM image) and distribution. Larger nanoparticle aggregates are randomly distributed and embedded in the pectin network. The estimated size of nanoparticles is similar to that obtained from STEM images. Maps of the adhesion force between the AFM probe and the AgNPs/pectin (Figure 7) and PdNPs/pectin (Supporting Information Figure S9) samples, simultaneously acquired with the AFM height images, indicate that the adhesion force between the tip and nanoparticles is much smaller than that between the tip and pectin or mica. The correlation between the height and adhesion force maps is nicely seen on the low- and high-resolution images (selected area with two nanoparticles in Figure 7a). Voltammetry. On the voltammogram shown in Figure 8a, anodic and cathodic signals at low potentials, related to hydrogen adsorption/desorption and hydrogen evolution, 53 are visible. On the voltammogram obtained for the AgNPs/pectin film-modified electrode (Figure 8b), anodic and cathodic peaks with the midpeak potential at ca. 0.35 V, characteristic for the electrodissolution/electrodeposition of silver, are visible. 55 To further test the electrochemical activity of the MeNPs/pectin film, CV was performed in a Fe(CN) 6 3− solution. Symmetric CV curves ( Figure S10) characteristic for the one-electron electrochemical redox reaction are not much different from those obtained with a bare electrode. More importantly, electrode modification with the MeNPs/pectin film reduces the difference between the peak potentials from 0.12 to 0.06−0.07 V. This indicates a faster heterogeneous electron transfer rate. Perhaps the surface of the nanoparticles is not blocked by pectin chains. As NMNPs deposited on the electrode surface exhibit an electrocatalytic effect toward a wide range of substrates, 2,56−58 selected reactions were tested with MeNPs/pectin electrodes. A negative shift of the onset potential indicates a catalytic effect of PdNPs/pectin and AuNPs/pectin on the electrooxidation of ascorbic acid (AA) ( Figure S11a). 59,60 The electrocatalytic effect is seen for H 2 O 2 electroreduction at AuNPs/pectin electrodes ( Figure S11b). 61 As AuNPs and PtNPs are known catalysts for glucose electro-oxidation, 62,63 this reaction was tested with AuNPs/pectin and PtNPs/pectin film electrodes. The characteristic pattern for electrocatalytic glucose oxidation at Au and Pt with the onset potentials of ca. −0.4 and −0.8 V at AuNPs/pectin and PtNPs/pectin, respectively, is visible ( Figure 9). Interestingly, when a mixture (1:1) of AuNPs/pectin and PtNPs/pectin solutions was used for electrode modification, an almost 10-fold increase of the oxidation current as compared to the PtNPs/pectin-modified electrode was observed without any overpotential loss, indicating a synergistic effect. 64 The PdNPs/pectin-modified carbon felt electrode was tested for the electro-oxidation of formic acid (FA).
65 Anodic current related to the FA oxidation, with the onset potential at −0.2 V, is observed. ■ CONCLUSIONS We have demonstrated that hybrid materials consisting of Au, Pt, Pd, Ag nanoparticles, and pectin can be prepared by mixing a metal precursor with pectin in the absence of an additional reducing agent. The preparation of the PtNPs/pectin material has not been previously reported. Both the temperature and the reagent proportion affect the size of the encapsulated nanoparticles. The size and distribution of nanoparticles in a film prepared from the hybrid materials are almost independent of the metal. Interactions between the pectin carboxylic groups and the nanoparticle surface, conformational changes, and the enhancement of hydration were detected in the hybrid materials, and they may be a driving force for nanoparticle formation. 6 Such examples of spectroscopic detection of metal nanoparticle−biopolymer interactions are rare. 66 ■ EXPERIMENTAL SECTION Materials. Citrus amidated pectin was donated by Pektowin Jasło (now NATUREX), and the reference sample pectin from citrus peel was obtained from Sigma-Aldrich. NaPdCl 4 , HAuCl 4 × 3H 2 O, H 2 PtCl 6 × 6H 2 O, AgNO 3 , glucose, NaOH, NaBH 4 , and FA were obtained from Sigma-Aldrich. HCl, HNO 3 , and H 2 SO 4 were obtained from Chempur (Warsaw, Poland). AA was purchased from Riedel-de Haën. Deuterium oxide (99.9 atom % D) with (or without) 0.75 wt % trimethylsilylpropanoic acid-d 5 (TSP) (Sigma-Aldrich) was used without any further purification. All solutions were prepared with deionized water (18 MΩ cm) from the Elix Millipore or Arium Sartorius water purification system. The glassware and magnetic stirring bars for synthesis were carefully rinsed with aqua regia (3:1 HCl/HNO 3 ), thoroughly rinsed with deionized water, and dried to avoid any trace of impurities. Apparatus, Procedures, and Data Analysis. NMR spectra were measured on a 7 Tesla Bruker Avance spectrometer equipped with a broadband inverse Bruker probe head. All spectra were measured at 298 K (calibrated on methanol). Chemical shifts were measured in ppm using TSP as the reference. Attempts to obtain HR scanning electron microscopy (SEM) images were not successful because of the charging effects of the material surface. Therefore, STEM imaging was performed with a Nova NanoSEM 450 instrument under a high vacuum (pressure 10 −7 mbar). It was carried out on samples diluted 10,000 times and deposited on a TEM grid. 77 Images were collected with a high acceleration voltage of 30 kV at a working distance optimized to 6.7 mm from the pole piece. Such dilution of the sample was necessary to avoid the destruction of the copper grid. STEM images were obtained using a bright-field contrast mode of the detector (two segments of the planar solid-state p−n junction) attached under the grid holder at a long scan acquisition time (20 μs) of typically 30 s per frame after choosing the inspection region. Images were analyzed with ImageJ software. 77 They were subjected to the "bandpass filter" procedure with an inner fast Fourier transform filter to remove dark patches much larger than the nanoparticles' predicted size, and high and low spatial frequencies (blurring the image), as well as to reduce edge artifacts. Then, the images were processed further by the default thresholding method to obtain a binary image and the watershed segmentation approach 78,79 to separate noncircular objects. Taking into account their area size and circularity, nanoparticles were counted by using the "Analyze Particles" tool of ImageJ software.
Based on the data obtained (NP areas) by expressing the nanoparticles as circles of an equivalent area, the equivalent diameters were calculated. Such a route was applied to over a dozen images to provide reliable data statistics. AFM imaging was performed with a Multimode 8 microscope under the control of a Nanoscope V controller (Bruker). The samples were prepared by drop-casting 3.5 μg NMNPs/pectin solution on a mica surface and then drying in air. 80 AFM substrates were mounted on metallic discs using an adhesive tape. Just before the NMNPs/pectin solution deposition, mica was cleaned in Milli-Q water and dried with an Ar stream. Then, the top layer of mica was peeled off using a scotch tape to give a clean and atomically flat surface. To record the surface topography and adhesion force between the probe and the sample, PeakForce quantitative nanomechanical mapping (PFQNM) mode was utilized. 80 In this mode, the AFM probe oscillated at a typical frequency of 2 kHz, and the individual force−distance (F−D) curves are collected for each contact of the probe with the sample. The adhesion force between the probe and the sample can be extracted from the collected F−D curves. All experiments were done under ambient conditions at room temperature. Standard ScanAsyst-Air Bruker probes were used. The radius of the probe was evaluated by scanning the TipCheck sample (RS-12M, Bruker) and by SEM imaging. The spring constant of cantilevers was determined using the thermal tuning method. 81, 82 The AFM photodetector was calibrated on a freshly cleaved mica. XPS was performed with a PHl 5000 VersaProbeScanning ESCA Microprobe (ULVAC-PHI, Japan/USA) instrument at the base pressure below 5 × 10 −9 mbar. Samples were deposited on a quartz substrate (4.0 × 5.0 mm) and dried at room temperature. The XPS spectra were recorded using monochromatic Al Kα radiation (hν = 1486.6 eV) from an Xray source operating at 100 μm spot size, 25 W, and 15 kV. Both survey and HR XPS spectra were collected with the analyzer pass energies of 117.4 and 23.5 eV and the energy step sizes of 0.4 and 0.1 eV, respectively. CasaXPS software (v. 2.3.18) was used to evaluate the XPS data. Shirley background subtraction and peak-fitting with Gaussian−Lorentzian-shaped profiles were performed. The binding energy (BE) scale was referenced to the C 1s peak with BE = 285 eV. For quantification, the PHI MultiPak sensitivity factors and the determining transmission function of the spectrometer were used. UV−vis spectra were recorded with an Evolution 300 UV− vis spectrophotometer, Thermo Scientific. IR spectra were recorded with a Nicolet iS50 FT-IR spectrophotometer from Thermo Scientific using the smart iTR ATR accessory with a diamond crystal. The penetration depth of the IR beam ranged from 1 to 10 μm, depending on the wavelength. The ATR correction was applied (using the OMNIC software by Thermo Scientific) to take into account the varied penetration depth. The spectral resolution was 4 cm −1 , and typically, 256 scans were averaged for a single spectrum. The samples were cast directly on the top of the diamond crystal. The spectra were studied immediately after casting the sample on the crystal and after drying the sample by a cold air stream for 30 min. Raman spectra were recorded with a DXR Raman spectrometer (Thermo Scientific). The spectra were recorded with dried samples of pectin or NMNPs/pectin. The instrument was operated using the 532 nm excitation line with 50×/NA 0.75 objective. 
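The STEM image-analysis workflow described above (band-pass filtering, thresholding, watershed segmentation, counting, and conversion of particle areas to equivalent circular diameters) was carried out in ImageJ; a rough Python/scikit-image analogue is sketched below purely for illustration. The Gaussian smoothing, the synthetic test image, the assumption of dark particles on a bright background, and the pixel size are stand-ins, not the authors' settings.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure, segmentation, feature

def particle_diameters(img, nm_per_px):
    """Segment dark, roughly circular particles in a grayscale image and return
    their equivalent circular diameters in nm."""
    smooth = filters.gaussian(img, sigma=2)               # crude stand-in for the band-pass step
    binary = smooth < filters.threshold_otsu(smooth)      # default thresholding -> binary mask
    distance = ndi.distance_transform_edt(binary)
    peaks = feature.peak_local_max(distance, labels=measure.label(binary), min_distance=5)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = segmentation.watershed(-distance, markers, mask=binary)   # split touching particles
    areas = np.array([r.area for r in measure.regionprops(labels)])
    return 2.0 * np.sqrt(areas / np.pi) * nm_per_px       # circle of equal area -> equivalent diameter

# Synthetic test image: three dark disks (two touching) on a bright background
rng = np.random.default_rng(0)
img = 1.0 + 0.05 * rng.standard_normal((256, 256))
yy, xx = np.mgrid[:256, :256]
for cy, cx in [(60, 60), (70, 72), (180, 120)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 8 ** 2] = 0.2
diameters_nm = particle_diameters(img, nm_per_px=0.5)
```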
Typically, the spectral resolution was 1 cm −1 , the laser power was 1 mW, and the exposure time was 5 s. A total of 15 scans were averaged for a single spectrum. Both IR and Raman spectra were recorded with the dried samples of pectin or MeNPs/pectin material. CV experiments were performed with an SP-300 potentiostat (BioLogic, USA) at room temperature, 22 ± 2°C. Glassy carbon (GC) (discs of 0.07 or 0.007 cm 2 ) (Mineral, Poland), carbon cloth (AvCarb, 1071 HCB, 1 cm 2 ), or 0.125 cm 2 carbon screen-printed electrodes (CSPE) (Metrohm/Drop-Sens) served as working electrodes. Pt wire and Ag|AgCl served as the counter and reference electrodes, respectively. Current density at the GC electrode was calculated per projected area. Before measurements, the GC electrode was carefully polished with alumina slurry of 1, 0.3, and 0.05 μm grain size (Buhler) on a polishing cloth (Buhler). The remaining alumina particles were removed by polishing on a clean cloth wetted with ethanol. All electrodes were modified by the injection of gel samples from micropipettes for viscous liquids on the electrode surface. A 5 μL sample was deposited on a 0.007 cm 2 GC disc electrode, whereas a 30 μL sample was deposited on a 0.07 cm 2 GC disc electrode, carbon cloth, or screen-printed electrodes. The volume of the sample deposited on the GC disc was adjusted to cover the entire electrode surface. Afterward, the electrodes were dried in air. All electrochemical experiments were carried out, and solutions were purged with argon (99.99% by Multax) before and during the experiments.
5,731.6
2020-09-09T00:00:00.000
[ "Chemistry", "Materials Science" ]
Neuro-Genetic Adaptive Optimal Controller for DC Motor Received Feb 6, 2014; Revised Mar 3, 2014; Accepted Mar 26, 2014. Conventional speed controllers of DC motors suffer from not being adaptive; this is because of the nonlinearity in the motor model due to saturation. The structure of a DC motor speed controller should vary according to its operating conditions so that the transient performance remains acceptable. In this paper, an adaptive and optimal neuro-genetic controller is used to control DC motor speed. A genetic algorithm (GA) is used first to obtain the optimal controller parameters for each load torque and motor reference speed. The data obtained from the GA are used to train a neural network; the inputs of the neural network are the load torque and the motor reference speed, and the outputs are the controller parameters. This neural network is used online to adapt the controller parameters according to the operating conditions. The controller is tested with a sudden change in the operating conditions; it adapts itself to the new conditions and gives an optimal transient performance. INTRODUCTION DC motors have very good controllability and have long been used as adjustable-speed drives; for example, they are used in traction and electric cars [1]. The control of these motors mainly depends on controlling the converter circuits that feed the field or the armature windings [2], [3]. Conventional controllers can be designed optimally at a certain operating condition, but their performance will not be optimal at another operating point. In [4], a PID controller is designed and tuned by a traditional method at a certain operating point; the performance will not be the same at a different point. An accurate model of the DC motor is essential when designing the speed controller so that it mimics the actual motor performance. Designing ordinary controllers depends on some simplifications such as model linearization and neglecting iron saturation. Artificial-intelligence-based techniques have been used to design speed controllers for the DC motor [5]-[12], and optimization techniques such as the genetic algorithm have been used to optimize the transient performance [12], [13]. These methods either depend on the linearized model of the motor or lack the adaptive property. In [14], [15], a sliding mode controller is designed to control the motor speed, but sliding mode control suffers from the chattering phenomenon. In this paper, a SIMULINK model of the DC motor is developed that considers the motor saturation and the variation of the magnetizing characteristics with speed. A PID speed controller is used with the armature winding; the parameters of the PID controller are optimized by the GA for each reference speed and load torque to obtain the best transient and steady-state performance. The objective of the GA is to minimize the sum of squared errors in speed, taking into consideration the limits on the armature voltage. The operating conditions are varied over a wide range of reference speeds and load torques; for each point, the GA obtains the controller parameters, which are the required gains of the PID controller. Data obtained from the GA are used to train a neural network; this network is simulated in SIMULINK, and its function is to make the controller adaptive according to the operating conditions. The system is tested at different conditions, and the speed controller is found to be optimal and adaptive at each operating point.
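As an illustration of this tuning pipeline, the sketch below couples a toy real-coded GA with a small feed-forward network whose two hidden layers have 21 and 2 neurons, as in the paper. Everything else is an assumption: the motor surrogate is a plain linear DC-motor model with invented parameter values (it omits the saturation and magnetizing-characteristic variation included in the authors' SIMULINK model), and the GA settings, voltage limit, and operating grid are illustrative only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Invented linear DC-motor parameters: Ra [ohm], La [H], J [kg m^2], B [N m s], K [V s/rad]
Ra, La, J, B, K = 0.5, 0.01, 0.05, 0.02, 1.2

def speed_sse(gains, w_ref, t_load, dt=2e-3, t_end=1.0):
    """Sum of squared speed error of the surrogate motor under PID control."""
    kp, ki, kd = gains
    ia = w = integ = prev_err = sse = 0.0
    for _ in range(int(t_end / dt)):
        err = w_ref - w
        integ += err * dt
        v = np.clip(kp * err + ki * integ + kd * (err - prev_err) / dt, -400, 400)  # illustrative voltage limit
        prev_err = err
        ia += dt / La * (v - Ra * ia - K * w)          # armature current dynamics
        w += dt / J * (K * ia - B * w - t_load)        # mechanical dynamics
        sse += err ** 2 * dt
    return sse

def ga_tune(w_ref, t_load, pop=20, gens=20, seed=0):
    """Toy real-coded GA: keep the best half, refill with blended, mutated children."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([0.0, 0.0, 0.0]), np.array([500.0, 500.0, 5.0])
    P = rng.uniform(lo, hi, size=(pop, 3))
    for _ in range(gens):
        fitness = np.array([speed_sse(p, w_ref, t_load) for p in P])
        elite = P[np.argsort(fitness)][: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = elite[rng.integers(len(elite), size=2)]
            child = 0.5 * (a + b) + rng.normal(0.0, 0.05 * (hi - lo))
            children.append(np.clip(child, lo, hi))
        P = np.vstack([elite, children])
    fitness = np.array([speed_sse(p, w_ref, t_load) for p in P])
    return P[np.argmin(fitness)]

# Build (reference speed, load torque) -> (Kp, Ki, Kd) training data and fit the network
grid = [(w, tl) for w in np.linspace(500, 1000, 4) for tl in np.linspace(0, 100, 4)]
X = np.array(grid, dtype=float)                              # inputs in rpm and N-m
Y = np.array([ga_tune(w * 2 * np.pi / 60, tl) for w, tl in grid])
net = MLPRegressor(hidden_layer_sizes=(21, 2), activation="tanh", max_iter=5000).fit(X, Y)
kp, ki, kd = net.predict([[800.0, 40.0]])[0]                 # gains suggested for 800 rpm, 40 N-m
```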
MOTOR MODELING The mathematical model of the DC motor can be expressed by the armature voltage and mechanical torque-balance equations. PROPOSED NEURO-GENETIC ADAPTIVE OPTIMAL CONTROLLER The GA is used to obtain the optimum controller parameters for each operating point, i.e., for any load torque and speed. The objective function of the GA is to minimize the sum of squared errors in the transient response of the motor speed; the outputs of the GA are K p , K d and K i . Figure 2 shows the flow chart that describes the process of this GA. For each generation, the SIMULINK model is run and the GA searches for the optimum PID controller parameters. Figure 3 shows the objective function variation with each generation for a reference speed of 1000 rpm and a load torque of 100 N-m. The GA is run for a wide range of motor speeds (from 500 to 1000 rpm) and load torques (from 0 to 100 N-m), and training data are obtained that are used to train a neural network. The role of the neural network is to adapt the controller parameters according to the operating conditions as shown in Figure 4. The neural network is a feed-forward network with the back-propagation learning rule and consists of two hidden layers with tan-sigmoid activation functions and an output layer with a linear activation function. The numbers of hidden neurons are 21 and 2, respectively. The neural network is used online to generate the appropriate controller parameters according to the loading conditions of the motor; Figure 5 shows the block diagram of the system with the neural network. Figures 6-8 show the variation obtained by the GA of the controller parameters K P , K I and K D , respectively, with motor reference speed and load torque. It is noted that the controller parameters vary widely with the operating conditions; for example, at 800 rpm the value of K p is 350 at no load, 80 at 20 N-m and 260 at 40 N-m. The largest variation in K i occurs at a reference speed of 500 rpm. K d varies widely at 500, 600 and 700 rpm. To test the effectiveness of the controller, it is used with the motor model as shown in Figure 5, and the motor response with this controller is compared to the response with a conventional PID controller. Figure 10 shows the transient response of the motor speed with reference speed commands of 600, 800 and 1000 rpm and a load torque of 100 N-m. The response with the proposed controller is better than that with the conventional controller in all cases; overshoot, settling time and rise time are greatly improved. RESULTS AND DISCUSSION One purpose of the controller is to be adaptive, so that the controller structure changes with the motor operating point and becomes optimal for the new operating point. Figure 11 shows the response of the motor with the proposed and conventional controllers when a step change from 1000 rpm to 500 rpm occurs at a 100 N-m load. It is noticed that the proposed controller changes its parameters for the new speed, and the motor speed follows the change in the reference speed command with minimum time and error, while the motor speed with the conventional controller is slower and exhibits overshoot. CONCLUSION In this paper, an optimal adaptive controller for a DC motor is designed; the controller changes its parameters according to the motor operating conditions, namely the motor reference speed and load torque. The proposed controller depends on the GA to ensure that it is optimal and on a neural network to ensure that it is adaptive. The motor transient and steady-state response with the proposed controller is superior to that with the conventional controller.
1,631
2014-09-01T00:00:00.000
[ "Engineering", "Computer Science" ]
Shikonin suppresses proliferation and induces apoptosis in endometrioid endometrial cancer cells via modulating miR-106b/PTEN/AKT/mTOR signaling pathway Shikonin is a natural naphthoquinone isolated from a traditional Chinese medicinal herb, which exerts anticancer effects in various cancers. However, the molecular mechanisms underlying the therapeutic effects of shikonin against endometrioid endometrial cancer (EEC) have not yet been fully elucidated. Herein, we investigated the anticancer effects of shikonin on EEC cells and explored the underlying molecular mechanism. We observed that shikonin inhibits proliferation in human EEC cell lines in a dose-dependent manner. Moreover, shikonin-induced apoptosis was characterized by the up-regulation of the pro-apoptotic proteins cleaved-Caspase-3 and Bax, and the down-regulation of the anti-apoptotic protein Bcl-2. Microarray analyses demonstrated that shikonin induces dysregulation of many miRNAs, and miR-106b was one of the most significantly down-regulated. miR-106b has been identified to exert a pro-cancer effect in various cancers, but its role in EEC remains unclear. We first confirmed that miR-106b is up-regulated in EEC tissues and cells, and knockdown of miR-106b suppresses proliferation and promotes apoptosis. Meanwhile, our results validated that the restored expression of miR-106b abrogates the antiproliferative and pro-apoptotic effects of shikonin. We also identified that miR-106b targets phosphatase and tensin homolog (PTEN), a tumor suppressor gene, which in turn modulates the AKT/mTOR signaling pathway. Our findings indicated that shikonin inhibits proliferation and promotes apoptosis in human EEC cells by modulating the miR-106b/PTEN/AKT/mTOR signaling pathway, suggesting that shikonin could act as a potential therapeutic agent in EEC treatment. Introduction Endometrial cancer (EC) is the most common gynecologic malignancy in the developed countries [1]. Statistical analyses have shown that approximately 142,000 women suffer from EC globally every year, and approximately 42,000 women die from this disease [2]. Amongst these EC cases, approximately 80% are endometrioid EC (EEC) [3], which is mainly treated with a combination of surgery and adjuvant chemotherapy, radiotherapy, or hormone therapy. However, conventional chemotherapy and radiotherapy cause side effects in advanced patients or yield suboptimal results for those with recurrent EEC [4]. Therefore, there is a strong medical need to develop novel therapeutic agents against EEC that exhibit reduced toxicity and increased efficacy. Shikonin is an active naphthoquinone of Zi Cao, derived from the roots of the herb Lithospermum erythrorhizon, which has been used in traditional Chinese medicine to treat skin diseases, burns, and sore throats due to its antimicrobial and anti-inflammatory activities [5,6]. Recently, it has been identified that shikonin exerts various anticancer effects such as inhibiting proliferation and promoting apoptosis in human lung adenocarcinoma cells, suppressing prostate cancer cell metastasis, and weakening migration and invasion in human breast cancer cells [7-10]. Liu et al. [11] uncovered that shikonin protects against concanavalin A-induced acute liver injury via inhibition of the JNK pathway in mice. Jang et al. [8] demonstrated that shikonin attenuates human breast cancer cell migration and invasion via suppressing matrix metalloproteinase-9 activation. Wang et al.
[12] clarified that shikonin inhibits interleukin-1β-induced chondrocytes' apoptosis through modulating PI3K/AKT signaling pathway. Furthermore, it has been reported that shikonin possesses the suppressive effects on EEC cells via promoting apoptosis and blocking cell cycle [13]. However, the molecular mechanism of the anticancer effects of shikonin on EEC cells remain unclear. miRNAs are a group of endogenous, non-coding small RNAs of 22-25 nts, which serve as a regulator of gene expression at the post-transcriptional level via suppressing translation or promoting RNA degradation. There is a growing body of evidence that miRNAs are involved in a variety of biological and pathological processes including cellular differentiation, proliferation, apoptosis, and carcinogenesis [14][15][16]. In recent years, it has been extensively reported that some Chinese medicinal herbs exert antitumor effects in different cancers via regulating miRNA expression profiles [17,18]. Curcumin suppresses cell growth, invasion, tumor growth in colorectal cancer and in vivo metastasis by regulation of miR-21 [19]. Zhang et al. [20] illustrated that honokiol inhibits bladder tumor growth by blocking the EZH2/miR-143 axis. In addition, shikonin has been identified to act as a potential therapeutic agent to treat human glioblastoma through regulating miRNA expression profiles [21]. Against this background, we hypothesized that shikonin exerts anticancer effect on human EEC via modulating miRNA expression. In the present study, we investigated the anticancer effects of shikonin on EEC cells and explored the underlying molecular mechanism by identifying shikonin-induced miRNA dysregulations. Our results suggested that shikonin may possess anticancer effects on EEC via mediating miR-106b/phosphatase and tensin homolog (PTEN)/AKT/mTOR signaling pathway and act as a potential therapeutic agent for the treatment of EEC. Patient tissue specimens Twenty EEC tissues and twenty normal endometrial samples were collected from patients who underwent surgical resection at Gynecology of Traditional Chinese Medicine, Shanghai Municipal Hospital of Traditional Chinese Medicine Affiliated to Shanghai TCM University (Shanghai, China) between April 2016 and April 2017. None of the patients had received pre-operative radiotherapy or chemotherapy prior to surgical resection. The tumor specimens were independently confirmed by two pathologists. Fresh specimens were snap-frozen in liquid nitrogen and stored at −80 • C immediately after resection for subsequent RNA extraction. The project protocol was approved by the Ethics Committee of Shanghai TCM University. All patients provided written informed consents for the use of the tumor tissues for clinical research. Cell culture and treatment The human EEC cell lines Ishikawa, HEC-1A, KLE, and RL95-2 were obtained from American Type Culture Collection (ATCC, Manassas, VA, U.S.A.) and one normal endometrial cell (ESC) were obtained from the Tumor Cell Bank of the Chinese Academy of Medical Science (Peking, China), and maintained in DMEM, supplemented with streptomycin (100 IU/ml), penicillin (100 IU/ml, Sigma, St. Louis, MO), 2 mM glutamine, and 10% FBS (Gibco BRL, Grand Island, NY). Cells (1 × 10 4 /well) were seeded in 96-well plates for 24 h and then incubated with shikonin (10-20 μM) at 24 h for further measurements. Shikonin was purchased from National Institute for the Control of Pharmaceutical and Biological Products (Beijing, China) with purity >99%. 
Shikonin was dissolved in DMSO (Sigma) and stored at −20°C. Cell viability analysis The Cell Counting Kit-8 (CCK-8) assay was used to measure the proliferation of cells according to the manufacturer's instructions (Beyotime Institute of Biotechnology, Shanghai, China). The cells (5 × 10 4 cells/well) were seeded in 96-well plates with 100 μl DMEM supplemented with 10% FBS. After 48 h incubation, 10 μl CCK-8 reagent dissolved in 100 μl DMEM was added to each well, and the cells were cultured for a further 1 h in 5% CO 2 (Thermo). The absorbance at 450 nm was measured with a microplate reader (Bio-Rad, U.S.A.). All experiments were performed in quintuplicate on three separate occasions. Apoptosis analysis Flow cytometry analysis was used to measure cell apoptosis. The cells were treated with shikonin for 24 h, and were then collected and washed twice with PBS. After treatment with trypsin, cells were fixed with 70% ice-cold methanol at 4°C for 30 min. Cells were resuspended in binding buffer and stained with 5 μl of AnnexinV-FITC (BD, Mountain View, CA, U.S.A.) and 1 μl of propidium iodide (PI, 50 μg/ml) (BD, Mountain View, CA, U.S.A.). Flow cytometric evaluation was performed within 5 min. Stained cells were analyzed using flow cytometry (BD FACSCalibur, CA, U.S.A.). The measurements were performed independently at least three times with similar results. miRNA microarray analysis miRNA microarray analysis was used to evaluate miRNA expression in cells after treatment with shikonin. Total RNA was isolated from cells using TRIzol reagent (Molecular Research Center, Inc., Cincinnati, OH, U.S.A.) and purified with the RNeasy MinElute Cleanup kit (QIAGEN, Germany) according to the manufacturer's instructions. After measuring the quantity of RNA using a NanoDrop 1000 (Youpu Scientific Instrument Co., Ltd., Shanghai, China), the samples were labeled using the miRCURY Hy3/Hy5 Power labeling kit (Exiqon, Vedbaek, Denmark) and hybridized on a miRCURY LNA Array (version 18.0, Exiqon, Vedbaek, Denmark). After washing, the slides were scanned using an Axon GenePix 4000B microarray scanner (Axon Instruments, Foster City, CA, U.S.A.). Scanned images were then imported into the GenePix Pro 6.0 program (Axon Instruments) for grid alignment and data extraction. Replicated miRNAs were averaged, and miRNAs with intensities ≥50 in all samples were used to calculate a normalization factor. Expression data were normalized by median normalization. After normalization, the miRNAs that were significantly differentially expressed were identified by Volcano Plot filtering. Hierarchical clustering was used to determine the differences in the miRNA expression profiles amongst different genes and samples using MEV software (version 4.6; TIGR Microarray Software Suite 4, Boston, U.S.A.). RNA extraction and real-time quantitative PCR Total RNA was extracted from cultured cells using TRIzol Reagent (Invitrogen) and quantitated with a UV spectrophotometer (SmartSpec Plus). The High Capacity cDNA Synthesis Kit (Applied Biosystems) was used to synthesize cDNA with miRNA-specific primers. The primers for miR-106b and the internal control RNU44 gene were obtained from Ambion. Real-time quantitative PCR (RT-qPCR) was carried out using a TaqMan Gene Expression Assay (Applied Biosystems) on an Applied Biosystems 7500 Real-Time PCR machine. The 2^−ΔΔCt method was used to determine the relative expression of the miRNAs. All reactions were performed in triplicate. Western blot analysis The cells were lysed as described previously [22].
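For completeness, the 2^−ΔΔCt relative-quantification step mentioned in the RT-qPCR paragraph above can be written out as a short sketch; the Ct values in the example are invented for illustration, with miR-106b as the target and RNU44 as the internal reference, as named in the text.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt fold change of the target (e.g., miR-106b) normalised to the
    reference gene (e.g., RNU44) and expressed relative to the control group."""
    d_ct_sample = ct_target - ct_ref            # normalise to the internal control
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical mean Ct values: shikonin-treated vs. untreated (control) cells
fold_change = relative_expression(ct_target=24.1, ct_ref=20.0,
                                  ct_target_ctrl=22.6, ct_ref_ctrl=20.1)
```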
The protein concentration was measured using a BCA protein assay kit (Pierce, Rockford, IL). Total proteins (60 μg) were separated in SDS/polyacrylamide gels (10% gels) (Sigma-Aldrich, St. Louis, MO) and then transferred onto PVDF membranes (BD Pharmingen, San Diego, CA). After blocking with 5% non-fat milk at room temperature for 1 h, the PVDF membranes were incubated with primary antibodies against cleaved-Caspase-3, Bax, Bcl-2, PTEN, p-AKT, AKT, p-mTOR, and mTOR at 4°C overnight. These antibodies were obtained from Santa Cruz Biotechnology (Santa Cruz, CA, U.S.A.). β-actin (Sigma, St. Louis, MO, U.S.A.) was used as an internal control. Horseradish peroxidase (HRP)-conjugated antibodies (Santa Cruz Biotechnology, Santa Cruz, CA) were used as the secondary antibodies. Subsequently, the protein bands were visualized on X-ray film using the ECL detection system (PerkinElmer Life and Analytical Sciences, Boston, MA). The AlphaImager software (Alpha Innotech Corporation, San Leandro, CA) was used to measure the relative intensity of each band on the Western blots. The measurements were performed independently at least three times with similar results. Luciferase reporter assay The potential binding site between PTEN and miR-106b was identified using TargetScan (http://www.targetscan.org). The miR-106b mimics/inhibitor and the corresponding negative control (NC) were synthesized by RiboBio (Guangzhou, China). The wild-type (wt) PTEN 3′-UTR and mutant (mut) PTEN 3′-UTR containing the putative binding site of miR-106b were established ( Figure 5A) and cloned into the firefly luciferase-expressing vector pMIR-REPORT (Ambion, U.S.A.). Site-directed mutagenesis of the PTEN 3′-UTR at the putative miR-106b binding site was performed with a QuikChange kit (Qiagen). For the luciferase assay, Ishikawa cells at a density of 2 × 10 5 per well were seeded into 24-well plates and co-transfected with 0.8 μg of pMIR-PTEN-3′-UTR or pMIR-PTEN-mut-3′-UTR and 50 nM miR-106b mimic/inhibitor or the corresponding mimic NC using Lipofectamine 2000 reagent (Invitrogen). The relative firefly luciferase activity normalized to Renilla luciferase was measured 48 h after transfection using the Dual-Light luminescent reporter gene assay (Applied Biosystems). Statistical analysis All statistical analyses were performed using SPSS 14.0 software (Chicago, IL). Each experiment was repeated at least three times. Numerical data are presented as the mean ± S.E.M. Comparisons were performed using Student's t test (between two groups) or one-way ANOVA with Tukey's multiple comparison test (between multiple groups). A P-value of <0.05 was considered significant and <0.01 very significant. Shikonin inhibits the growth of human EEC cells To explore the antiproliferative effect of shikonin on EEC cells, the human EEC cell lines Ishikawa, HEC-1A, KLE, and RL95-2 were treated with various concentrations of shikonin (0, 1, 2, 4, 8, 10, and 20 μM) for 24 h and the cell viability was evaluated by CCK-8 assay. As shown in Figure 1A, shikonin reduced cell viability in a dose-dependent manner; flow cytometric analysis further showed that shikonin significantly increased apoptosis in Ishikawa and HEC-1A cells (P<0.01; Figure 1E). To further explore the molecular mechanisms of shikonin-induced apoptosis, we performed Western blot analysis to measure the expression levels of the apoptosis-related proteins in Ishikawa and HEC-1A cells treated with 5 μM of shikonin for 24 h. We found that shikonin markedly up-regulated the pro-apoptotic proteins (cleaved-Caspase-3 and Bax) and down-regulated the anti-apoptotic protein (Bcl-2) compared with the blank group (P<0.01; Figure 1F,G).
Collectively, these results demonstrate for the first time that shikonin inhibits proliferation and induces apoptosis in human EEC cells. Shikonin induces the aberrant expression of miRNAs in human EEC cells Recently, many Chinese medicinal herbs have been demonstrated to exert antitumor effects in various cancers through modulating miRNA expression profiles [17,18]. To investigate whether shikonin suppresses EEC cell growth via regulating miRNA expression, we performed microarray analysis to determine miRNA levels in EEC cells after treatment with shikonin (5 μM) for 24 h. As shown in Figure 2A, compared with the blank group, shikonin treatment resulted in aberrant expression of miRNAs, and miR-106b was one of the most significantly down-regulated miRNAs in EEC cells. miR-106b has been reported to act as an oncogene in various cancers including breast cancer, osteosarcoma, and hepatocellular carcinoma [23][24][25]. To explore the role of miR-106b in the suppressive effects of shikonin on EEC cells, Ishikawa cells were treated with various concentrations of shikonin (0, 1, 2, 5, and 10 μM) for 24 h and miR-106b levels were quantitated by qRT-PCR. We observed that shikonin reduced miR-106b expression in a dose-dependent manner in Ishikawa cells (Figure 2B). These data suggested that shikonin may exert anticancer effects via suppressing the expression of oncogenic miR-106b in EEC cells. Knockdown of miR-106b inhibits EEC cell growth and promotes apoptosis miR-106b has been demonstrated to function as an oncogene in many cancers [23][24][25], but its role in EEC has yet to be elucidated. To investigate the role of miR-106b in EEC, we performed qRT-PCR to determine miR-106b expression in EEC tissues and found that miR-106b is dramatically up-regulated in cancer tissues compared with normal tissues (P<0.01; Figure 2C). Moreover, our results further verified that miR-106b is also significantly overexpressed in EEC cell lines (Ishikawa, HEC-1A, KLE, and RL95-2) compared with ESC cells (P<0.01; Figure 2D). These results indicated that miR-106b may play an oncogenic role in the development of EEC. To further explore the effects of miR-106b on EEC cells, Ishikawa and HEC-1A cells were transfected with miR-106b inhibitor or inhibitor NC, and then cell viability and apoptotic cells were measured by CCK-8 assay and flow cytometric analysis, respectively. As shown in Figure 3A-C, compared with inhibitor NC, knockdown of miR-106b markedly reduces cell viability and increases the proportion of apoptotic cells (P<0.01). In addition, our results demonstrated that knockdown of miR-106b up-regulated the pro-apoptotic proteins (cleaved Caspase-3 and Bax) and down-regulated the anti-apoptotic protein (Bcl-2) compared with the inhibitor NC group (Figure 3D,E). Taken together, these findings suggest that miR-106b is overexpressed in EEC tissues and cells, and functions as an oncogene in EEC. Overexpression of miR-106b attenuates the suppressive effects of shikonin Based on the above results, our data demonstrated that miR-106b was down-regulated in Ishikawa cells after treatment with shikonin. Moreover, miR-106b was identified to act as an oncogene in EEC. Therefore, we speculated that shikonin may exert its antiproliferative effects on EEC cells through modulating miR-106b expression. The Ishikawa or HEC-1A cells were then transfected with miR-106b mimics or mimics NC, and treated with shikonin (5 μM) for 24, 48, and 72 h.
Subsequently, cellular proliferation and apoptotic cells were measured by CCK-8 assay and flow cytometric analysis, respectively. We found that shikonin treatment dramatically inhibits cell proliferation and promotes apoptosis in the shikonin + mimic NC group compared with the blank group, but these suppressive effects of shikonin on EEC cells were abolished by overexpression of miR-106b in the shikonin + miR-106b mimic group (P<0.01; Figure 4A-D). These results indicated that shikonin may exert suppressive effects on EEC cells via regulating miR-106b expression. However, the precise molecular mechanisms by which shikonin represses EEC cell growth need further research. miR-106b inhibits PTEN expression by directly targeting its 3′-UTR Previous studies uncovered that miR-106b can post-transcriptionally inhibit PTEN expression in different cancer cells, such as pituitary adenoma, breast cancer, and colorectal cancer [23,[26][27][28], but whether PTEN is a direct target of miR-106b in human EEC cells remained to be elucidated. We therefore predicted the target genes of miR-106b using TargetScan and identified PTEN as a potential target of miR-106b (Figure 5A). Subsequently, we constructed luciferase-reporter plasmids that contain the wt or mut 3′-UTR segments of PTEN (Figure 5A). The wt or mut reporter plasmid was co-transfected into Ishikawa cells along with miR-106b mimics/inhibitor or NC, and the luciferase activity was measured. We observed that the miR-106b mimic dramatically suppressed the luciferase activity compared with the mimic NC, whereas the miR-106b inhibitor significantly increased the luciferase activity compared with the inhibitor NC (P<0.01; Figure 5D). Additionally, miR-106b did not inhibit the luciferase activity of the reporter vector containing the 3′-UTR of PTEN with mutations in the miR-106b-binding site (Figure 5D). To further validate that the PTEN level is regulated by miR-106b, Ishikawa or HEC-1A cells were transfected with miR-106b mimic/inhibitor or NC and Western blot was used to detect the PTEN level. As shown in Figure 5B,C, up-regulation of miR-106b reduced the PTEN protein level compared with NC; conversely, knockdown of miR-106b increased PTEN protein expression. These data indicated that miR-106b suppresses PTEN expression by directly targeting its 3′-UTR in human EEC cells. Overexpression of PTEN inhibits human EEC cell growth To investigate the role of PTEN in human EEC cells, Ishikawa or HEC-1A cells were transfected with pc-DNA-PTEN or pc-DNA-vector and then cell viability and apoptosis were measured by CCK-8 assay and flow cytometric analysis, respectively. As shown in Figure 6A, PTEN protein expression was markedly up-regulated in both Ishikawa and HEC-1A cells transfected with pc-DNA-PTEN compared with pc-DNA-vector (P<0.01). Moreover, our results showed that overexpression of PTEN significantly suppressed proliferation and promoted apoptosis in both Ishikawa and HEC-1A cells compared with pc-DNA-vector (P<0.01; Figure 6B,C). These results indicated that PTEN may act as a tumor suppressor gene in human EEC cells. Shikonin suppresses the PTEN/AKT/mTOR signaling pathway via down-regulation of miR-106b AKT/mTOR signaling, which is negatively modulated by PTEN, is a key pathway in cell survival, cellular proliferation, and tumor growth [29][30][31]. Recent studies demonstrated that miR-106b promotes cell proliferation, invasion, and migration in a variety of cancers via modulating the PTEN/PI3K/AKT signaling pathway [23,26,27].
Inspired by these studies, we hypothesized that shikonin-induced miR-106b down-regulation modulates the PTEN/AKT/mTOR signaling pathway in EEC cells. To verify this hypothesis, after transfection with or without miR-106b mimics, Ishikawa or HEC-1A cells were treated with shikonin (5 μM) for 24 h and PTEN, AKT, and mTOR were assessed using Western blot. Our results showed that shikonin treatment markedly increased PTEN expression and decreased the p-AKT and p-mTOR levels compared with the blank group in both Ishikawa and HEC-1A cells, but this shikonin-blocked PTEN/AKT/mTOR pathway was reactivated by overexpression of miR-106b (P<0.01; Figure 7A-D). These data illustrate that shikonin is able to repress the PTEN/AKT/mTOR signaling pathway in human EEC cells, but that the pathway can be reactivated by miR-106b overexpression. Taken together, these results suggested that shikonin blocks the PTEN/AKT/mTOR signaling pathway via suppressing miR-106b expression in human EEC cells. Discussion Emerging evidence has revealed that shikonin exerts anticancer effects in various cancers [7][8][9][10][11]. However, whether shikonin exhibits such anticancer functions in the context of the treatment of human EEC remains unclear. In the present study, we investigated the suppressive effects of shikonin on EEC cells and explored the underlying molecular mechanisms. We observed that shikonin suppresses proliferation of EEC cells in a dose-dependent manner and induces apoptosis via regulation of apoptosis-related proteins. Microarray analyses uncovered that shikonin induces dysregulation of a large set of miRNAs, and miR-106b was one of the most significantly down-regulated miRNAs. Moreover, we confirmed that miR-106b is up-regulated in EEC tissues and cells, and the suppressive effects of shikonin were abrogated by overexpression of miR-106b in EEC cells. More importantly, our results indicated that shikonin represses proliferation and induces apoptosis in EEC cells via modulating the miR-106b/PTEN/AKT/mTOR axis. Naturally derived products with anticancer effects have been widely utilized as the source of many medically beneficial drugs, such as curcumin, camptothecin, luteolin, honokiol, isoflavone, matrine, xanthoangelol, and shikonin [7,32,33]. Xanthoangelol, isolated from Angelica keiskei roots, suppresses tumor growth, metastasis to the liver and lung, and tumor-associated macrophage expression in tumors [33]. Icaritin, a traditional Chinese herbal medicine, induces sustained ERK1/2 activation, represses human EC cell growth, and promotes apoptosis [34]. With regard to shikonin, it has been identified as a potential anticancer agent against various cancers, including human lung adenocarcinoma, prostate cancer, and breast cancer [7][8][9][10]. However, the anticancer effects of shikonin on EEC have rarely been reported. In the present study, our results showed that shikonin inhibits EEC cell growth in a dose-dependent manner and induces apoptosis in EEC cells via activating the intracellular apoptotic signaling pathway. These data indicated that shikonin exerts antiproliferative activity in EEC cells and could be developed as a potential therapeutic agent against human EEC. Growing evidence has demonstrated that many miRNAs play key roles in a variety of cancers, while the anticancer effects of traditional Chinese herbal medicines that operate through targeting miRNAs have also been widely reported. Zeng et al.
[35] revealed that camptothecin induces apoptosis of human cervical cancer cells and human prostate cancer cells via miR-125b-mediated mitochondrial pathways. Liu et al. confirmed that berberine can target the miR-21/PDCD4 axis and improve cisplatin sensitivity in ovarian cancer cells [17]. In addition, shikonin has been demonstrated to act as a therapeutic agent for human glioblastoma via regulating miRNA expression profiles [21]. In the present study, we performed microarray analysis to identify miRNA expression in EEC cells treated with shikonin, and found that shikonin alters a large set of miRNAs, with miR-106b being one of the most significantly down-regulated. Previous studies have demonstrated that miR-106b functions as an oncogene in different cancers, such as breast cancer, osteosarcoma, and hepatocellular carcinoma [23][24][25]. We were therefore prompted to investigate the role of miR-106b in EEC cells. Our results demonstrate for the first time that miR-106b functions as an oncogene in EEC, and that knockdown of miR-106b suppresses cell proliferation and induces apoptosis via modulating the intracellular apoptotic signaling pathway. Against this background, we hypothesized that shikonin may exert its anticancer effect in EEC through suppressing this oncogenic miRNA. Our results confirmed that the antiproliferative and pro-apoptotic effects of shikonin on EEC cells were abolished by miR-106b overexpression. These data suggested that shikonin may exert its suppressive effects on EEC cells via down-regulating miR-106b. However, the precise molecular mechanism requires further research to be fully understood. AKT/mTOR signaling plays a central role in cell survival, cellular proliferation, and tumor growth [29][30][31], and is negatively regulated by PTEN [36,37]. Previous studies identified that miR-106b inhibits PTEN expression through directly targeting its 3′-UTR in many cancer cells [23,26,27]. In the present study, our results also verified that PTEN is a target of miR-106b in EEC cells. In addition, overexpression of PTEN inhibits proliferation and induces apoptosis in both Ishikawa and HEC-1A cells. It is well reported that miR-106b promotes cell proliferation, invasion, and migration via regulating the PTEN/PI3K/AKT signaling pathway in various cancers [23,26,27]. We therefore speculated that shikonin may exert suppressive effects on EEC cells via modulating the PTEN/AKT/mTOR signaling pathway by suppressing miR-106b expression. As expected, shikonin treatment inhibited the PTEN/AKT/mTOR signaling pathway in human EEC cells, but the pathway was reactivated by miR-106b up-regulation. Collectively, these data indicated that shikonin blocks the PTEN/AKT/mTOR signaling pathway via inhibiting miR-106b expression in human EEC cells (Figure 8). In conclusion, our results demonstrated that shikonin inhibits cell growth in EEC cells via regulating the intracellular apoptotic signaling pathway. Additionally, we demonstrated for the first time that miR-106b acts as an oncogene by targeting the tumor suppressor gene PTEN in EEC cells. More importantly, our results uncovered that shikonin exerts suppressive effects on EEC cells via blocking the miR-106b/PTEN/AKT/mTOR signaling pathway, suggesting that shikonin could act as a promising anticancer agent for EEC treatment.
5,588.8
2018-02-15T00:00:00.000
[ "Biology", "Chemistry" ]
Study of the effect of Kaolin in the mortar of cement matrices by confinement of ion exchange resins Radioactive waste arising as a result of nuclear activities should be safely managed from its generation to final disposal in an appropriately conditioned form to reduce the risk of radiation exposure of technical personnel and of the public and to limit contamination of the environment. The immobilization of low- and intermediate-level radioactive wastes in cementitious matrices is the most commonly used technique to produce an inexpensive waste matrix that complies with regulatory requirements in order to protect humans and the environment against the hazards caused by ionizing radiation. Cement-based materials are used in radioactive waste management to produce stable waste forms. This matrix constitutes the first engineered barrier in disposal facilities. In this work, kaolin is used to enhance the mechanical performance of the confinement matrix for ion exchange resins by gradually replacing the sand in the mortar with kaolin clay. The kaolin clay sample was a special-purity product sourced from abroad. The maximum quantity of resins that can be incorporated into the mortar formulation without the packages losing their strength is 13.915%, which, with kaolin, yields a compressive strength of 6.7686 MPa. INTRODUCTION The use of nuclear techniques in various fields such as scientific research, industry and health generates radioactive waste. This waste is composed of different types. Like any developing country, Morocco is responsible for the management of radioactive waste through the National Center for Energy, Science and Nuclear Techniques (CNESTEN). CNESTEN has set up a Radioactive Waste Management Unit (RWMU). This management must be carried out within a rigorous framework in order to guarantee safe solutions for all radioactive waste produced, without losing sight of the permanent requirement for the protection of present and future generations and of the environment against the risks posed by this waste [1][2]. The conditioning process for low- and intermediate-level radioactive waste at the RWMU is immobilization in a cement-based matrix, which constitutes the first barrier around this waste and is the most common technique used to produce radioactive waste packages that comply with regulatory requirements [3][4][5]. As a result of extensive research activities focused on the development of materials, additives are currently among the most recent developments in cement production, as their use improves the mechanical properties of cementitious materials (mortar and concrete). Indeed, the incorporation of kaolinite into the mortar produces materials similar to ordinary cement mortar but with better characteristics [6][7][8]. The aim of this study is to optimize the formulation of a cement matrix by gradually substituting the proportion of sand with kaolinite as a filler. We then follow the impact of this substitution on the compressive strength of the test specimens while keeping the water/cement (E/C) ratio constant. Cement The Portland cement used, CPJ 35, is a cement whose technical characteristics are in conformity with the Moroccan standard NM 10.1.004. Sand The sand used in the laboratory complies with the Moroccan standard NM 10.1.020. We also use this sand for industrial applications for the confinement of radioactive waste.
Water We used drinking tap water to make the mortars. Ion exchange resins The ion exchange resins [9,10] are of the MDP-15 type and the PUROLITE NRW 37 type, in the form of clear spherical beads, and are used in the purification of the reactor water circuits as well as the spent fuel storage pools of the TRIGA MARK II reactor; these cation exchange resins are strongly acidic and of the gel type. Ion exchange is a process by which the ions contained in a solution are removed and replaced by an equivalent number of other ions of the same electrical charge. The physico-chemical properties of the resin are:
- Hydrocarbon skeleton: polystyrene cross-linked with DVB, gel type
- Functional group: R-SO3−
- Physical form: dark amber, translucent beads
- Ionic form at delivery: H+
- Moisture content: 51-55% (H+ form)
- Maximum swelling: Na+ → H+: 5%
- Temperature limit: 120 °C
- pH range: 0 to 14
- Apparent density: approximately 800 g/l
- Actual density: 1.20 (H+ form)
- Total exchange capacity: min. 1.7 eq/l (H+ form)
Experimental section The different pastes are prepared in a standardized mixer (EN 196-1) [11] following the procedure indicated in standard EN 196-3 [12] relating to the normal consistency of pure paste. Study of the influence of the substitution of sand by kaolinite on the mortar For our experimental approach, four types of mortars were prepared with kaolinite substitution percentages ranging from 0 to 6%. The tests were carried out using 10 × 5.5 cm cylindrical specimens, the walls of which were previously coated with a demoulding oil (Sika Iron M type). Demoulding takes place after the following curing periods: 7 days, 14 days, 21 days and 28 days. The various compositions of the mortars are grouped in Table 1. 3 RESULTS AND DISCUSSION 3.1 Result of particle size analysis of kaolinite Figure 1 presents the granulometric analysis of kaolinite, which was carried out with a series of sieves (88 μm, 74 μm, 62 μm, 37 μm). Quantitative chemical analysis of the used materials The chemical composition of the samples was determined with a portable energy-dispersive X-ray fluorescence system (Bruker S1 Turbo SD). The results obtained are expressed in mass percent. Figure 2 shows the X-ray diffractogram of kaolinite. The evolution of the strength as a function of time shows that, for a curing time of seven days, the strength corresponding to all the percentages of kaolinite added exceeds that of the control mortar. It also shows that after 28 days the strength corresponding to 2% kaolinite does not exceed that of the control mortar, while for 4% and 6% the strengths increase substantially, owing to the kinetics of the hydration reaction of the constituents of the blended cement, which becomes more and more active. For a curing time of 28 days, the best strength corresponds to the incorporation of 4% of kaolinite in the mortar. This improvement can be explained by the double role of kaolinite [13][14][15]: it fills the tiny voids in the mortar thanks to its very high fineness, and it has a pozzolanic effect, whereby the silica combines with the portlandite resulting from the hydration of the reactive phases of the clinker to form CSH [16][17].
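As an illustration of the substitution scheme at constant E/C ratio, the sketch below computes nominal batch masses when 0-6% of the sand is replaced by kaolinite. The base proportions (450 g cement, 1350 g sand, 225 g water, i.e. E/C = 0.5) are those of the standard EN 196-1 mortar and are only an assumption made for this example; the actual quantities used in the study are those of its Table 1, which is not reproduced in the text.

```python
# Sketch of mortar batches with partial replacement of sand by kaolinite
# at constant water/cement (E/C) ratio. Base masses assume the standard
# EN 196-1 mortar (1 : 3 : 0.5 cement : sand : water); the study's own
# quantities are given in its Table 1.
CEMENT_G, SAND_G, WATER_G = 450.0, 1350.0, 225.0

def batch(kaolinite_pct):
    """Return component masses (g) when kaolinite_pct % of the sand mass
    is replaced by kaolinite; cement and water are left unchanged."""
    kaolinite = SAND_G * kaolinite_pct / 100.0
    return {
        "cement": CEMENT_G,
        "sand": SAND_G - kaolinite,
        "kaolinite": kaolinite,
        "water": WATER_G,
        "E/C": WATER_G / CEMENT_G,
    }

for pct in (0, 2, 4, 6):
    print(pct, "%", batch(pct))
```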
Confinement of resins formulation Having found that the optimal formulation corresponds to substituting 4% of the sand with kaolinite, we incorporate the radioactive waste into the matrix in percentages ranging from 3.479% to 13.915% in order to determine the maximum amount of this waste that can be confined in the matrix (Table 3). We observe a decrease in the compressive strength with the increase of the quantity of ion exchange resins incorporated in the mortar. The maximum quantity of resins which can be incorporated in this formulation without the packages losing their strength is 13.915%. Indeed, the compressive strength for this formulation is 6.7686 MPa, and it appears the most suitable for the conditioning of the resins. Conclusions The compressive strengths increase with the substitution of sand by kaolinite, reflecting the improvement of compactness by three effects which act simultaneously and in a complementary manner: physical, physico-chemical and chemical (pozzolanic) effects. The main result is that the maximum amount of ion exchange resin that can be incorporated into the cement matrix is 13.915%. This percentage corresponds to a compressive strength of 6.7686 MPa. The package retains good strength.
1,776.6
2018-01-01T00:00:00.000
[ "Materials Science", "Environmental Science", "Engineering" ]
ZH production in gluon fusion at NLO in QCD We present fully differential next-to-leading order results for Higgs production in association with a $Z$ boson in gluon fusion. Our two-loop virtual contributions are evaluated numerically using sector decomposition, including full top-quark mass effects, and supplemented at high $p_T$ by an analytic high-energy expansion to order ($m_Z^4, m_H^4, m_t^{32}$). Using the expanded results we also present a study of the top-quark mass scheme uncertainty at large $p_T$. Introduction Higgs boson production in association with a Z boson is a particularly interesting process as it probes both the Higgs boson coupling to Z bosons as well as to fermions. Despite its relatively small cross section, the process pp → ZH in combination with the decay channel H → bb was the "discovery channel" for Higgs boson couplings to bottom quarks [1,2]: while inclusive H → bb suffers from large backgrounds, the Z boson offers straightforward triggering. Recently, associated Higgs production was also used to place limits on the Higgs-charm coupling [3,4]. The loop-induced gluon channel formally enters at next-to-next-to-leading order (NNLO) with respect to the pp → ZH process. However, due to the dominance of the gluon parton distribution function (PDF) at the LHC, this channel is sizeable; it contributes about 6% to the total NNLO cross section and becomes significant in the boosted Higgs regime for p_T^H ≳ 150 GeV [5][6][7][8][9]. For more details about calculations of higher-order corrections to the pp → ZH process we refer to Ref. [10]; here we focus on the gluon channel. Experimental measurements of ZH production [11][12][13][14][15][16][17] still suffer from large statistical uncertainties; however, the statistics will improve considerably in LHC Run 3 and at the High-Luminosity LHC. On the theoretical side, the scale uncertainties for this process are large. They are dominated by the gg → ZH channel which, being loop induced, enters at its leading order (LO) into the simulation programs [18][19][20][21][22][23][24][25] used by the experimental collaborations. Therefore next-to-leading order (NLO) QCD corrections to the gg → ZH process, calculated at LO in Ref. [26], are important, particularly in view of providing constraints on anomalous couplings, because of the sensitivity of this process to both the Higgs couplings to fermions and to vector bosons. Furthermore, it provides a way to put constraints on the sign of the top quark Yukawa coupling as well as on its CP structure [7,20,21,27], because the cross section has a component where this coupling enters linearly. Recently, it was shown [28] that this process also has the potential to probe anomalous Zbb couplings and thus to shed light on the long-standing discrepancy between the forward-backward asymmetry A_FB^b measured at LEP and the Standard Model prediction. The NLO QCD two-loop amplitude for gg → ZH, including full top-quark mass effects, has been calculated numerically in Ref. [29]. The two-loop amplitude has also been calculated based on high-energy (also sometimes called "small-mass") and large-m_t expansions, supplemented with Padé approximants to improve the description beyond the high-energy radius of convergence [30]. Results for the virtual corrections based on a transverse-momentum expansion are also available [31]. Recently, the virtual corrections to both gg → HH and gg → ZH, based on a combination of transverse-momentum expansion and high-energy expansion, have been presented in Ref.
[32]. The total cross section and invariant-mass distribution for gg → ZH at NLO QCD have been calculated in Ref. [33], based on a small-(m_Z, m_H) expansion [34] of the two-loop amplitude, retaining the full top-quark mass dependence. An expansion of the virtual and real NLO contributions to gg → ZH for large m_t has been computed in [35]. In this work we present full NLO QCD results for the gg → ZH process where the two-loop amplitude is based on a combination of the numerical results of Ref. [29] and an extended version of the results from the high-energy expansion of Ref. [30], thereby providing reliable and accurate results in all kinematic regions, in particular in the boosted Higgs regime which is particularly sensitive to new physics effects. A similar combination has already been carried out successfully for Higgs boson pair production [36]. In addition, we consider two renormalisation schemes for the top quark mass, the on-shell (OS) scheme and the modified minimal subtraction (MS-bar) scheme, and investigate how the scheme dependence impacts the phenomenological results. This paper is organised as follows. In Section 2 we describe the calculation of the NLO corrections and the combination procedure. Section 3 contains results for the total cross section at different center-of-mass energies as well as ZH invariant-mass and transverse-momentum distributions. The second part of Section 3 is dedicated to the discussion of the top-quark mass scheme dependence, before we conclude in Section 4. Figure 1: Representative Feynman diagrams for the virtual correction to the ggZH amplitude. We neglect the masses of all quarks except the top quark; therefore, due to the Yukawa couplings, only the top quark gives a non-zero contribution for the box diagrams. All quark flavours contribute for the triangle diagrams, however the contribution from each massless generation is zero due to a cancellation between the up-type and down-type quarks. We calculate in the Feynman gauge and so also include the set of diagrams where the Z-boson propagators are replaced by Goldstone bosons. Setup of the calculation In this section we summarise the computation of the individual contributions to the cross section at NLO and describe the combination of the virtual corrections computed in [30] and [29]. Virtual two-loop contributions The calculation of the renormalised and infrared (IR) subtracted two-loop amplitude, called V, is described in detail in Refs. [30] and [29]. For completeness we repeat in the following the most important steps. In Ref. [30] the amplitude of the process gg → ZH has been written as a linear combination of six form factors. At one-loop order it is straightforward to obtain exact results for the form factors. At two loops, expansions for large and small top-quark masses were performed. In this work only the high-energy expansion, for which m_H^2, m_Z^2 ≪ m_t^2 ≪ s, |t|, is of relevance. In Ref. [30] an expansion up to order (m_Z^2, m_H^2, m_t^32) was computed. In this work we extend that result up to quartic order (m_Z^4, m_H^4, m_t^32) (including also the "mixed" quartic term m_Z^2 m_H^2) and show that including these quartic terms improves the agreement with numerical results. LiteRed [37] is used to expand the integrals appearing in the amplitude, followed by an integration-by-parts (IBP) reduction to master integrals using FIRE [38]. The reduction relations are substituted into the amplitude and simplified using FORM [39]. In Ref.
[30] it was shown that Padé approximants for the expansion in m_t significantly improve the description beyond the radius of convergence of the naive expansion and reliable results can be obtained for p_T ≥ 150 GeV (see Fig. 2); we apply the same procedure here. Note that in this approach one can construct the high-energy expansion of V as a function of m_H, m_Z and m_t, which allows for a variation of these parameters. Furthermore, it is straightforward to perform a scheme change and convert the top quark mass from the OS to the MS-bar scheme. The subsequent construction of the new Padé approximants requires only negligible CPU time. In Ref. [29], the two-loop amplitudes have been calculated via a projection onto a basis of linear polarisation states as suggested in Ref. [40]. For the IBP reduction of the resulting form factors to master integrals we used Kira [41][42][43] in combination with the rational-function interpolation library FireFly [44,45]. For the numerical integration, using a quasi-finite basis [46] of master integrals is beneficial. This basis is related to the default basis by dimension shifts and higher powers of propagators (dots). For the derivation of the set of dimensional recurrence relations which connects integrals in D + 2n and D dimensions, we used LiteRed [37] and Reduze [47]. To evaluate the master integrals, we applied sector decomposition as implemented in the program pySecDec [48,49], using a quasi-Monte Carlo algorithm [49,50] for the numerical integration. In particular, we made use of one of the new features of pySecDec [51] to integrate a weighted sum of integrals such that the number of sampling points used for each integral is set dynamically, according to its contribution to the total uncertainty of the amplitude. We renormalise the strong coupling in the MS-bar scheme with 5 active quark flavours. The top quark mass is renormalised in either the OS scheme or the MS-bar scheme, as indicated. Expanding each renormalised form factor A_{i=1,...,n} in powers of the strong coupling, a_s = α_s/(4π), we obtain IR-finite amplitudes using the Catani-Seymour subtraction operator [52]. The squared 2 → 2 amplitude, in the helicity basis, can then be written in terms of the squared Born amplitude (B) and the Born-virtual interference (V); the sum/average runs over all helicities and averages over the incoming spin and colour indices. Combination of the two approaches In Fig. 2 we show the difference between the two calculations of the two-loop virtual amplitudes, relative to the LO amplitude. This difference is independent of the IR subtraction scheme used for removing the IR singularities. The plot shows that the Padé-improved high-energy expansion converges on the full result for p_T ≳ 150 GeV. However, using only the quadratic terms in m_Z^2 and m_H^2 in the expansion, a difference at the two per-mille level remains even at large p_T. After including the quartic terms in the expansion, most of the points with p_T > 200 GeV agree within the numerical uncertainty at the 2·10^−5 level (with a few outliers with large numerical uncertainty reaching up to the 2·10^−3 level). At low p_T the differences increase, reaching up to 0.15% at 200 GeV and up to 2.8% at 150 GeV. Since the two results are consistent for sufficiently large p_T, in the following we use the results based on the high-energy expansion for contributions with p_T > 150 GeV and use the numerical evaluation of the amplitude only for p_T < 150 GeV.
The two contributions are then combined at the histogram level. Our results are therefore valid in all phase-space regions, but avoid the costly numerical evaluation in large parts of the phase space. Phase-space sampling The integration of V over the phase space is achieved by a reweighting procedure based on the Born events. Specifically, we use the events of a LO calculation and apply the accept-reject method to obtain a list of sampling points for the virtual contribution, distributed according to a probability density function ∼ L_gg,0 · B_0 · dPS · f(p_T, m_ZH), where L_gg,0 is the gluon-gluon luminosity as defined in Ref. [53] and B_0 is the LO matrix element as used in the LO calculation. The factor dPS is the Jacobian of the phase-space integration and the function f(p_T, m_ZH) can be used to enhance the number of sampling points in specific regions. Choosing, e.g., f = f(m_ZH) ∝ (dσ_B/dm_ZH)^−1 leads to sampling points which are uniformly distributed in m_ZH, thus enhancing the number of events in the tail of the distribution, whereas f(p_T, m_ZH) = 1 results in sampling points distributed according to the fully differential LO cross section. The virtual contribution to the cross section is then obtained by reweighting these sampling points, where L_gg and V are the new gluon-gluon luminosity and virtual matrix elements, whereas L_gg,0, B_0 and σ_B,0 are obtained from the original LO calculation, with σ_B,0 the total LO cross section. In the following results we use three different sets of sampling points, optimized for the total cross section, as well as the m_ZH and p_T distributions. While the exact form of f(p_T, m_ZH) is not important, a good choice can be obtained with a fit using a Padé ansatz. In the region p_T, m_ZH ≥ 2 TeV, where a uniform sampling of points is not needed, we keep f constant. In total, we use 1294 numerically evaluated points distributed according to the differential LO cross section. For the m_ZH and p_T distributions, we combine these results with sets containing an additional 6000 points, optimised for the corresponding distribution, evaluated using the Padé-improved high-energy expansion. Computation of the real radiation contributions The real radiation matrix elements are calculated using the one-loop amplitude generator GoSam [54,55] together with an in-house C++ code, similar to the one used in Refs. [53,56], where the IR singularities are subtracted in the Catani-Seymour scheme [52], supplemented by a dipole phase-space cut parameter α_cut [57]. We have checked that our implementation of the dipoles reproduces the matrix element in the soft and collinear limits and that our results are independent of α_cut for 0.2 ≤ α_cut ≤ 1. To check the numerical precision of our real matrix elements we use several rotation tests (i.e., we perform azimuthal rotations about the beam axis and recompute the phase-space point). We first compute the matrix element at a given phase-space point and at a rotated phase-space point in double precision. If the results do not agree to 10 digits, we compute the phase-space point in quadruple precision and check if it agrees with the double-precision evaluations to 7 digits. If the results do not agree, we compute a rotated point in quadruple precision and check that the quadruple-precision results agree to 10 digits; a vanishingly small fraction of points failed this test and were discarded. Using the above procedure, we did not find it necessary to apply a technical cut to our 2 → 3 phase space. In Fig. 3 we show examples of the Feynman diagrams included in our real radiation.
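The accept-reject selection described above can be summarised in the following schematic Python sketch. It is only meant to illustrate the logic; the event record, the density function standing for L_gg,0 · B_0 · dPS, and the enhancement function f(p_T, m_ZH) are placeholders for the quantities defined in the text, not the implementation used in the paper.

```python
import random

def select_virtual_points(lo_events, density0, f, n_points, density_max):
    """Accept-reject selection of sampling points for the virtual contribution.
    lo_events: Born-level events from the LO run.
    density0(ev): placeholder for L_gg,0 * B_0 * dPS at that event.
    f(ev): placeholder for the enhancement function f(p_T, m_ZH).
    density_max: upper bound on density0 * f used by the accept-reject step."""
    points = []
    while len(points) < n_points:
        ev = random.choice(lo_events)
        if random.random() * density_max <= density0(ev) * f(ev):
            points.append(ev)
    return points

# Each selected point is subsequently evaluated with the two-loop virtual
# amplitude (numerically below p_T = 150 GeV, via the Pade-improved
# high-energy expansion above) and reweighted with the new luminosity,
# as described in the text.
```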
We include all diagrams appearing in the ggZHg and qqZHg amplitudes (as well as their crossings) which contain a closed fermion loop and have either a Z boson or a Goldstone boson coupled to that loop. We consider n_f = 5 massless quarks and a massive top quark running in the fermion loops. Due to the presence of the Yukawa coupling in the upper diagrams, the massless quarks give a non-zero contribution only for the lower row of diagrams. For the diagrams in the second and third column of the lower row, the contribution of each massless generation is zero due to a cancellation between the up-type and down-type quarks. We calculate in the Feynman gauge and so also include the set of diagrams in which the Z-boson propagators are replaced by Goldstone bosons. In Fig. 4 we show examples of Feynman diagrams which are not included in this work; these diagrams do not have a Z boson or Goldstone boson coupled to the closed fermion loop and so belong to the Drell-Yan class of diagrams, which we do not consider. The class of diagrams with the Higgs boson coupled to a closed quark loop, represented by the left figure, is UV/IR finite and separately gauge invariant; it was considered in detail in Ref. [58]. Total and differential cross sections Our results are based on a center-of-mass energy of √s = 14 TeV unless stated otherwise. We use the NNPDF31_nlo_pdfas parton distribution functions [59]. Results for the total cross section with full top-quark mass dependence at three different center-of-mass energies include scale uncertainties resulting from the 7-point scale variation, µ_R,F = ξ_R,F · m_ZH with ξ_R,F = (2, 2), (2, 1), (1, 2), (1, 1), (1, 1/2), (1/2, 1), (1/2, 1/2). Differential results for the invariant mass m_ZH = sqrt((p_Z + p_H)^2) of the Z-Higgs system are shown in Fig. 5 for the central scale choices m_ZH and H_T, where H_T is the sum of the transverse masses of the Z and Higgs bosons plus the transverse momenta |p_T,k| of all final-state massless partons k. For the fully inclusive case (left), the K-factor is relatively flat with a value of about two, except at very low invariant masses where threshold corrections are significant. The kink in the distribution at m_ZH ≈ 350 GeV is related to the top-quark pair-production threshold. Only a small reduction of the scale uncertainty is observed going from LO to NLO. Note that the quark-gluon channel for this process first opens up at the NLO level. The cuts p_T,H ≥ 140 GeV, p_T,Z ≥ 150 GeV (Fig. 5, right) somewhat decrease the K-factor. The Z-boson transverse momentum distributions at LO and NLO are shown in Fig. 6. In the left plot we observe a K-factor which rises with increasing p_T,Z, reaching a value of almost 5 at p_T,Z = 1 TeV; it is only slightly tamed by the cuts on p_T,H and p_T,Z (right plot). Fig. 7 shows the Higgs-boson transverse momentum distributions with and without p_T cuts. In the inclusive case (left) an extreme rise of the K-factor with increasing p_T,H, up to values of about 20 towards p_T,H = 1 TeV, is observed. The cuts p_T,H ≥ 140 GeV, p_T,Z ≥ 150 GeV decrease this K-factor by a factor of about 3 at large p_T,H values. The cuts have such a large effect on the K-factor of this distribution because they remove configurations with a hard jet recoiling against a relatively hard Higgs boson while the Z boson is soft; this configuration dominates the tail of the distribution but is not present at LO. This behaviour was already reported in Ref. [20] and traced back to diagrams with t-channel gluon exchange; it was further studied in Ref. [60].
The reason why the rise of the K-factor is more pronounced in the p_T,H case than in the p_T,Z case can be related to the coupling structure of the Z and Higgs bosons to top quarks. In the diagrams where both the Higgs and the Z boson are radiated from a top quark loop, the probability to radiate a "soft" Z boson while the Higgs boson recoils against a hard jet is related to the soft eikonal factor p^µ/(p·p_Z), where p^µ generically denotes the radiator momentum. The probability to radiate a "soft" Higgs boson, on the other hand, is proportional to m_t/(p·p_H). The ratio of these eikonal factors is p_T/m_t ≫ 1; thus at large transverse momentum p_T of the radiator it is more likely that the Z boson is soft and the Higgs boson is hard. Investigation of different top quark mass renormalisation schemes We now turn to the discussion of the uncertainties stemming from the use of different top quark mass renormalisation schemes. Such uncertainties have been investigated in detail for the case of Higgs boson pair production in Refs. [60][61][62][63]. For top quark pair production at NNLO, scheme uncertainties have been studied in Ref. [64]. Top quark renormalisation scheme uncertainties have also been investigated for NLO ttH [65] and ttj [66] production at the LHC, as well as for off-shell Higgs production and LO Higgs+jet production [60]. In this section we investigate the top-quark mass renormalisation scheme dependence of ZH production. For this purpose we convert the top quark mass to the MS-bar scheme, which is an appropriate renormalisation scheme in the high-energy region. It is thus sufficient to perform the scheme change in the analytic high-energy expansion of the virtual corrections, where it is straightforward to obtain the corresponding analytic expressions by making the OS-to-MS-bar replacement for the top quark mass and applying the Padé procedure as described in Section 2.1. Afterwards the result is combined with the real-radiation contributions, where the MS-bar top quark mass is used in the numerical evaluation. We make three different choices for the central value of the top quark renormalisation scale, µ_t = m_t(m_t), µ_t = m_ZH and µ_t = H_T (with H_T the sum of the transverse masses of the Z and Higgs bosons plus Σ_k |p_T,k|, where k sums over massless partons), and vary µ_t up and down by a factor of 2 to obtain an uncertainty estimate. For the conversion of the numerical values of the top quark mass between the OS and the MS-bar schemes we proceed as follows: we first convert the top quark OS mass to the MS-bar scheme at the scale µ_t = m_t, at four-loop accuracy. For our input values, m_t = 173.21 GeV and α_s(m_Z) = 0.118, this gives m_t(m_t) = 163.39 GeV. We then use the renormalisation group equation, at five-loop accuracy with six active quark flavours, to run from µ_t = m_t to the desired renormalisation scale for m_t. For both the numerical scheme conversion and the running we use the Mathematica and C++ codes RunDec and CRunDec [67,68]. At LO the difference between the schemes is purely parametric and is driven both by the top quark mass appearing in the propagators and by the Higgs-top Yukawa coupling. At NLO the OS and MS-bar schemes differ by the parametric choice of m_t and by a shift proportional to the derivative of the LO, i.e., the mass counterterms, which partly compensates the parametric difference. In Fig. 8 we show predictions at LO (dashed lines) and NLO (solid lines) for µ_R = µ_F = m_ZH with three different choices of the top-quark renormalisation scale, µ_t. The red band is generated by varying the scale µ_t = m_ZH up and down by a factor of 2, keeping µ_R = µ_F = m_ZH fixed.
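For orientation, the one-loop version of the scheme conversion reads mbar_t(mbar_t) ≈ m_t^OS [1 − (4/3) α_s/π]; the sketch below evaluates it numerically. The paper itself uses the four-loop conversion and five-loop running implemented in RunDec/CRunDec, so the one-loop value obtained here (about 165 GeV) differs from the quoted m_t(m_t) = 163.39 GeV, and the α_s value used below is an assumption made for this illustration.

```python
import math

M_T_OS = 173.21        # on-shell top-quark mass in GeV (input value of the paper)
ALPHA_S_MT = 0.108     # assumed alpha_s at the top-mass scale (the paper runs from alpha_s(m_Z) = 0.118)

def msbar_mass_one_loop(m_os, alpha_s):
    """One-loop OS -> MS-bar conversion at the scale mu_t = m_t:
    mbar(mbar) ~ m_OS * (1 - 4/3 * alpha_s / pi).
    The paper uses the 4-loop relation via RunDec/CRunDec instead."""
    return m_os * (1.0 - 4.0 / 3.0 * alpha_s / math.pi)

print(f"one-loop MS-bar mass: {msbar_mass_one_loop(M_T_OS, ALPHA_S_MT):.2f} GeV")
# -> about 165.3 GeV, compared with 163.39 GeV from the 4-loop conversion.
```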
We observe that the scheme choice for the top quark mass has a large impact on the predictions for both the invariant mass distribution (with p_T,H ≥ 140 GeV and p_T,Z ≥ 150 GeV cuts) and the p_T,Z distribution. Focusing on the invariant mass distribution, we observe that at LO, for m_ZH ∼ 1 TeV, the OS result is approximately a factor of 2.9 times the MS-bar result with µ_t = m_ZH. At NLO the difference between the schemes is somewhat reduced, with the OS result around 1.9 times the MS-bar result with µ_t = m_ZH, see Table 2. Taking, for example, the difference between the OS and the MS-bar result with µ_t = m_ZH as a mass scheme uncertainty would result in a +0%/−65% uncertainty at LO and a +0%/−47% uncertainty at NLO for m_ZH = 1 TeV. Alternatively, taking the MS-bar result with µ_t = (m_ZH/2, m_ZH, 2 m_ZH) as an uncertainty gives +26%/−21% at LO and +17%/−14% at NLO for m_ZH = 1 TeV. We observe a similar pattern for large p_T,Z: the difference between the OS scheme and the MS-bar scheme with µ_t = m_ZH at p_T,Z ∼ 1 TeV is reduced from a factor of 2.8 at LO to a factor of about 1.9 at NLO. The K-factor, defined as the ratio of the NLO result in a given scheme to the LO result in the same scheme, is typically larger in the MS-bar scheme than in the OS scheme; this feature is also observed in Higgs pair production [63]. The K-factor of the invariant mass distribution is relatively flat for all scheme choices, with the dynamic scale choices µ_t = H_T and µ_t = m_ZH yielding K ∼ 2.5-2.7, while the OS scheme has K ∼ 1.6 for m_ZH ∼ 1 TeV. The MS-bar scheme with µ_t = m_t(m_t), where the logarithm appearing in Eq. (3.2) vanishes, has a very similar K-factor to the OS scheme; this differs from the HH case, where the two schemes had a similar shape but a different normalisation. For the p_T,Z distribution the pattern of K-factors for the different schemes is broadly the same as for the invariant mass distribution, but in all cases the K-factors rise with p_T,Z, reaching up to K = 5 for dynamic µ_t choices at p_T,Z = 1 TeV. Comparing the results obtained here for gg → ZH to other loop-induced processes, such as off-shell Higgs production, Higgs pair production and Higgs plus jet production, we note that the ZH process has a larger mass scheme dependence at LO. For off-shell Higgs production and Higgs pair production, going from LO to NLO approximately halves the uncertainty due to the mass scheme choice; in the ZH case we also observe a reduction in the uncertainty, but by less than a factor of 2. In the HH case, in the high-energy limit, the triangle contribution is suppressed by a factor of 1/s w.r.t. the box form factors. There, the leading high-energy behaviour of the box form factors carries an overall factor of m_t^2 together with logarithms of m_t [62,69]; the log[m_t^2] term in the NLO coefficient A_i^(1) is due to the renormalisation of m_t, and the overall power of m_t^2 comes from the Yukawa couplings. Converting to the MS-bar scheme using Eq. (3.2) results in a logarithm of the form log[µ_t^2/s]. In Ref. [62] it was argued that choosing µ_t^2 ∼ s minimizes these logarithms and is thus the preferred central scale choice for the Yukawa couplings. However, in the present ZH case, the structure is different. Firstly, the triangle contribution is not suppressed w.r.t. the box form factors, and secondly, logarithms involving m_t appear in the box form factors already at leading order. Unlike in the HH case, where the overall power of m_t^2 in Eq.
(3.3) comes entirely from the top Yukawa couplings, in gg → ZH one of the overall m_t factors must come from the top-quark propagators; hence the leading term in the small-mass expansion is already power-suppressed by one power of m_t. Similar power-suppressed mass logarithms have been studied in the context of single Higgs production, see for example Ref. [70] and references therein. The leading helicity amplitudes for ZH in the high-energy limit take an analogous, power-suppressed form. Conclusions The gg → ZH channel contributes to the pp → ZH process starting at NNLO, accounting for around 6% of the total cross section. However, the gluon-fusion channel suffers from a large scale dependence at LO and is a significant source of theoretical uncertainty for Z boson production in association with a Higgs boson at the LHC [1,2,12-16]. In this work, we have presented the complete NLO corrections for the loop-induced gluon-fusion channel; they increase the gluon-fusion cross section by about a factor of 2 and reduce the scale dependence. We have investigated the invariant mass distribution and the transverse momentum distributions for both the Z boson and the Higgs boson. At large transverse momentum, we found that the NLO corrections can be very large, more than 10 times the LO result for p_T,H; the origin of this behaviour can be traced back to extremely large real radiation corrections when a soft Z boson is radiated from a top quark loop [20]. We have also studied the top quark mass scheme uncertainties for this channel, i.e., the difference between results produced with the top quark mass renormalised in the on-shell scheme and in the MS-bar scheme. As for other loop-induced processes with a scale above the top quark pair-production threshold [60][61][62][63], we found a large mass scheme uncertainty at LO. At NLO the mass scheme uncertainty is smaller than at LO, but remains at least as large as the usual renormalisation and factorisation scale uncertainties. The inclusion of the NLO corrections to the gluon-fusion channel is essential for correctly describing ZH production at the LHC and HL-LHC. The size of the NLO corrections, especially for the transverse momentum distributions, and the mass scheme uncertainty motivate a calculation of gg → ZH beyond NLO and the study of this process beyond fixed order. [14] ATLAS collaboration, Combination of measurements of Higgs boson production in association with a W or Z boson in the bb decay channel with the ATLAS experiment at √s = 13 TeV, tech. rep., CERN, Geneva, Sep. 2021. [15] ATLAS collaboration, G. Aad et al., Direct constraint on the Higgs-charm coupling from a search for Higgs boson decays into charm quarks with the ATLAS detector, 2201.11428. [17] CMS collaboration, Combined Higgs boson production and decay measurements with up to 137 fb⁻¹ of proton-proton collision data at √s = 13 TeV, tech. rep., CERN, Geneva, 2020.
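As a worked arithmetic check of the scheme-uncertainty numbers quoted above, the downward uncertainty obtained by taking the OS result as central and the MS-bar (µ_t = m_ZH) result as the variation is simply 1 − 1/r, where r is the OS/MS-bar ratio. The tiny residual difference with respect to the quoted −65% at LO comes from the ratio being rounded to one decimal place.

```python
# Check of the mass-scheme uncertainties quoted in the text:
# if the OS result is a factor r times the MS-bar (mu_t = m_ZH) result,
# taking the OS value as central gives a downward uncertainty of 1 - 1/r.
for label, r in [("LO", 2.9), ("NLO", 1.9)]:
    print(f"{label}: -{1 - 1/r:.0%}")
# -> about -66% at LO (quoted as -65%, with r rounded) and -47% at NLO.
```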
6,742.4
2022-04-11T00:00:00.000
[ "Physics" ]
Exact Hard Monotonic Attention for Character-Level Transduction Many common character-level, string-to-string transduction tasks, e.g., grapheme-to-phoneme conversion and morphological inflection, consist almost exclusively of monotonic transduction. Neural sequence-to-sequence models with soft attention, which are non-monotonic models, outperform popular monotonic models. In this work, we ask the following question: Is monotonicity really a helpful inductive bias in these tasks? We develop a hard attention sequence-to-sequence model that enforces strict monotonicity and learns alignment jointly. With the help of dynamic programming, we are able to compute the exact marginalization over all alignments. Our models achieve state-of-the-art performance on morphological inflection. Furthermore, we find strong performance on two other character-level transduction tasks. Code is available at https://github.com/shijie-wu/neural-transducer. Introduction Many tasks in natural language processing can be treated as character-level, string-to-string transduction. The current dominant method is the neural sequence-to-sequence model with soft attention (Bahdanau et al., 2015; Luong et al., 2015). This method has achieved state-of-the-art results in a plethora of tasks, for example, grapheme-to-phoneme conversion (Yao and Zweig, 2015), named-entity transliteration (Rosca and Breuel, 2016) and morphological inflection generation (Cotterell et al., 2016). While soft attention is very similar to a traditional alignment between the source characters and target characters in some regards, it does not explicitly model a distribution over alignments. On the other hand, neural sequence-to-sequence models with hard attention are analogous to the classic IBM models for machine translation, which do model the alignment distribution explicitly (Brown et al., 1993). The standard versions of soft and hard attention are non-monotonic. However, if we look at the data in grapheme-to-phoneme conversion, named-entity transliteration, and morphological inflection (examples are shown in Fig. 1), we see that the tasks require almost exclusively monotonic transduction. Yet, counterintuitively, the state of the art in high-resource morphological inflection is held by non-monotonic models (Cotterell et al., 2017)! Indeed, in a recent controlled experiment, Wu et al. (2018) found non-monotonic models (with either soft or hard attention) outperform popular monotonic models (Aharoni and Goldberg, 2017) in the three above-mentioned tasks. However, the inductive bias of monotonicity, if correct, should help learn a better model or, at least, learn the same model. In this paper, we hypothesize that the underperformance of monotonic models stems from the lack of joint training of the alignments with the transduction. Generalizing the model of Wu et al. (2018) to enforce monotonic alignments, we show that, for all three tasks considered, monotonicity is a good inductive bias and jointly learning a monotonic alignment improves performance. We provide an exact, cubic-time dynamic-programming inference algorithm to compute the log-likelihood and an approximate greedy decoding scheme. Empirically, our results indicate that, rather than the pipeline systems of Aharoni and Goldberg (2017) and Makarov et al.
(2017), we should jointly train monotonic alignments with the transduction model, and, indeed, we set the single-model state of the art on the task of morphological inflection. 2 Hard Attention Preliminary We assume the source string x ∈ Σ_x^* and the target string y ∈ Σ_y^* are drawn from finite vocabularies Σ_x = {x_1, ..., x_|Σ_x|} and Σ_y = {y_1, ..., y_|Σ_y|}, respectively. In tasks where the tag is provided, i.e., labeled transduction (Zhou and Neubig, 2017), we denote the tag as an ordered set t ∈ Σ_t^* drawn from a finite tag vocabulary Σ_t = {t_1, ..., t_|Σ_t|}. We define the set A = {1, ..., |x|}^|y| to be the set of all non-monotonic alignments from x to y, where an alignment aligns each target character y_i to exactly one source character in x. (We write A in the remainder with x and y implicit.) In other words, it allows zero-to-one (zero in the sense of a non-character like BOS or EOS) or many-to-one alignments between x and y. For an a ∈ A, A_i = a_i refers to the event that y_i is aligned to x_{a_i}, which are the i-th character of y and the a_i-th character of x, respectively. In general, we will shorten the expression A_i = a_i to a_i for brevity. 0th-order Hard Attention Hard attention was first introduced to the literature by Xu et al. (2015). We, however, follow Wu et al. (2018) and use a tractable variant of hard attention, modeling the probability of a target string y given an input string x as p(y | x) = Σ_{a ∈ A} p(y, a | x); although this sum naively runs over exponentially many alignments, the conditional independence of the alignment decisions lets us rearrange it into a polynomial number of terms, p(y | x) = Π_{i=1}^{|y|} Σ_{a_i=1}^{|x|} p(y_i | a_i, y_<i, x) p(a_i | y_<i, x). The model above is exactly a 0th-order neuralized hidden Markov model (HMM). Specifically, p(y_i | a_i, y_<i, x) can be regarded as an emission distribution and p(a_i | y_<i, x) can be regarded as a transition distribution, which does not condition on the previous alignment. Hence, we will refer to this model as 0th-order hard attention. The likelihood can be computed in time polynomial in |x| and |y|. 1st-order Hard Attention To enforce monotonicity, hard attention with conditionally independent alignment decisions is not enough: the model needs to know the previous alignment position when determining the current alignment position. Thus, we allow the transition distribution to condition on the previous alignment, p(a_i | a_{i−1}, y_<i, x), and the model becomes a 1st-order neuralized HMM. We display this model as a graphical model in Fig. 2. We will refer to it as 1st-order hard attention. Generalizing the 0th-order model, we define the 1st-order extension as p(y | x) = Σ_{a ∈ A} Π_{i=1}^{|y|} p(y_i | a_i, y_<i, x) p(a_i | a_{i−1}, y_<i, x), which can again be rearranged into a polynomial number of terms using the forward probability α(a_{i−1}), calculated with the forward algorithm (Rabiner, 1989), with α(a_0, y_0) = 1 and p(a_1 | a_0, y_<1, x) the initial alignment distribution. For simplicity, we drop y_<i and x in p(y_i | a_i) and p(a_i | a_{i−1}). For completeness, we include the recursive definition of the forward probability: α(a_i, y_i) = p(y_i | a_i) Σ_{a_{i−1}} p(a_i | a_{i−1}) α(a_{i−1}, y_{i−1}). Decoding at test time, however, is hard and we resort to a greedy scheme, described in Alg. 1. To see why it is hard, note that the dependence on y_<i means that we have a neural language model scoring the target string as it is being transduced. The dependence is unbounded, so there is no dynamic program that allows for efficient computation. A Note on EOS. In the discussion above, we have suppressed the generation of EOS in the autoregressive models we derive, for brevity. For example, p(y_i | a_i, y_<i, x) must be a conditional distribution over Σ_y ∪ {EOS} in order for p(y | x) to be a well-defined probability distribution.
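A minimal sketch of the forward-algorithm computation of log p(y | x) for the 1st-order model is given below; monotonicity is imposed through the structural zeros discussed in the next section. The arrays `emit`, `trans` and `init` stand for the neural emission, transition and initial-alignment distributions and are assumed to be given; this is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def monotonic_hmm_loglik(emit, trans, init):
    """Forward algorithm for 1st-order hard monotonic attention.
    emit[i, j]   = p(y_i | a_i = j, ...)         shape (|y|, |x|)
    trans[j0, j] = p(a_i = j | a_{i-1} = j0, ...) shape (|x|, |x|)
    init[j]      = p(a_1 = j | ...)               shape (|x|,)
    Monotonicity is enforced by structural zeros for j < j0."""
    T, S = emit.shape
    mono = np.triu(np.ones((S, S)))                    # allow only a_i >= a_{i-1}
    trans = trans * mono
    trans = trans / trans.sum(axis=1, keepdims=True)   # renormalise each row

    alpha = init * emit[0]                             # alpha(a_1, y_1)
    for i in range(1, T):
        alpha = emit[i] * (alpha @ trans)              # alpha(a_i, y_i)
    return np.log(alpha.sum())

# Toy usage with random placeholder distributions:
rng = np.random.default_rng(0)
S, T = 5, 4                                  # |x| = 5 source positions, |y| = 4 target characters
emit = rng.random((T, S))                    # placeholder p(y_i | a_i = j)
trans = rng.dirichlet(np.ones(S), size=S)    # rows sum to 1
init = np.full(S, 1.0 / S)
print(monotonic_hmm_loglik(emit, trans, init))
```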
A Neural Parameterization with Enforced Monotonicity The goal of this section is to take the 1st-order model of §2 and show how we can straightforwardly enforce the monotonicity of the alignments. We will achieve this by adding structural zeros to the distribution, which still allows us to perform efficient inference with dynamic programming. We follow the neural parameterization of Wu et al. (2018). The source string x is represented by a sequence of character embedding vectors, which are fed into an encoder bidirectional LSTM (Hochreiter and Schmidhuber, 1997) to produce hidden state representations h_j^e. The emission distribution p(y_i | a_i, y_<i, x) depends on these encodings h_j^e and on the decoder hidden states h_i^d, produced by a decoder LSTM that reads the embedding of the previous target character together with the tag embedding, where e_d encodes target characters into character embeddings. The tag embedding h_t is obtained by applying a learned parameter matrix Y ∈ R^{d_t × |Σ_t| d_t} to the concatenation of the individual tag embeddings, where e_t maps the tag t_k to a tag embedding h_{t_k} ∈ R^{d_t} or to the zero vector 0 ∈ R^{d_t}, depending on whether the tag t_k is present. The Emission Distribution. All of our hard-attention models employ the same emission distribution parameterization: a softmax over target characters computed from the concatenation of the decoder state and the encoder state at the aligned position, where V ∈ R^{3d_h × 3d_h} and W ∈ R^{|Σ_y| × 3d_h} are learned parameters. 0th-order Hard Attention. In the case of the 0th-order model, the alignment distribution is computed by a bilinear attention function, cf. Eq. (1), where T ∈ R^{d_h × 2d_h} is a learned parameter and A_i is a random variable ranging over the values of the i-th alignment. 0th-order Hard Monotonic Attention. We may enforce string monotonicity by zeroing out any non-monotonic alignment without adding any additional parameters, which can be done by adding structural zeros to the distribution: any alignment position that precedes the previous one receives zero probability mass. These structural zeros prevent the alignments from jumping backwards during transduction and, thus, enforce monotonicity. The parameterization is identical to the 0th-order model up to the enforcement of the hard constraint with Eq. (2). Algorithm 1 Greedy decoding. (N is the maximum length of the target string.) 1st-order Hard Monotonic Attention. We may also generalize the 0th-order case by adding more parameters. This equips the model with a more expressive transition function. In this case, we take the 1st-order hard attention to be an offset-based transition distribution similar to Wang et al. (2018), where ∆ = a_i − a_{i−1} is the relative distance to the previous attention position, U ∈ R^{(w+1) × 2d_h} is a learned parameter, and w ∈ N is an integer hyperparameter. Note that, as before, we also enforce monotonicity as a hard constraint in this parameterization. Related Work There have been previous attempts to look at monotonicity in neural transduction. Graves (2012) first introduced the monotonic neural transducer for speech recognition. Building on this, Yu et al. (2016) propose using a separate shift/emit transition distribution to allow a more expressive model. Like us, they also consider morphological inflection and outperform a (weaker) soft attention baseline. Rastogi et al. (2016) offer a neural parameterization of a finite-state transducer, which implicitly encodes monotonic alignments. Instead of learning the alignments directly, Aharoni and Goldberg (2017) take the monotonic alignments from an external model (Sudoh et al., 2013) and train the neural model with these alignments. In follow-up work, Makarov et al.
(2017) show this two-stage approach to be effective, winning the CoNLL-SIGMORPHON 2017 shared task on morphological inflection (Cotterell et al., 2017). Raffel et al. (2017) propose a stochastic monotonic transition process to allow sample-based online decoding.

Experimental Findings

Finding #1: Morphological Inflection. The first empirical finding in our study is that we achieve single-model, state-of-the-art performance on the CoNLL-SIGMORPHON 2017 shared task dataset. The results are shown in Tab. 2. We find that 1-MONO ties with the 0-MONO system, indicating that the additional parameters do not add much. Both of these monotonic systems surpass the non-monotonic systems 0-HARD and SOFT. We also compare to other top systems at the task in Tab. 1. The previous state-of-the-art model, Bergmanis et al. (2017), is a non-monotonic system that outperformed the monotonic system of Makarov et al. (2017). However, Makarov et al. (2017) is a pipeline system that took alignments from an existing aligner; such a system has no way to recover from a poor initial alignment. We show that jointly learning monotonic alignments leads to improved results.

The second finding is that, comparing SOFT, 0-HARD, and 0-MONO in Tab. 2, we observe that 0-MONO outperforms 0-HARD, and 0-HARD in turn outperforms SOFT, in all three tasks. This shows that monotonicity should be enforced strictly, since strict monotonicity does not hurt the model. We contrast this with the findings of Wu et al. (2018), who found that non-monotonic models outperform monotonic ones; this suggests strict monotonicity is more helpful when the model is allowed to learn the alignment distribution jointly.

Finding #3: Do Additional Parameters Help? The third finding is that 1-MONO has a more expressive transition distribution and, thus, outperforms 0-MONO and 0-HARD in G2P. However, it performs as well as or worse on the other tasks. This tells us that the additional parameters are not always necessary for improved performance. Rather, it is the hard constraint that matters, not the more expressive distribution. However, we remark that enforcing the monotonic constraint does come at an additional computational cost.

Conclusion

We expand the hard-attention neural sequence-to-sequence model of Wu et al. (2018) to enforce monotonicity. We show, empirically, that enforcing monotonicity in the alignments found by hard attention models helps significantly, and we achieve state-of-the-art performance on morphological inflection using data from the CoNLL-SIGMORPHON 2017 shared task. We isolate the effect of monotonicity in a controlled experiment, show that monotonicity is a useful hard constraint for three tasks, and speculate that previous underperformance is due to a lack of joint training.

Figure 1: Example of source and target string for each task. The tag guides transduction in morphological inflection.

Figure 2: Our monotonic hard-attention model viewed as a graphical model. The circular nodes are random variables and the diamond nodes are deterministic variables. We have omitted arcs from x to y_1, y_2, y_3, and y_4 for clarity (to avoid crossing arcs). Computation of the likelihood in our 1st-order hard attention model is O(|x|^2 · |y| · |Σ_y|) by the dynamic program given in the paper.
Table 1: Average dev performance on morphological inflection of our models against single models from the 2017 shared task. All systems are single model, i.e., without ensembling. Why dev? No participants submitted single-model systems for evaluation on test, and the best systems were not open-sourced, constraining our comparison. Note that we report numbers from their paper; some numbers were obtained by contacting the authors.
3,147.2
2019-05-15T00:00:00.000
[ "Computer Science" ]
Rapid Artificial Intelligence Solutions in a Pandemic - The COVID-19-20 Lung CT Lesion Segmentation Challenge Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, which we supported with public data and state-of-the-art benchmark methods. Board Certified Radiologists annotated 295 public images from two sources (A and B) for algorithms training (n=199, source A), validation (n=50, source A) and testing (n=23, source A; n=23, source B). There were 1,096 registered teams of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models could be rapidly designed by diverse teams with the potential to measure disease or facilitate timely and patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge - 2020. Introduction The SARS-CoV-2 pandemic has had a devastating impact on the global healthcare systems.As of May 28, 2021, more than 169 million people have been infected in the world with over 3.5 million deaths 1 .COVID-19 is known to affect nearly every organ system, including the lungs, brain, kidneys, liver, gastrointestinal tract, and cardiovascular system.The manifestations of the disease in the lung may be early indicators of future problems.These manifestations have been intensively reported in the adult populations and occasionally in pediatric subjects [2][3][4][5][6] .Since the early days of the pandemic, lung imaging has been critical for both the early identification and management of individuals affected by COVID-19 7 .Imaging also provides invaluable support for the evaluation of patients with long COVID and after the acute sequelae of the diseases.Repeated waves of infection and changes in the disease course require data, including imaging, classification, quantification, and response tools, as well as standardized reliable interpretation as the global society struggles to provide widely available vaccines and faces evolving challenges such as new mutations of the virus. The most common lung imaging modalities utilized for the evaluation of SARS-CoV-2 infections are chest radiographs (CXR) and chest computerized tomography (CT) with ultrasound (US) being used more sparingly.Chest CT is the reference modality that most accurately demonstrates the acute lung manifestations of COVID-19 8,9 .As observed in CT, the most common findings in the chest of the affected individuals were ground-glass opacities (GGO) and pneumonic consolidations.Other manifestations include interstitial abnormalities, crazy paving pattern, halo signs, pleural abnormalities, bronchiectasis, bronchovascular bundle thickening, air bronchograms, lymphadenopathy, and pleural/pericardial effusions.The sensitivity of chest CT to detect these abnormalities in subjects with confirmed COVID-19 was widely variable and somewhat subjective, reported in the range of 44-97% (median 69%) 10 . 
Beside its role in the identification of patterns of SARS-CoV-2 infections, lung CT is also important in the determination of the severity of COVID-19 6,8,9,11,12 .The presence, location and extension of the lung abnormalities are critical factors for the clinical management of patients to potentially facilitate decisions towards more timely and personalized medical interventions.Quantification of lesions may further provide the tracking of disease progression and response to therapeutic countermeasures.Thus, improving COVID-19 treatment starts with a clearer understanding of the patient's disease state, which must include accurate identification, delineation and quantification of lung lesions and disease phenotypes and patterns. A prior lack of global data collaboration limited clinicians and scientists in their ability to quickly and effectively understand COVID-19 disease, its severity and outcomes.As access to data has improved, quality annotations have remained a limiting factor in the development of useful artificial intelligence (AI) models derived from machine learning and deep learning 13 .Thus, a multitude of AI approaches have been developed, published and indicated great potential for clinical support, but they were often overfit, being trained using proprietary data or from a single site [14][15][16][17][18][19] .Alternatively, federated approaches allow algorithms to access data from multiple sites without the need of sharing raw data, but through this paradigm access is granted to a single algorithm and consortium, with sharing of model weights instead of raw data 20,21 .In particular, deep neural networks were used for the identification and segmentation of abnormal lung regions affected by SARS-CoV-2 infection.These can be grouped into two main classes: classification models that extract the affected region inside the lung area by comparison with data from healthy subjects [22][23][24][25] , and segmentation models that directly extract the abnormal lung areas according to patterns in the image and (typically using fully convolutional networks) 16,18,[26][27][28] . Without access to public data and an adequate platform to evaluate and compare their performance, AI approaches risk being overtrained, irreproducible, and ultimately clinically not useful.Thus, public efforts are needed to accelerate the understanding of the role of AI towards informing manifestations and qualifying impact of health crises such as the COVID-19 pandemic. 
The COVID-19 Lung CT Lesion Segmentation Challenge 2020 (COVID-19-20) created a public platform to evaluate emerging AI methods for the segmentation and quantification of lung lesions caused by SARS-CoV-2 infection from CT images. This effort required a multi-disciplinary team science partnership among global communities in a broad variety of often disparate fields, including radiology, computer science, data science and image processing. The goal was to rapidly combine multi-disciplinary expertise towards the development of tools that both define and address unmet clinical needs created by the pandemic. The COVID-19-20 platform provided access to multi-institutional, multinational images originating from patients of different ages and genders, and with variable disease severity. The challenge team provided the ability to quickly label a public dataset, allowing radiologists to rapidly add precise annotations. Open access was offered to the annotated CTs of subjects with PCR-confirmed COVID-19, and to a baseline deep learning pipeline based on MONAI 29 that could serve as a starting point for further algorithmic improvements. The challenge was hosted on a widely used competition website (covid-segmentation.grand-challenge.org) for easy and secure data access control. This paper presents an overview of this challenge and competition, including the data resources and the top ten AI algorithms identified from a highly competitive field of participants who tested their methods on the data in December 2020.

Submissions

The challenge was launched on November 2, 2020. The training and validation data were released, and 1,096 teams registered before the training phase was closed on December 8, 2020. The 225 teams that completed the validation phase were given access to the test data. Ninety-eight teams from 29 countries on six continents completed the test phase. Figure 1 shows the countries of origin of the 98 teams. Test results were released on December 18, 2020, and the statistical ranking of the top ten teams (see Results) was unveiled during a virtual mini-symposium on January 11, 2021 30.

Figure 2 shows the demographic information for the team leaders, i.e., age group, sex, highest educational degree, student status and job category, as well as the algorithmic characteristics of the 98 submissions that completed the training, validation and test phases. We requested participants to disclose whether they used external data for training their algorithms or whether they used a general-purpose pre-trained network for initialization (e.g., a network pre-trained for another lung disease). The use of public networks pre-trained for the segmentation of COVID-19 lesions was not allowed (e.g., Clara_train_covid19_ct_lesion_seg 31).

Participants uploaded their results on the validation and test data to the hosting website for evaluation. Only (semi-)automated methods were allowed; submission of manual annotations was prohibited. For validation, the number of submissions from each user was limited to once a day, for the purpose of refining their algorithms based on the live performance indicators on the challenge validation leaderboard 32. Submission of results on the test data was collected without showing the leaderboard, and the last submission was used for final ranking. The test phase was open only to participants who had already submitted their results on the validation set. The leaderboard and final ranking are public and hosted on the challenge website 33.
Data sources

This challenge utilized data from two public resources of chest CT images, namely "CT Images in COVID-19" 34,35 (Dataset 1) and "COVID-19-AR" 36 (Dataset 2), available on The Cancer Imaging Archive (TCIA) 36. CT images were acquired without intravenous contrast enhancement from patients with a positive Reverse Transcription Polymerase Chain Reaction (RT-PCR) test for SARS-CoV-2. Dataset 1 originated from China, while Dataset 2 was acquired from the US population. In total, we used 295 images, including 272 images from Dataset 1 and 23 images from Dataset 2. Of these images, 199 and 50 from Dataset 1 were used for training and validation, respectively. We therefore refer to Dataset 1 as the "seen" data source that participants used to train and validate their algorithms during the first phase of the challenge. The test set contained 23 images each from Datasets 1 and 2 (46 images in total). Hence, Dataset 2 was only used in the testing phase, and we refer to it as the "unseen" data source.

Descriptive statistics, such as x-, y-, and z-resolutions and voxel volume in both data sources, are shown in Figure 3. We also show the differences in the annotated COVID-19 lesion volumes between the two data sources, as well as normalized histograms of the CT intensity distributions of the "seen" and "unseen" data sources in Hounsfield units (HU). Note that -1000 HU corresponds to air, and 750 HU to cancellous bone 37.

Annotation protocol

All images were automatically segmented by a previously trained model for COVID lesion segmentation 20 that is publicly available 38. All lung lesions related to COVID-19 were included. These segmentations were subsequently used as a starting point for board-certified radiologists (RS, JZ, JM), who manually adjudicated and corrected them. The annotation tool used was ITK-SNAP 39,40, showing multiple reformatted views of the CT scans and allowing manipulation and correction of the initial automated segmentation results in three dimensions.

Evaluation metrics

We used the three evaluation metrics described below. These metrics were used both to evaluate the performance of the different algorithms and to establish the interobserver variability.

1. Dice Coefficient (Dice). A common evaluation metric of segmentation accuracy, defined as the overlap between the ground truth segmentation volume G and the predicted segmentation volume P: Dice = 2|G ∩ P| / (|G| + |P|).

2. Normalized Surface Dice (NSD). Similar to Dice, it provides a normalized measure of agreement between the surface of the prediction and the surface of the ground truth 41. We chose a threshold of 1 mm to define an "acceptable" deviation between the ground truth surface and the predicted surface.

3. Normalized Absolute Volume Error (NAVE). The volume of the COVID-19 lesion burden inside the patient's lung can play an important role in clinical assessment 42. Therefore, a measure was chosen that assesses the agreement between the predicted and ground truth lesion volumes, defined as NAVE = |V_P − V_G| / V_G, where V_G and V_P are the ground truth and predicted lesion volumes. Note that we used the negative of this value for ranking purposes, as higher values indicate better performance in our ranking approach.
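A minimal sketch of the two volumetric metrics on binary masks is shown below (NSD additionally requires extracting surface voxels and applying the 1 mm tolerance, so it is omitted here); the voxel-volume argument only converts counts to physical volumes and cancels out in NAVE.

```python
import numpy as np

def dice(gt, pred):
    """Dice = 2|G ∩ P| / (|G| + |P|) for boolean arrays gt, pred."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    denom = gt.sum() + pred.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(gt, pred).sum() / denom

def nave(gt, pred, voxel_volume_mm3=1.0):
    """Normalized absolute volume error |V_pred - V_gt| / V_gt (assumes V_gt > 0)."""
    v_gt = gt.sum() * voxel_volume_mm3
    v_pred = pred.sum() * voxel_volume_mm3
    return abs(v_pred - v_gt) / v_gt

# toy 3D masks
rng = np.random.default_rng(0)
gt = rng.random((32, 64, 64)) > 0.7
pred = rng.random((32, 64, 64)) > 0.7
print(f"Dice={dice(gt, pred):.3f}  NAVE={nave(gt, pred):.3f}")
```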
Interobserver performance

As a benchmark for comparing the AI algorithms with human performance on the lesion segmentation task, we measured the human interobserver agreement. We compared the annotations utilized in Yang et al. 20 for 245 of the 272 cases from Dataset 1 used in the challenge with the ones obtained by our radiologists. The interobserver agreement showed mean ± standard deviation (median) Dice, NSD, and NAVE of 0.702 ± 0.172 (0.756), 0.538 ± 0.147 (0.563), and 0.601 ± 1.969 (0.180), respectively.

Statistical ranking method

Recent work on ranking analysis for biomedical imaging challenges has shown that ranking results can vary significantly depending on the chosen type of metric and ranking scheme 43. Most biomedical challenges use approaches such as "aggregate-then-rank" or "rank-then-aggregate", which do not account for statistical differences between algorithms 43,44. These findings motivated the development of a challenge ranking toolkit 44,45 that we employed for our evaluation. This toolkit utilizes statistical hypothesis testing applied to each possible pair of algorithms, which allows us to better assess the differences between the evaluated metrics.

Following the notation of Wiesenfarth et al. 44, our challenge contained m = 6 tasks (Dice, NSD, and NAVE on each of the "seen" and "unseen" test data). The test cases for each task are indexed i = 1, . . ., N; in our case, N = 23 for each task. A bootstrap approach is used to evaluate the ranking stability of the different algorithms: the ranking is performed repeatedly on b = 1,000 bootstrap samples, see Figure 4. The statistical test employed to determine the consensus ranking is the one-sided Wilcoxon signed-rank test with a significance level of α = 5%, adjusted for multiple testing according to Holm 44. Each of the m tasks contributed equally to the final consensus, using the Euclidean distance between averaged ranks across tasks. We ranked the 98 submitted algorithms using the proposed statistical consensus ranking algorithm to determine the top-10 methods, including the challenge-winning algorithm.
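A rough sketch of the bootstrap ranking-stability analysis described above is given below; it uses pairwise one-sided Wilcoxon signed-rank tests with a Holm adjustment and is a simplified stand-in for the challenge ranking toolkit, not the toolkit itself.

```python
import numpy as np
from scipy.stats import wilcoxon, rankdata

def significance_rank(scores, alpha=0.05):
    """scores[a, c]: metric value of algorithm a on test case c (higher is better).
    Rank algorithms by how many competitors they beat significantly
    (one-sided Wilcoxon signed-rank tests, Holm-adjusted)."""
    n_alg = scores.shape[0]
    pvals, pairs = [], []
    for i in range(n_alg):
        for j in range(n_alg):
            if i != j:
                pvals.append(wilcoxon(scores[i], scores[j], alternative="greater").pvalue)
                pairs.append((i, j))
    order = np.argsort(pvals)
    m, wins, running_max = len(pvals), np.zeros(n_alg), 0.0
    for rank_idx, idx in enumerate(order):          # Holm step-down adjustment
        running_max = max(running_max, min(1.0, (m - rank_idx) * pvals[idx]))
        if running_max < alpha:
            wins[pairs[idx][0]] += 1
    return rankdata(-wins, method="min")            # 1 = best

def bootstrap_ranks(scores, n_boot=1000, seed=0):
    """Re-rank on bootstrap resamples of the test cases to assess ranking stability."""
    rng = np.random.default_rng(seed)
    n_cases = scores.shape[1]
    return np.stack([significance_rank(scores[:, rng.integers(0, n_cases, n_cases)])
                     for _ in range(n_boot)])

# toy example: 4 algorithms × 23 test cases of a single task
rng = np.random.default_rng(0)
scores = rng.random((4, 23)) + np.arange(4)[:, None] * 0.05
print(bootstrap_ranks(scores, n_boot=50).mean(axis=0))   # mean bootstrap rank per algorithm
```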
A full description of the top-10 finalists' algorithms by their authors is given in Methods. Ranking results Table 2 shows the mean and standard deviation of the Dice coefficients for the top-10 performing algorithms on test cases from the "seen" and "unseen" data sources.Top algorithms performed relatively similar to each other, but all showed a marked decrease when being evaluated on the "unseen" data (Table 2).Figure 5 shows boxplots of the top-10 performing algorithms for each of the = 6tasks.In general, methods present more outliers on the "unseen" test dataset.Figure 6 shows a typical example from the "seen" test data source.The top-performing algorithms (#53 and #38) achieved a mean Dice coefficient >0.734 Dice on the "seen" dataset.Figure 6 shows that most of the COVID-19 related lesions were well segmented by the automated algorithms.In contrast, Figure 7 shows a challenging case from the "unseen" test data source.Both top-performing algorithms (#53 and #38) generated a false-positive segmentation region at a normal lung vessel while missing the real lesion.Their performance dropped to a Dice coefficient <0.598 on the "unseen" dataset.To illustrate the general performance of the top-10 algorithms on the individual test cases, Figure 8 shows podium plots 57 with the performance of different algorithms on the same test case connected by a line. Performance of algorithms Automatic AI algorithms showed great potential to accurately segment the lung COVID-19 lesions from CT images.In the validation phase, 87 out of 225 methods achieved superior Dice coefficients than the interobserver criteria (0.702), with the top team achieving a Dice coefficient of 0.771 (~9.8% improvement).However, their level of robustness is inferior to the radiologist's performance: the top team gets a Dice coefficient of 0.666 on the test data 58 (~5.1% decrease).This discrepancy could be due to various reasons.One reason could be the domain shift as half of the test data is from an "unseen" source that has not been used in the training or validation phases.Another reason could be the limited number of allowed submissions for the testing phase, which mitigates the possibility for overfitting to the test data.Moreover, the limited number of training data could also affect algorithm performance. The evaluation of the analysis of top-10 algorithms revealed that the ensemble of segmentation from various individual automated methods plays an important role compared to other factors such as the complexity of the network architecture, the learning rate, losses, etc.Most 10 top teams used model ensembles to reduce outliers and improved their performance by collecting the consensus segmentation from separately trained models.This observation also shows that the training pipeline can potentially be further improved based on novel concepts like AutoML 59,60 or neural architectures search [61][62][63] algorithms. 
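As a minimal illustration of the ensembling strategy highlighted above (averaging the per-voxel predictions of separately trained models, e.g. the five folds of a cross-validation, before thresholding), the following sketch operates on precomputed probability maps; it is not tied to any specific team's pipeline.

```python
import numpy as np

def ensemble_prediction(prob_maps, threshold=0.5):
    """prob_maps: list of per-voxel foreground probability volumes, one per
    trained model (e.g., the five folds of a cross-validation).
    The consensus is the voxel-wise mean probability, followed by thresholding."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return (mean_prob >= threshold).astype(np.uint8)

# toy example: five folds predicting on one 3D volume
rng = np.random.default_rng(0)
folds = [rng.random((16, 32, 32)) for _ in range(5)]
mask = ensemble_prediction(folds)
print(mask.shape, mask.mean())
```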
Use of external training data Only one of the top 10 teams, which was the winning team of the challenge, used external data in their final solution.Using this semi-supervised training approach, they obtained an improvement of 4.27% and 0.86% Dice coefficient on the training and validation data, respectively.Another team did similar work in a student-teacher manner and saw improvement in the validation score.However, they submitted their final results without using the external data after noticing partial overlap between the chosen unlabelled external dataset and the provided training data.Both teams demonstrate that using external data, even unlabelled, could improve the segmentation performance.While this finding clearly calls for larger training datasets, it also shows the great potential of semi-supervised methods to achieve more robust solutions, especially for the healthcare domain where the annotation cost is much higher than in other fields 64 . U-Net dominance All top-10 teams used a 2D/3D U-Net variant with at most minor modifications.While this seems to conflict with hundreds of yearly publications creating new network architectures, it also shows that most existing deep learning algorithms lack the robustness offered by model ensembles to handle large data variations (e.g., resolution, contrast, etc.) when training data are limited.nnU-Net 51 was adopted by 5 out of the 10 teams to build an end-to-end solution while another team used MONAI 65 .Unsurprisingly, these findings show that the majority of participants employed well-validated, open-source resources. Data variability and generalizability gap The challenge was designed to use "seen" and "unseen" data sources and thus evaluate the generalizability of AI algorithms in front of variable clinical protocols.Our data sources varied in provenience (China and US), scanner manufacturers (various, as typical in routine clinical practice) and imaging protocols (image resolution).Figure 3 illustrates that the volumes of the annotated COVID lesions have similar distributions on the two data sources.However, there are substantial differences in the image resolution used for CT reconstruction in the data.These differences in voxel resolution, together with variability in scanner manufacturers and imaging protocols, were likely the main contributors to the generalization gap seen in the performance of algorithms on the "unseen" test cases.Additional factors were related to the variability of manifestations of the disease in the lungs.For examples, in the challenging case from the "unseen" test data source shows in Figure 7, the top-performing algorithms generated falsepositive predictions at a normal lung vessel while missing to segment the real lesion.Domain shifts like the ones observed in the data used in this challenge are still proving to be challenging for current AI models.Disease phase variability may also have broadened the features of what defines a standard or expected set of features.Early disease may not look like later disease cycles on CT, which may have also increased model noise. 
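The semi-supervised pseudo-labelling loop described above under "Use of external training data" can be sketched schematically as follows; train_model and predict are hypothetical placeholders standing in for a full segmentation pipeline such as nnU-Net, not real APIs.

```python
def pseudo_label_training(labeled, unlabeled, train_model, predict):
    """labeled:   list of (image, mask) pairs
    unlabeled: list of images without annotations
    train_model(pairs) -> model and predict(model, image) -> mask are
    placeholder callables for an actual segmentation pipeline."""
    teacher = train_model(labeled)                                  # 1) supervised training
    pseudo = [(img, predict(teacher, img)) for img in unlabeled]    # 2) pseudo labels
    student = train_model(labeled + pseudo)                         # 3) retrain on the union
    return student
```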
Potential for clinical use Segmentation and classification models have been postulated to impact diagnosis in outbreak settings with delayed or unavailable PCR, however the point of care classification of COVID-19 versus other pneumonia such as influenzae, could prove of some value during flu season in specific outbreak settings as an epidemiologic tool or as a red flag for patient isolation at the scanner, by early identification, thus expediting or prioritizing interpretation using more conventional radiologist review and verification.AI models have also been proposed to assist in triage or selection of resource-limited therapeutics or critical care, prognostication or prediction of outcomes, or as one data element of a multi-modal model combining clinical, laboratory and imaging data.Standardized response criteria for clinical trials can provide a level "playing field", thus uniformly defining effects of medical and other countermeasures, or specific scenarios for patient-specific therapies.Specific phenotypes may respond to certain therapies, for example.Imaging AI could thus play a role in determining the optimal disease phase for steroid administration or monoclonal antibodies, or even characterize the presence of different disease manifestations according to variant or underlying comorbidity, although many of these clinical or research utilities are quite speculative.AI models in COVID-19 have been justifiably criticized for a lack of generalize-ability, lack of clinical testing and validation, impracticality of model design, "me-too" models and studies, and easy replaceability of functionality with standard clinical tools. Potential clinical impact has yet to match the excitement from the data science and computational community nor realize the promise at the outset of the pandemic.Federated learning and open-source tools and modeling may help address this, especially for specific research questions for clinical trials or radiologist-sparse settings. Limitations The challenge organizers aimed to create a fair and robust evaluation platform for (semi-)automatic AI algorithms.This was a timely effort completed with limited resources, thus several factors could potentially be improved in retrospect.For example, 295 annotated CT images from two different data sources were used in the challenge, which may be suboptimal data quantity for training deep learning algorithms, as performance metrics improve with size of datasets.However, the challenge set a benchmark for the development and evaluation of AI methods to segment lung lesions in COVID-19, the first of its kind to our knowledge, which was reflected by the large number of participants.It is advisable to add more data in future challenges, even if the data are non-annotated as the results of this challenge indicated. Another limitation may be the data annotation.Each case was annotated by one radiologist who rectified the prediction from a publicly available COVID lesion segmentation AI model 66 .Although these initial predictions may be considered as a suggestion from an expert, which is a typical workflow for many AI data annotation solutions, a second verification from another human expert would likely further improve the annotation quality. 
Finally, the statistical consensus ranking algorithm over multiple tasks, although it overcomes the limitations of ranking based on single evaluation metrics, is computed only at the image level. The ranking does not provide a measurement of the algorithms at the lesion level and thus does not consider each lesion's clinical relevance. Such information, which was not available in our data, could be important for clinical diagnosis and tracking of disease progress. It could also provide a more granular interpretation of the strengths and weaknesses of each algorithm, and guidance on how to improve them.

Conclusion

The COVID-19 Lung CT Lesion Segmentation Challenge - 2020 provided the platform to develop and evaluate AI algorithms for the detection and quantification of lung lesions from CT images. AI models help in the visualization and measurement of COVID-specific lesions in the lungs of infected patients, potentially facilitating more timely and patient-specific medical interventions.

Over one thousand teams registered to participate in the challenge, reflecting the engagement of the global scientific community to combat COVID-19. The AI models could be rapidly trained and showed good performance that was comparable to expert clinicians. However, robustness to "unseen" data decreased in the testing phase, indicating that larger and more diverse data may be beneficial for training. A more granular interpretation of the strengths and weaknesses of each algorithm might highlight pathways towards a future where AI and deep learning help standardize, quantify, and assess disease response, select patients or therapies, or predict outcomes. But first steps come first: the scientific community must build multi-disciplinary teams and develop new tools and methodology, learning to walk before we run. As more AI applications are introduced in the biomedical space, it is essential to adequately validate and compare the functionality of these applications through challenges such as the one proposed in this paper.

Rank 1: "Semi-supervised Method for COVID-19 Lung CT Lesion Segmentation" Team: Shishuai Hu, Jianpeng Zhang and Yong Xia Affiliation: Northwestern Polytechnical University, China Abstract: We noticed that the dataset provided in this challenge came from the TCIA database. Although the data in the TCIA database are not labeled for the COVID-19 Lung CT Lesion Segmentation task, they can be used as unlabeled data to improve the generalization ability of the segmentation model. To this end, we developed a simple but effective semi-supervised approach to utilize abundant unlabeled infected CT images. Specifically, we employ nnU-Net as the backbone of the segmentation network and first train it using the labeled data. Next, we utilize the trained segmentation model to generate pseudo lesion masks for both the labeled and unlabeled infected CT images. Finally, a segmentation network can be trained in a fully supervised manner by feeding it the data with the generated pseudo labels. We validated our method on the COVID-19 Lung CT Lesion Segmentation Challenge. Compared with the vanilla fully-supervised segmentation network, our approach can improve the Dice Similarity Coefficient by 4.27% (from 72.38% to 76.65%) on the training set (5-fold cross-validation).

Rank 2: "nnU-Net for Covid Segmentation" Team: Fabian Isensee, Peter M. Full, Michael Götz, Tobias Norajitra, Klaus H.
Maier-Hein Affiliation: Division of Medical Image Computing, German Cancer Research Center, Germany Abstract: nnU-Net is a robust out-of-the-box segmentation tool that automatically configures itself for each dataset it is applied to.We use it as a framework to implement five 3D U-Net configurations: 1) a low resolution residual U-Net with extensive data augmentation and batch normalization (BN) 2) a high resolution U-Net with extensive data augmentation and instance normalization (IN) 3) a high resolution residual U-Net 4) a high resolution plain U-Net with extensive data augmentation and IN and 5) a high resolution plain U-Net with extensive data augmentation and BN.High resolution U-Nets have a patch size of 28x256x256 voxels and operate on data resampled to a common voxel spacing of 5x0.74x0.74mm.The low resolution U-Net operates on 5x1.14x1.14mmwith a patch size of 40x224x192.Each configuration is trained as a 5-fold cross-validation.Additionally, 5 random 80:20 data splits are trained for each configuration.We use the standard nnU-Net hyperparameters for training.The configurations listed above were selected based on their cross-validation performance on the training set.We should note that none of these substantially outperformed the nnU-Net baseline.The best performing model was 2) with an average Dice score of 75.43 vs 74.41 for the 3d_fullres baseline.The 10 models from the 5 configurations are all ensembled for the test set prediction (50 models) through softmax averaging.No post-processing is applied.We only use the data provided by the challenge. Rank 3: "Automated Ensemble Modeling for COVID-19 CT Lesion Segmentation" Team: Claire Tang Affiliation: Lynbrook High School, USA Abstract: We developed an automated U-Net model training and optimization pipeline.Our pipeline includes the automated data preprocessing, automated U-Net model training with various data inputs and various loss functions, as well as the automated best combination for ensemble modeling.For data preprocessing, we create both 2D and 3D images.For 2D, we construct each CT slice as training data.For 3D, we construct both low-resolution images via downsampling and full resolution images.Then, the whole training data is split into 5-fold training sets.For U-Net model training, we automatically train the following models: 2D U-Net using 2D images, 3D U-Net using both low resolution and full-resolution 3D images, 3D cascade U-Net which is first trained low-resolution U-Net on low-resolution 3D images and then uses its prediction to further train a full-resolution U-Net.For each U-Net model, we use the following three loss functions: DiceCE loss which combines region-based soft Dice loss and distributional-based cross-entropy loss, DiceTopK loss which combines soft Dice loss and TopK loss, DiceFocal loss which combines soft Dice loss and Focal loss.For ensemble modeling, we automatically evaluate the combination of our trained models by considering the combination of 2 to 4 models.The best model combination is then selected to test on validation and testing dataset.Our results show the best Dice Coefficient via cross-validation results on the training set is 0.7288.Our submitted validation results achieve Dice Coefficient 0.7363. 
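The compound losses named above (DiceCE, DiceTopK, DiceFocal) combine a region-based soft Dice term with a distribution-based term. A minimal NumPy sketch of the DiceCE variant is given below; real implementations, such as the Dice plus cross-entropy losses used by nnU-Net and provided by frameworks like MONAI, operate on logits inside the training loop, so this is only an illustration of the arithmetic.

```python
import numpy as np

def dice_ce_loss(probs, target, eps=1e-6):
    """probs:  predicted foreground probabilities in [0, 1], any shape.
    target: binary ground-truth mask of the same shape.
    Returns soft Dice loss plus binary cross-entropy (the 'DiceCE' combination)."""
    probs = np.clip(probs, eps, 1 - eps)
    inter = (probs * target).sum()
    soft_dice = 1 - (2 * inter + eps) / (probs.sum() + target.sum() + eps)
    bce = -(target * np.log(probs) + (1 - target) * np.log(1 - probs)).mean()
    return soft_dice + bce

rng = np.random.default_rng(0)
target = (rng.random((8, 32, 32)) > 0.8).astype(float)
probs = np.clip(target * 0.8 + rng.random(target.shape) * 0.2, 0, 1)
print(round(dice_ce_loss(probs, target), 4))
```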
Rank 4: "COVID-19-20 Lesion Segmentation Based on nnU-Net" Team: Qinji Yu, Qikai Li, Kang Dang Affiliation: Shanghai Jiao Tong University, China Abstract: In the COVID-19 Lung CT Lesion Segmentation Challenge, we use nnU-Net which refers to a robust and self-adapting framework for medical image segmentation automatically.In view of the 3D CT data type in the challenge, we choose 3D U-Net to serve as the network architecture.Limited by the amount of available GPU memory, we try to train this architecture on 3D CT patches instead of the optimal whole CT scans.Firstly, we do preprocessing to all the training CT scans including cropping, resampling and normalization.After preprocessing, we divide the total 200 training CT scans into 5 folds randomly to perform 5-fold cross-validation on the training dataset and all models will be trained from scratch.The following augmentation techniques are applied on the fly during training: random rotations, random scaling, random elastic deformations, gamma correction augmentation and mirroring.We trained our networks with a noise-robust Dice loss for 400 epochs.During the inference stage, all inference is done patch-based.For the test cases we use the five networks obtained from our training set cross-validation as an ensemble to further increase the robustness of our models. Rank 5: "Leveraging state-of-the-art architectures by enriching training information -a case study" Team: Jan Sölter (1), Daniele Proverbio (1), Mehri Baniasadi (1), Matias Nicolas Bossa (1), Vanja Vlasov (1), Beatriz Garcia Santa Cruz (2,1), Andreas Husch (1) Affiliation: (1) Univ. of Luxembourg, Luxembourg Centre for Systems Biomedicine, Belvaux, Luxembourg, (2) Centre Hospitalier de Luxembourg, National Dept. of Neurosurgery, Luxembourg City, Luxembourg Abstract: Our working hypothesis is that key factors in COVID-19 imaging are the available imaging data and their label noise and confounders, rather than network architectures per se.Thus, we applied existing state-of-the-art convolution neural network frameworks based on the U-Net architecture, namely nnU-Net [3], and focused on leveraging the available training data.We did not apply any pre-training nor modified the network architecture.First, we enriched training information by generating two additional labels for lung and body area.Lung labels were created with a public available lung segmentation network and weak body labels were generated by thresholding.Subsequently, we trained three different multi-class networks: 2-label (original background and lesion labels), 3-label (additional lung label) and 4-label (additional lung and body label).The 3-label obtained the best single network performance in internal cross-validation (Dice-Score 0.756) and on the leaderboard (Dice-Score 0.755, Haussdorff95-Score 57.5).To improve robustness, we created a weighted ensemble of all three models, with calibrated weights to optimise the ranking in Dice-Score.This ensemble achieved a slight performance gain in internal cross-validation (Dice-Score 0.760).On the validation set leaderboard, it improved our Dice-Score to 0.768 and Haussdorff95-Score to 54.8.It ranked 3rd in phase I according to mean Dice-Score.Adding unlabelled data from the public TCIA dataset in a student-teacher manner significantly improved our internal validation score (Dice-Score of 0.770).However, we noticed partial overlap between our additional training data (although not human-labelled) and final test data and therefore submitted the ensemble without additional data, 
to yield realistic assessments. Rank 6: "Ensembling 2D and 3D nnU-Net for fully-automated COVID-19-20 lesion segmentation" Team: Tong Zheng, Luyang Zhang, Masahiro Oda, Kensaku Mori Affiliation: Nagoya University, Japan Abstract: Chest CT image processing for COVID-19 cases is becoming a big topic in the medical imaging field.Development and implementation of accurate COVID-19 CT image processing are fascinating challenges.In this COVID-19-20 lesion segmentation challenge, we used U-Net architecture as a baseline segmentation framework.The nnU-Net, which is based on U-Net is an image segmentation framework that automatically adapts its architectures to a given image geometry.We trained 2D and 3D low-resolution nnU-Net on the training dataset (199 cases).Patch size for training 2D nnU-Net was 512×512 pixels (same as the resolution of each axial slice).Patch size for training 3D nnU-Net was 28×256×256 voxels (downsample each axial slice to 1/2 scale).The 2D nnU-Net was trained for 1,000 epochs, and 3D nnU-Net was trained for 200 epochs.We used the Dice loss and the cross-entropy loss in the training process.We merge 2D and 3D low-resolution nnU-Net's outputs in the inference process.The trained 2D and 3D low-resolution nnU-Nets take the CT images as inputs.We obtain prediction results from softmax layers as outputs from 2D nnU-Net (result a) and 3D low-resolution nnU-Net (result b).Prediction results are the same size as the input image (512×512 pixels each slice).We calculate the mean of prediction results I = (a+b) / 2 for evaluation.Then we assign segmentation labels based on the prediction result I at each voxel.If the intensity of a specific voxel in I is larger than 0.5, we assign the foreground label (lesion) to such a pixel.At last, we also removed small connected components from the output.In the experimental results, we obtained a mean Dice coefficient of 0.7456 on the training dataset (2D nnU-Net, five-fold evaluation) and 0.6213 on the validation dataset (2D nnU-Net).On the test dataset (2D nnU-Net + low-resolution 3D nnU-Net), Dice coefficient score was 0.6392, and the mean Hausdorff95-Score was 118.6340. Rank 7: "Semi-3D CNN with ImageNet pretraining for segmentation of Covid lesions on CT" Team: Vitali Liauchuk Affiliation: United Institute of Informatics Problems (UIIP), Belarus Abstract: The utilized network starts with 2D slice-wise convolutions and performs slice-wise extraction of a pyramid of features with use of an ImageNet-pretrained VGG16 model.Then a UNet-like decoder is attached to the feature pyramid.Indecoder, the convolutions are performed with 3D kernels.Max-pooling in the encoder and upsampling in the decoder are performed slice-wise in this version, though optionally can be made 3D as well. The CNN training was performed with the use of the MONAI framework, the training parameters are mostly similar to the default ones.Data augmentation was extended by Gaussian blurring and sharpening and contrast adjustment.The probability of affine transform was increased to 0.3. The training was performed at two stages: 1) Starting with ImageNet weights in the encoder and random weights in the decoder; learning rate: 0.0001; loss: Dice + 10 * Cross-Entropy; ~150 epochs.2) Starting with the checkpoint with the highest Dice on validation subset; learning rate: 0.0001; loss: Dice; few epochs. 
The Train dataset was split into training and validation subsets ("domestic") in different ways.Depending on the split version the average Dice score on the domestic validation subset varied from 0.751 to 0.765 for the best runs.On the challenge Validation set, the best run resulted in 0.717.The Test set submission was averaged over three trained models resulting from three different custom train/validation splits and resulted in 0.646 average Dice score. Rank 8: "3D Automated Chest CT Image Segmentation of COVID-19 with nnU-Net framework" Team: Ziqi Zhou, Li Kang, Jianjun Huang Affiliation: College of Electronics and Information Engineering, Shenzhen University, China Abstract: Ground glass opacities in CT images are important indicators to diagnose COVID-19, and segmenting them is significant for diagnosis, treatment, and prognosis.In this paper, we propose a simple method based on nnU-Net training pipeline.First, we preprocess CT images using data augmentation to generate enough data for training, thus reducing the risk of overfitting.Then, we train the 3D high-resolution network and the 3D low-resolution network respectively with five-fold cross-validation.After the training, we select the best-performing network of each type for ensemble modeling.This is because we found through experiments that the ensemble of a few premium models is better than that of many mediocre models, since the former makes our model less affected by noise labels and causes false positives.Moreover, the ensemble of different resolutions can complement information at different semantic levels in images.This method uses neither pseudo-labels during the validation and test phases nor extra data.Dice coefficient of our model reaches 0.658 with all test cases, and particularly, 0.723 in the seen domain and 0.593 in the unseen domain. Rank 9: "Segmentation of COVID-19 lung lesions in CT using nnU-Net" Team: Jan Hendrik Moltz, Alessa Hering, Hannah Strohm, Felix Thielke, Volker Dicken, Bianca Lassen-Schmidt Affiliation: Fraunhofer Institute for Digital Medicine MEVIS, Germany Abstract: We used the nnU-Net framework to train a convolutional neural network for segmenting COVID-19 lung lesions in CT in a fully automatic manner.We trained only the 3D U-Net in a single fold on the training data.We achieved a mean Dice coefficient of 0.793 on the training data and 0.744 on the validation data.In medical imaging encoderdecoder, DCNNs have proved to be the best architectures for medical imaging segmentation.Here, we propose to use a coarse-to-fine 3D U-net approach.Firstly, the training images are downsampled and used to train a 3D low-resolution U-net.Next, the segmentation from the lower resolution training is used to crop the region-of-interest.The remaining volume is upsampled, and a new 3D U-net is further trained using the concatenation of high-resolution images with the coarse segmentation result.The two U-net were trained separately with the loss being defined as a combination of DICE and Cross Entropy.Finally, post-processing is applied to remove noncoherent anatomical results, namely lesions detected outside the lungs.Average Dice coefficients of 88.1% and 75.7%, average surface distances of 2.68 mm and 4.8 mm, and 95th quartile of Hausdorff distances of 9.91 mm and 78.3 mm were achieved for the training and validation dataset, respectively. Figure 1 | Figure 1 | The countries of origin of the 98 teams that completed the training, validation and test phases of the challenge. 
Figure 2 | Demographic information of the leaders of the 98 teams that completed the training, validation and test phases of the challenge. The top row shows the age group (left), student status (middle) and sex (right) of the participant. The middle row shows the highest degree (left) and job category (right). The bottom row shows the algorithm characteristics for the 98 submissions that completed the training, validation and test phases of the challenge. We report if algorithms were fully-automated (left), used external data for training (middle) or used a general pre-trained network for initialization (right).

Figure 3 | Data variability between "seen" and "unseen" sources; a) Illustration of the differences in the image resolution and voxel volume grouped by training, validation, and test sets. b) Differences in COVID-19 lesion volumes across the image data sources. c) Normalized histograms showing the CT intensity distributions of the "seen" and "unseen" data sources in Hounsfield units (HU). Note, -1000 HU corresponds to air, and 750 to cancellous bone 37.

Figure 4 | Blob plot visualization of the ranking variability via bootstrapping. An algorithm's ranking stability is shown across the different tasks, illustrating the ranking uncertainty of the algorithm in each task. For more details see 44.

Figure 5 | Top-10 algorithms' performance measured for the m = 6 tasks used in the challenge, namely the Dice coefficient (top row), Normalized Surface Dice (middle row), and Normalized Absolute Volume Error (bottom row) on the "seen" (a, c, e) and "unseen" test datasets (b, d, f), respectively. Algorithms are ranked based on their performance from left to right.

Table 1 | Top-10 finalists after statistical ranking. "Value" represents the average rank the algorithm achieved across all tasks. We also show if methods were automated, used external data for training, the input data dimensions used in the algorithms, and the network architecture.
9,100
2021-06-04T00:00:00.000
[ "Medicine", "Computer Science" ]
Dynamic amplification in a periodic structure with a transition zone subject to a moving load: Three different phenomena The study of periodic systems under the action of moving loads is of high practical importance in railway, road, and bridge engineering, among others. Even though plenty of studies focus on periodic systems, few of them are dedicated to the influence of a local inhomogeneous region, a so-called transition zone, on the dynamic response. In railway engineering, these transition zones are prone to significant degradation, leading to more maintenance requirements than the rest of the structure. This study aims to identify and investigate phenomena that arise due to the combination of periodicity and local inhomogeneity in a system acted upon by a moving load. To study such phenomena in their purest form, a one-dimensional model is formulated consisting of a constant moving load acting on an infinite string periodically supported by discrete springs and dashpots, with a finite domain in which the stiffness and damping of the supports is larger than for the rest of the infinite domain; this model is representative of a catenary system (overhead wires in railway tracks). The identified phenomena can be considered as additional constraints for the design parameters at transition zones such that dynamic amplifications are avoided. Introduction Periodic systems under the action of moving loads have attracted the attention of researchers in the past century. These problems pose academic challenges and are of high practical relevance due to their application in railway, road, and bridge engineering, among others. Despite the numerous studies on periodic systems, few investigations are dedicated to the influence of a local inhomogeneous region, a so-called transition zone, on the dynamic response. In railway engineering, significant degradation is observed in the vicinity of these transition zones, requiring more maintenance than the rest of the structure [1]. This study aims at investigating if the combination of (1) a transition zone and (2) the periodic nature of the structure can lead to undesired response amplification that is otherwise not observed in systems that neglect either (1) or (2). The study of periodic structures goes back to Newton who investigated the velocity of sound in air using a lattice of point masses; for an interesting historical background of wave propagation in periodic lumped structures, see Brillouin [2]. Rayleigh studied for the first time a continuous periodic structure [3], considering a string with a periodic and continuous variation of density along its length. When it comes to a periodic and discrete variation in continuous structures, Mead [4][5][6] was among the pioneers who studied free wave propagation in such systems. Concerning moving loads on such structures, Jezequel [7] and Cai et al. [8] were among the first to study periodically and discretely supported beams acted upon by a moving load. Vesnitskii and Metrikin [9,10] offered an extensive investigation into the behaviour of a periodically and discretely supported string acted upon by a moving load. More recently, there have been numerous studies of periodic guideways acted upon by vehicles, for example [11][12][13][14][15][16][17], and also numerous studies focusing on the vehicle instability caused by the periodic nature of the guideway (i.e. parametric instability or sometimes called parametric resonance), for example [18,19]. 
Studies using complex models containing periodic structures and transition zones are present in literature, for example [20][21][22][23]; however, these studies concentrate on predicting the transient response in the vicinity of the transition zone and do not treat specifically the influence of the discrete and periodic supports on these results. Moreover, with increased model complexity, the identification and investigation of particular/isolated phenomena becomes very difficult, if not impossible. Therefore, this paper focuses on the identification and investigation of specific phenomena that arise due to the combination of periodicity and local inhomogeneity in a system acted upon by a moving load. The local inhomogeneous region is itself periodic too, but with different parameters than the rest of the structure. To study phenomena in their purest form, a one-dimensional (1D) model is formulated consisting of a constant moving load acting on an infinite string periodically supported by discrete springs and dashpots, with a finite domain in which the stiffness and damping of the supports is larger than for the rest of the infinite domain. The novelty of this research lies in the identification and investigation of three phenomena arising from the combination of periodicity and local inhomogeneity in a system acted upon by a moving load; they have not been yet reported in the literature. The three phenomena are described in detail in Sections 4.1, 4.2, and 4.3, respectively. Although these phenomena are identified in this simple model, they are intrinsic to any periodic system with a local inhomogeneity, and thus, can help understand the potential response amplification in more complex systems that incorporate these two characteristics. Finally, as this model is representative of a catenary system (overhead wires in railway tracks), the three identified phenomena can help understand the fatigue and wear of the catenary systems close to transition zones as well as wear in the energy collector system. Model description The system studied in this paper consists of an infinite string with distributed mass per unit length r that is under tension T ; the string is discretely supported by springs with stiffness k s (x) and dashpots with damping coefficient c s (x); the generic cell is defined at x 2 ½nd, (n + 1)d where n is the cell number and d is the cell width, and the spring-dashpot element is located in the middle of the cell at x = nd with n = n + 1 2 ; this system is acted upon by a moving constant load of amplitude F 0 and velocity v. The stiffness and damping of the supports varies in space in such a way that there is a zone of length l in which the stiffness and damping of the supports is p times larger than for the rest of the infinite domain; the region in the close vicinity to the stiff zone is called the transition zone. Figure 1 presents a visual schematic of the system while its equation of motion reads where primes and overdots denote partial derivatives in space and time, respectively. The supports stiffness is a piecewise function in space and is defined as follows: For simplicity, the spatial distribution of the damping is assumed to be the same as that of the stiffness. The values for the parameters are taken from Metrikine [24]; they represent the parameters for a realistic catenary system. 
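As a small sketch of the piecewise support properties described above (supports inside the stiff zone of length l have stiffness and damping p times larger than elsewhere), the snippet below evaluates them at the support positions in the middle of each cell; the numerical values are placeholders, not the catenary parameters of Metrikine [24].

```python
def support_properties(x, k0, c0, p, l, x_start=0.0):
    """Stiffness and damping of the discrete support located at position x.
    Inside the stiff zone x_start <= x <= x_start + l they are p times larger
    than the nominal values k0, c0 used on the rest of the infinite domain."""
    factor = p if x_start <= x <= x_start + l else 1.0
    return factor * k0, factor * c0

# supports sit at the middle of each cell, x_n = (n + 1/2) d
d, k0, c0, p, l = 1.0, 1.0e5, 1.0e3, 2.0, 10.0   # illustrative values only
for n in range(-3, 14):
    x_n = (n + 0.5) * d
    k_n, c_n = support_properties(x_n, k0, c0, p, l)
    print(f"n={n:3d}  x={x_n:5.1f}  k_s={k_n:.0f}  c_s={c_n:.0f}")
```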
In the remainder of the paper, homogeneous system is used to refer to the system without transition zone while inhomogeneous system is used for the system with a transition zone, even though both systems are inherently inhomogeneous due to the discrete supports. Important to note, transition zone does not refer only to the stiff region, but to the stiff region and its vicinity, as can be seen in Figure 1. Homogeneous system In this section, we present the characteristics of the periodic system without the transition zone. The goal here is to introduce the solution method used throughout this paper, to highlight important characteristics of the periodic and continuous system, and to present the steady-state response to a moving constant load. Note that the system without damping is considered here for clarity in the derivation. To this end, we aim at writing an expression linking the states (displacement and slope) at the two boundaries of a generic cell. First, we apply the forward Fourier transform over time to the equation of motion (equation (1)), thus obtaining the following expression: where the tilde is used to denote the quantity in the Fourier domain and c = ffiffiffiffiffiffiffiffi ffi T =r p is the wave velocity in the unsupported string. We can limit our investigation to a generic cell x 2 ½nd, (n + 1)d and split this cell into two domains, withw 1 to the left of the support andw 2 to the right of it. This allows us to write the solutions in the two domains as follows: where g = v=c is the wavenumber in the unsupported string. Note thatw p is the steady-state solution of an unsupported string acted upon by a moving constant load. The interface conditions between the two domains at x = nd represent displacement continuity and shear force equilibrium, and read. Using the two interface conditions, D 1 and D 2 can be expressed in terms of C 1 and C 2 . Also, we express C 1 and C 2 in terms of the state at x = nd (i.e. displacementw n and slopew 0 n ). The resulting state inside the generic cell reads where f 1, 1 , f 1, 2 , f 2, 1 , and f 2, 2 are piecewise defined functions that relate the state inside the cell to the state at the left boundary (x = nd) whilew ML andw ML 0 are piecewise defined functions that include the influence of the particular solution on the state inside the cell; their expressions are not given for brevity, but they can easily be obtained using a symbolic mathematical software (e.g. Maple). To express the state at x = (n + 1)d in terms of the state at x = nd, one has to evaluate equation (8) at x = (n + 1)d. The resulting relation isw where matrix F is called the Floquet (or monodromy) matrix. Relation (9) is a discrete function that relates the information at the interfaces of an arbitrary cell. To investigate the propagation characteristics of the system, we momentarily focus on the system without the moving load, and it would become clear that the following expression links the state at x = nd to the one at x = 0:w To reveal specific characteristics of the periodic system, we perform an eigenvalue (a) and eigenvector (u) analysis of F. One can express the solution using the so-called Floquet wavenumbers k F = iln(a)=d and it, thus, readsw where a 1 and a 2 are unknown amplitudes that can be obtained from the two boundary conditions that need to be imposed to the system. 
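A rough numerical sketch (not the authors' code) of the cell matrix F for the undamped, periodically supported string is given below: it is assembled from the standard transfer matrix of a free string segment acting on the state (w, w') and a slope-jump condition T(w'_+ − w'_−) = k_s w at the support in the middle of the cell; the eigenvalues α then give the Floquet wavenumbers via k_F = −i ln(α)/d. The parameter values are placeholders.

```python
import numpy as np

def floquet_wavenumbers(omega, T=20e3, rho=1.0, k_s=1e5, d=1.0):
    """Eigen-analysis of the monodromy matrix F of one cell: half a free string
    segment, the point spring, and another half segment."""
    c = np.sqrt(T / rho)
    k = omega / c                                    # wavenumber of the unsupported string
    def segment(L):                                  # transfer matrix of a free segment
        return np.array([[np.cos(k * L),      np.sin(k * L) / k],
                         [-k * np.sin(k * L), np.cos(k * L)]])
    spring = np.array([[1.0, 0.0],
                       [k_s / T, 1.0]])              # slope jump at the support
    F = segment(d / 2) @ spring @ segment(d / 2)
    alphas = np.linalg.eigvals(F)
    return -1j * np.log(alphas) / d                  # Floquet wavenumbers k_F

print(np.round(floquet_wavenumbers(omega=500.0), 4))
```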
To determine the Floquet wavenumbers k_F, the dispersion equation (obtained from the eigenvalue analysis of F; it is presented by Vesnitskii and Metrikin [10] and a mathematical derivation is given in Appendix 1) needs to be solved for k_F; the dispersion equation is given as equation (12). As can be seen from equation (12), the dispersion relation for the discretely supported string is a transcendental equation. This means that there are infinitely many wavenumbers k_F linked to one specific frequency ω and the distance between subsequent wavenumbers is 2π/d. These repeating zones are called Brillouin zones [2]. For discrete systems, all dispersion information is contained in the first Brillouin zone ([−π/d, π/d]) because waves of wavenumber larger than π/d cannot propagate. As the Floquet wavenumbers are derived from a discrete function (equation (10)), they are limited to the first Brillouin zone (i.e. k_F ∈ [−π/d, π/d]). However, the system considered here is a continuous one, and waves with all wavenumbers can propagate. Consequently, the response w̃(x, ω) will contain wavenumbers from all Brillouin zones and the continuous wavenumber reads k = k_F + 2πm/d with m = ±1, ±2, . . .. A repetition occurs also with increasing ω; however here, the repetition is not exact due to the presence of ω in the denominator of the sine term. This causes the dispersion curve of the periodic system to tend to the one of the unsupported string as ω tends to infinity. The dispersion curve is presented in Figure 2. Three Brillouin zones are presented and it may seem that the repetition from one zone to the next is exact. However, the branches closest to the dispersion curve of the unsupported string give rise to waves with more energy compared to all the other branches; these branches form the primary dispersion curve. From a physical perspective, the energy propagated from cell to cell is governed by the Floquet wavenumbers k_F and no distinction can be made between different Brillouin zones; however, the propagation inside the cells is governed by the string and wavenumbers from all Brillouin zones can be present, dictated by the dispersion equation of the free string. Therefore, the propagation in the continuous and periodic system is a combination of the two, dictating that the waves with wavenumber k closest to γ receive most of the energy. This is demonstrated mathematically in Appendix 2. Also, we can observe that the discrete system exhibits multiple (actually infinitely many [2]) frequency ranges where no propagation is possible; these frequency ranges are called stop bands, while the frequency ranges in which propagation is possible are called pass bands. For comparison, in a continuously supported system the only frequency range in which wave propagation is not possible is below the cut-off frequency. Strictly speaking, stop bands (as well as pass bands) only exist if the system does not have dissipation; however, for small values of dissipation, the stop bands strongly attenuate wave propagation in these frequency ranges. Returning to the problem with the moving load, we still need to impose two boundary conditions to have a fully determined solution. Because we are searching for the steady-state response, we can make use of the so-called periodicity condition [10]. For the considered system (the load does not have an inherent frequency), the response inside each cell is exactly the same as in the previous one but shifted in time by d/v.
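To make the stop/pass-band structure concrete, the sketch below sweeps the frequency and flags the ranges in which propagation is possible. It assumes the standard closed form of the half-trace for a mid-cell point support, B(ω) = cos(γd) + k_s sin(γd)/(2Tγ) with γ = ω/c; this expression follows from the transfer-matrix sketch above and is an assumption here, not a formula quoted from the paper. Parameter values are placeholders.

import numpy as np

# Sweep the frequency and classify pass bands (|B| <= 1, propagation) and stop bands (|B| > 1).
# B(omega) is the half-trace of the Floquet matrix; the closed form below assumes a mid-cell
# point support and placeholder parameters.
T_, rho, d, k_s = 20.0e3, 1.0, 1.0, 1.0e3
c = np.sqrt(T_ / rho)

def half_trace(omega):
    g = omega / c
    return np.cos(g * d) + k_s / (2.0 * T_ * g) * np.sin(g * d)

omegas = np.linspace(0.1, 1000.0, 200000)
in_pass_band = np.abs(half_trace(omegas)) <= 1.0

# Approximate band edges: frequencies where the classification flips.
edges = omegas[1:][np.diff(in_pass_band.astype(int)) != 0]
print("approximate band edges (rad/s):", np.round(edges, 1))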
The two boundary conditions, therefore, are given by equation (13). Using equation (13), we can determine the remaining two unknown amplitudes C_1 and C_2 (their expressions can be found in Vesnitskii and Metrikin [10]). The steady-state solution in the Fourier domain is now determined. To obtain the time-domain solution, the inverse Fourier transform is performed numerically (for which to work efficiently, a small amount of damping is introduced). For a continuously supported string, the steady-state response does not exhibit any wave propagation away from the load (we only consider sub-critical velocities). For the discretely supported string, however, waves are excited from the load every time it passes a support. In the case of a single support, the load generates a continuous wave spectrum when it passes it; this phenomenon is called transition radiation [25][26][27][28][29][30]. In the periodic system, the waves generated at each support interfere (constructively for some frequencies and destructively for others) leading to a discrete frequency spectrum of the radiated waves; this phenomenon is sometimes called resonance transition radiation [26] because the constructive interference of the radiated waves leads to resonance for some system parameters. More specifically, resonance occurs when the group velocity of one generated wave is equal to the load velocity. From Figure 10, we can identify the velocities at which resonance occurs (consider only the black line). As can be seen, the system has many velocities at which resonance occurs, but some velocities lead to stronger resonance than others. Strong resonance occurs at low frequencies of the generated harmonic and at high velocities of the load [10]. To determine the frequency/wavenumbers of the waves generated by the moving load, next to the dispersion curve, we need another equation that expresses a relation between the frequency, wavenumber, and the load velocity, namely the kinematic invariant. For this system, the kinematic invariant can be determined from the following equation [10] (a mathematical derivation of the kinematic invariants is given in Appendix 1): Equation (14) shows that there are infinitely many kinematic invariants. The zeroth-order kinematic invariant is given by ω = kv that relates to a constant moving load while the higher-order kinematic invariants are given by ω = kv + 2πmv/d with m = ±1, ±2, . . ., and are related to moving harmonic loads of frequency 2πmv/d. Figure 3 presents the dispersion curve together with the kinematic invariants of the current problem. The dispersion curve is slightly different compared to the one in Figure 2 due to the presence of damping. It can be seen that there is no intersection point between the zeroth-order kinematic invariant (thick blue line) and the primary dispersion curve (thick black line) because the considered load velocity is subcritical; nonetheless, there are intersection points between higher-order components. The intersections between one of the kinematic invariants and the dispersion curve represent propagating waves emitted by the moving load in the steady state. Moreover, it is important to observe in Figure 3 that the emitted waves form a discrete frequency spectrum, as expected, and that all generated waves have frequencies in the pass bands of the system.
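A compact numerical way to find these intersection frequencies is sketched below. Substituting the invariant k = ω/v − 2πm/d into the all-Brillouin-zone form of the dispersion relation cos(kd) = B(ω) collapses every order m onto the single condition cos(ωd/v) = B(ω), whose roots inside the pass bands are the frequencies of the emitted waves. The closed form of B(ω) is the same mid-cell, point-support assumption used in the previous sketches, with placeholder parameters.

import numpy as np

# Frequencies of the waves radiated by the constant moving load: roots of
# cos(omega*d/v) = B(omega) lying in the pass bands (|B| <= 1).
T_, rho, d, k_s = 20.0e3, 1.0, 1.0, 1.0e3
v = 28.0                                  # load velocity (m/s)
c = np.sqrt(T_ / rho)

def half_trace(omega):
    g = omega / c
    return np.cos(g * d) + k_s / (2.0 * T_ * g) * np.sin(g * d)

def mismatch(omega):
    return np.cos(omega * d / v) - half_trace(omega)

omegas = np.linspace(0.1, 400.0, 400000)
vals = mismatch(omegas)
roots = omegas[1:][np.sign(vals[:-1]) != np.sign(vals[1:])]
emitted = roots[np.abs(half_trace(roots)) <= 1.0]     # keep only pass-band frequencies
print("emitted frequencies (rad/s):", np.round(emitted, 1))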
Moreover, it is clear from the wavenumber spectrum that the wave pack with frequency ω (e.g. ω = 19 or ω = 51 rad/s, depicted in Figure 3 through the dashed green lines) is composed of infinitely many discrete harmonic waves. The reason is that the harmonic wave is not an eigen-solution of the equation of motion; consequently, the eigenfunction is represented as a superposition of infinitely many harmonic waves. Some of these harmonic waves have a negative phase velocity while others have a positive one. Nonetheless, the wave pack (ω = 19 or ω = 51 rad/s) has a negative group velocity, meaning that it travels in the negative x direction. Also, the main contribution to the wave pack can be seen to come from the intersection of one of the kinematic invariants with the primary dispersion curve, as explained previously. Figure 4 presents a time-domain snapshot of the steady-state displacement field. It can be observed that in front of the load, the wave is mainly governed by one frequency-wavenumber pair while behind more pairs seem to be influential; also, the amplitude of the wave behind the load is larger than the one in front. The wave in front of the load is mainly governed by the second peak in the frequency spectrum, which is associated with a positive group velocity larger than the load speed (so it travels in front of the load; see top plots in Figure 3), while the one behind the load is governed by the first and third peaks, which are associated with negative group velocities; this explains the difference in amplitude as well as the frequency-wavenumber content of the waves. Inhomogeneous system In this section, the periodic system with a transition zone (as depicted in Figure 1) is considered. The solution is obtained using a Green's function approach; the moving load is first assumed to act inside only one cell and the response of this system is determined. To obtain the response of the system to the moving load acting on all cells, the individual solutions are superimposed. The drawback of this approach is that the load cannot act from t → −∞ since this would imply obtaining and adding infinitely many solutions. Nonetheless, if the load enters far away to the left of the transition zone and if the system has damping, the response in the transition zone should be in the steady state. (This shortcoming could be avoided by imposing the steady state as initial conditions of the system (see Fărăgău et al. [29]); this is not done here because the computational cost of the above-mentioned procedure is very low.) The solution procedure starts, as previously, by applying the Fourier transform over time to equation (1). Then, the loading obtained is only considered for one cell; the solution procedure is demonstrated for the situation in which the load is applied to the left of the stiff zone, but the same procedure needs to be followed when it acts inside the stiff zone or to the right of it. The obtained equation of motion is divided into 5 domains: (1) left of the loaded cell, (2) the loaded cell, (3) right of the loaded cell and left of the stiff zone, (4) inside the stiff zone, and (5) to the right of the stiff zone. Their solutions can be written as done in the previous section and read as equations (15)-(19), where n is the left interface of the observation cell, n_j is the left interface of the loaded cell (i.e. the excitation cell), and the overbar indicates that the quantity is associated with the stiff zone; n_a and n_b − 1 are the left interfaces of the first and last cells, respectively, in the stiff zone. The boundary conditions at infinity have already been accounted for in these solutions.
Also, the signs of the wavenumbers have been chosen as Im(k_F,1) < 0 and Im(k_F,2) > 0. To determine the unknown amplitudes, interface conditions are imposed between the domains in the form of continuity in displacements and forces. The total solution (for the moving load acting on all considered cells) becomes a sum of these contributions, where w̃_{n,n_j} = [w̃_{1,n,n_j}, w̃_{2,n,n_j}, w̃_{3,n,n_j}, w̃_{4,n,n_j}, w̃_{5,n,n_j}] is the solution for all the cells when the load is applied at n_j, N_left is the first cell on which the load acts (at t = 0) and N_right represents the last cell. N_left needs to be chosen sufficiently to the left of the transition zone such that the response is in the steady state in the transition zone. N_right can be chosen based on the maximum time of the simulation and it does not introduce any unwanted transients in the response. It must be mentioned that the domain for which the response is determined can be and, for computational efficiency, should be smaller than the domain over which the load is applied. In other words, if N_left is chosen far to the left of the transition zone, there is no need to determine the response there; we can restrict our domain of interest (i.e. observation) to the transition zone. The solution is now determined at the interfaces between cells. To determine the solution inside the cells, one simply needs to use equation (8). In the following, three phenomena are described and investigated that occur due to the combination of periodicity with the local inhomogeneity and lead to response amplification. Figure 3 shows that, in the case of a homogeneous system, the frequencies of all emitted waves lie inside the pass bands. However, once there is a change in stiffness of the supporting structure (i.e. a transition zone), the locations of the stop bands are different for the different parts of the infinite domain. Consequently, the frequencies of waves excited by the load in the soft regions can be in the stop band of the stiff zone. This causes the waves to be reflected almost completely by the stiff zone and to interfere with the wave field travelling with the load. This wave interference can lead to amplifications of the response in the transition zone. Wave-interference phenomenon For this mechanism to be pronounced, the amplitude of the waves that are in the stop band of the stiff zone should be significant. This criterion is met when the velocity is close to a resonance velocity. In Figure 10, the strongest resonance in the soft region occurs at a velocity v ≈ 26 m/s; consequently, for this investigation, a velocity slightly higher than this one is chosen (i.e. v = 28 m/s). This is done because the excited wave needs to propagate faster than the load such that it has time to reflect from the stiff region. (At resonance, the group velocity of the generated wave equals the load velocity; for a load velocity slightly larger than the resonance velocity, the generated wave of interest travels slightly faster than the load.) There are two situations which lead to amplification of the response in the transition zone. First, the forward propagating wave is reflected at the stiff region and propagates backwards, interfering with the wave field close to the load. This amplification should be observable at the left of the stiff region. Second, when the load has passed the stiff region, the backward propagating wave is reflected at the stiff zone and propagates forward, interfering with the wave field close to the load.
This amplification should be observable to the right of the stiff zone. First, we investigate the region to the left of the stiff zone. The response is evaluated at approximately 5 m to the left of x a (see Figure 1); the frequency and wavenumber spectra of the transient response are compared to the steady-state ones in Figure 5. On one hand, the second peak in the frequency spectrum, corresponding to the forward propagating wave, is amplified in the transient response; because the frequency of this wave is in the stop band of the stiff zone, the wave is reflected almost in its entirety (not completely due to damping and transmission to the right of the stiff region). Moreover, unlike the steady-state response, the wavenumber spectrum of the transient response exhibits an additional wave with wavenumber equal in magnitude but opposite in sign (i.e. opposite direction of propagation) to that of the forward propagating wave, confirming the wave reflection. On the other hand, we can see that the first peak in the frequency spectrum, corresponding to backward propagating wave, is almost completely eliminated; the fact that the response is evaluated very close to the stiff zone (to its left) implies that less time is available to generate this wave (in the stiff zone, this wave is no longer generated), which explains the lower amplitude. When looking to the right of the stiff zone, the opposite is occurring. Figure 6 shows that the first peak in the frequency spectrum is amplified while the second peak is almost completely eliminated in the transient response. A similar reasoning as above can be used to explain these observations. A general picture is obtained when looking at the time-domain response under the moving load, presented in Figure 7. The transient response is amplified significantly to the left and right of the stiff region. The response for the equivalent continuously supported system with a transition zone is also presented to show, that in that case, there is no visible amplification (due to the relatively low velocity). It is now clear that this significant amplification is caused by the periodicity of the system together with the transition zone; if any of these two characteristics are removed, the amplification vanishes. The question arises how this mechanism is affected by the length of the stiff zone. If the stiff zone has a very short length, the tunnelling effect (similar to the quantum tunnelling [31]) can occur leading to energy being tunnelled to the soft domain to the right of the stiff zone. As a short investigation, we consider the same system, but an incident wave coming from the left is used instead of the moving load. The solution to that problem (cf. equations (15)- (19)) reads where A i , A r , and A t are the amplitudes of the incident, reflected, and transmitted waves, respectively; A 1 and A 2 are the amplitudes of the waves inside the stiff zone. Equation (23) together with the continuity conditions at the interfaces of the three domains can be used to express the amplitudes of all waves in terms of the amplitude of the incident wave A i . This allows us to study the reflected and transmitted waves depending on the frequency/wavenumber of the incoming wave and on the length of the stiff zone. Figure 8 presents the coefficients jA r, t j 2 of the reflected and transmitted waves, respectively, for three lengths of the stiff zone, where the length of the stiff zone is defined as l = rd (i.e. an integer number of cells). 
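A rough numerical counterpart of this incident-wave analysis is sketched below: the state is propagated across the r stiff cells with the product of cell transfer matrices, and the result is matched to right- and left-going Floquet waves of the soft domain on either side, from which |A_r|^2 and |A_t|^2 follow. The construction (mid-cell supports, placeholder parameters, a small damping used only to identify the right-going wave) is an assumption, not the authors' implementation.

import numpy as np

# Reflection/transmission of a Floquet wave incident from the left on a stiff zone of r cells.
# Mid-cell point supports and placeholder parameters are assumed; a small damping is used
# only to pick the right-going (decaying towards +x) Floquet wave unambiguously.
T_, rho, d = 20.0e3, 1.0, 1.0
k_soft, c_soft = 1.0e3, 0.5          # soft-zone support stiffness and small damping
p, r = 2.0, 10                       # stiffness ratio and number of cells in the stiff zone
c = np.sqrt(T_ / rho)

def cell_matrix(omega, k_s, c_s):
    g = omega / c
    M = np.array([[np.cos(g * d / 2), np.sin(g * d / 2) / g],
                  [-g * np.sin(g * d / 2), np.cos(g * d / 2)]], dtype=complex)
    P = np.array([[1.0, 0.0], [(k_s + 1j * omega * c_s) / T_, 1.0]], dtype=complex)
    return M @ P @ M

def reflection_transmission(omega):
    F_soft = cell_matrix(omega, k_soft, c_soft)
    F_stiff = cell_matrix(omega, p * k_soft, p * c_soft)
    alpha, U = np.linalg.eig(F_soft)
    fwd = int(np.argmin(np.abs(alpha)))      # |alpha| < 1: decays towards +x (right-going)
    u_fwd, u_bwd = U[:, fwd], U[:, 1 - fwd]
    F_zone = np.linalg.matrix_power(F_stiff, r)
    # Matching: F_zone @ (1 * u_fwd + A_r * u_bwd) = A_t * u_fwd  (incident amplitude A_i = 1)
    A = np.column_stack((F_zone @ u_bwd, -u_fwd))
    A_r, A_t = np.linalg.solve(A, -(F_zone @ u_fwd))
    return abs(A_r) ** 2, abs(A_t) ** 2

R, Tr = reflection_transmission(omega=40.0)
print(f"|A_r|^2 = {R:.3f}, |A_t|^2 = {Tr:.3f}, sum = {R + Tr:.3f}")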
(The coefficients jA r, t j 2 are presented and not the amplitudes themselves because jA r j 2 + jA t j 2 = jA i j 2 in the absence of damping [31]). Both coefficients are very small for frequencies below the cut-off frequency of the soft zone; in this frequency range, the incoming wave is evanescent, leading to no energy input. For frequencies in the pass band of both domains, the coefficient of the transmitted wave is dominant while that of the reflected one is low. In the frequency range between the cut-off frequencies of the two domains, the outcome depends highly on the length of the stiff zone. For a large r, the coefficient of the transmitted wave is zero while the reflected one is almost 1 (it is not exactly 1 due to the presence of damping). For a smaller r, energy is tunnelled to the right side and the transmission increases while the reflection decreases. The energy tunnelling to the right domain can be observed in the top panel of Figure 9. Returning to the problem with the moving load, the frequencies of the two dominant waves excited by the moving load (in the scenario studied previously; see Figure 5) are both in between the cut-off frequencies of the two domains. For a large r, both waves experience almost full reflection, and thus, the significant amplification observed previously. For r = 1, the forward propagating wave will not any more fully reflect, as can be inferred from the right panel of Figure 8 (the forward propagating wave is indicated through the top green dashed line), while the backward propagating wave is still almost fully reflected (the backward propagating wave is indicated through the bottom green dashed line). This scenario should lead to a smaller amplification to the left of the stiff zone and the same amplification to the right of the stiff zone. This is confirmed in the bottom panel of Figure 9, where the amplification to the left of the stiff zone is slightly smaller for r = 1 than for r = 10, while the amplification to the right of the stiff zone is almost the same (there is a shift in time and space due to the different lengths of the stiff zone, so one needs to compare peaks at 115 m (orange) and 205 m (green)). Nonetheless, the amplification to the left of the stiff zone can be clearly seen even for r = 1. It is important to mention that the wave-interference phenomenon is not sensitive to the stiffness difference between the stiff and soft domain, provided that the generated waves are in the pass band of the stiff zone. Simulations have been performed also for p = 5 instead of p = 2 and the amplification turned out to be very similar in magnitude. Also, it must be mentioned that the wave-interference mechanism occurs also in the continuously supported system subject to a moving constant load, leading to amplification of the response as shown for a beam by Fa˘ra˘ga˘u et al. [28]; however, for a continuously supported system subject to a moving constant load, this mechanism is influential only for velocities close to the critical velocity while here it can lead to a significant response amplification for much lower velocities of the load. Passing from non-resonance velocity to a resonance velocity As discussed in Section 3, there are several load velocities that can lead to resonance in the periodic system. When designing the catenary system, its properties should be chosen such that these resonance velocities are far away from operational velocities of trains. 
However, even if the operational velocity is far from resonance velocities outside transition zones, it can be close to a resonance velocity inside the stiff region of the transition zone if this is not designed having this criterion in mind. In this section, the situation is investigated in which the load passes from non-resonance velocity in the soft region to a resonance velocity inside the stiff region. Note that the velocity of the load is kept constant and just the velocity at which resonance occurs changes due to a change of the support stiffness. Figure 10 presents the resonance velocities for the soft and stiff regions (here the stiffness ratio is p = 2). For a load velocity of v = 33:5 m/s, the response is non-resonant in the soft region while in the stiff region it is expected to get amplified due to the occurrence of resonance. The fact that this velocity causes resonance in the stiff region can also be seen in the dispersion curve presented in Figure 11; one kinematic invariant (the first-order one) is tangential to the dispersion curve of the stiff region meaning that the group velocity of the generated wave is equal to the load velocity, which leads to resonance. The amplification of the response in the stiff zone can be observed in both the frequency spectrum and wavenumber spectrum. Moreover, the frequency and wavenumber spectra exhibit additional large peaks at the frequency and wavenumber, respectively, corresponding to the wave generated inside the stiff zone. Figure 12 presents the displacement under the moving load. The amplification in the stiff zone is observed clearly with a drastic increase compared to the response in the soft region. The increase in response requires a few cell lengths to develop, characteristic to resonance; for short stiff zones, resonance might not have time to develop, but for longer ones, strong response amplification can develop. It is important to mention that the phenomenon of passing to resonance velocity has an equivalent in the continuously supported system subject to a moving constant load, but there are important distinctions. First, in the continuous system, resonance can only occur at the critical velocity (the boundary between sub-critical and super-critical velocities) that usually is much larger than the operational train velocities. For example, the continuous system equivalent to the periodic one considered in this section has a critical velocity of around 115 m/s while the velocity that leads to the considered resonance in the stiff region is 33.5 m/s. Second, to go from sub-critical to critical velocity, the stiffness of the supporting structure needs to decrease (if all other parameters are kept constant); this is much less common in practice because transition zones are usually regions with stiffer structures. Wave trapping inside the stiff zone The stiff zone has a finite length l, and consequently, the incoming waves generated by the moving load in the soft region could get trapped inside. Wave trapping could lead to response amplification inside the stiff zone even when the moving load is relatively far away. To mathematically derive the conditions for wave trapping, a system without damping is used, while in the graphical results a small amount of damping is present; however, the change in the wave-trapping conditions caused by a small amount of damping is negligible. 
The amount of damping imposed in this subsection is one quarter of that used in the rest of the paper to be able to present this mechanism in its purest form; for larger amounts of damping, although the mechanism can still be seen, it is less pronounced. An approximate condition for wave trapping is that an integer number q of half-wavelengths of the wave inside the stiff zone fits exactly in the length l; mathematically, qλ/2 = l, where λ is the wavelength. This would only be exact if the stiff zone were simply supported at both ends, which is not the case for the considered system. An exact condition for the considered system can be derived by using the phase-closure principle (see Mead [32]) to determine the modes of vibration of the stiff zone. However, this exceeds the purpose of the paper and the approximate condition is sufficient to observe the mechanism. From relation (24), the wavenumber k_tr of the wave to be trapped is determined as k_tr = qπ/l. In order to find the corresponding frequency, the wavenumber in the first Brillouin zone is chosen because the waves with most energy generated by the moving load are located in the first pass band (the higher harmonics have significantly less energy) and the first Brillouin zone. The frequency ω_tr corresponding to k_tr can be found by numerically solving the dispersion equation for ω_tr. A wave with wavenumber k_tr given by equation (25) and frequency ω_tr would be trapped inside the stiff zone. The wavenumber k_tr,2 of the generated wave in the soft region (the frequency remains the same, ω_tr) follows from the dispersion relation of the soft region. Clearly, the wave with wavenumber k_tr,2 and frequency ω_tr generated in the soft region will give rise to a wave in the stiff region that is trapped. One can easily check this by considering the system with a harmonic load (acting at a location in the open track) instead of a moving one, in which case the wave trapping can be clearly observed (this result is not presented here for conciseness). To observe the same behaviour for the moving load, one first needs to determine the velocity of the load at which this wave (wavenumber k_tr,2 and frequency ω_tr) is generated. To this end, we substitute k = k_tr,2 and ω = ω_tr in the kinematic invariant, equation (14). Because sub-critical velocities are considered, the zeroth-order kinematic invariant cannot intersect the primary dispersion curve; therefore, we look at the first-order kinematic invariant, and the velocity of the load corresponding to this situation follows directly. The frequency and wavenumber spectra evaluated at a position inside the stiff zone are presented in Figure 13. The frequency spectrum of the transient response exhibits a large peak at ω_tr corresponding to the trapped wave. Moreover, the wavenumber spectrum shows that the wave in the soft region with wavenumber k_tr,2 (represented by the black line) is transformed in the stiff region into two peaks at k_tr and −k_tr that represent the trapped (standing) wave inside the stiff zone. The two peaks are not equal in magnitude as would be the case for a true standing wave. One reason is that, as the source is on the left of the stiff region, the wave travelling in the negative x direction (which is a reflection of the wave travelling in the positive x direction) is damped more; another reason is that energy is transmitted to the right of the stiff domain. Figure 14 presents a snapshot of the displacement field where the trapped wave can be clearly observed. The amplification is not drastic, but it is clear.
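The chain of steps just described can be followed numerically, as in the sketch below: pick q, form k_tr = qπ/l, solve the stiff-zone dispersion relation for ω_tr, obtain k_tr,2 from the soft-zone dispersion relation, and read the load velocity off the first-order kinematic invariant. The closed form used for the dispersion relation (mid-cell point supports), the parameter values, and the choice q = 4 are all assumptions made for illustration.

import numpy as np
from scipy.optimize import brentq

# Wave-trapping estimate: trapped wavenumber from q half-wavelengths in the stiff zone,
# its frequency from the stiff-zone dispersion relation, the soft-zone wavenumber at that
# frequency, and the load velocity from the first-order kinematic invariant.
T_, rho, d, k_soft, p = 20.0e3, 1.0, 1.0, 1.0e3, 2.0
r = 10                         # number of cells in the stiff zone
l = r * d                      # length of the stiff zone
q = 4                          # assumed number of half-wavelengths fitting in the stiff zone
c = np.sqrt(T_ / rho)

def half_trace(omega, k_s):
    g = omega / c
    return np.cos(g * d) + k_s / (2.0 * T_ * g) * np.sin(g * d)

k_tr = q * np.pi / l                                    # trapped wavenumber (first Brillouin zone)
# Frequency of the trapped wave: solve cos(k_tr * d) = B(omega) in the first pass band of the stiff zone.
omega_tr = brentq(lambda w: np.cos(k_tr * d) - half_trace(w, p * k_soft), 50.0, 250.0)
# Wavenumber generated in the soft region at the same frequency (first Brillouin zone).
k_tr2 = np.arccos(np.clip(half_trace(omega_tr, k_soft), -1.0, 1.0)) / d
# Load velocity from the first-order kinematic invariant omega = k*v + 2*pi*v/d.
v_tr = omega_tr / (k_tr2 + 2.0 * np.pi / d)
print(f"omega_tr = {omega_tr:.1f} rad/s, k_tr2 = {k_tr2:.2f} rad/m, v_tr = {v_tr:.1f} m/s")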
In Figure 14 it can also be seen that energy is transmitted to the right domain, meaning that even in the absence of damping, the amplification in the stiff domain will not be infinite. Moreover, it is important to realize that only the wave corresponding to the second peak (in the frequency domain) is trapped; all other waves can pass through. Finally, the amplification disappears for slightly different velocities, as can be seen in Figure 15, or different lengths of the stiff zone (provided that it is not another multiple of the wavelength). Relation to the continuously supported system with a harmonic moving load An easier problem to solve that could also capture the three phenomena discussed in Section 4 is the continuously supported string subject to a moving harmonic load. The solution of this problem can be obtained by applying the Fourier transform over time to the governing equations and solving the resulting ordinary differential equation in the Fourier-space domain. This has been done in, for example, [28,33] for a moving constant load and can easily be extended to a moving harmonic load. The frequency of the harmonic load can be chosen such that the first two peaks in the frequency spectrum (e.g. Figure 3) are accurately represented; by choosing Ω = 2πv/d, the kinematic invariant in the continuously supported system coincides with the first-order kinematic invariant from the periodic system. Moreover, for the responses of the two systems to match, the moving load must have two components: a constant one (zero frequency) and a harmonic one; this way, the response is not symmetric with respect to the zero-displacement line, but is shifted downwards as seen in Figure 7. Thus, the expression for the moving harmonic load reads −F_0(p_1 + p_2 cos(Ωt))δ(x − vt), where p_1 and p_2 need to be tuned such that the overall steady-state displacement field matches the one of the periodic system. Figure 16 presents the comparison of the periodic and continuous systems. It can be seen that the frequency spectra of the two systems agree well for the first two peaks, and the continuous system does not exhibit more peaks than these two. One can introduce more peaks in the response of the continuous system by imposing multiple harmonic components on the moving load (i.e. p_3 cos(2Ωt), etc.). The bottom panel in Figure 16 shows that the time-domain displacement fields also agree well. For this set of parameters, p_1 = 1 and p_2 = 0.1 lead to the best fit overall; it must be emphasized that these tuning parameters change with the system properties (e.g. load velocity, support spacing, support stiffness, etc.), and they cannot be determined without the response of the periodic system. First, when it comes to the wave-interference mechanism, Figure 17 shows that the transient response of the continuous system exhibits qualitatively the same behaviour as the periodic one. However, the response in the stiff region differs considerably between the two systems because parameters p_1 and p_2 have been chosen such that the responses match in the soft region, not in the stiff one. This is one drawback of the equivalent model if one is interested in the response inside the stiff region.
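The equivalent load described above is simple to set up; the sketch below writes it out explicitly, with the support-passing frequency Ω = 2πv/d and the tuning factors p_1 and p_2 (the values 1 and 0.1 are those reported in the text as the best fit for the authors' parameter set; everything else is a placeholder).

import numpy as np

# Equivalent moving load for the continuously supported string: a constant component plus
# a harmonic component at the support-passing frequency. Placeholder parameter values.
F0, v, d = 1.0e3, 28.0, 1.0
Omega = 2.0 * np.pi * v / d        # support-passing frequency (rad/s)
p1, p2 = 1.0, 0.1                  # tuning factors (best-fit values reported in the text)

def moving_load_amplitude(t):
    """Amplitude of the concentrated load acting at position x = v * t."""
    return -F0 * (p1 + p2 * np.cos(Omega * t))

print(Omega, moving_load_amplitude(0.0))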
Also, the waveinterference mechanism can be reproduced in the continuous system only if the waves (in the periodic system) with most energy are located in the first stop band of the stiff region; if these waves were in the second stop band, then they would be able to propagate through the stiff zone of the continuous system because, unlike the periodic one, it only has one stop band. When it comes to the tunnelling effect, this can also occur in the continuously supported system and will lead to a decrease in the response amplification caused by the wave-interference phenomenon. Second, for the wave-trapping phenomenon, Figure 18 shows that the continuous system exhibits a similar behaviour as the periodic one, and the agreement between the two is very good. If one wants to investigate this mechanism in detail, the continuous system can be an option. The fit between the transient responses can be further improved by changing the scaling factors p 1 and p 2 , but this would require to have the transient response of the periodic system in advance, defeating the purpose of using the continuous system. Finally, for passing from non-resonance velocity to resonance velocity, the continuous system cannot be used at all. The continuous system has one resonance velocity, the critical velocity; the value of that critical velocity is much higher than the one leading to resonance in Section 4.2. Consequently, this phenomenon can only be investigated in the periodic system. Conclusion This paper investigated three phenomena that can lead to response amplification in a continuous and periodic system with a local inhomogeneity (i.e. a transition zone) described by an increase in support stiffness. These phenomena are investigated using an infinite string periodically supported by discrete springs and dashpots, acted upon by a moving constant load; this model is representative of a catenary system in railway tracks. Nonetheless, the phenomena described in this paper can occur also in other continuous and periodic systems, such as a beam and membrane. The phenomena are the product of a periodic system and a local inhomogeneity, and if one of these characteristics is omitted, the phenomena will not occur. The first phenomenon is the wave interference that can lead to response amplification to the left and to the right of the stiff region. The waves generated by the moving load outside the transition zone are reflected almost entirely by the stiff region if one of the frequencies of the waves are located in a stop band of the stiff region. This almost complete reflection leads to wave interference close to the moving load, which in turn leads to response amplification. Results show that this mechanism is of importance when the velocity of the load is slightly higher than one of the resonance velocities in the soft regions. For small lengths of the stiff zone energy can be tunnelled to the soft domain causing a reduction in the reflection coefficient which in turn leads to a reduced amplification. The second phenomenon is the passing from non-resonance velocity in the soft region to a resonance velocity in the stiff region. This causes resonance to occur inside the stiff region leading to a drastic amplification of the response mainly inside the stiff region. Results show that this mechanism leads to the biggest response amplification between the three phenomena. The third phenomenon is the wave trapping inside the stiff region. 
For specific values of the wavenumber and frequency of the waves generated in the soft region, waves can get trapped inside the stiff zone, potentially leading to response amplification around and inside the stiff zone. Results show that this mechanism leads to amplification inside the stiff region even when the moving load is relatively far away from it. However, for reasonable values of damping, this mechanism is not as pronounced as the other two. The possibility of capturing these phenomena using a simpler model, a continuously supported string acted upon by a moving harmonic load, was also studied. The wave-interference and wave-trapping phenomena observed in the periodic system can be seen in the continuous system too, while the resonance phenomenon cannot be replicated using the continuous model. To obtain similar results for the continuous system, the static and harmonic components need to be tuned to the steady-state response of the periodic system. Once this tuning is satisfactory, the transient responses match quite well and the two phenomena are qualitatively well captured. However, the tuning parameters, in principle, are not known beforehand and need to be updated for each change of the system properties, which makes it difficult to use the continuous system in practical situations. Finally, the amplification of stresses and displacements in the transition zones can lead to numerous fatigue and wear problems in the catenary system and in the energy collector of the train. Moreover, accounting for the low (mean) contact force between the wires and the carbon strip, the dynamic response of the system can also lead to force fluctuations that are large enough to cause arcing (which occurs when the contact force is too low) or loss of contact. The three investigated phenomena can be considered as additional constraints for the design parameters at transition zones such that amplifications are avoided, especially because all three phenomena occur in the range of operational train speeds. Appendix 1 Here we present a detailed derivation of the dispersion equation (equation (12)) and of the kinematic invariant (equation (14)). For clarity of the derivations, a system without damping is considered. First, the dispersion curve is derived. The eigenvalues α_1,2 are obtained from an eigenvalue analysis of the Floquet matrix. The Floquet matrix is obtained by evaluating the right-hand side of equation (8) (excluding the particular solutions) at x = (n + 1)d. The determinant of the Floquet matrix is 1, and, thus, the eigenvalues of the Floquet matrix read α_1,2 = B ± √(B² − 1), where B is half the trace of F. The relation between the Floquet wavenumber k_F and the eigenvalue α (we restrict the following derivation to one eigenvalue; the derivation is analogous for the other one) is given by α = e^(−i k_F d), equivalently k_F = i ln(α)/d. Depending on the frequency (B is frequency dependent), there are three possible scenarios. The first scenario is that B² > 1, meaning that α is real-valued and positive. From equation (31), this leads to a purely imaginary wavenumber k_F; the corresponding frequency ranges represent the stop bands in the dispersion curve. The second scenario is when B² = 1, resulting in repeated eigenvalues. These locations correspond to the transition points between the stop and pass bands. For the third scenario, B² < 1, resulting in a complex-valued eigenvalue α corresponding to the pass bands in the dispersion curve; in this scenario, waves are propagating without attenuation, meaning that k_F is real-valued.
Consequently, equation (31) can be rewritten as equation (32). This leads to the following set of conditions for the Floquet wavenumber, equation (33). If the first condition in equation (33) is satisfied, then the second one is also satisfied. Any of the two conditions can be selected as the dispersion equation (we selected the first one due to its concise form). Second, the kinematic invariants are derived. The kinematic invariants ensure phase equality of the emitted harmonic waves and the load at the contact point [10]. The phase of a harmonic wave with frequency ω and wavenumber k is given in equation (34). The phase of a harmonic wave is constant for an observer moving together with the wave, resulting in the following relation between frequency and wavenumber: dx/dt = ω/k. The change of position with time (i.e. dx/dt) of the moving load is its velocity v, and since the kinematic invariant ensures phase equality of the emitted harmonic waves and the load, we have ω = kv. This is the kinematic invariant for a homogeneous system (without periodic supports) subject to a moving constant load. For the system studied in this paper (continuous system with discrete and periodic supports), a harmonic wave (with phase given by equation (34)) is not a solution of the equation of motion; the equation of motion allows for solutions in the shape of summations of harmonic waves, with the phase of the m-th harmonic involving the shifted wavenumber k + 2πm/d, where m = ±1, ±2, . . .. In this case, infinitely many kinematic invariants are necessary to ensure phase equality between the moving load and the infinitely many generated waves. The expression of the kinematic invariants reads ω = kv + 2πmv/d. This expression is analogous to equation (14).
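For reference, the appendix relations reconstructed above can be collected compactly as follows; this is a reconstruction based on the definitions in the text (half-trace B, unit-determinant Floquet matrix, phase equality at the contact point), and the sign conventions are assumptions rather than quotations from the original.

\[
\alpha_{1,2} = B \pm \sqrt{B^2 - 1}, \qquad \alpha_1 \alpha_2 = \det \mathbf{F} = 1, \qquad \alpha = e^{-\mathrm{i} k_F d} \;\Leftrightarrow\; k_F = \frac{\mathrm{i}\,\ln \alpha}{d},
\]
\[
|B| \le 1:\quad \cos(k_F d) = B(\omega) \quad \text{(pass bands; dispersion equation)},
\]
\[
\Phi_0 = \omega t - k x \;\Rightarrow\; \omega = k v, \qquad
\Phi_m = \omega t - \Bigl(k + \frac{2\pi m}{d}\Bigr) x \;\Rightarrow\; \omega = k v + \frac{2\pi m v}{d}, \quad m = \pm 1, \pm 2, \ldots
\]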
11,725
2022-05-10T00:00:00.000
[ "Engineering", "Physics" ]
CELLULAR AUTOMATA: TOWARDS POSSIBLE APPLICATIONS IN URBAN DESIGN EDUCATION AND PRACTICE This article investigates how Cellular Automata have been applied to the design process in the fields of architectural and urban design. It begins by systematically mapping published applications of Cellular Automata in the design process in order to outline the state of the art. The methods employed in the reviewed papers are analyzed and contrasted in order to develop a conceptual framework to guide future applications of Cellular Automata as a tool in the urban design process in academic environments, aiming at future applications in actual practice. INTRODUCTION In the field of architecture and urbanism the word complexity is widely used to describe cities (HEALEY, 2007; KASPRISIN, 2011) as being formed by multiple layers and interconnected structures. However, less numerous are the authors who approach it as a Complex System, from the Complexity Theory point of view, a field of study with numerous scientific ramifications. Cities are not only complicated, they are complex in the sense that they are characterized by emergent behaviors that cannot be accounted for by the sum of the parts (HOLLAND, 2014). Johnson (2003) and Batty (2007) credit Jacobs (1961) for the introduction of concepts of the complexity theory into the repertoire of urbanism. They refer to passages in which the author argues that the city is a problem of organized complexity. This concept was successively addressed in other seminal texts of the area, such as A Pattern Language (ALEXANDER et al., 1977) and Space Syntax (HILLIER et al., 1976). Johnson (2003) compares cities to anthills, where the social organization of the whole emerges from individual decisions taken by each ant based on their contact with neighbors in their direct vicinity. This idea resonates with Batty's (2007) definition of the city as a self-organizing entity. According to him, the structure of a city emerges from bottom-up actions taken by its individuals, and top-down decisions only affect the whole if individual agents decide to adopt them, thus configuring a complex adaptive system. To simulate this bottom-up emergence of organization in a city, Batty (2007) employs Cellular Automata based tools to model how basic elements can lead to large scale organization patterns in a city, by influencing only its direct vicinity. Cellular Automata (CA) are a method for describing complex behavior. Their origins are usually traced back to Von Neumann (1966), who applied them to model self-replicating machines, and Ulam (1972), who used them to model crystal growth. CA are formed by a discrete grid of cells in one, two, or three dimensions, where each cell is attributed a state, or value. These states can be expressed as colors, binary code, numbers, or even images assigned to individual cells. A set of rules determines how a cell can transition to another state, based on the states of the cells in its neighborhood, in the next discrete time step.
The neighborhood that affects a cell is also a predetermined feature in this type of model and is usually represented by a group of adjacent cells (WOLFRAM, 1983; SHIFFMAN, 2012). Commonly used neighborhood types are Von Neumann and Moore. In the former, only cells that share edges with the central cell are considered, and in the latter the corner cells are also part of its neighborhood. Depending on the purpose of the model, larger radii can also be adopted, with two or more cells, or even different shapes (BATTY, 2007). The need for more reliable simulations led to a transition from regional models to smaller scale problems and microsimulation. This, in addition to the lack of data for model calibration, paved the way for the introduction of CA in the field of Urban Studies, which can be credited to Tobler (1970) and Couclelis (1985). The concept was later developed by White and Engelen (1993), Michael Batty, Couclelis and Eichen (1997), and Batty (2007). CA-based models in urban studies can be divided into three main purposes and were classified by O'Sullivan and Torrens (2000) in the following categories: studies on land use dynamics; regional scale urbanization studies; urban socio-spatial segregation studies; location analysis; urbanism; and studies on urban growth and sprawl. The advantage of the application of CA to urban modelling is its decentralized approach, as it enables the simulation of bottom-up processes, the simplicity with which outcomes can be visualized, and the high level of abstraction that allows its employment for a wide range of phenomena (O'SULLIVAN & TORRENS, 2001). The simulation of urban processes with CA, where cities are represented as an array of cells and urban transitions as rules, is almost intuitive in urban studies. The grid of cells can be translated to pixels in a satellite image, broadly used in urban planning. However, this type of analogy is a profound simplification of urban processes, as it overlooks the agents that led to the city's transitions. According to O'Sullivan and Torrens (2001), the original structure of CA is not well suited to model urban phenomena, and before it can be applied to this task it must undergo "radical modification". To deal with these limitations, relaxations had to be made to the original structure of CA, in order to adapt it to the construction of urban models that were better suited to represent real urban processes. According to Santé et al. (2010), even though these relaxations from the original structure of CA may allow a more realistic prediction of urban processes, the initial simplicity, which is central to the idea of emergence, is lost in the process. Instead of a simple set of states and transitions leading to complexity, most models are already very complex from the start. In some cases, the departures are so extensive that the resulting models barely resemble a CA at all. According to the authors, transition rules should be implemented according to standard methods, but it remains difficult to define simple rules that are able to represent all the variety of processes that take place within a city. Software for urban modelling based on CA usually requires the user to have knowledge of computer programming, which hinders its application as a widespread tool for urban design.
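To make the mechanics just described concrete, the sketch below implements a minimal two-dimensional CA on a regular grid with binary cell states, a Moore neighborhood of radius one, and synchronous updates. The transition rules are those of Conway's Game of Life, used here purely as a placeholder; the urban models reviewed below rely on richer states, rules, and neighborhoods.

import numpy as np

# Minimal 2D cellular automaton: binary states, Moore neighborhood (radius 1), synchronous
# updates, periodic boundaries via np.roll. Game of Life rules serve only as a placeholder.
def step(grid):
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    born = (grid == 0) & (neighbours == 3)
    survive = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    return (born | survive).astype(int)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(50, 50))
for _ in range(10):           # ten discrete time steps
    grid = step(grid)
print(grid.sum(), "active cells after 10 steps")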
This article focuses on the question of how to apply CA in the practice of urban design as defined by Gunder (2011, p.184): "a subfield of urban planning particularly concerned with urban form, livability and aesthetics". Thus, urban design can be understood as the part of planning that is concerned with practice, which leads to the question of how CA can be applied to the urban design process. CA has been widely used to implement theoretical models to simulate a large number of urban phenomena, and many researchers have been focusing on calibration in order to apply these models to urban planning, but very few practical examples have been published (SANTÉ et al., 2010), and this number seems to shrink even further when the subject is urban design. The following study begins with a Systematic Mapping of Literature (SML) on applications of CA to the design process in Architecture and Urban Studies in order to outline the state of the art in the field. The papers gathered in the previous phase were later examined in order to extract data on how CA has been applied and interpreted in the design process. Finally, through the analysis of the discussions and conclusions presented in the reviewed papers, a conceptual framework was developed to guide future applications of CA as a tool in the urban design process. The objective of this research is to aid the application of CA in academic environments, aiming at future applications in actual practice. Through an initial Exploratory Review of Literature, a great number of articles that studied how to use CA to model a wide range of urban phenomena was collected. However, few of them discussed how to apply this method to the design process. The articles by Herr and Kvan (2007) and Araghi and Stouffs (2015) were evidence that significant work had been done in the field, yet both were more concerned with matters of architecture than urban design. To further investigate how CA had been used in the design process, a Systematic Mapping of Literature was conducted. The criteria adopted in this SML were the search for articles, books, chapters, and conference papers that studied the applications of CA in the design process in urban design, as well as in architecture. Both fields were used in this survey in order to search for methods adopted for the latter that could be transposed to the former. This SML used a Boolean search for the following terms and connectors organized in the form of the following string: "cellular automata" AND "design process" AND (architecture* OR urban*). The SML was divided into three search rounds. Firstly, the Boolean search was undertaken in 9 search engine indexes: Scopus, CumInCAD, Science Direct, Compendex, Web of Science, Sage Journals, JSTOR, IEEEXplore, and Wiley Online Library. This search generated 444 hits. Secondly, the hits from the first round were manually selected, based on the titles and abstracts, as to whether they were related to the fields of Architecture and Urban Studies, generating 137 hits. Grey literature, indexes, summaries, book reviews, citations, and patents were all excluded. In the third round, papers were skimmed through to see if they actually fit the SML criteria and had usable examples of applications of CA in the design process (Table 1). References from these papers that contained the word "cellular automata" were also reviewed and, when applicable, were added to this research as snowballing results. Google Scholar was searched last and 10 relevant hits were added to
the study. REVIEW OF THE APPLICATIONS OF CA IN THE DESIGN PROCESS IN ARCHITECTURE AND URBAN DESIGN One of the difficulties in applying CA to urban planning is choosing an appropriate model. Applying it to the design process becomes an even more difficult task, because it involves translating a very abstract model into an actual project for a specific place (SANTÉ et al., 2010; PATT, 2015). Another problem is that urban models seldom allow for user customization and, even if they do, an extensive knowledge of computer programming is required (FISCHER & HERR, 2007; SANTÉ et al., 2010; JENSEN & FOGED, 2014). Urban models tend to focus on a large scale and it becomes difficult to relate their results to design problems (KOENIG, 2011; PATT, 2015). Furthermore, finding an appropriate set of rules for a model can be a daunting task due to the large number of possibilities. In the field of architecture, CA have been commonly used to explore building form directly through three-dimensional CA (COATES et al., 1996; FRAZER, 2002; KRAWCZYK, 2002; FISCHER & HERR, 2007; VAN DER ZEE & DE VRIES, 2008; DEVETAKOVIC et al., 2009; ARAGHI & STOUFFS, 2015). Koenig and Bauriedel (2009) and Koenig (2011) apply CA to generate optimal street configurations and site massing for new developments. The model is divided into mutually dependent layers. There are larger scale layers that model urban development and layers with a higher resolution for urban design. CA are used to generate buildings in a street design generated in the lower resolution layer. The issue of scale and the applicability of urban models in design is a problem recurrently addressed by the authors reviewed in this study. Patt (2015) believes that the regularity of the grid used in traditional CA is one of the main problems in relating models to reality. He advocates that the use of irregular meshes adapted to topography and land use is an important relaxation to minimize arbitrary interpretations of urban models in the practice of urban design. Jensen and Foged (2014) propose the use of CA as a method to explore time-based design proposals. With this in mind, they created a 2D model that recorded every time step, so that the user could compare the results of design decisions through time. Their main goal was to stimulate users to focus on overall design intents rather than absolute design solutions. They believe a large number of cell states and a low-resolution model are the best way to stimulate creative design solutions, with high levels of complexity. Berger et al. (2015) voxelate 3D scenarios and use the resulting model to run a CA-based simulation of urban heat distribution in order to support the urban design process and evaluate the outcome of design decisions in regard to heat distribution in the urban environment. DISCUSSION Herr and Ford (2016) believe CA are used as a tool in the architectural design process much more often than suggested by the literature. Designers fail to perceive the value of their experimental uses of CA for others because it has to be modified and adapted to each specific use in the design process and, in the end, the focus falls on the design and details of the process are lost. This lack of documentation leads those who are trying to apply CA to the design process to constantly repeat the errors of their predecessors and, thus, little ground is covered in each experience, which, in turn, leads to the underdevelopment of CA as a tool for design. Herr and Kvan
(2007), Herr andFischer (2013), andHerr (2015) suggest that, in order to take CA out of the realm of simulations and prediction of future scenarios, which is where it is generally applied in the field of urban studies, CA must become a tool for speculation and form-finding.Therefore, tools should allow high levels of interference by the designer, as well as relaxations in the original structure of CA.To this type of CA-based tool, the authors give the name of second-order CA (HERR & FISCHER, 2013;HERR, 2015).However, differently from architecture, in the field of urban design, simulating, analyzing, and predicting are still important matters in the development of a project.But they do not stand alone, as they are allied to the speculative activity of designing.Thus, time cannot necessarily be taken out of the equation (CANEPARO, 2007;JENSEN & FOGED, 2014).Specific methods for applying CA to the task of urban design should, ideally, serve as a means to predict the outcomes of design decisions through the bottom-up development of a project, while stimulating creative thinking and problem solving.Such a tool would enable designers to guide the self-organization of cities through local interventions.The tool developed by Koenig and Bauriedel (2009) and Koenig (2011) are an example of how this can be accomplished through models that combine different scales for simulation and design interventions, thus simulating impacts of design decisions throughout its interconnected layers.When applying CA to design, the design intent is the main question that has to be defined at the very beginning.This is a shift from the traditional design process where the focus tends to be on a final and static design proposal (JENSEN & FOGED, 2014). Despite the need for practical methods to inform decision making, and to model future scenarios, one of the main difficulties in applying CA to the architectural design process is that the outcome generated by a set of transition rules is nearly unpredictable and, therefore, constructing rules with desired outcomes in mind can be ineffective (HUA, 2012;HERR & FISCHER, 2013).Hua (2012) suggests that, in order to give a user more control over the outcome of a CA model and make the method more useful for urban design, it should allow the user to select potential outcomes from a variety of different simulations and use an evolutionary strategy to combine these results, by crossover and mutation, so as to generate a larger set of desirable results from which to choose. Another recurrent issue in the application of CA to the design process is the influence exerted by the regular grid of cells on the final form.This is clearly noticeable in examples from the literature, such as the experiments presented by Coates et al. (1996), Krawczyk (2002), Herr and Kvan (2007), and Araghi and Stouffs (2015). Works by Herr and Karakiewicz (2007) and Anzalone and Clarke (2003) demonstrate interesting approaches to the problem, using CA to generate diagrams that would later be translated into design, the former by manual interpretation by the designer and the later through an algorithm that substituted cells with a structural grammar according to the cells in their neighbourhood. 
The possibility of using shape grammars as an approach to generate rules for CA In a seminal work in which generative design systems are defined, Mitchell (1975) categorizes them using representation methods.According to him, iconic representation is the most commonly option used by architects.However, generative systems based on analog or symbolic representations can deepen the understanding of a phenomenon and are more easily dealt with by the computer. In the applications of CA reviewed in this paper, it was possible to notice a tendency to translate spatial aspects into the model in an iconic way, rather than a symbolic or analog way, which often led to very literal interpretations of CA-based models into designs.Perhaps this tendency could explain why we have found so many papers related to the application of CA to direct graphic design, such as building form and street networks (COATES et al., 1996;WATANABE, 2002;KRAWCZYK, 2002;HERR & KVAN, 2005;ARAGHI & STOUFFS, 2015), and not as many to a more abstract modeling of urban emergent behaviors, such as those published by Batty (2007). Another decisive factor to keep CA in the realm of symbolic generative systems is the scale of the model.Many documented experiences that apply CA to architecture tended to assign a cell the scale of a room or an apartment in the building.When applying CA to urban design, the best scale for modelling seems to be that of the city or region (BATTY, 2007), and then, to zoom in to the site and use what was generated to make informed design decisions without the influence of shapes.It is possible that the same idea could be of value for architectural design.Instead of using room sized cells, very small cells could be used to generate clouds of cells that can later be translated into architectural concepts in a symbolic way. FINAL CONSIDERATIONS From what was seen in the literature presented above, the use of CA in urban design is still on the verge of its practical applications.One of the main problems noticed was that, when using CA software, probably unconsciously, designers are apparently drawn in by squares and cubes generated on the interface, resembling rooms and buildings, so that even if the final design barely resembles the model on which the design is based, the cubes appear in the final form as a prevalent reminder of the use of the tool.This needs to be taken into account when applying CA to the design process, especially in academic environments, to avoid the influence it exerts on form.Tools that combined CA models with generative design methods, such as shape grammars and ontologies, to process the CA into design, were generally more successful to avoid this influence on form.Further applications of the method in urban design studios are needed to test if the proposed methods are effective for generating better solutions for urban problems. In summary, it is possible to conclude that CA can be a powerful tool in urban design, considering the complexity of cities and the necessity to deal with specific problems from the bottom-up, since top-down strategies have proved not to be so efficient in dealing with local issues.With the present availability of data and development of computation there is no more need to use general approaches to deal with urban problems, as cellular automata can be used to implement -or at least to inspire -a better customized urbanism. 
(NORTE PINTO & PAIS ANTUNES, 2007): 1) Exploring Spatial Complexity; 2) Researching economic, sociological, and geographical aspects of space; 3) Designing operational tools for planning.

The main approaches to the simulation of urban phenomena with CA-based models

[...] to represent real urban processes. The most conspicuous departures from the original structure of CA are listed below, as reviewed by Santé et al. (2010):
a) Incorporation of more complex transition rules through the use of artificial intelligence to change the rules during the simulation, or fuzzy logic to add a layer of uncertainty to the model in order to better represent human behavior;
b) Relating cell space to urban space, where cells are compared to the size of preassigned areas; use of irregular grids, with cells of different shapes and sizes in the same model;
c) Use of non-uniform cell space, where cells are characterized not only by their state, but also by external constraints, not inherent to CA;
d) Growth of the area subject to state transition constrained by parameters external to the CA simulation;
e) Extended neighborhoods with the incorporation of a distance-decay effect;
f) Non-stationary neighborhoods that change according to the cell state;
g) Non-stationary transition rules that change during the simulation;
h) Variable time steps within the same simulation, according to cell state or location, or triggered by events in the simulation.

FIGURE 1 - Publications by year and type. Source: Elaborated by the author (2017).

Figure 1 illustrates the distribution of the reviewed papers by year and type of publication, showing a constant interest in the theme in the past decades with a recent increase. Anzalone and Clarke (2003) describe two applications of CA to generate space-truss structures. The first applied a one-dimensional CA to design space-truss structures. Cells generated by the CA were then interpreted into nodes and bars with different lengths and angles to create the three-dimensional truss design. Depending on how the cells were interpreted through an algorithm, the same CA generated different trusses. The second started with a three-dimensional CA based on Game of Life rules to generate buildings after the interpretation made by another algorithm. Similarly, Caneparo et al.
(2007) used CA integrated with an ontology of urban typologies to generate urban morphologies based on optimal relationships to create mixed-use neighborhoods. Herr and Kvan (2007) propose CA as a method in which design steps are translated into transition rules. During the simulation, the designer can manually interfere in the outcome in order to generate the desired result. The location and distribution of buildings and the definition of height and density were then chosen by the designers without the aid of CA. Similarly, Araghi and Stouffs (2015) use CA to generate a variety of diagrammatic building forms for high-density housing. The authors created the rules for the CA following accessibility and lighting requirements to generate floor plans. The simulation starts from fixed cells that represent the accessibility cores of the building. Cells grow around these cores and are interpreted as rooms in units that range from studios to [...]. [...] et al. (2015) voxelate 3D scenarios and use the resulting model to run a CA-based simulation of urban heat distribution in order to support the urban design process and evaluate the outcome of design decisions in regard to heat distribution in the urban environment.

Figure 2 illustrates the conceptual framework to develop a CA model, as deduced from the works reviewed in the SML, and how it can be integrated with other generative design methods to create tools for urban design.

FIGURE 2 - Conceptual framework for applying CA to the Urban Design Process. Source: Elaborated by the author (2017).
5,446
2019-05-29T00:00:00.000
[ "Computer Science", "Education", "Engineering" ]
Guiding heat in laser ablation of metals on ultrafast timescales: an adaptive modeling approach on aluminum

Using an optimal control hydrodynamic modeling approach and irradiation adaptive time-design, we indicate excitation channels maximizing heat load in laser-ablated aluminum at low energy costs. The primary relaxation paths leading to an emerging plasma are particularly affected. With impulsive pulses on ps pedestals, thermodynamic trajectories are preferentially guided in ionized domains where variations in ionization degree occur. This impinges on the gas-transformation mechanisms and triggers a positive bremsstrahlung absorption feedback. The highest temperatures are thus obtained in the expanding ionized matter after a final impulsive excitation, as the electronic energy relaxes recombinatively. The drive relies on transitions to weakly coupled front plasmas at the critical optical density, favoring energy confinement with low mechanical work. Alternatively, robust collisional heating occurs in denser regions above the critical point. This impacts the nature, the excitation degree and the energy content of the ablated matter. Adaptive modeling can therefore provide optimal strategies with information on physical variables not readily accessible and, as experimentally confirmed, databases for pulse shapes with interest in remote spectroscopy, laser-induced matter transfer, laser material processing and development of secondary sources.

Introduction

Energy coupling is a key element of many laser-matter interaction fields, ranging from secondary radiation sources or extreme matter states to laser material processing and remote sensing. Within this framework, increased attention is given to ultrafast laser sources, with the prospect of increasing the precision of interaction. The latter applications mentioned above mostly require controlled excitation and material removal events in ablation ranges (several J cm⁻²), as well as evolution paths that deliver heat confinement or highly excited matter. Specifically, the generation of critical states that can spontaneously decompose into ionized phases is examined in applications where the emission of the laser ablation products has to be particularly intense, e.g. remote laser breakdown spectroscopy or radiation and particle sources [1,2]. This usually imposes the use of high-energy doses reflected in the photon cost. These states can nonetheless be achieved upon moderate laser exposure by correctly balancing the energy supply or the conversion between thermal and mechanical loads. The purpose is to obtain maximum effects from minimal energy costs related to the laser interaction. However, despite the fact that general ablation scenarios are well comprehended, a complete understanding of the recipe on how to obtain improved laser coupling and maximal excitation, with its particular fine-tune control factors, requires further investigation. A fundamental question is how to optimize the interaction by identifying ideal laser pulses for desired evolution scenarios and irradiation results [3]. Specifically related to ultrafast laser exposure, where matter properties vary from solid to plasma phases on electronic (τ_e) or ionic relaxation (τ_hydro) scales, impacting particularly the optical properties, the challenge is to improve the energetic balance beyond the standard material response. This can be regulated by acting on absorption channels and transition dynamics with a modulated energy feedthrough.
Well-defined thermomechanical paths can be induced and ablation aspects can be mastered to a considerable degree if irradiation is judiciously adjusted on these timescales. Time manipulation of laser pulses has already been proven to have strong potential for achieving new synergies between light and matter. Temporal field gradients can determine enhanced plasma coupling and improved shock design [4][5][6]. Light confinement below the diffraction limit and nanostructuring can occur by energetically controlling carrier populations with chirped pulses [7]. Coherent phonon populations may impact heat transport under the influence of multipulse sequences in phase with the vibration beat [8]. In laser-ablated silicon, Stoian et al [9] and Dachraoui and Husinsky [10] indicated a manyfold increase in ion emission with minimal energy costs under tailored ultrafast irradiation. The improvement was related to two factors: fast onset of absorptive liquid phases on picosecond scales or controlled generation of carrier plasmas on sub-picosecond scales. A similar ion enhancement was noted for metals [11] that involves hydrodynamic material expansion on tens of ps. This suggests a general effect of energy confinement assisting the phase transitions. Fast succession of density phases related to rarefaction was further evoked as a possible control source for nanodroplets in the ablation products [11,12]. This was related to Rayleigh-Taylor instabilities driven by the lifetime of the expelled liquid [12,13]. Equally, pulse forms were used to influence the excitation degree in laser-produced plasmas, either designed in simple double-pulse sequences or complex shapes determined by spectrally controlled feedback loops [12,14,15]. Significant spectral enhancement was thus obtained [12], relying on light exposure during the plasma formation stage and coupling in the incipient plasma phase. Alongside ablation products, pulse sequences could influence deformation rates down to almost complete evanescence of the rarefaction waves with minimal spallation [16,17]. In all these cases, laser-induced heat load and the thermomechanical balance play a fundamental role. The consequences of their eventual control are important for pressure-induced phase transitions, ablation characteristics and material removal efficiency, and we explore here global control options from a theoretical perspective. In practice, experimental approaches offer limited access to primary evolution steps. In these conditions, a predictive theoretical method can provide extended information on how to achieve optimal interactions. Nonetheless, the complexity and lack of coherence of intermediate processes make it difficult to predict accurately the essential improvement factors except by trials and educated guesses. Optimal control theories were then developed for controlling complicated parameter landscapes with light and have already been applied for regulating complex systems with coherent radiation [18][19][20]. Assuming the ansatz of controllability, i.e. the probability that a control target can be achieved with a laser controller [21], this allows, on the one hand, calculating optimal energy delivery [22,23] in the time domain for given objectives and, on the other hand, investigating active control knobs from temporal electric field distributions.
We extend the control approach to ablation-specific non-coherent phenomena occurring on picosecond scales and propose an adaptive hydrodynamic method by integrating numerical feedback loops into a hydrodynamic simulation code. With a focus on the temperature state parameter and its control, the procedure offers a predictive capability concerning thermodynamic evolutions with regulation laws not experimentally accessible. It will be shown that a numerical approach is powerful in predicting the initial excitation conditions required to increase and maximize the heat load in the ablation products, with major consequences for the ejected material states: excited species. The interest is twofold: an inquiry into regulation mechanisms and practical relevance for controlled ablation and excitation products. We are particularly interested here in ways of guiding laser energy towards obtaining higher energetic plasma phases in moderate irradiation regimes, beyond the levels obtainable by standard ultrafast laser ablation. A recipe for obtaining optimal pulse (OP) forms will be given and the underlying physics will be discussed. This is motivated by the potential impact on the nature and on the energy content of the ablation species, with consequences for related applications. The paper is organized as follows. The computational method is described in section 2, where details of the hydrodynamic model are given and the numerical feedback loop is described. The discussion section starts by evaluating the results of standard short pulse (SP) irradiation in terms of the limits of the achievable temperature levels. The results of the adaptive runs based on pulse time-design are then introduced, indicating the OP forms maximizing the temperature of the ablated matter in a significant range. This is accompanied by a discussion of the main mechanisms concurring with the optimal results and of the possible follow-ups in terms of applied and fundamental knowledge. Particular thermodynamic evolutions of ionized or dense plasma layers are followed, emphasizing heating scenarios in weakly coupled plasmas or supercritical fluids. The consequences of energetic states are pointed out. A final section summarizes the conclusions.

Hydrodynamic model

We modeled the nonequilibrium heating and expansion of aluminum (Al) under SP laser irradiation using a one-dimensional (1D) two-temperature non-equilibrium hydrodynamic code (Esther) [12,[24][25][26]. The use of a 1D model is justified for our intensity ranges and for spatial and temporal dimensions smaller than those associated with a Rayleigh-Taylor instability (µm, ns). As detailed hereafter, the Lagrangian approach solves the fluid equations for mass, momentum and energy conservation for electronic and ionic species upon energy absorption. Thus it allows a hydrodynamic and thermodynamic view of the evolution of the excited material. The phenomenological treatment is presented in detail in [25,26] and we only point out here the major points. The main steps of the hydrodynamic code are given below. The interaction between the polarized pulsed laser field (λ = 800 nm) and the metal target is described by the Helmholtz wave equation in inhomogeneous media.
The incident temporal laser electric field Ẽ(z = ∞, t) = |Ẽ_inc(t)| e^(iω₀t) is injected as a boundary condition in the equation at normal incidence, being propagated towards negative z values:

∂²Ẽ(z, t)/∂z² + ε̃(z) (ω₀²/c²) Ẽ(z, t) = 0.    (1)

Here ε̃ is the inhomogeneous complex dielectric permittivity of the metal, calculated as a function of the electronic temperature T_e, the ionic temperature T_i, the density ρ and the average ionization Z. The electric field envelope can take various forms, as will be seen in the next section. Let us first discuss the optical response of the metal via its dielectric function. In considering absorption and the energy deposition rate, Al is viewed as a model case with parallel bands and a quasi-free-electron behavior in the solid phase. However, a deviation from a free-electron-like behavior occurs around 1.5 eV, i.e. in our irradiation conditions. The solid metal-light interaction therefore includes contributions from intraband (D, Drude-like free-electron heating) and interband (IB, Ashcroft-Sturm model [27]) electronic transitions, followed by free-electron absorption in the rarefied phases. The transient dielectric function ε̃ and the optical conductivity σ̃ are defined through the Drude free-electron response, ε̃ = 1 − ω_p²/[ω₀(ω₀ + iν_eff)] with σ̃ = −iε₀ω₀(ε̃ − 1) (2), to which the interband contribution is added in the solid phase; here ω_p = √(n_e e²/(m_e ε₀)) is the plasma frequency and ν_eff is the effective (non-equilibrium) collision frequency. The latter is given by the sum of the electron-electron [ν_ee(T_e, n_e)] and electron-phonon/ion [ν_eph(T_e, T_i, n_e)] [30] contributions in the solid phase and by [ν_ei(T_i, T_e, n_e)] in the plasma phase. We typically consider here the plasma phase as an ionized high-temperature low-density gas phase where the ionization is higher than 0.1. In calculating ν_eff in the solid phase, the ν_ee contribution originates from (e-e) umklapp collisional processes [25]. The electron-phonon contribution to scattering ν_eph involves (e-ph) interactions in situations where the electronic temperature is larger than the ionic one. ν_eph starts from a phenomenological collision frequency ν_eph⁰, which is fitted to simultaneously match tabulated data of n and k given by [28], and we add evolution laws depending on T_e and T_i [25]. The (e-ph) estimation follows the first-principle calculations from Lin et al [29] and the formalism of Kaganov et al [30]. In the plasma phase, ν_ei = ν_ei^eq(T_i, n_e) + ν_ei^neq(T_e, n_e), indicating that the ν_ei scattering frequency is composed of an equilibrium and a non-equilibrium term. The equilibrium part ν_ei^eq(T_i, n_e) is determined from the tabulated values of conductivity at equilibrium [31]. These tabulated values were shown to match well experimental situations dealing with detecting optical conductivities [32]. The second term corresponds to the non-equilibrium contribution to the collision frequency. For ν_ei^neq(T_e, n_e), a Spitzer-like dependence has been assumed, where the Coulomb logarithm has been corrected for high-density plasmas by the approach given in [34]. To ensure convergence towards tabulated equilibrium values, we subtract the equilibrium contribution from the non-equilibrium term. In this manner, the contribution of electrons out of equilibrium is treated as a correction of the known values of the equilibrium conductivity. Subsequently, equation (1), including the optical transient response (2), is jointly solved with the thermodynamic equations for the interacting electron and ion subsystems written in the Lagrangian form. The code explicitly considers the time-dependent thermal non-equilibrium (equations (3a) and (3b)).
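As an illustration of the free-electron part of this optical response, the following sketch evaluates a Drude-type permittivity at 800 nm for two illustrative density/collision-frequency pairs. It is only a minimal sketch under the standard Drude assumption: the interband (Ashcroft-Sturm) term and the temperature-dependent collision frequencies of the actual code are not reproduced, and the numerical values are assumed for illustration.

```python
import numpy as np

E_CHARGE = 1.602e-19   # elementary charge [C]
M_E      = 9.109e-31   # electron mass [kg]
EPS0     = 8.854e-12   # vacuum permittivity [F/m]
C_LIGHT  = 2.998e8     # speed of light [m/s]

def drude_epsilon(n_e, nu_eff, wavelength=800e-9):
    """Free-electron (Drude) part of the transient dielectric function:
    eps = 1 - wp^2 / (w0*(w0 + i*nu_eff)), with wp^2 = n_e e^2 / (m_e eps0)."""
    omega0 = 2 * np.pi * C_LIGHT / wavelength
    omega_p2 = n_e * E_CHARGE**2 / (M_E * EPS0)
    return 1.0 - omega_p2 / (omega0 * (omega0 + 1j * nu_eff))

# Assumed, illustrative values: a solid-like layer versus an expanded,
# partially ionized layer (the imaginary part drives the absorption).
for label, n_e, nu in [("solid-like layer", 1.8e29, 1.0e15),
                       ("expanded plasma layer", 1.0e27, 2.0e14)]:
    eps = drude_epsilon(n_e, nu)
    print(f"{label:22s} eps = {eps.real:8.1f} + {eps.imag:6.1f}i")
```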
The specific energy conservation equations (3a) and (3b), which take into account the variations of the time-dependent energy exchange quantities in the z-direction, balance the electronic and ionic specific energies against the source and exchange terms detailed below. Here ε and K are the specific energy and thermal conductivity for electrons (e) and ions (i), respectively, and m is the mass of the Lagrangian cell. Additionally, V = 1/ρ is the specific volume of the calculation domain (cell) and p_e,i the electron or ion pressure. The intervening terms, shown explicitly in equations (4a)-(4e), are discussed below. Acting as the input source, Q_L(z, t) is the specific optical power density source generated by the inverse bremsstrahlung absorption of the pump laser pulse, before being dissipated in the material. The energy transport and redistribution account for several influences [12]: (I) losses by diffusion Q_e,i^th ∼ ∂_z T_e,i, (II) electron-phonon/electron-ion (e-i) collisional energy coupling with its specific exchange relaxation rate Q_e−i, (III) the consumed mechanical work rate Q_e,i^mech = −p_e,i ∂_t V and (IV) recombination kinetics Q_rec following variations in the ionization states initiated by successive ionization/relaxation events. The absorption, relaxation and transfer energy terms [12] are material-state dependent, as explained below. The optical conductivity σ̃ relies on the permittivity ε̃, with Re[σ̃] ∼ Im[ε̃] determining the laser absorption. Ẽ represents the local field in interaction with a varying electronic population n_e = n_i Z. This path makes up an important energy exchange channel. As the electron number density varies, the extra energy is either supplied from the laser source (intrinsically considered via the optical conductivity) or released to the surroundings. In this way it contributes to heating, with Q_rec(z, t) indicated above accounting for the rate of specific energy recombination. One important aspect should now be noted. Being driven essentially by temperature, these interaction terms affect the energy deposition and the transformation speed, as several processes have antagonistic effects (i.e. temperature-driven collisional absorption or temperature-dependent effectiveness of heat diffusion). Just as an example, a femtosecond pulse exposure leads to high electronic temperatures that increase heat diffusion, in spite of the short duration of irradiation. This shows the importance of temperature control in the overall interaction scenario and, at the same time, suggests a possible optimization potential. Two quantities have a determining role in establishing the thermal conversion and transport: the collisional e-ph/e-i coupling efficiency and a time-varying ionization state, both being strongly dependent on temperature and density. During evolution, the e-ph coupling initially determined from first-principle calculations [33] departs from the solid value [12] as γ(T_e, n_e) = γ₀(T_e)(n_e/n_e⁰)^(1/3), where '0' denotes the solid phase (γ₀(300 K) = 2.45 × 10¹⁷ W m⁻³ K⁻¹) and the symbols are usual quantities. For the dilute, hot and therefore weakly coupled plasmas, the e-i relaxation relies on the Spitzer model as discussed in [34]. This formalism then accounts for the relaxation-specific energy exchanged by e-ph and e-i collisions, Q_e−i(z, t). At the same time, the average ionization Z varies during the electron heating phase, consuming photons upon increase and releasing the energy by non-radiative recombination ∂_t Z|_τr during the quasi-adiabatic expansion.
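A heavily stripped-down, zero-dimensional sketch of this electron-ion energy balance is given below: it retains only the laser source and the collisional exchange term, while diffusion, mechanical work, recombination and the equation of state are omitted. All coefficients except the quoted solid-phase coupling constant are assumed, illustrative values; the sketch is meant only to show how the source and exchange terms of equations (3a) and (3b) interact, not to reproduce the Esther results.

```python
import numpy as np

# Zero-dimensional two-temperature sketch: electrons absorb the pulse and
# relax collisionally towards the ions.  Diffusion, mechanical work and
# recombination are omitted; coefficients marked "assumed" are illustrative.
GAMMA_E = 91.0      # electronic heat capacity slope, C_e ~ GAMMA_E*T_e [J m^-3 K^-2] (assumed)
C_ION   = 2.4e6     # ionic heat capacity [J m^-3 K^-1] (assumed)
G_EI    = 2.45e17   # e-ph/e-i coupling at 300 K [W m^-3 K^-1] (value quoted in the text)

def absorbed_power(t, fluence=300.0, tau=150e-15, depth=10e-9):
    """Absorbed power density for a Gaussian pulse spread over a skin depth
    (fluence in J/m^2; all numbers are assumed, illustrative values)."""
    return fluence / (depth * tau * np.sqrt(np.pi)) * np.exp(-((t - 3 * tau) / tau) ** 2)

dt, steps = 1e-16, 100_000        # 0.1 fs steps, 10 ps of evolution
Te, Ti = 300.0, 300.0
peak_Te = Te
for n in range(steps):
    exchange = G_EI * (Te - Ti)
    Te += dt * (absorbed_power(n * dt) - exchange) / (GAMMA_E * Te)
    Ti += dt * exchange / C_ION
    peak_Te = max(peak_Te, Te)
print(f"peak T_e = {peak_Te:.0f} K, T_i after 10 ps = {Ti:.0f} K")
```

Reintroducing the diffusion, mechanical-work and recombination terms, together with the equation of state, lowers and redistributes the temperatures obtained with such a sketch; in the full Lagrangian code these terms control the trajectories discussed in the following sections.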
We recall that this variation of the ionization degree impacts strongly on the absorption mechanisms and therefore on the energy storage. Assuming a strong offset from local equilibrium, the electronic energy drives the ionization values, thus influencing Q_rec(z, t). The Z values as a function of density (ρ) and electronic temperature (T_e) are taken from [31], being smeared by a characteristic decay time (τ_r) where a lower limit of 1 ps was imposed [35]. The energy that is delivered to the particle system as Z goes down corresponds to a variation in the electronic thermal energy and is reflected in the ionic temperature. The post-recombination states correspond to bound states in thermodynamic equilibrium with the surroundings. The model does not take into account the energy transfer and redistribution on specific atomic excitation levels, nor their particular bound-bound radiative relaxation, considered minimal in the irradiation conditions mentioned above. With the above dependences, we note that the macro-evolution of the ablation process is dominantly influenced by the specific energy content and finally by the ionic temperature (T_i), an essential factor that drives and, at the same time, is directly influenced by the succession of laser-induced effects. During relaxation, the material thermodynamic properties are described by an upgraded version of the Bushman-Lomonosov-Fortov multiphase equation of state spanning a large range of densities and temperatures between condensed states and hot plasmas. The values were reconstructed by the Commissariat à l'Energie Atomique (CEA), France, based on [36]. Surface motion, induced shock wave and plasma expansion are considered by the hydrodynamic part of the calculation, satisfying the continuity relations of the Lagrangian scheme, where u is the fluid velocity.

Numerical adaptive loop

To derive the best pulse form for correlated matter-light systems, a non-deterministic evolutionary search strategy [18] was developed. This relies on spectral phase modulation for tailoring the pulse shape [37]. An adaptive loop based on spectral phase modulation of an initial pulse E(ω)e^(iϕ(ω)), with its corresponding time envelope E(t), was inserted in the hydrodynamic code. This is schematically shown in figure 1. A Fourier transfer function in the time domain is used to design the corresponding pulse forms for the code. Typically, the temporal envelope Ẽ_inc(t) of the laser electric field is obtained from the inverse Fourier transform of a Gaussian spectrum (full-width at half-maximum Δλ = (2πc/ω₀²)Δω = 6.2 nm, corresponding to a Fourier-limited pulse duration of 150 fs), centered at the laser frequency ω₀ (recall that λ_inc = 800 nm) and modulated by the spectral phase ϕ(ω) [37]:

Ẽ_inc(t) ∝ ∫ E(ω) e^(iϕ(ω)) e^(−iωt) dω,  with F = (cε₀/2) ∫ |Ẽ_inc(t)|² dt.

Normalization to the laser fluence F is performed in order to ensure that, energetically, the same irradiation conditions are delivered by each tailored pulse, with c being the speed of light and ε₀ the vacuum dielectric permittivity. The adaptive phase manipulation technique is briefly indicated below. The Fourier-domain phase ϕ(ω), encompassing the spectral content of the pulse, is considered as a string of 640 discrete values (pixels) with a spectral resolution of δλ = 0.05 nm. These values match typical values of standard commercial spatial light modulators (SLM). Being linked to the chosen spectral resolution, the shaping window extends to 42 ps, corresponding to pulses that can be designed in the laboratory with current optical modulators.
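The shaping step itself can be sketched in a few lines: a Gaussian spectrum sampled on an SLM-like pixel grid is multiplied by a phase mask, inverse-Fourier-transformed to the time domain and rescaled to a fixed fluence. The grid parameters below follow the values quoted above (640 pixels, 0.05 nm resolution, 6.2 nm FWHM around 800 nm), but the normalization details and the flat-phase test case are assumptions made for illustration; the actual code uses its own Fourier transfer function.

```python
import numpy as np

C_LIGHT, EPS0 = 2.998e8, 8.854e-12
LAM0, DLAM_FWHM, DLAM_PIX, N_PIX = 800e-9, 6.2e-9, 0.05e-9, 640

w0 = 2 * np.pi * C_LIGHT / LAM0
dw_fwhm = 2 * np.pi * C_LIGHT * DLAM_FWHM / LAM0**2      # spectral FWHM [rad/s]
d_omega = 2 * np.pi * C_LIGHT * DLAM_PIX / LAM0**2       # per-pixel resolution [rad/s]
w = w0 + (np.arange(N_PIX) - N_PIX // 2) * d_omega       # SLM-like pixel grid
spectrum = np.exp(-4 * np.log(2) * ((w - w0) / dw_fwhm) ** 2)

def shaped_envelope(phase, fluence=3.0e4):
    """Time-domain intensity envelope for a per-pixel spectral phase [rad],
    rescaled so every candidate pulse carries the same fluence (here 3 J/cm^2)."""
    field_t = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(spectrum * np.exp(1j * phase))))
    t = np.fft.fftshift(np.fft.fftfreq(N_PIX, d=d_omega / (2 * np.pi)))
    intensity = np.abs(field_t) ** 2
    scale = fluence / (0.5 * C_LIGHT * EPS0 * intensity.sum() * (t[1] - t[0]))
    return t, intensity * scale

t, I = shaped_envelope(np.zeros(N_PIX))                  # flat phase: the shortest pulse
print(f"shaping window ~ {(t[-1] - t[0]) * 1e12:.0f} ps, "
      f"flat-phase peak intensity ~ {I.max():.1e} W/m^2")
```

With the quoted pixel count and resolution, the conjugate time window of this grid comes out close to the 42 ps shaping window mentioned above, which is the main point of the sketch.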
As mentioned, an evolutionary adaptive loop was generated between the programmable pulses and the hydrodynamic code, to improve the laser-generated outputs. As we are interested in the specific energy content, the desired optimal state is defined using a single physically relevant thermodynamic parameter, the ionic temperature T_i, whose experimental determination remains a challenge. The optimization algorithm is applied to examine the search space of the cost function f_cost = max[T_i(z, t)]. The optimality criterion, the maximum T_i (irrespective of time and excitation zone) achieved in an irradiation sequence, therefore serves as a fitness parameter for the optimization procedure. Consequently, the phase is manipulated iteratively using genetic propagators, supplying the code with corresponding temporal forms that gradually impact the temperature level. Via a pixel-binning method, increasing time domains were explored, from less than 1 ps up to the maximal value. The convergence of the numerical procedure forecasts irradiation conditions for achieving the hottest states at constant energy requirements, allowing for prediction of the adequate control law. The result of the simulation does not only depend on the final state of the system but is also recursively determined by the history of the evolution paths during the regulated energy transport.

Standard short pulse (femtosecond) irradiation

Before approaching the optimization search, one has to establish the limits of the standard SP excitation. A first step in profiling the search space was made by exploring standard heating scenarios induced by femtosecond laser pulses (150 fs) using the hydrodynamic code. Typical temperatures achieved in different fluence ranges are thus calculated. Figure 2 shows the evolution of the maximal temperature T_MAX of the hottest layers (HL) for a standard SP of 150 fs with increasing laser fluence F, mainly in ablation domains. Above 0.34 J cm⁻² (the vaporization threshold at normal pressure) the material follows an ablative behavior with a sub-linear temperature increase. The value reached for a fluence of 3 J cm⁻², well in the ablation range, is approximately T_SP = 22 × 10³ K. The limited increase, localized at the expansion front, is particularly linked to the sudden SP energy deposition. Here the energy relaxation occurs primarily before the hydrodynamic expansion, at relatively constant quasi-solid relaxation parameters, notably a high specific heat that moderates the temperature gain. It has already been shown [12,38] that the strong initial compression supported by SP determines a swift mechanical expansion into the two-phase region, consuming the thermal energy in both expansion and the nucleation of the vapor phase.

Optimization results

The analysis of standard irradiation puts forward the question of whether, for the same energy input, the temperature can be further controlled. Under these circumstances the adaptive procedure was applied to find the extreme values that can be achieved by T_i(z, t) when the pulse form and the energy delivery rate vary in time. As the temperature shows both time and space variations, we concentrated on the hottest zones and tried to influence the local maximal T_i values in HL, identifying the highest and the least efficient pulses. An incident fluence of 3 J cm⁻² in the ablation range was chosen.
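The logic of the feedback loop can be sketched as follows; the fitness function below is a trivial stand-in (it merely rewards a high envelope peak of a toy spectrum), whereas in the scheme described above each evaluation is a full hydrodynamic run returning max T_i(z, t). Population size, mutation width and the number of generations are assumptions chosen for illustration; the pixel-binning idea is represented by expanding a coarse phase vector block-wise onto the full pixel grid.

```python
import numpy as np

rng = np.random.default_rng(1)
N_PIX = 640

def toy_fitness(phase):
    """Stand-in for the expensive objective: in the actual loop this would be
    max T_i(z, t) returned by the hydrodynamic code for the shaped pulse."""
    spectrum = np.exp(-np.linspace(-2.0, 2.0, phase.size) ** 2) * np.exp(1j * phase)
    return float(np.max(np.abs(np.fft.ifft(spectrum)) ** 2))

def evolve(pop_size=16, n_gen=40, n_bins=40, sigma=0.3):
    """Toy evolutionary loop with pixel binning: each individual is a coarse
    phase vector expanded onto the full SLM pixel grid before evaluation."""
    pop = rng.uniform(-np.pi, np.pi, (pop_size, n_bins))
    for _ in range(n_gen):
        scores = np.array([toy_fitness(np.repeat(p, N_PIX // n_bins)) for p in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]         # keep the best half
        children = elite + rng.normal(0.0, sigma, elite.shape)   # mutate the survivors
        pop = np.vstack([elite, children])
    scores = np.array([toy_fitness(np.repeat(p, N_PIX // n_bins)) for p in pop])
    return np.repeat(pop[np.argmax(scores)], N_PIX // n_bins)

best_phase = evolve()
print("best toy fitness:", round(toy_fitness(best_phase), 4))
```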
The results of the adaptive run show notably that, using the same energy input, it is possible to reach a far wider range of maximal temperatures, between 18 × 10³ K (the lowest maximal value, sometimes denoted as minimal) and 55 × 10³ K (the highest maximal value). The extended range of the temperature variation and the present discussion are specific to ablative regimes, although observable effects in changing the highest temperature were obtained at lower fluences, below the threshold for achieving gas-phase states. The pulses that deliver the low or high temperatures close to the extremes of the temperature ranges at this input fluence are depicted in figures 3(a) and (b). They indicate either a ps envelope with an impulsive ending for the highest T_MAX or a short sequence for the lowest T_MAX levels, respectively. We note therefore that an OP design (depicted in figure 3(a)) can determine a 2.5 times increase in temperature (see the OP level in figure 2). This relies on an energy delivery timescale up to the limit of the shaping window, in conditions where the incident fluence has not changed. Although the results are obtained at 3 J cm⁻² input fluence, the observed pulse spread is also preserved in irradiation regimes beyond the chosen range to higher fluences, with apparently similar effectiveness (tested up to 10 J cm⁻²). The particular timescale suggests a departure from stress confinement conditions, as the scale is limited by τ_hydro ≈ λ_D/c_s, approximately 10 ps. Here λ_D is the characteristic electronic diffusion length and c_s is the speed of sound. This high temperature would require 10 times more energy with the SP at similar densities, and shows 35% more efficiency than typical double-pulse sequences [12,15,16] or long Gaussian envelopes [12] within the same time frame. The lowest maximal temperature for the excited material is reached with an impulsive excitation, similar to SP, accompanied by some low-intensity wings (figure 3(b)). This suggests a similar evolution to the one discussed in the first paragraphs, in conditions of shock confinement and mechanically driven expansion. To capture the spatio-temporal dynamics, the evolution of the temperature in the expanding layers is given in figures 3(c) and (d) for the highest (MAX) and the least (MIN) efficient pulses. These graphs depict the levels of temperature achieved in the excited/ablated matter as the material expands in space and time. We discuss below the case of the highest temperature. For the high-temperature case the evolution of the highest-temperature layer HL is indicated in figure 3(c). This shows that the maximum temperature is confined to the front of expansion. The highest value is obtained at the end of the sequence, a few ps after the maximum e-i non-equilibrium (marked as NE in figure 3(c)). As Al does not show strong optical variations as it makes the transition to the liquid phase [39], this points to an alteration of the properties related to the onset of the vapor phase. The gas-phase involvement is supported by the space evolution of the expanding matter on hundreds of nm, and it will be confirmed in the next paragraphs. We state, however, that the region of the highest temperature is not equivalent to the highest internal volume energy density (ρε_i, marked as HE_V), which is located deeper towards the condensed phase.
Apart from HL, initially placed at a depth of 2 nm (the initial position of the corresponding Lagrangian cell), a second expanding layer trajectory (10 nm) is given (SL) that illustrates a passage in the vicinity of the energy concentration region, reaching 42 × 10³ K. The discussion concentrates first on HL as it locks the highest temperature. The particular relevance of the SL will become apparent later in the text (sections 3.3.2 and 3.3.4).

Optimal pulse characteristics: hydrodynamic scenario

The fundamental question is related to the specific OP time features that enable a maximal temperature, namely the low-intensity preliminary phase and the final impulse. The OP sequence shows a long pedestal of noisy low-intensity envelope for the first 41 ps. The apparently complicated energy spread can be approximated by a smooth constant supply of energy, as the sequence of low-intensity spikes is typically smoothed by a convolution with the ps material response. With an integral absorption twice as efficient as SP, this part of the pulse (80% of the total energy) is sufficient to raise the temperature up to 70% of the final OP value. A sudden, well-pronounced peak is observed at the end of the sequence that achieves a final increase. Different optimization runs allow us to determine similar solutions where the obtained maximum temperature depends slightly on the number of iterations, within 2%. This suggests parameter space topologies with large maxima. The chosen solution has, nevertheless, the major characteristics of the ideal pulse. We note that the optimal solution here was obtained with a constraint on the shaping window (limited to 42 ps). This is related to the practical feasibility of pulse shapes in current SLMs based on spectral filtering, with sub-nm spectral resolution. Improved solutions can be obtained on time scales extending to 150-200 ps, relying roughly on similar phenomena. In the next paragraphs, we indicate the particular states that enable this strong heating from a dynamic and energy balance standpoint. A glance at the hydrodynamic progress of the excited material is given in figure 4. In parallel with the OP form (figure 4(a)) superimposed on the instant absorbed power map (Q_L), it describes the sequence of density states (figure 4(b)) and indicates for specific trajectories (HL, SL) the behavior of the speed of sound (figure 4(c)) and ionization degree (figure 4(d)), as indicators for the material state. The corroborated temperature and density information allows us to decompose the evolution into several stages.

3.3.1. Gas-phase evolution. The first stage (0-10 ps) prepares the hydrodynamic advance to the gas phase (figures 4(a) and (b)). About 20% of the sequence energy is consumed to initiate vaporization (0.6 J cm⁻², approximately two times the threshold), equivalent to an absorbed fluence of 78 mJ cm⁻². The energy is deposited mainly in the condensed phase over a depth √(2Dτ) = 28 nm, above the optical skin depth. This is sufficient to achieve an average specific energy in the range of 10⁷ J kg⁻¹, energetically sufficient to initiate the plasma phase with a low-density evolution above the binodal (critical point (CP) energy density E_C = 1.1 × 10⁷ J kg⁻¹). The density information (figure 4(b)) at the end of this first stage of the laser exposure shows the onset of several notable regions that continue to develop at later times: a low-density front and an increasingly dense material towards the depth.
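As a back-of-envelope consistency check (our inference, not a value given in the paper): taking the roughly 10 ps duration of this first stage as τ, the stated √(2Dτ) = 28 nm implies an effective diffusivity of about 4 × 10⁻⁵ m²/s.

```python
# Back-of-envelope check (assumption: the 0-10 ps first stage sets tau).
depth = 28e-9   # quoted heated depth sqrt(2*D*tau) [m]
tau = 10e-12    # assumed duration of the first stage [s]
print(f"implied effective diffusivity D ~ {depth**2 / (2 * tau):.1e} m^2/s")
```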
The vapor layer z_sc, visible at about 10 ps and affecting the first nm of material, is optically transparent at 800 nm, with a subcritical electron density (n_e < n_c ≈ 10²¹ cm⁻³, z_c being the critical density border at 800 nm). With its buffer function between the ablation products and the vacuum, it has the role of slowing down the evolution of the subsequent layers. As the expansion is repressed and the neighboring matter compressed, the underlying layers z_int (among them HL) experience moderate heating for another 10 ps under the incoming energy and preserve a density high enough to allow non-negligible e-i coupling. The HL optical density always remains superior to the critical value. To understand what happens specifically in this time domain, we examine the dynamics of the phase transition by monitoring the material state-dependent speed of sound c_s = √((∂p/∂ρ)_S) (figure 4(c)). This physical quantity reflects the ion pressure and serves as a sufficiently accurate probe for the eventual phase transitions. For HL, c_s stays roughly constant in the first picoseconds and then experiences a drastic decrease at 9 ps as it penetrates into the two-phase region [40] towards vapor transformation. We note then a swift rise at about 20 ps from a value of c_s ≈ 0, characteristic of the (practically non-ionized) liquid-vapor mixed phase in the vicinity of the CP, to values typical of ionized gases, suggesting a rapid transition to an excited plasma phase characterized by low heat capacity and high optical conductivity. The gas-phase expansion velocity, which mirrors the mechanical work, shows a similar dependence. The HL enters the plasma region at the earliest time with an overall energy consumption in the liquid-vapor domain of 23% (the energy content of the pulse envelope between 9 and 20 ps corresponding to the liquid-vapor passage) under quasi-constant delivery that prevents cooling and allows an electronic temperature gain. During this stage, the characteristic internal energy per particle in the liquid-vapor phase increases due to e-i coupling, while the mechanical work stays low, comparable to the thermal energy. As HL goes rapidly into a low-density high-temperature plasma phase with Z > 1 (20 ps), this phase at near-critical electron density (n_e ≈ n_c) further absorbs energy at double pace as light delivery continues between 20 and 42 ps. This stage involves 57% of the input sequence and accounts for 85% of the total absorbed energy in the layer. With temperature, Z increases to quasi-solid values (figure 4(d)) in spite of the now accelerated expansion. The plasma is at present weakly coupled, with a decreasing scattering time. The bremsstrahlung absorption rises, acting in turn on the energy deposition and increasing the accumulation. A positive feedback effect occurs, reflected in the ultimate, maximal value of the ionic temperature around the end of the laser sequence. Here the effect of the final peak is decisive, releasing an impulsive heating of the plasma phase at the moment where absorption peaks (integral absorption at this moment is 68%). A short excitation is required to counteract the accelerated expansion of the hot plasma (still two times lower than in the case of SP excitation). Even if conductivity is high in this region, promoted by e-i non-equilibrium, the sudden ionization energy relaxes to the atomic system at the end of the sequence via recombination as Z decreases, being the primary factor for the temperature increase. This is equally supported by a moderate value of the heat capacity.
This thermodynamic path then facilitates transient changes in the energy transport, with the possibility of attaining states at T > T_C.

3.3.2. Evolution of dense hot regions. We have indicated before a second typical layer, SL, which advances not in the high-temperature zone but in the region of highest energy concentration HE_V. As this maximizes the volume energy density accumulation, which can be a reservoir for further evolution, its behavior is of interest and will be discussed below. The evolution of the SL and its adjacent regions follows a different path as compared to HL. Being localized deeper in supercritical electron density regions, where the laser light is screened (the electric field is a few times smaller, three times on average, as screening shows a progressive evolution from the solid phase (E_HL/E_SL ≈ 2) to the plasma phase (E_HL/E_SL ≈ 4)), the intermediate electronic heating stage is reduced. This happens despite a favorable hydrodynamic regime for increasing T related to a higher density. However, the energy density storage remains significant, particularly due to the high density. With restricted expansion due to z_int, the SL does not show a typical gas-phase transition by crossing phase coexistence interfaces, as c_s preserves a finite value, but evolves towards the plasma phase as a decomposing hot fluid with supercritical temperatures (see figure 3(c), T_C = 6300 K for Al). The SL therefore shows a typical supercritical behavior. Ionization Z remains limited, less sensitive to non-equilibrium, and the e-i collisions are the main factor for heating into warm dense states via supercritical paths.

3.3.3. Pulse analysis. To investigate the importance of the presence of several peaks in the pulse form, we designed a specific method for analyzing the OP, emphasizing the two types of energy-maximization behaviors discussed above: via thermodynamic gas-phase transitions and via supercritical paths. We defined a rectangular gate of approximately 2.5 ps which was scanned along the OP sequence, replacing at each moment the local peaks with an equal but constant envelope energy domain. The results, given in figure 4(e), indicate that the small-amplitude peaks have little influence, mainly in keeping short-living non-equilibrium states. Nevertheless, the absence of the final peak impacting the non-equilibrium T_e lowers the T_MAX corresponding to the hottest HL temperature range, as this was related to plasma states at the highest Z, even though the energy content of the laser sequence was not changed. This shows that the final contribution is related to a variation of Z and points out the importance of recombination in heating the HL, proceeding faster than the low-density collisional coupling. The SL is relatively insensitive to the final peak, confirming the alternative collisional heating mechanism. We have therefore indicated that the high-temperature regions in the proximity of HL and the high-energy-density SL regions have different driving elements. A summary of the relative importance of the various heating mechanisms (recombination or collisional heating) is given in figure 5, validating the scenario proposed above. This synthesizes the accumulation of specific energy via the different paths discussed so far, the ionization/recombination path (∫₀ᵗ Q_rec(t′) dt′) and the electron-ion collisional path (∫₀ᵗ Q_e−i(t′) dt′).
In the superficial but optically dense HL layers (figure 5(a)), the collisional heating mechanism is essential in the short-living solid phase, where the electronic energy relaxes via e-ph coupling in the first picoseconds after the interaction. The high electronic density favors a stronger collisional relaxation than recombination up to approximately 10 ps. The tendency is reversed as the material density drops during the liquid-gas transformation. The two contributions have similar weights in the liquid-vapor phase and at the beginning of the plasma phase. Here it is interesting to note that, between 10 and 25 ps, Q_e−i and Q_rec have similar magnitudes in equation (3a). This time window corresponds to relaxation times of the order of 10 ps, as ∂_t Z/Z ∼ γ/c_e, where c_e is the electronic specific heat capacity. An increase of the efficiency of the recombination path for heating the ions becomes visible after 25 ps for the OP. For the deeper SL layers at higher densities (figure 5(b)), despite the screened radiation, the collisional heating is globally the most important, as the accumulated effect shows at all times. This tendency is not affected by particular time moments where, for short instants, the instantaneous rate of recombination can become quite important, particularly at the last peak (not shown). It becomes evident that the dominant contribution for HL therefore comes from the variation of the ionization degree and recombination, while the denser states of SL favor collisional transfer.

3.3.4. Thermodynamic phase space. We have discussed above variations of local parameters that we now try to position in the phase space in order to establish a global evolution. The global OP progress is summarized in the thermodynamic pressure-density-temperature p(ρ, T) phase space (figure 6(a)). The initial HL evolution deviates slightly from an isochoric transition due to its position close to the surface, which facilitates advancement without strong impediments. In addition, the pressure wave generated by the OP imposes a stress load distributed over the whole sequence (not shown), which does not force strong expansion up to the binodal crossing. Then HL, although showing an evolution in the liquid-vapor zone, is able to rapidly leave the two-phase zone, re-intersecting the binodal at 20 ps and reaching a critical low-density plasma phase (ρ < ρ_C). The evolution crosses diagonally the space above the binodal for the last exposure part (half of the exposure time) into ionized absorbing states. Noteworthy is the SL behavior. The SL circumvents the CP after half of the irradiation sequence as a high-pressure (GPa) supercritical fluid (justifying the acronym SL for a supercritical layer) but lacks the momentum for a strong evolution into the plasma phase. Following OP exposure, 80% of the ablated matter follows supercritical paths, as compared to 20% for SP. A relevant indication of the OP supercritical paths (labeled as the initial positions of the Lagrangian cells) is given in figure 6(b), showing that, in addition to maximal heat loads, the OP equally favors supercritical paths as efficient ways towards the plasma phase.

Consequences of optimal coupling

The consequences of creating critical states are multiple, particularly in generating highly excited environments as required e.g. in laser breakdown spectroscopy, but also in secondary source development, giving access at these temperature values to EUV ranges.
An example is offered in figure 7, showing the emitted integrated spectral energy density of the electrons within the first 100 ps. For this particular example the calculations were performed post-optimization by including a radiative transfer module. The integrated emissivity is determined by radiation transport. The equation of the radiative transfer in an expanding plasma is calculated with the assumption of an instantaneous distribution of emission and absorption sources (spatial zones that are radiating at different temperatures) determined by the local plasma conditions at each fluid element. A multigroup radiation transfer algorithm has been used with 100 energy groups of tabulated local thermodynamic equilibrium opacities extrapolated between the values of [28,41]. A spectral blueshift of 5 eV and a peak increase of more than one order of magnitude are observed for the OP. The optimal path found here could be general, as it relies on the achievement of high-conductivity plasma phases or supercritical expansion rather than on particular material properties. Essentially, the pulse shows a material expansion and heating phase that is being driven into ionized domains where the rest of the energy is deposited. This scenario also indicates a certain robustness against uncertainties in the code parameters. Although the temperature level is sensitive to the chosen parameters, which are known only with some accuracy, the OP form drives essentially the same phenomena and preserves similar characteristics. Experimental confirmation of the benefits of longer, ps laser pulses determined by adaptive techniques based on either time-of-flight mass spectrometry or spectral emission has already been reported, confirming the effect of efficient plasma excitation [11][12][13] and the possibility of achieving high emissivity or spectral emission shifted towards higher energies. In these experiments, the monitored effect was a global (temperature- and density-dependent) plasma excitation behavior, spatially and temporally averaged for several hundreds of ns after the excitation, as probing the temperature on the short time and space scales reported here remains a challenge. However, they indicate as well the increased light coupling related to plasma interaction, and they thus validate the present results with the increased heating effect of longer envelopes on the emerging plasma. Secondly, as experimentally shown, favoring heat with respect to mechanical load and distributing it over a longer scale reduces tensile stress and cavitation. This impacts the sizes and density of nanodroplets in laser deposition techniques [12,13]. Thus, complementing the experimental approaches, the numerical optimal procedure supports a deeper understanding of the excitation control and access to primary relaxation steps. Finer excitation features are revealed, as no experimental noise or other limitations are to be considered. Depending on the chosen application, more developed fitness functions can be chosen to optimize the complex mix of several parameters. As a plus, the numerical adaptive loop allows us to address local parameters and reveals the fine details of excitation. It equally permits introspection into evolution paths non-accessible in experiments, giving a complementary view of the initial relaxation stages.

Conclusions

In conclusion, using a numerical adaptive approach, we have indicated that phase transitions can be guided by suitable energy delivery to maximum heat loads.
The method focuses on primary relaxation events that establish further evolution in optimal conditions. Hot thermodynamic states are reached with minimal energy expenses by creating spatio-temporal correlations between flux delivery, absorption transients and relaxation, resulting from ps pedestals with impulsive endings. A plasma-based recipe was given, which relies on the early achievement of thermodynamically critical plasma phases at the critical optical density. The role of recombination was outlined as a material-heating mechanism. Enhanced, collisionally heated supercritical states, rich in volume energy, follow in denser regions, with a complementary collisional energy deposition mechanism. Adaptive modeling enables optimal irradiation strategies for applications in material processing and spectroscopy, with new evolution paths and the prospect of reconfigurable irradiation tools.
9,910.6
2012-01-20T00:00:00.000
[ "Physics" ]
Structure Prediction: New Insights into Decrypting Long Noncoding RNAs Long noncoding RNAs (lncRNAs), which form a diverse class of RNAs, remain the least understood type of noncoding RNAs in terms of their nature and identification. Emerging evidence has revealed that a small number of newly discovered lncRNAs perform important and complex biological functions such as dosage compensation, chromatin regulation, genomic imprinting, and nuclear organization. However, understanding the wide range of functions of lncRNAs related to various processes of cellular networks remains a great experimental challenge. Structural versatility is critical for RNAs to perform various functions and provides new insights into probing the functions of lncRNAs. In recent years, the computational method of RNA structure prediction has been developed to analyze the structure of lncRNAs. This novel methodology has provided basic but indispensable information for the rapid, large-scale and in-depth research of lncRNAs. This review focuses on mainstream RNA structure prediction methods at the secondary and tertiary levels to offer an additional approach to investigating the functions of lncRNAs. Except for tRNAs and rRNAs, ncRNAs have been traditionally disregarded as "transcriptional noise" [8]. Although proteins have long been considered to carry genetic information, emerging evidence implies that ncRNAs are also involved in the regulation of gene expression that impacts the growth and development of organisms [9][10][11]. Compared with short RNAs (<200 nt), highly transcribed long noncoding RNAs (lncRNAs) (>200 nt) may perform more complex biological functions [12][13][14]. These RNAs have been implicated in the regulation of gene expression at the transcriptional or posttranscriptional level exerting effects on dosage compensation, chromatin regulation, genomic imprinting, nuclear organization, alternative splicing of pre-mRNA and many other biological processes [15][16][17]. Considering the participation of lncRNAs in various aspects of gene expression affecting the differentiation and development of organisms, it is not surprising that the dysregulation of lncRNAs has been involved in disease [18,19]. According to a genome-wide association study, 43% of reported trait/disease-associated SNPs (TASs) were intergenic, suggesting essential roles for ncRNAs in common diseases [20]. Furthermore, Chen et al. [21] created lncRNADisease, a database of 166 lncRNA-associated diseases. lncRNADisease collected nearly 480 entries of experimentally validated lncRNA-disease associations. The recognition of the important roles of lncRNAs in human disease has provided novel diagnostic and therapeutic opportunities [22]. Given the wide range of biological functions in which lncRNAs have been implicated, we predict that many more lncRNAs will be determined to have important functions. For many RNAs, there is a close relationship between structure and function [23][24][25]. Their structural diversity allows for RNA to perform various functions, including catalytic, organizational and other regulatory functions [26,27]. Generating structural models of these RNAs that are faithful to their native structures is essential because the structure of RNA influences its transcription, splicing, cellular localization, translation and turnover [28]. Thus, acquiring structural information for RNA is often the first step towards exploring its function [29]. 
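To give a concrete sense of what secondary structure prediction involves, the sketch below implements the classic Nussinov base-pair-maximization dynamic program, a much-simplified relative of the thermodynamic (free-energy minimization) methods such as Mfold and RNAfold reviewed in the following sections; the scoring (one point per canonical or wobble pair, a minimum hairpin loop of three bases) and the test sequence are illustrative choices, and production tools use experimentally parameterized nearest-neighbor energy models rather than simple pair counting.

```python
def nussinov(seq, min_loop=3):
    """Base-pair maximization (Nussinov-style dynamic programming).
    Returns a dot-bracket secondary structure for an RNA sequence."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                      # leave j unpaired
            for k in range(i, j - min_loop):         # pair j with k (loop >= min_loop)
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    structure = ["."] * n

    def traceback(i, j):
        if j - i <= min_loop:
            return
        if dp[i][j] == dp[i][j - 1]:                 # j unpaired is optimal
            traceback(i, j - 1)
            return
        for k in range(i, j - min_loop):
            if (seq[k], seq[j]) in pairs:
                left = dp[i][k - 1] if k > i else 0
                if dp[i][j] == left + 1 + dp[k + 1][j - 1]:
                    structure[k], structure[j] = "(", ")"
                    if k > i:
                        traceback(i, k - 1)
                    traceback(k + 1, j - 1)
                    return

    traceback(0, n - 1)
    return "".join(structure)

# Illustrative test: a short hairpin-forming sequence.
print(nussinov("GGGAAAUCCC"))
```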
Review

This review focuses on lncRNAs, which comprise the least understood class of ncRNAs. Their functions, mechanisms, roles in epigenetics and relationships with diseases are introduced. Moreover, ncRNA structure prediction methods such as Foldalign [30], Pfold [31], Mfold [32], RNAfold [33], RNAshapes [34], RNAstructure [35], NAST [36], iFoldRNA [37], and 3dRNA [38] are reviewed (Figure 1). Furthermore, the theories underlying each method as well as the advantages and pitfalls of their applications are provided. Based on this summary, another step in the understanding of lncRNAs can be achieved. As the secondary/tertiary structures of several functionally understood lncRNAs have been predicted (or experimentally verified), RNA structure predictions may help identify additional functional lncRNAs and may thus offer clues for the design of targeted small molecule therapeutics to promote drug development and the treatment of diseases [39].

Roles of ncRNAs and the Mechanisms Involved in Their Functions

Many ncRNAs remain undiscovered, and the functions of the majority of previously discovered ncRNAs are not yet known. Furthermore, a low evolutionary conservation of these RNAs has been verified. All of these observations suggested that ncRNAs do not possess biological function. However, mounting evidence suggests that the lack of sequence conservation does not necessarily symbolize a deficiency in function [40]. Increasingly, studies have revealed that ncRNAs are involved in gene expression at almost every level of organismal differentiation and development, impacting processes including transcriptional/post-transcriptional regulation, chromatin architecture, translation, alternative splicing of pre-mRNA and many other biological processes [15][16][17][60]. Several mechanisms by which ncRNAs regulate gene expression have been discovered.

ncRNAs in Diseases and Clinical Diagnosis

Because ncRNAs regulate various levels of gene expression and are involved in numerous biological processes, the dysregulation of ncRNAs is linked to diseases. It has been reported that ncRNAs exert significant effects on the immune response, inflammatory lung diseases [67], neurodevelopmental disorders [68] and cancer [69][70][71]. In general, abnormal tissues are obtained by invasive methods for the detection of biomarkers in the diagnosis or clinical treatment of tumors. However, due to the introduction of an external source, this is not the optimal choice for diagnostic and therapeutic applications. The characteristics of stability, specificity, sensitivity, predictability and accessibility are required for quantifiable indicators of diseases [72]. Some ncRNAs have been demonstrated to have potential as biomarkers and therapeutic targets for diseases due to their stability and accessibility without invasive collection methods [73]. miRNAs are stable and have been found in biological fluids such as urine, serum, saliva and plasma, allowing miRNAs to be easily detected via non-invasive methods [74,75]. The detection of aberrant expression of miRNAs has been applied to the diagnosis and prognosis of cardiac diseases [76] and autoimmune diseases [77]. A genome-wide analysis has revealed that only a fraction of lncRNAs are unstable and, surprisingly, intronic, intergenic and cis-antisense lncRNAs are highly stable, with a half-life of more than 16 h [78]. Some serum-derived lncRNAs have been used as biomarkers for hepatocellular carcinoma and colorectal cancer with high stability, reproducibility and specificity [79].
Moreover, snoRNAs serve as potential biomarkers for the diagnosis of non-small cell lung cancer (NSCLC) [80] and of osteoarthritis progression after anterior cruciate ligament (ACL) injury [81]. Undoubtedly, understanding ncRNA function contributes to the development of biomarkers for the prognosis and clinical treatment of diseases. Long Noncoding RNAs Long noncoding RNAs (lncRNAs) consist of at least 200 nucleotides [82]. The structural conservation of lncRNAs is stronger than the conservation of their nucleotide sequences. It has been recognized that lncRNA transcription regulates the expression of genes in close genomic proximity in a cis-acting manner [83][84][85][86][87][88] and targets distant transcriptional activators or repressors in a trans-acting manner [89,90]. Additionally, various mechanisms involved in the transcriptional regulation of lncRNAs have been elucidated (some examples are shown in Figure 2) [83,84]. Moreover, lncRNAs also participate in epigenetic gene regulation [91,92]. Models of their functions are shown in Figure 2, where lncRNAs are depicted as playing a variety of roles in cellular networks. Therefore, it is not surprising that the dysregulation of lncRNAs is closely associated with diseases [18,19,93]. Evolutionary Conservation of lncRNAs Ken C. Pang et al. [40] investigated several types of noncoding RNAs that have been demonstrated or predicted to possess functionality, including miRNAs, lncRNAs and snoRNAs. As expected, lncRNAs are less conserved than miRNAs and snoRNAs. However, their findings imply that this lack of conservation does not necessarily dictate a lack of function. Owing to the absence of conservation at the nucleotide sequence level, functional studies of lncRNAs are challenging. Nevertheless, a number of researchers have uncovered structural conservation [49]. Some specific structural regions of lncRNAs seem to play regulatory roles, while other regions consisting of exact sequences serve only as linkers between different functional modules [50][51][52]. Figure 2 legend (in part): The expression of the p15 antisense RNA, the lncRNA of a tumor suppressor gene, results in the silencing of the p15 gene through the induction of heterochromatin formation, which persisted after the p15 antisense RNA was turned off; (c) lncRNA binds to the major DHFR promoter and IIB, a general transcription factor, to form a stable and specific complex that dissociates the preinitiation complex from the major DHFR promoter; (d) As a response to stress, the RNA-binding protein TLS, under allosteric modulation via an lncRNA upstream of CCND1, binds to CREB-binding protein (CBP) and inhibits CBP/p300 HAT activities on CCND1; (e) The lncRNA Evf2, a crucial co-enhancer of regulatory proteins involved in transcription, cooperates with the Dlx2 protein to activate the Dlx5/6 enhancer in a target gene; (f) In response to heat shock, the lncRNA HSR1 (heat shock RNA-1) promotes the trimerization of HSF1 (heat-shock transcription factor 1), and consequently a translation elongation factor interacts with HSR1 and HSF1 to form a complex that facilitates the expression of heat-shock protein (HSP); (g) NFAT is the nuclear factor of activated T cells.
The lncRNA NRON (noncoding repressor of NFAT) may form a complex with importin proteins to regulate the subcellular localization of NFAT. The knockdown of NRON increases the expression and activity of NFAT; (h) The lncRNA metastasis-associated lung adenocarcinoma transcript 1 (MALAT1) has been shown to be abnormally expressed in many human cancers. The nascent MALAT1 transcript is cleaved by RNase P to produce the 3′ end of the mature MALAT1 transcript and the 5′ end of the small RNA; (i) Several studies have elucidated that some lncRNAs can act as microRNA sponges to competitively bind to microRNAs and decrease microRNA-induced tumorsphere differentiation. Mechanisms of lncRNA Function The known mechanisms involved in the function of lncRNAs are as follows (Figure 2): (a) To induce transcriptional interference, lncRNAs spanning downstream promoter regions of protein-coding genes interfere with transcription factors by binding to their activators and repress the expression of these protein-coding genes [94]; (b) To initiate chromatin remodeling, the transcription of lncRNAs may induce heterochromatin formation and DNA methylation, thus leading to the silencing of tumor suppressor genes [47,95]; (c) lncRNAs bind to basal transcription factors to inactivate their promoters and thus repress the expression of target genes [96]; (d) lncRNAs activate accessory proteins to repress gene expression [83,97]; (e) lncRNAs activate transcription factors to promote the expression of target genes.
This reveals a novel mechanism involving the cooperative actions of an lncRNA and a homeodomain protein to regulate transcription [98]; (f) The formation of a trimer containing an activator protein, a translation elongation factor and an lncRNA accelerates the expression of target genes [99]; (g) lncRNAs interact with importin proteins to regulate the subcellular localization of transcription factors [100]; (h) lncRNAs act as the precursors of small RNAs to perform functions [101]; (i) lncRNAs bind to small RNAs to modulate their activities [102]. Epigenetics It has been reported that lncRNAs participate in the epigenetic regulation of gene expression [103][104][105][106], and recent studies suggest a unified model of their mechanism of action. The lncRNAs may directly or indirectly recruit protein complexes involved in chromosome modifications, which results in epigenetic regulation [91]. In accordance with the relative positional relationship between lncRNAs and their target genes, the mechanisms by which lncRNAs regulate target genes can be considered cis [84][85][86][87][88] or trans [83,89]. For those lncRNAs regulating target genes in cis, it was found that the RNAs can form a nuclear complex that is closely linked to the silenced genes. It has been suggested that these lncRNAs may bind to epigenetic modifiers to mediate gene silencing [107]. The HOTAIR lncRNA inactivates genes in trans and interacts with Polycomb Repressive Complex 2 (PRC2) to mediate transcriptional silencing of the HOXD locus [90]. LncRNAs and Disease As previously mentioned, an increasing number of studies have demonstrated that lncRNAs participate in the regulation of protein-coding genes at the transcriptional and posttranscriptional levels [108]. It has been reported that the dysregulation of lncRNAs seems to be a primary cause of many complex human disease processes [109,110], including the development and progression of many types of cancer [111], such as colon cancer [112], prostate cancer [113], breast cancer [114], liver cancer [115], gastrointestinal cancer [116] and other cancers [12,117]. Moreover, some studies have shown aberrant lncRNA expression in neurological diseases [118,119]. Further, mounting studies have suggested potential roles for lncRNAs in immunity [120,121]. lncRNA Structure and Function Similar to mRNAs, distinct mature ncRNAs can be obtained from primary non-protein-coding RNA transcripts via alternative splicing in various differentiated cells, developmental stages or physiological states. It has been estimated that 95% of human primary transcripts of genes containing multiple exons are regulated by alternative splicing [122]. Alternative splicing produces transcript diversification [123]. Alternative splicing of pre-mRNAs generates circular RNA (circRNA) isoforms, ncRNAs with circular structures formed by covalent bonds without a 5′ terminal cap or a poly(A) tail [124]. In general, canonical splicing processes pre-mRNA sequentially in a 5′ to 3′ direction. The processing involves two transesterification reactions to form the intron lariat, followed by the orderly linkage of upstream and downstream exons [125].
However, in the models of circRNA formation, the presence of a non-canonical transcription start determines that an orphan upstream 3′ exon splice site can be generated and then paired with a downstream 5′ exon splice site, with the intervening introns being excised, which produces a circRNA with a circular structure [124]. Trans-splicing and exon skipping are two potential mechanisms by which circRNAs can be generated [126]. Alternative splicing produces many isoforms of the newly discovered lncRNA ANRIL associated with different expression patterns and single nucleotide polymorphisms (SNPs). In general, introns are rapidly excised after transcription. However, more than 100 human introns have their 3′ tails degraded but retain their 2′,5′-phosphodiester bond at the splice site without being hydrolyzed. The retained introns accumulate to form circular intronic lncRNAs (ciRNAs). At the 5′ and 3′ ends of ciRNAs, there are snoRNA structures that replace the 5′ cap and poly(A) tail and facilitate the accumulation of ciRNAs [127]. Existing evidence has shown that ciRNAs play cis-regulatory roles in the transcription of their parental genes through an interaction with the Pol II machinery [128]. The early-discovered lncRNA Nuclear Enriched Abundant Transcript 1 (NEAT1) (MEN ε/β) has been shown to generate distinct isoforms (MEN ε and MEN β) by the alternative processing of the NEAT1 3′ end. MEN ε is characterized by a poly(A) tail at its 3′ end, whereas, similar to the lncRNA MALAT1, the 3′ end of MEN β consists of a triple helix structure [129]. Intriguingly, the structure of MEN β is more stable in various species, and the reason for this is currently under investigation [130]. It is currently accepted that the explanation for the various functions of lncRNAs lies in their multiple structures. Mounting evidence has revealed that some lncRNAs and circRNAs can serve as miRNA sponges and inhibit the binding of miRNAs to their target mRNAs to perform their functions [131]. Maternally expressed gene 3 (MEG3), which is highly expressed in the human pituitary, is an imprinted gene that can exist as 12 different transcriptional isoforms due to alternative splicing. All of the MEG3 isoforms have been recognized to inhibit tumor cell growth. The secondary structure motifs M1, M2 and M3 were observed in all of the MEG3 isoforms, and the M2 and M3 motifs have been shown to be closely involved in the activation of P53 and the inhibition of tumor cell growth [132]. However, some lncRNA isoforms perform opposing roles in biological processes. It is reported that the tumor suppressor gene PTEN is regulated by its pseudogene (PTENpg1) through the miRNA sponge action of PTENpg1. To further investigate this regulatory mechanism, two PTENpg1 antisense RNAs (asRNAs) were discovered to play opposing roles in the regulation of PTEN [133]. X-chromosome inactivation (XCI) is a common phenomenon in epigenetic processes. The lncRNA Xist (X-inactive specific transcript) is reported to act as a critical mediator of XCI [134][135][136]. Several tandem repeat units composed of two stem-loop structures at the 5′ end of Xist have been shown to be essential for the initiation of XCI [51]. Circular ANRIL (cANRIL) is an ANRIL isoform whose circular structure is a by-product of pre-mRNA alternative splicing.
Previous studies suggest that alterations of the structure and/or expression of ANRIL isoforms regulate the expression of INK4/ARF and are associated with atherosclerotic vascular disease (ASVD) [137]. MALAT1, also called nuclear-enriched transcript 2 (NEAT2), has been used as a prognostic marker for the occurrence and development of several types of tumors [138][139][140]. At the post-transcriptional level, the specific secondary structure at the 3′ end of the MALAT1 primary transcript can be recognized by RNase P and RNase Z, generating a triple helix structure that stabilizes MALAT1 and enables MALAT1 to perform its functions [129,141]. The ncRNA growth arrest-specific 5 (Gas5) is predicted to contain several specific hairpin structures and to be involved in starvation-induced cell survival and metabolic activities through the regulation of glucocorticoid receptor (GR)-mediated transcription [142]. Structural Prediction of ncRNAs To elucidate the functions of lncRNAs and to further investigate the question of whether nucleotide sequences serve as functional units or simply as linkers of different functional modules, it is necessary to study the structures of lncRNAs and the interplay between their structure and sequence. RNA possesses a unique ability to form complex secondary and tertiary folds [29]. It has been gradually recognized that the structural flexibility of RNA enables it to perform organizational, catalytic and regulatory functions [25,142,143]. It is now becoming feasible to obtain the functional annotation of transcriptomes based on RNA structure [28]. Traditional methods to investigate RNA structure include chemical probing [144], X-ray crystallography and NMR [145][146][147][148]. However, an increasing number of lncRNA molecules have been discovered, and owing to the rapid degradation and difficult crystallization of RNA molecules, it is difficult to determine their stereochemical structure with these traditional approaches [28]. It is therefore necessary to develop powerful computational methods to predict RNA structure. In this section, various structure prediction methods for noncoding RNAs are reviewed. Prediction of ncRNA Secondary Structure The folding process of the majority of RNA molecules represents a transition from secondary to tertiary structure [149]. Therefore, obtaining the RNA secondary fold is the first step in exploring the functions of ncRNAs [29]. In recent years, various methods have been proposed for predicting RNA secondary structure. These methods are based on two distinct ideas: multiple sequence alignments and the minimum free energy model [28]. Multiple Sequence Alignments Methods based on comparative sequence analysis rely on the fact that the structural conservation of RNA is greater than its sequence conservation [150,151]. Comparative sequence analysis compares several RNA sequences with similar secondary structures to search for conserved secondary structural units and predicts the secondary structure of an unknown RNA sequence [152]. Foldalign Foldalign [30], a simplified variant of the Sankoff algorithm [153], utilizes a dynamic programming algorithm to find the highest scoring local alignment between a sequence and an alignment of other sequences, or between two sequences [154]. The correlation coefficient [155] between the verified database and the predicted structural alignments ranges from 0.8 to 0.9. Foldalign compares each sequence with every other sequence, and the highest scoring alignments are saved.
It can effectively perform on RNA sequences less than 300 nt. In addition, the time associated with this method is significantly reduced compared with the Sankoff version and other variants. However, the speed and efficiency of Foldalign require improvements [154]. The web server can be accessed at http://rth.dk/resources/foldalign/ [156]. Dynalign Dynalign [157], which is based on a dynamic programming proposed by Sankoff, searches for a structure with low free energy common to two sequences without sequence identity by combining comparative sequence analysis and free energy minimization. Compared with free energy minimization alone, the average accuracy of this algorithm is improved from 47.8% to 86.4% for 5S rRNAs. It can predict a set of suboptimal secondary structures and create dot plots to read the information contained in suboptimal structures. Moreover, enzymatic cleavage data [158] and chemical modification probing experiments [159] can be applied to increase the prediction accuracy. However, it cannot predict pseudoknots, and the calculation is limited to sequences whose lengths are less than 400 nt [160]. Pfold Pfold [161] is based on the KH-99 algorithm [162], which combined evolutionary information and a probabilistic structure model. Pfold can accommodate larger numbers of sequences, which can compensate for the limitations of the KH-99 algorithm. Due to its high computational speed and prediction accuracy, it is able to predict RNA secondary structure when long sequences and large numbers of homologous sequences need to be analyzed. With six sequences, an accuracy of 75% is attainable. In addition, many more sequences can be accommodated by Pfold, allowing for even higher accuracies [31]. However, there is still much room for this method to be improved, such as the introduction of a grammar to describe native-like RNA structures, stacking interactions and other models for base-pair evolution [161]. In addition, it cannot predict pseudoknots. Pfold is available through the web-based server www.daimi.au.dk/~compbio/pfold [163]. Alifold The Alifold service [164,165], an extension of Zuker's algorithm [166], uses modified dynamic programming algorithms combined with a covariance term to compute the consensus secondary structure of a set of aligned RNA sequences. It can predict minimum free energy structures and pair probabilities. The current limit for the length of the alignment is 3000 nt [165]. The advantages and limitations of Alifold are almost identical to those of RNAfold. This service can be accessed via the Vienna RNA web server at http://rna.tbi.univie.ac.at/cgi-bin/RNAalifold.cgi [167]. MARNA MARNA [168], a non-probabilistic approach [169], performs pairwise alignments considering both the primary and secondary structures. It folds sequences using the minimum free energy and then provides structural alignment among a set of homologous sequences. When the conservative sequence regions are invisible, MARNA is an appropriate option to predict RNA secondary structure. Users can designate individual parameters that can set the weight for either sequence or structural properties. However, the total length of sequences should not be longer than 10,000 nt. MARNA can be used online on the following webpage: http://rna.informatik.uni-freiburg.de/MARNA/Input.jsp [170]. A large number of studies and experiments have demonstrated that comparative sequence analysis processes higher prediction probabilities when the RNA sequence templates have high similarity [171]. 
However, because comparative sequence analysis depends on prior knowledge of sequences, this model is unfit for single RNA sequences or sequences from considerably different sources [152]. In addition, comparative sequence analysis is time- and memory-consuming, which limits its application for predicting longer RNA sequences [28]. Minimum Free Energy Model When no prior knowledge is available and only a single sequence is offered, an accurate and popular method is to search for the minimum free energy structure through thermodynamic computation [172]. This model utilizes efficient dynamic programming algorithms to search for a secondary structure with the minimum free energy [166]. However, the true RNA secondary structure may not be the structure with the minimum free energy. Zuker et al. [173] therefore developed the concept of suboptimal structures. All suboptimal structures must be further evaluated by biology researchers. Mfold Mfold [32] divides RNA secondary structure motifs into the stem region, bulge loop, internal loop and hairpin loop. Different computational methods are used to calculate the free energy of different motifs. Then, the motifs are assembled through dynamic programming algorithms, and the secondary structure with the minimum free energy can be obtained. Using this method, prior knowledge can be specified before the prediction; the structure of circular RNA sequences is predictable, the maximum size of internal or bulge loops can be set, and the maximum distance between paired bases can be artificially determined. Many studies have proposed that RNA secondary structure affects splicing activity [174]. Yun Yang et al. [175] discovered that inherent intronic elements are underlying mechanisms for the pre-mRNA splicing process. These elements have been found to be conserved at the RNA secondary structural level. In their studies, the Mfold program was used to predict intronic pairings. However, Mfold can only predict the secondary structure of single-stranded RNA. The portal for the Mfold web server is http://unafold.rna.albany.edu/?q=mfold [176]. RNAfold RNAfold [33], which is based on dynamic programming algorithms and computations of the equilibrium partition functions and base pairing probabilities, uses the minimum free energy model when given single-stranded RNA sequences and multiple sequence alignments when given several stranded RNA sequences. RNAfold is a reliable option regardless of whether the base pairing of G and U is considered acceptable or not. Moreover, the sequences can contain incorrect characters. Furthermore, the program can predict single stranded and several stranded RNAs. Humann et al. [177] discovered differentially expressed lncRNAs in the larval ovaries of honeybee castes by using the RNAfold program and other biological technologies. They named the newly discovered lncRNAs lncov1 and lncov2. The secondary structures of both RNAs consist of several consensus hairpin motifs lacking coding potential. However, it is worth noting that the length of the sequence should not be more than 300 nt. When predicting several stranded RNAs, the program can only produce the consensus structure as opposed to the secondary structure of each sequence. In addition, the total length of the sequences cannot exceed 10,000 nt when predicting the consensus structure. The portal for the RNAfold web server is http://rna.tbi.univie.ac.at/cgi-bin/RNAfold.cgi [178].
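The dynamic-programming idea underlying Mfold and RNAfold can be illustrated with a deliberately simplified stand-in: the classic Nussinov recursion, which maximizes the number of complementary base pairs rather than minimizing a nearest-neighbor free energy. The sketch below is only didactic; the pairing alphabet, the 3-nt minimum hairpin loop and the traceback are assumptions of this toy example, not the thermodynamic model the tools above actually use.

```python
# Toy Nussinov-style dynamic program: maximize base pairs, not free energy.
# Real tools (Mfold, RNAfold) use nearest-neighbor thermodynamic parameters instead.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
MIN_LOOP = 3  # minimum number of unpaired bases in a hairpin loop (assumption)

def nussinov(seq):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]           # dp[i][j] = max pairs in seq[i..j]
    for span in range(MIN_LOOP + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                # case 1: position j left unpaired
            for k in range(i, j - MIN_LOOP):   # case 2: j paired with some k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    # traceback to dot-bracket notation
    structure = ["."] * n
    stack = [(0, n - 1)]
    while stack:
        i, j = stack.pop()
        if i >= j or dp[i][j] == 0:
            continue
        if dp[i][j] == dp[i][j - 1]:
            stack.append((i, j - 1))
            continue
        for k in range(i, j - MIN_LOOP):
            if (seq[k], seq[j]) in PAIRS:
                left = dp[i][k - 1] if k > i else 0
                if dp[i][j] == left + 1 + dp[k + 1][j - 1]:
                    structure[k], structure[j] = "(", ")"
                    if k > i:
                        stack.append((i, k - 1))
                    stack.append((k + 1, j - 1))
                    break
    return "".join(structure), dp[0][n - 1]

if __name__ == "__main__":
    ss, n_pairs = nussinov("GGGAAAUCC")
    print(ss, n_pairs)  # prints the dot-bracket structure and the number of pairs found
```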
RNAshapes RNAshapes [34], based on the abstract shapes approach [179], is a new method that combines three RNA analysis tools: the analysis of shape representatives, the consensus shapes approach and the calculation of shape probabilities. Compared with other current RNA folding algorithms, RNAshapes describes only classes of structures abstracted from concrete secondary structures. These structures fall into different shape categories. Within a shape class, every representative is the secondary structure with the minimum free energy. Using this package, single-stranded RNA, sequence files and multi-sequence files are all predictable. For a given threshold value, the number of shapes is less than the number of structures, and the native structures are among the shape representatives. Therefore, users can avoid examining redundant suboptimal structures [179]. However, because the folding kinetics are not considered, the minimum free energy prediction may be incorrect. RNAshapes is freely available at http://bibiserv.techfak.uni-bielefeld.de/rnashapes [180]. RNAstructure RNAstructure [35] implements the most recent set of nearest neighbor thermodynamic parameters, as determined by the Turner group [181,182], using dynamic programming algorithms, including a Sankoff-style algorithm that allows sequence alignment and structure prediction to proceed simultaneously. The user interface is friendly and powerful. Its "Max % Energy Difference" and "Max Number of Structures" settings can be modified to limit the number of suboptimal structures predicted. Moreover, experimental data can be added to constrain the structures. Furthermore, it can predict both single stranded RNA and a structure common to two sequences. This method has been widely used in research. Ding et al. [183] compared the structural features of mRNAs in vivo with predicted structures (determined by RNAstructure) in silico and revealed that mRNAs related to stress responses have structural features, such as longer maximal loop length and more single strandedness, that allow for easy conformational changes under various environmental conditions. SPRY4-IT1, the lncRNA that regulates invasion and apoptosis, was predicted (by RNAstructure) to contain long hairpin motifs, suggesting that SPRY4-IT1 may function as an RNA molecule [184]. The package is available for downloading at http://rna.urmc.rochester.edu/RNAstructure.html [185]. The information regarding the methods described above is summarized in Table 1. Apart from the mainstream methods mentioned above, Sfold [186,187], Contrafold [188], and MPGAfold [187] are also available to solve problems when predicting RNA secondary structure. Although there has been remarkable development in the methods to predict RNA secondary structures, the methods based on the free energy parameters proposed by Zuker et al. [32,173] still represent the mainstream. Prediction of ncRNA Tertiary Structure The formation of specific tertiary structures is essential for the functioning of noncoding RNAs in many biological processes [189]. RNAs can alter their tertiary structure under different conditions, enabling them to interact with other RNAs, ligands, proteins or themselves [28]. In this section, methods to predict the tertiary structure of ncRNAs are reviewed. FARNA FARNA [190], derived from the Rosetta methods of protein tertiary structure prediction [191], utilizes coarse-grained models in which dummy atoms replace the center of each base and seeks the RNA tertiary structure with the minimum free energy.
The prediction accuracy of the main chains can reach a 4 Å root-mean-square deviation (RMSD) [192] for short RNA sequences with a length less than 30 nt. The prediction accuracy of this method can be further improved by combining it with experimentally determined secondary structure information [193]. In recent years, Baker et al. [194] have introduced all-atom terms to FARNA, which has allowed FARNA to become an all-atom structure prediction method. FARNA is characterized by a better computational efficiency in comparison with numerous sampling strategies. However, FARNA can only predict the tertiary structure of small RNA molecules (<40 nt). Challenges remain in accommodating RNA molecules of longer lengths or with complex topological structures. NAST NAST (The Nucleic Acid Simulation Tool) [195], based on coarse-grained models, uses knowledge-based energy functions to automatically predict RNA tertiary structure. NAST requires secondary and tertiary contact information for target RNA molecules to direct folding. It has a mean RMSD of 8.0 ± 0.3 Å and 16.3 ± 1.0 Å for the yeast phenylalanine tRNA and the P4-P6 domain of the Tetrahymena thermophila group I intron, respectively. Plausible RNA structures can be created with empirical RNA geometric distributions, a relatively high modeling speed can be achieved by using single-point-per-base models, and the capacity to constrain and filter models with experimental data improves the prediction accuracy of NAST. Due to computational complexity, modeling large RNA molecules remains difficult. The software package is freely available at https://simtk.org/home/nast [196]. iFoldRNA iFoldRNA [37] uses discrete molecular dynamics (DMD) to rapidly explore RNA tertiary conformation [36,197]. Compared with traditional molecular dynamics simulations, the rapid conformation sampling ability of DMD contributes to its rapid structure prediction [198]. Low RMSDs (2-3 Å) are observed in the predictions of iFoldRNA. iFoldRNA can predict the tertiary structure of small RNA molecules (<50 nt) with simple topological structure. When predicting larger RNA molecules (>50 nt), a longer time is required to sample the conformational space, which increases exponentially. Recently, parameters including base pairing, base stacking and hydrophobic interactions obtained from experiments have been integrated into iFoldRNA to constrain the structures of larger RNA molecules [199]. BARNACLE BARNACLE [200], a probabilistic model of RNA structure, provides sampling of RNA conformations in continuous space. Current prediction methods such as FARNA are primarily based on combining short fragments obtained from experiments to construct reasonable native-like tertiary structures. However, there are some computational sampling problems associated with these methods. It is possible for BARNACLE to efficiently sample 3D conformations of RNA on a short length scale. BARNACLE can accurately predict RNA tertiary structure when the length of the RNA sequence is less than 50 nt (10 Å RMSD). Nevertheless, structure sampling becomes difficult due to too many degrees of freedom with longer RNA molecules or with those that harbor complicated topological structures. Moreover, the sequence and evolutionary information of BARNACLE needs to be extended. CG Model The CG model [201] models RNA structures with molecular dynamics based on a new statistical coarse-grained potential. The statistical analysis of 688 RNA experimental structures has been applied to parameterize the CG potential [202].
The computational efficiency is greater than that of the all-atom model because of the reduction in the number of angle, bond and torsion calculations. Fifteen RNA molecules with lengths of 12 to 27 nt have been tested through molecular dynamics simulation, showing that 75% of the RNA molecules can be folded into native-like structures along at least one out of multiple pathways using the simulated annealing method. If secondary or tertiary structure interaction information is provided, all of the RNA molecules can be successfully folded into structures with an RMSD of less than 6.5 Å. Similar to other methods, this method is restricted to predicting small RNA molecules with simple topological structures. RNA2D3D RNA2D3D [203], different from other structure prediction methods, is based on unpaired bases derived from Assisted Model Building with Energy Refinement (AMBER) [204] and canonical base-pairings of the A-form helix to model RNA tertiary structure. However, the tertiary structures automatically generated by RNA2D3D can contain overlapping atoms, dissociated covalent bonds and other structural problems. Therefore, further optimization is necessary to obtain a reasonable RNA tertiary structure. After the adjustment and optimization of RNA2D3D, the pseudoknot structure of the telomerase RNA, with a length of 48 nt, was successfully built by Shapiro et al., and the RMSD reached 7 Å [205][206][207]. Vfold Model The Vfold model [208] is a physics-based method for predicting larger and more complex RNA molecules from nucleotide sequences. This method uses a multi-scaling strategy in which secondary and tertiary structures are obtained in a serial fashion. Compared with other methods, the Vfold model can predict larger RNA molecules, for example the 122-nt 5S rRNA domain (RMSD 7.4 Å). The most significant advantage of the Vfold model is its statistical mechanical calculation of the conformational entropy of RNA tertiary structures. In addition, the model can be used to predict all low-lying tertiary structures in the energy landscape. However, this method does not consider sequence-dependent tertiary contacts, such as general loop-loop and loop-helix interactions, in loop free energy minimization. RSIM RSIM [36], a fully automated application, is an improved approach to predict RNA tertiary structure using the fragment assembly method based on RNA secondary structure constraints. It addresses the pitfalls of FARNA by reducing the size of the sampled conformational space and applying reasonable base-pairing constraints within the fragment assembly method. Monte Carlo simulations, a statistical potential and a diverse fragment library are further used to refine the tertiary structures obtained by RSIM. During the refinement, the simulation paths can be tracked. RSIM can accommodate RNA molecules with a length over 40 nt (RMSD 4.8 Å). However, RSIM cannot automatically predict the tertiary structure of RNA molecules with pseudoknot structures. RSIM is available at http://www.github.com/jpbida/rsim [209]. 3dRNA 3dRNA [38], based on RNA sequence and secondary structural information, is a method for the rapid and automated building of RNA tertiary structure. It is a hierarchical approach to the construction of RNA tertiary structure [210]. Compared with other methods, 3dRNA can obtain RNA tertiary structural templates from different RNA families. It is found that the conformations of the backbone of RNA structural templates of the same sequence are similar to each other.
These similarities contribute to a high average prediction accuracy of 3.97 Å RMSD. 3dRNA is not limited to predicting the tertiary structures of small RNA molecules or those with simple topology. For RNA molecules of a large size and complex topology, the predicted tertiary structures have an average RMSD of 5.7 Å. Research conducted in Qian's lab at Northwestern Polytechnical University has predicted the tertiary structures of 5 lncRNAs with 3dRNA and uncovered important roles for these lncRNAs in bone formation when MACF1 (microtubule actin cross-linking factor 1) is down-regulated (data not shown). The package is available at http://biophy.hust.edu.cn/3dRNA/3dRNA-1.0.html [211]. The methods mentioned above are widely used to predict RNA tertiary structure. Furthermore, MC-Fold/MC-Sym [212] uses the nucleotide cyclic motif (NCM) as a first-order object to represent nucleotide relationships in structured RNAs. ASSEMBL [213] is an interactive graphical tool based on human-computer interactions to analyze and build 2D and 3D RNA models. In general, the prediction accuracy of RNA tertiary structure will be largely improved by the addition of structural information, such as RNA secondary structure, distance, rotation angle, dihedral angle and other tertiary structural information [214]. However, Liang and Schlick [215,216] assessed these existing RNA tertiary structure prediction methods and found that they are restricted to analyzing short (<50 nt) or topologically simple molecules with RMSD less than 6 Å. When predicting larger (50 to 130 nt) or more topologically complex RNA molecules, the tertiary structure can be obtained with a mean RMSD of 20 Å. Moreover, the existing prediction methods for RNA tertiary structure require human-computer interactions for further adjustment to optimize the obtained RNA tertiary structure. Therefore, the proposal of 3dRNA is a significant step forward in the prediction of RNA tertiary structure. The various methods for predicting RNA tertiary structure are summarized in Table 2. Conclusions With an increasing number of studies focused on lncRNAs, an increased understanding of lncRNAs has been achieved. lncRNAs play biological roles in organisms, and their dysregulation is strongly linked to the occurrence and development of various diseases [217]. However, in-depth knowledge of the function of lncRNAs is a developing but difficult field due to the diversity and complexity of the mechanisms underlying lncRNAs. As RNA function is closely associated with its structure [24], analyzing RNA structure provides a new approach to the study of lncRNAs [28]. Until major progress in the determination of ncRNA structure using physical methods is achieved, the structural prediction of ncRNAs will remain a hotly pursued issue. At present, the prediction of pseudoknots is very difficult [218]. Our knowledge of thermodynamics [182,219] and of algorithms to model RNA molecules undergoing conformational changes [28] is incomplete. These represent problems that need to be addressed for the secondary structure prediction of ncRNAs. Moreover, with the ongoing improvements in the accuracy of ncRNA structural prediction, it is possible to reliably predict the tertiary structure of small RNA molecules; however, predicting the structure of large RNA molecules or those with complex topological structures [38] remains challenging. Moreover, tackling non-canonical base pairings in the prediction of RNA tertiary structure remains a difficult problem [38].
Furthermore, to elucidate the complicated mechanisms of action of lncRNAs, the use of experimental data as constraint information is indispensable. It is expected that the issues arising in the structural prediction of lncRNAs will be addressed in the future and that additional techniques will be applied to studies of lncRNA function, which will allow further analysis of their functions, molecular regulation and pathological mechanisms in diseases. In the future, lncRNAs may serve as drug targets and provide new opportunities for the treatment of diseases.
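The tertiary-structure benchmarks discussed above (FARNA, NAST, iFoldRNA, BARNACLE, 3dRNA and others) all quote accuracy as a root-mean-square deviation between predicted and native coordinates. For reference, a minimal sketch of that metric is given below; it assumes the two coordinate sets are already superposed (published evaluations normally apply an optimal superposition, e.g. the Kabsch algorithm, before computing RMSD), and the toy coordinates are invented for illustration.

```python
import numpy as np

def rmsd(coords_pred, coords_native):
    """Root-mean-square deviation between two pre-superposed N x 3 coordinate arrays."""
    p = np.asarray(coords_pred, dtype=float)
    q = np.asarray(coords_native, dtype=float)
    if p.shape != q.shape:
        raise ValueError("coordinate arrays must have identical shapes")
    return float(np.sqrt(np.mean(np.sum((p - q) ** 2, axis=1))))

# Toy example: two 3-atom "structures" offset by 1 Angstrom along x for every atom.
pred   = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]]
native = [[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [3.0, 0.0, 0.0]]
print(rmsd(pred, native))  # 1.0
```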
9,289
2016-01-01T00:00:00.000
[ "Biology" ]
Circulating insulin-like growth factor-1 and risk of lung diseases: A Mendelian randomization analysis Background Insulin-like growth factor-1 (IGF-1) plays a vital role in the pathogenesis of lung diseases; however, the relationship between circulating IGF-1 and lung disease remains unclear. Methods Single nucleotide polymorphisms (SNPs) associated with the serum levels of IGF-1 and the outcome data of lung diseases including asthma, chronic obstructive pulmonary disease (COPD), lung cancer and idiopathic pulmonary fibrosis (IPF) were screened from public genome-wide association studies (GWAS). Two-sample Mendelian randomization (MR) analysis was then performed to assess the independent impact of IGF-1 exposure on these lung diseases. Results In total, 416 SNPs related to circulating IGF-1 levels were identified among 358,072 participants in the UK Biobank. According to a primary causal effects model with MR analyses by the inverse variance weighted (IVW) method, circulating IGF-1 was demonstrated to be significantly related to the risk of asthma (OR, 0.992; 95% CI, 0.985-0.999, P=0.0324), while circulating IGF-1 showed no significant correlation with COPD (OR, 1.000; 95% CI, 0.999-1.001, P=0.758), lung cancer (OR, 0.979, 95% CI, 0.849-1.129, P=0.773), or IPF (OR, 1.100, 95% CI, 0.794-1.525, P=0.568). Conclusion The present study demonstrated that circulating IGF-1 may be causally related to a lower risk of asthma. Introduction Insulin-like growth factor 1 (IGF-1), a 70-amino acid polypeptide, was formerly known as somatomedin C owing to its similarity in sequence and structure to insulin; it is mainly responsible for a series of biological functions, such as cell division, differentiation, apoptosis as well as metabolism (1,2). The production of IGF-1 is mainly derived from the liver and controlled by growth hormone (GH), which also fulfils an endocrine function (3). IGF-1 acts primarily by binding to cysteine-rich regions of the IGF-1 receptor (IGF-1R) subunit, which leads to conformational changes in the receptor that subsequently interact with adaptor proteins, such as members of the insulin receptor substrate (IRS), Granzyme B (GrB), and Src homology collagen (SHC) families, to initiate intracellular signal transduction cascades, including the phosphoinositide 3-kinase (PI3K)/protein kinase B (PKB/Akt) pathway and the Ras-mitogen activated protein kinase pathway, thereby participating in cell proliferation and apoptosis, respectively (4)(5)(6). Furthermore, IGF-1 can also influence the metabolism of carbohydrates and lipids under physiological and pathological conditions (7). Interestingly, metabolism of liver glucose may in turn directly regulate the transcription of the IGF-1 gene (8). Immunological and genetic analyses confirm the expression of IGF-1 in pathological or physiological lung tissue, such as airway cells, alveolar macrophages and lung fibroblasts. Lung fibroblasts have been shown to synthesize IGF-1 (9). Minuto et al. found that the expression of IGF-1 in tumor tissue was higher than that in adjacent normal lung tissue (10). Moreover, activated alveolar macrophages can express IGF-1 (11). Taken together, IGF-1 may play a critical role in lung disease, especially in inflammatory disease, cancers, and lung fibrosis. However, so far, the relationship between circulating IGF-1 and lung diseases such as asthma, chronic obstructive pulmonary disease (COPD), lung cancer and idiopathic pulmonary fibrosis (IPF) remains unclear, and the available data cannot infer causality.
Mendelian randomization (MR) analysis, an epidemiological method, employs genetic variation as an instrumental variable for exposure, which can effectively reduce residual confounding and minimize reverse-causation bias, thereby strengthening causal reasoning for exposure-outcome associations (12). At present, MR analysis has been successfully applied to a wide range of observational associations, such as clarifying correlations between physiological indicators and evaluating the causal effects of various behaviors, especially the causal effects of biomarkers on diseases. In this context, we conducted MR analysis to determine the association between serum IGF-1 levels and asthma, COPD, lung cancers and IPF. Method and material Study design MR analysis uses genetic variants, which are randomly assigned at meiosis and are therefore independent of potential confounders, as proxies for risk factors in instrumental variable analysis. Beyond that, to be considered an effective instrument, the genetic variant must be closely related to the risk factor of interest and must not directly influence the outcome, but only act through the exposure (13). Since observational studies are prone to reverse causality and unmeasured confounding, this study designed a two-sample MR analysis to investigate the causal effect of circulating IGF-1 levels on pulmonary diseases, including asthma, COPD, lung cancers, and IPF. Data on the correlation between single-nucleotide polymorphisms (SNPs) and circulating IGF-1 as well as the correlation between SNPs and pulmonary diseases were obtained from the GWAS database (Figure 1: flow chart of the Mendelian randomization analysis in this study). Genetic instrument selection Based on a GWAS containing 358,072 participants of European descent from the UK Biobank, a total of 416 SNPs were selected as genetic instruments for the level of circulating IGF-1, with P values below the genome-wide significance threshold of 5 × 10^-8 and a linkage disequilibrium threshold of R^2 < 0.01. Data sources of lung diseases The data associated with lung diseases in the present study were publicly available from the IEU OpenGWAS project (mrcieu.ac.uk), which hosts a series of large-scale GWAS. Summary-level data for asthma (GWAS ID: ukb-b-18113) were obtained from the MRC Integrative Epidemiology Unit at the University of Bristol (MRC-IEU) consortium with 53,598 cases and 409,335 non-cases. Summary-level data for COPD (GWAS ID: ukb-b-13447) were obtained from the MRC-IEU consortium with 1,605 cases and 461,328 non-cases. Summary-level data for lung cancers (GWAS ID: ieu-a-966) were obtained from the International Lung Cancer Consortium (ILCCO) with 11,348 cases and 15,681 non-cases. Furthermore, we subdivided lung cancer into lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC), and analyzed the correlation between IGF-1 and each of them. Summary-level data for IPF (GWAS ID: finn-b-IPF) were obtained with 1,028 cases and 196,986 non-cases. The above data are summarized in Table 1. Statistical analysis Inverse variance weighted (IVW) MR analysis based on the random-effects model was used as the main analysis. IVW takes a weighted mean of the single-SNP causal estimates, with each estimate weighted by the reciprocal of its approximate variance (16). Cochran's Q statistic is calculated to determine the heterogeneity between the estimates obtained from individual SNPs. Egger's regression analysis and the MR-PRESSO global test (17) were used to evaluate potential directional pleiotropy.
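For concreteness, the fixed-effect IVW estimate can be read as an inverse-variance-weighted average of the per-SNP Wald ratios, and Cochran's Q as the weighted sum of squared deviations of those ratios from the pooled estimate. The analyses in this study were run with the R package TwoSampleMR; the sketch below only illustrates the underlying formulas in Python/numpy, and the three sets of summary statistics are invented toy values, not data from the GWAS used here.

```python
import numpy as np

def ivw_estimate(beta_exp, beta_out, se_out):
    """Fixed-effect IVW causal estimate from per-SNP summary statistics.

    Weights w_j = beta_exp_j**2 / se_out_j**2 correspond to inverse-variance
    weighting of the per-SNP Wald ratios beta_out_j / beta_exp_j (first-order weights).
    """
    beta_exp, beta_out, se_out = (np.asarray(x, dtype=float) for x in (beta_exp, beta_out, se_out))
    ratio = beta_out / beta_exp                 # per-SNP Wald ratio estimates
    w = beta_exp ** 2 / se_out ** 2             # first-order inverse-variance weights
    beta_ivw = np.sum(w * ratio) / np.sum(w)    # pooled causal estimate
    se_ivw = np.sqrt(1.0 / np.sum(w))           # fixed-effect standard error
    q = np.sum(w * (ratio - beta_ivw) ** 2)     # Cochran's Q for heterogeneity
    return beta_ivw, se_ivw, q

# Invented toy summary statistics for three SNPs (not real GWAS values).
b_exp = [0.10, 0.08, 0.12]           # SNP-exposure (IGF-1) effects
b_out = [-0.0010, -0.0008, -0.0011]  # SNP-outcome (asthma, log-odds) effects
s_out = [0.0004, 0.0005, 0.0004]     # standard errors of the outcome effects

beta, se, q = ivw_estimate(b_exp, b_out, s_out)
print(f"IVW beta = {beta:.4f}, SE = {se:.4f}, OR = {np.exp(beta):.3f}, Cochran's Q = {q:.2f}")
```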
In addition to IVW, we also performed other methods to test the consistency of the results. Maximum likelihood is a traditional method with a low standard error; it estimates the causal effect by maximizing the likelihood under an assumed probability distribution for the data. Although there may be some bias in a limited sample, the deviation is so small as to be practically negligible (18). The simple median method offers consistent effect estimates if at least 50% of the genetic variants are valid instruments, while the weighted median method can account for differences in estimate precision and provides consistent estimates even if up to 50% of the information comes from invalid SNPs (19). All results were considered statistically significant at P < 0.05 with two-tailed testing. All analyses were conducted in R (version 4.1.2) based on the R package "TwoSampleMR". IGF-1 and asthma According to a primary causal effects model with MR analyses by the IVW method, IGF-1 has been demonstrated to be significantly associated with the risk of asthma (OR, 0.992; 95% CI, 0.985-0.999, P=0.0324, Figure 2). The results of maximum likelihood and the simple median were consistent with IVW (OR, 0.992, 95% CI, 0.988-0.996, P<0.001; OR, 0.992, 95% CI, 0.984-0.999, P=0.0276, respectively). Although the results of MR-Egger and the weighted median showed no statistically significant association between IGF-1 exposure and asthma (OR, 0.995, 95% CI, 0.977-1.013, P=0.595; OR, 0.996, 95% CI, 0.988-1.004, P=0.292, respectively), the direction was in line with the results of the main analysis, especially IVW. Heterogeneity analysis suggested that there may be heterogeneity (P<0.001, Table 2); however, this did not affect the results of IVW, and the conclusion was still reliable and acceptable. Moreover, horizontal pleiotropy was not found in the MR results (P=0.221). The scatter plot is presented in Figure 3, which shows the consistency of the results across methods (the slopes of the lines represent the causal effect estimated by each method). Although there was heterogeneity, the results were still reliable. There was no evidence of directional pleiotropy according to the MR-Egger intercept, while the MR-PRESSO global test showed the contrary result. We further found that the outlier-corrected analysis showed similar results (Table S2). Discussion In the present study, the two-sample MR analysis applying a series of large-sample GWAS data indicated that the level of circulating IGF-1 may be inversely related to the risk of asthma, while there was no obvious evidence to support a relationship between the level of IGF-1 and COPD, lung cancers or IPF. Interestingly, despite the lack of epidemiological as well as clinical studies of circulating IGF-1 in relation to lung disease, substantial preclinical evidence has revealed the potential effects of IGF-1 in the pathobiology of asthma, COPD, lung cancers, and IPF. Our findings showed consistency with several but not all previous studies of the role of IGF-1 in asthma. A study in a British population cohort suggested that the level of serum IGF-1 was associated with a lower risk of asthma, which was in line with our study (20). However, a case-control study of 50 participants reported a positive association of serum IGF-1 with asthma; that study, however, was limited by a small sample size (21). Indeed, IGF-1 may play a crucial role in asthma, especially in airway hyperresponsiveness, airway inflammation, and airway smooth muscle hyperplasia. A study of bronchial biopsies in asthmatic patients showed significantly elevated mRNA levels of IGF-1, which were associated with subepithelial fibrosis (22).
These studies suggested that IGF-1 may be involved in airway inflammation and remodeling. Supporting this hypothesis, the expression of airway IGF-1 was upregulated in ovalbumin (OVA)-induced asthmatic mouse models, while administration of anti-IGF-1 improved airway inflammation, airway resistance, and airway wall thickening (23). Upregulated IGF-1 in the lungs of asthmatic mice was confirmed to mainly originate from alveolar macrophages (24). Furthermore, IGF-1 can suppress the phagocytosis of apoptotic cells by alveolar epithelial cells, thereby promoting the release of the inflammatory contents of apoptotic cells and resulting in increased airway inflammation (25). Thus, observations from preclinical experimental studies implied that increased IGF-1 signaling may aggravate asthma risk, while findings from this study as well as other epidemiological studies indicated that IGF-1 may be associated with reduced asthma risk. The apparently contradictory results regarding IGF-1 and asthma can be explained in part by the fact that preclinical models measured IGF-1 in the lungs, while epidemiological studies assessed circulating serum IGF-1 levels, which may have disparate impacts on immune responses and airway inflammation. In a retrospective study of 61 patients with COPD, the level of serum IGF-1 was significantly lower in COPD patients compared with that in controls. Moreover, circulating IGF-1 levels were also significantly lower in patients with acute exacerbation of COPD (AECOPD) than in patients with clinically stable COPD (26). These results were inconsistent with our research, which may be partly due to sample size. For the relationship between IGF-1 and IPF, there is currently little research, especially population-based epidemiological investigations. To our knowledge, the present study first demonstrated that the level of serum IGF-1 was not associated with IPF from a population-based perspective. An early report suggested that IGF-1 in bronchoalveolar lavage (BAL) fluids was involved in IPF (27). Mechanistically, TGF-β may enhance the activation of IGF-1 to promote pulmonary fibrosis (28). In contrast to this result, Bloor et al. found that the total expression of IGF-1 was downregulated in BAL cells from patients with IPF compared with healthy subjects (29). It was hypothesized that TGF-β enhancement not only led to the synthesis of fibroblast extracellular matrix and the differentiation of myofibroblasts, but also enhanced epithelial cell death by reducing the expression of IGF-1 and the secretion of apoptotic factors. As a result, fibroblast repair time may be prolonged and epithelial cells are unable to regenerate, thus promoting fibrotic scarring and further inducing pulmonary fibrosis (6). In addition, the temporal regulation and spatial localization of IGF-1 production may play an important role in the progression of lung disease (30). In a pan-cancer analysis of IGF-1, circulating IGF-1 was not associated with lung cancers, which matched our results (31). However, IGF-1 signaling is involved in virtually all stages of lung cancers.
For example, severe bronchial dysplasia produces more paracrine and autocrine IGF-1 than benign bronchial epithelial cells, which then interacts with tobacco carcinogens to promote lung carcinogenesis, thereby implying an early role of IGF-1 in the development of lung cancers (32). It was also found that exogenous IGF-1 induced the upregulation of epithelial-mesenchymal transition (EMT) and promoted proliferation, invasiveness, metastasis, and eventually resistance to epidermal growth factor receptor-tyrosine kinase inhibitors (EGFR-TKIs) via binding to the IGF-1 receptor (33). The apparent contradiction between the preclinical and epidemiological findings on IGF-1 and lung disease can be partly explained by the fact that preclinical models primarily measure IGF-1 in the lungs, as mentioned for asthma and IPF, while epidemiological studies primarily assess the level of peripheral serum IGF-1, which may have different effects on immune response and airway inflammation. Furthermore, feedback between IGF-1 and T helper cell-associated cytokines may lead to conflicting results from experimental and epidemiological studies. It has been reported that IL-4 and IL-13 can enhance the expression of IGF-1 induced by IL-17 in vitro (34). On the other hand, negative feedback between proinflammatory cytokines and IGF-1 has been suggested. Tumor necrosis factor (TNF)-α and IL-1β can decrease the sensitivity to IGF-1 by enhancing IGFBP production and impeding IGF-1 binding to its receptor IGF-1R through an insulin receptor substrate (IRS)-Akt pathway. Conversely, IGF-1 can suppress proinflammatory cytokine signaling through the c-Jun N-terminal kinase (JNK) and NF-κB pathways as well as by increasing the secretion of IL-10 (35). Notably, smoking may influence the level of IGF-1, as smokers tend to have lower circulating IGF-1 levels than non-smokers (36). The current study has several strengths and limitations. The main advantage is that the MR analysis can reduce the bias from residual confounding and reverse causality, thus strengthening the comprehensive assessment of the relationship between circulating IGF-1 and pulmonary diseases. In addition, the genetic instrument for IGF-1 has favourable validity and has been used in previous MR studies, which supports the robustness of our results. Several limitations of this MR study warrant mention. First, information on the grade of asthma, COPD, and lung cancers was not available, so we could not examine whether the association with IGF-1 differed based on these characteristics. Second, more than 400 SNPs were used as genetic instruments for IGF-1 in this study, which may increase the possibility of bias caused by invalid instruments. Nevertheless, supporting results from other sensitivity analyses reduce this possibility and validate our findings. Third, epigenetic phenomena, such as methylation, and interactions between genes and environmental exposures may also influence IGF-1 in relation to lung diseases. However, we were unable to assess these effects in the current MR study. Conclusion Together, these MR results support the possibility of a causal relationship between elevated serum IGF-1 levels and a reduced risk of asthma, and these conclusions may be useful for clinical assessment of patients with asthma. Data availability statement The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://gwas.mrcieu.ac.uk.
3,544.8
2023-03-03T00:00:00.000
[ "Medicine", "Biology" ]
Higgs Phenomenology in the Minimal Dilaton Model after Run I of the LHC The Minimal Dilaton Model (MDM) extends the Standard Model (SM) by a singlet scalar, which can be viewed as a linear realization of a general dilaton field. This new scalar field mixes with the SM Higgs field to form two mass eigenstates, with one of them corresponding to the 125 GeV SM-like Higgs boson reported by the LHC experiments. In this work, under various theoretical and experimental constraints, we perform fits to the latest Higgs data and then investigate the phenomenology of the Higgs boson in both the heavy dilaton scenario and the light dilaton scenario of the MDM. We find that: (i) If one considers the ATLAS and CMS data separately, the MDM can explain each of them well, but refers to different parameter space due to the apparent difference in the two sets of data. If one considers the combined data of the LHC and Tevatron, however, the explanation given by the MDM is not much better than the SM, and the dilaton component in the 125-GeV Higgs is less than about 20% at 2 sigma level. (ii) The current Higgs data impose stronger constraints on the light dilaton scenario than on the heavy dilaton scenario. (iii) The heavy dilaton scenario can produce a Higgs triple self coupling much larger than the SM value, and thus a significantly enhanced Higgs pair cross section at hadron colliders. With a luminosity of 100 fb^{-1} (10 fb^{-1}) at the 14-TeV LHC, a heavy dilaton of 400 GeV (500 GeV) can be examined. (iv) In the light dilaton scenario, the Higgs exotic branching ratio can reach 43% (60%) at 2 sigma (3 sigma) level when considering only the CMS data, which may be detected at the 14-TeV LHC with a luminosity of 300 fb^{-1} and at the Higgs Factory. I. INTRODUCTION Based on about 25 fb^{-1} of data collected at the 7-TeV and 8-TeV LHC, the ATLAS and CMS collaborations have further corroborated the existence of a new boson with a local statistical significance reaching 9σ and more than 7σ, respectively [1][2][3][4]. So far the mass of the boson is rather precisely determined to be around 125 GeV, and its other properties, albeit with large experimental uncertainties, agree with those of the Higgs boson predicted by the Standard Model (SM) [4,5]. Nevertheless, the deficiencies of the SM itself motivate the interpretation of the Higgs-like boson in new physics frameworks such as low energy supersymmetric models [6,7], and as shown by numerous studies, fits to the Higgs data in new physics models can be as good as those in the SM. Among the new physics interpretations of the Higgs-like boson, the dilaton is another attractive one. This particle arises from a strong interaction theory with approximate scale invariance at a certain high energy scale. The breakdown of the invariance then triggers the electroweak symmetry breaking, and the dilaton, as the Nambu-Goldstone particle of the broken invariance, can be naturally light in comparison with the high energy scale. In this framework, the whole SM sector is usually assumed to be part of the scale-invariant sector at the UV scale, and all the fermions and gauge bosons of the SM are composite particles at the weak scale. After such treatment, the couplings of the linearized dilaton field S to the SM fields take the form [8] L_int = (S/f) T^μ_μ, with f denoting the dilaton decay constant and T^μ_μ representing the trace of the energy-momentum tensor of the SM.
Through this term, the dilaton couples directly to the violation of the scale invariance in the SM, i.e., to the fermions and the W, Z bosons with strength proportional to their masses, and thus mimics the behavior of the SM Higgs boson at Run I of the LHC. Since the first hint of the Higgs-like boson at the LHC was uncovered at the end of 2011, the compatibility of the dilaton with the data has been extensively discussed [9][10][11][12][13]. For example, in [10][11][12] traditional dilaton models were compared with models including a Higgs boson, and the techni-dilaton was shown to be able to explain the signals well [13]. The Higgs-dilaton has also been used to address cosmological problems such as inflation and dark energy [14]. In this work, we concentrate on the Minimal Dilaton Model (MDM), which is actually a minimal effective Lagrangian at the weak scale describing the breaking of a scale-invariant UV strong dynamics [15,16]. This model is motivated by topcolor theory [17], and it introduces one massive vector-like fermion with the same quantum numbers as the right-handed top quark. The mass of this top partner represents the scale of the dynamical sector, to which the dilaton naturally couples in order to recover the scale invariance. In this setting, the top quark as a mass eigenstate couples to the strong dynamics through its mixing with the partner. Moreover, unlike in the traditional dilaton models, the SM except for the Higgs field acts as a spectator of the dynamics, and consequently the dilaton does not couple directly to the fermions and the W, Z bosons of the SM. In this sense, the dilaton is equivalent to an electroweak gauge singlet field. In the Minimal Dilaton Model, the SM Higgs field and the dilaton field mix to form two CP-even mass eigenstates. Hereafter we call the eigenstate whose dominant component is the Higgs field the Higgs particle, and the other one the dilaton. The properties of the Higgs boson may deviate significantly from those of the SM Higgs boson due to the mixing effect and also due to its interactions with the dilaton. Noting that the di-photon rate of the Higgs-like boson reported by the CMS collaboration at the Rencontres de Moriond 2013 differs greatly from its previous publications, we first update the fits in [15,16] using the latest Higgs data. Then we consider the phenomenology of the Higgs boson at the LHC. We are particularly interested in the following two scenarios: • Heavy dilaton scenario, where the dilaton is heavier than the Higgs boson. In this scenario, the triple Higgs coupling may be potentially large and, consequently, the Higgs pair production rate at the LHC can be greatly enhanced. • Light dilaton scenario, where the dilaton is lighter than half the Higgs boson mass. In this case, the Higgs boson may decay into a dilaton pair with a sizeable branching ratio. The paper is organized as follows. In Section II, we briefly review the Minimal Dilaton Model. In Section III, we concentrate on the heavy dilaton scenario and scan the MDM parameter space, considering various theoretical and experimental constraints. For the surviving samples, we perform fits to the Higgs data and study Higgs pair production and its detection at the LHC. In Section IV, we turn to the light dilaton scenario in a similar way, but pay particular attention to the exotic decay of the Higgs boson into a dilaton pair. Finally, we draw our conclusions in Section V.
II. THE MINIMAL DILATON MODEL The Minimal Dilaton Model extends the SM by one gauge singlet scalar field S, which represents a linearized dilaton field, and one fermion field T with the same quantum numbers as the right-handed top quark, usually called the top quark partner. Its effective Lagrangian can be written as [15,16] $\mathcal{L} = \mathcal{L}_{\rm SM} + \frac{1}{2}\partial_\mu S\,\partial^\mu S + \bar{T}\left(i\gamma^\mu D_\mu - \frac{M}{f}S\right)T - \left[\,y'\,\bar{q}_{3L}\tilde{H}T_R + {\rm h.c.}\,\right] - \tilde{V}(S,H)$, where q_{3L} is the SU(2)_L left-handed quark doublet of the third generation, M is the scale of the strong dynamics, and $\mathcal{L}_{\rm SM}$ is the SM Lagrangian without the Higgs potential. The scalar potential $\tilde{V}(S,H)$ describes the interactions of S with the SM Higgs field H. Its general expression is $\tilde{V}(S,H) = \frac{m_S^2}{2}S^2 + \frac{\lambda_S}{4}S^4 + m_H^2\,|H|^2 + \lambda_H\,|H|^4 + \kappa\, S^2|H|^2$, where m_S, λ_S, κ, m_H and λ_H are all free real parameters. With such a potential, the fields S and H mix to form two CP-even mass eigenstates, i.e. the Higgs boson h and the dilaton s, related to the gauge eigenstates by the rotation $\begin{pmatrix} h \\ s \end{pmatrix} = \begin{pmatrix} \cos\theta_S & \sin\theta_S \\ -\sin\theta_S & \cos\theta_S \end{pmatrix} \begin{pmatrix} \tilde{h} \\ \tilde{s} \end{pmatrix}$, with f and v/√2 = 174 GeV denoting the vacuum expectation values (vevs) of S and H, respectively. Detailed studies indicate that if h, instead of s, corresponds to the newly discovered boson, a much lower χ² can be obtained in the fits to the Higgs data. So in our discussion we fix m_h = 125.6 GeV, which is the combined mass value of the two collaborations [12]. As for the potential, it is more convenient to use f, v, θ_S, m_h and m_s as input parameters, in which case κ, λ_H and λ_S can be re-expressed in terms of them [15,16]; the expressions involve Sign(sin 2θ_S), the sign of sin 2θ_S. The triple Higgs self coupling normalized to its SM value, C_hhh/SM, and the Higgs coupling to a dilaton pair, C_hss, then follow from the potential. In these expressions we define [15,16] η ≡ N_T v/f, where N_T denotes the number of T fields and is set to 1 for the MDM. Note that C_hhh/SM may be much larger than 1 in the heavy dilaton scenario. Similar to Eq. (5), one may also define the top quark mixing angle θ_L relating the gauge eigenstates to the mass eigenstates t and t′. Then, under the conditions m_t′ ≫ m_t and tan θ_L ≪ m_t′/m_t, the normalized couplings of h and s are given by [15,16] C_hVV/SM = C_hff/SM = cos θ_S, C_htt/SM = cos²θ_L cos θ_S + η sin²θ_L sin θ_S, C_ht′t′/SM = sin²θ_L cos θ_S + η cos²θ_L sin θ_S, C_hgg/SM = cos θ_S + η sin θ_S (10), and C_sVV/SM = C_sff/SM = −sin θ_S, C_stt/SM = −cos²θ_L sin θ_S + η sin²θ_L cos θ_S, together with analogous expressions for the remaining s couplings (11), where V denotes either the W± or the Z boson, f the fermions except for the top quark, and A_i is the loop function presented in [20] with particle i running in the loop. III. HIGGS PHENOMENOLOGY IN HEAVY DILATON SCENARIO In this section, we first scan over the parameter space of the Minimal Dilaton Model in the heavy dilaton scenario under various constraints. Then, for the surviving samples, we investigate the features of h, such as its couplings to SM particles, to s and to itself. Before our scan, we clarify the following facts: • Firstly, since the properties of the dilaton in the MDM differ greatly from those of the SM Higgs boson, its mass may vary from several GeV to several hundred GeV without conflicting with the LEP and LHC Higgs search data. In fact, these data require the mass of a SM-like Higgs boson to be above 114 GeV and outside the region 127−710 GeV, respectively [21]. • Secondly, since we are more interested in new physics at low energy, we take 0 < η^{-1} ≤ 10 with η^{-1} ≡ f/v in our study and pay special attention to the case η^{-1} ≥ 1. • Thirdly, although in principle θ_S may vary from −π/2 to π/2, the Higgs data require it to be around zero so that h is mainly responsible for the electroweak symmetry breaking.
In practice, requiring |tan θ_S| ≤ 2 will suffice. • Finally, we note that the t′ mass has been constrained: the ATLAS and CMS collaborations have set lower bounds of 656 GeV and 685 GeV, respectively, in searches for the top partner at the LHC at 95% confidence level [22], while the requirement of perturbativity sets an upper bound of about 3400 GeV on the t′ mass [15]. In summary, for the heavy dilaton scenario we scan the following parameter space: 0 < η^{-1} ≤ 10, |tan θ_S| ≤ 2, 130 GeV < m_s < 1000 GeV, 700 GeV < m_t′ < 3000 GeV. In the scan we impose the following constraints: (1) Vacuum stability, i.e., ⟨H⟩ = v/√2 and ⟨S⟩ = f correspond to the global minimum of the scalar potential; we analytically checked that the inequality we impose is sufficient to guarantee this. (2) Absence of a Landau pole for the dimensionless couplings λ_H, λ_S and κ below 1 TeV. For each sample in our scan, we numerically solve the one-loop renormalization group equations (RGEs) of the form 16π² dλ_i/d ln μ = β_{λ_i} for these couplings [23] and require each of the three couplings to remain below 1000 up to μ = 1 TeV (a numerical sketch of this test is given below). In building the RGEs, we properly considered the normalization factors of the fields and neglected the effects of the gauge and Yukawa couplings, since we are only interested in the case of large λ_H, λ_S and κ. The Landau pole constraint sets upper bounds on the couplings, and the higher the scale at which we impose it, the tighter the constraint becomes. (3) Experimental constraints from the LEP, Tevatron and LHC searches for Higgs-like particles, implemented with the package HiggsBounds-4.0.0 [24]. (4) Experimental constraints from the electroweak precision data (EWPD). We calculate the Peskin-Takeuchi S and T parameters [25] with the formulae presented in [15] and construct χ²_ST from the experimental fit results of [26], keeping samples with χ²_ST < 6.18 for further study. We do not consider the constraints from V_tb and R_b since they are weaker than those from the S, T parameters [15]. (5) Experimental constraints from the Higgs data after the Rencontres de Moriond 2013. These data include the exclusive signal rates for the γγ, ZZ*, WW*, bb̄ and ττ channels (see Fig. 15 of ref. [27] for the CDF+D0 results). Similar to our previous works [28], we perform fits to the data using the method first introduced in [10] and consider the correlations of the data as done in [30,31]. For the branching ratios of the various decay channels at different Higgs boson masses in the SM, we used the results in [29]. Moreover, since the latest di-photon rate reported by the CMS collaboration (0.77 ± 0.27 [4]) is much smaller than that of the ATLAS group (1.6 ± 0.3 [5]), we perform three independent fits using only the ATLAS data (9 sets), only the CMS data (9 sets) and the combined data (22 sets) including ATLAS, CMS and CDF+D0. We checked that χ² in the SM is 10.64, 4.78 and 18.79 for the three fits, while χ²_min in the MDM is 8.32, 2.57 and 18.66, respectively. The fact that χ²_min < χ²_SM reflects that the MDM is more adaptable to the data than the SM. In our discussion, we are particularly interested in two types of samples, i.e., ∆χ² ≤ 2.3 and 2.3 < ∆χ² ≤ 6.18 with ∆χ² = χ² − χ²_min. These two sets of samples correspond to the 68% and 95% confidence level regions in any two-dimensional plane of the model parameters when explaining the Higgs data [30,31]. Hereafter we call them 1σ and 2σ samples, respectively.
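To make the Landau pole test in constraint (2) concrete, the following sketch integrates a set of one-loop RGEs of the schematic form quoted above; the quartic beta-function coefficients are illustrative placeholders rather than the exact expressions of refs. [15,23], and the function names are our own:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative one-loop RGEs for (lambda_H, lambda_S, kappa): the coefficients
# below are schematic placeholders, NOT the exact expressions of refs. [15,23];
# gauge and Yukawa terms are neglected, as in the text.
def beta(log_mu, y):
    lH, lS, k = y
    return [
        (24 * lH**2 + k**2) / (16 * np.pi**2),
        (18 * lS**2 + 2 * k**2) / (16 * np.pi**2),
        (12 * lH * k + 6 * lS * k + 4 * k**2) / (16 * np.pi**2),
    ]

def passes_landau_pole(lH0, lS0, k0, mu0=173.0, mu_max=1000.0, cap=1000.0):
    """True if all three couplings stay below `cap` up to mu_max (in GeV);
    a failed integration (runaway coupling) is treated as a Landau pole."""
    sol = solve_ivp(beta, [np.log(mu0), np.log(mu_max)], [lH0, lS0, k0],
                    rtol=1e-6)
    return bool(sol.success and np.all(np.abs(sol.y) < cap))

print(passes_landau_pole(0.5, 0.5, 0.5))     # weakly coupled point -> True
print(passes_landau_pole(20.0, 35.0, 10.0))  # strongly coupled -> likely False
```

As stated in the text, pushing mu_max above 1 TeV in such a test tightens the resulting upper bounds on the couplings.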
A. The heavy dilaton scenario confronted with the current Higgs data In Fig. 1, we project the 1σ (red bullets) and 2σ (blue triangles) samples passing the various experimental constraints onto the plane of η^{-1} ≡ f/v versus tan θ_S. From this figure we can see that: • The latest Higgs data are very powerful in constraining the MDM parameter space. For the three different fits, most of the samples are confined to the narrow region of |tan θ_S| < 1 and |η tan θ_S| < 0.5 or to the region of η^{-1} ≲ 1 and η tan θ_S ≃ −2. • The region of the 1σ samples for the ATLAS data is approximately complementary to that for the CMS data, which reflects the conflict between the results of the two collaborations. • We checked that, for the minimal-χ² sample of the ATLAS data, Eq. (10) implies an enhanced di-photon signal rate of around 1.55, which is the most attractive feature of the ATLAS data. • For the combined data, most of the samples are located within |tan θ_S| ≲ 0.5 or |sin θ_S| ≲ 0.45, which indicates that the dilaton/singlet component in the Higgs boson should be less than about 20%. In Fig. 2, we only consider the samples with η^{-1} > 1 and show the normalized Higgs couplings to the SM gluon, photon, light fermions and massive vector bosons (C_hVV/SM = C_hff/SM). Since in the MDM C_hff and C_hVV contribute about 90% of the total decay width of the SM-like Higgs boson and C_hgg dominates the Higgs production rate at the LHC, the normalized XX signal rate is roughly proportional to (C_hgg/SM)²/(C_hff/SM)² × (C_hXX/SM)² (numerically transcribed in the sketch below). Then, combining Fig. 1, Fig. 2 and Eq. (10), one can learn that: • The missing parts around (C_hgg/SM)/(C_hff/SM) = 1 are actually a consequence of the upper limit η^{-1} ≤ 10. • Most of the samples for the ATLAS data are characterized by (C_hgg/SM)/(C_hff/SM) ≳ 1. This is because some of the ATLAS inclusive signals are enhanced compared with their SM values, especially in the di-photon channel. • Most of the 1σ samples for the CMS data satisfy (C_hgg/SM) ≲ 1 and (C_hγγ/SM)/(C_hff/SM) ≃ 1, because most of the CMS inclusive signal rates, including the latest di-photon rate, are smaller than 1. • When considering the ATLAS and CMS data separately, the MDM can explain each of them well, but the fits prefer different regions of parameter space due to the apparent difference between the two data sets. When considering the combined data, however, the MDM explanation is not much better than the SM, and the 125-GeV Higgs couplings to light fermions and vector bosons are very close to their SM values, especially for the 1σ samples. In the above analysis, we checked that the Higgs data place very little constraint on θ_L and m_t′. We also checked that although the EWPD place no constraint on η^{-1} and tan θ_S, they are very powerful in constraining sin θ_L and the coupling C_htt. The latter is shown in Fig. 3 for the samples with η^{-1} > 1. This figure indicates that, due to the EWPD constraint, sin θ_L ≲ 0.3 independently of the Higgs data. Also note that the EWPD constraint is tighter than the V_tb and R_b constraints, which require |sin θ_L| < 0.59 and |sin θ_L| < 0.52, respectively. We further impose the constraint from perturbativity [32], which corresponds to C_hhh/v < 4π (or C_hhh/SM ≲ 16). In Fig. 4 we project the samples with η^{-1} > 1 onto the plane of C_hhh/SM versus m_s. One can learn that, for most of the samples in all three fits, the Higgs triple self coupling is either around the SM value or increases monotonically as m_s goes up.
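The coupling pattern of Eq. (10) and the approximate signal-rate formula quoted above can be transcribed in a few lines; this is a minimal sketch (the function names and interface are ours, not from refs. [15,16]):

```python
import numpy as np

def higgs_couplings(theta_S, theta_L, eta):
    """Normalized h couplings of Eq. (10); angles in radians."""
    cS, sS = np.cos(theta_S), np.sin(theta_S)
    cL2, sL2 = np.cos(theta_L) ** 2, np.sin(theta_L) ** 2
    return {
        "hVV": cS,                           # = C_hff/SM for light fermions
        "htt": cL2 * cS + eta * sL2 * sS,
        "hgg": cS + eta * sS,
    }

def mu_XX(c, c_hXX):
    """Rough normalized gg -> h -> XX rate, (C_hgg/C_hff)^2 x C_hXX^2."""
    return (c["hgg"] / c["hVV"]) ** 2 * c_hXX ** 2

# SM limit: theta_S = theta_L = 0 gives all couplings and rates equal to 1.
print(mu_XX(higgs_couplings(0.0, 0.0, 1.0), 1.0))  # -> 1.0
```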
Returning to Fig. 4, we checked that the samples with a greatly enhanced Higgs self coupling are characterized by 0 < tan θ_S < 1. In this case, λ_H may be very large for a heavy dilaton, reaching about 18 in the optimum case. We also checked that λ_S may be quite large (reaching 35), which occurs for tan θ_S < 0, small η^{-1} ≡ f/v and large m_s; however, due to the sin³θ_S suppression in Eq. (6), its influence on C_hhh is not significant. Fig. 4 indicates that, given tan θ_S > 0 for the samples passing the ATLAS data, m_s ≳ 800 GeV has been excluded by the Landau pole constraint. We checked that this upper bound is lowered further if we impose the constraint at a higher scale. Also note that the perturbativity requirement on the Higgs triple coupling, which corresponds to C_hhh/v < 4π (or C_hhh/SM ≲ 16) [32], can set a stronger bound on m_s. B. Higgs pair production and its detection at the hadron collider From the above analysis we have seen that the latest 125 GeV Higgs data, along with the EWPD, place powerful constraints on the Higgs couplings to SM particles. Especially in the fit to the combined data of the LHC and Tevatron, the explanation given by the MDM is not much better than the SM. The Higgs triple self coupling, however, can be much larger than the SM value, and we therefore investigate its effect on Higgs pair production in the following. We take the 1σ samples for the combined data as an example and calculate the Higgs pair production cross section at proton-proton colliders with the modified code HPAIR [33]. The relevant Feynman diagrams are shown in Fig. 5, where (a) and (b) correspond to triangle and box diagrams, respectively. There also exists the tree-level process bb̄ → hh, but its contribution is negligible due to the small b-quark parton density in the proton, and to be consistent with the processes generated in the following Monte Carlo (MC) simulation we do not consider it here. In the SM and many new physics models, the contributions from the box diagrams are dominant, and the top quark contribution is much larger than that from the b quark [19,34-38]. In the MDM we checked that the top quark contribution is still much larger than those from the b quark and the top partner. The triangle diagrams, however, may contribute more than the box diagrams due to the possibly large Higgs triple self coupling. We calculate the Higgs pair production cross sections at proton-proton colliders with √S = 14, 33, 100 TeV in both the MDM and the SM. In Fig. 6, the 1σ samples for the combined data are projected onto the plane of the cross section ratio σ(gg → hh)/SM versus the self coupling ratio C_hhh/SM. We can see that as the self coupling increases, the cross section ratio first monotonically decreases and then monotonically increases at all these collider energies, with the turning point at C_hhh/SM ≃ 2.5. The reason is that the total amplitude of the triangle diagrams has an opposite sign relative to that of the box diagrams, and in the SM the contribution from the box diagrams is dominant; thus for C_hhh/SM ≳ 2.5 the contribution from the triangle diagrams becomes dominant and monotonically increases as the self coupling goes up (a schematic illustration is given below). At the 14-TeV LHC the production ratio can be as large as about 50 for C_hhh/SM ≃ 16 or m_s ≃ 500 GeV, and at the 33 TeV and 100 TeV colliders it can also reach 30. (FIG. 6: Higgs pair production rates without cuts, calculated at the LHC14, TeV33 and TeV100 proton-proton colliders, respectively.)
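The interference pattern behind the turning point at C_hhh/SM ≃ 2.5 can be caricatured with a one-line amplitude model; the constants below are chosen purely for illustration (so that the dip sits at 2.5) and are not values extracted from HPAIR [33]:

```python
# Schematic gg -> hh amplitude: a triangle piece scaling with C_hhh and a box
# piece of opposite sign; A_BOX and A_TRI are illustrative constants only.
A_BOX, A_TRI = 1.0, 0.4   # box dominates in the SM, as stated in the text

def sigma_ratio(c_hhh_over_sm):
    """sigma(gg -> hh) normalized to the SM in this toy amplitude model."""
    amp_sm = A_BOX - A_TRI                 # SM point, C_hhh/SM = 1
    amp = A_BOX - A_TRI * c_hhh_over_sm
    return (amp / amp_sm) ** 2

print(sigma_ratio(1.0))    # -> 1.0 (SM)
print(sigma_ratio(2.5))    # -> 0.0 (destructive-interference dip)
print(sigma_ratio(16.0))   # -> ~81, i.e. an O(50-100) enhancement
```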
Since the Higgs pair production rate in the heavy dilaton scenario can be highly enhanced, we can expect its discovery at proton-proton colliders with a moderate or small integrated luminosity. The author of [39] performed a detailed MC simulation of Higgs pair production in the SM, through gg → hh → bb̄γγ, at proton-proton colliders with a high integrated luminosity of 3000 fb⁻¹. In this work, we use the SM result of [39] but reduce the integrated luminosity from 3000 fb⁻¹ to 100 fb⁻¹. Table I is taken from [39] for the convenience of discussion, with the numbers of expected events rescaled according to the reduction of the integrated luminosity from 3000 fb⁻¹ in [39] to 100 fb⁻¹ here. We assume that in the MDM the σ × Br and acceptances of the background, as well as the acceptances of the signal, are the same as in the SM, while the σ × Br of the signal is calculated by ourselves. In Fig. 7, we project the 1σ samples for the combined data onto the plane of the significance S/√B versus the coupling C_hhh/SM, where S/√B is calculated with an integrated luminosity of 100 fb⁻¹. The S/√B = 5 lines for other values of the luminosity are also marked; these are the discovery limits for the corresponding luminosities. We can see that a higher energy makes the proton-proton colliders more capable of detecting the Higgs pair production signal. In addition, as C_hhh/SM ≳ 2.5 (m_s ≳ 200 GeV) increases, the significance S/√B grows monotonically. For example, for C_hhh/SM = 16 or m_s = 500 GeV, the discovery luminosity is about 10, 1 and 0.5 fb⁻¹ for the LHC14, TeV33 and TeV100, respectively. And with a luminosity of 100 fb⁻¹ at the LHC14, the Minimal Dilaton Model with C_hhh/SM ≃ 9 or m_s = 400 GeV may be covered. IV. HIGGS PHENOMENOLOGY IN THE LIGHT DILATON SCENARIO For a light dilaton with mass m_s < m_h/2 ≃ 62 GeV, the most interesting phenomenology is the Higgs exotic decay h → ss. Similar exotic decays are also very attractive in supersymmetric models and Little Higgs models [40,41], such as in the Next-to-Minimal Supersymmetric Standard Model (NMSSM) [7]. In the following, we define samples with ∆χ² < 1, 1 < ∆χ² < 4 and 4 < ∆χ² < 9 to be 1σ, 2σ and 3σ samples, respectively, where ∆χ² = χ² − χ²_min and χ²_min denotes the same global minimal values in the three fits as in the heavy dilaton scenario. For these samples, projecting them onto the plane of any observable O_i versus ∆χ² yields the allowed ranges of O_i at the 1σ, 2σ and 3σ levels, respectively. Note that this classification standard differs from that used in the heavy dilaton scenario discussed in the previous sections. The strategies we take in the following are similar to those in the heavy dilaton scenario, except that: (i) the range of m_s is shifted from 130 GeV < m_s < 1000 GeV to 0 < m_s < 62 GeV; (ii) the classification standard of nσ for the samples is changed as described above; (iii) the analysis of the samples for the CMS data receives more attention, and η-fixed fits are performed and investigated. Besides, we remark that since the couplings λ_H, λ_S and κ are small or moderate in the light dilaton scenario, the Landau pole constraint and the perturbativity requirement impose no limitation. A. The light dilaton scenario confronted with the current Higgs data To classify the samples in the light dilaton scenario, we first project them onto the plane of BR(h → ss) versus ∆χ² in Fig. 8.
This figure shows that the exotic decay h → ss for the CMS data can have the largest branching ratio, reaching 21%, 43% and 60% at the 1σ, 2σ and 3σ levels, respectively. This is because most of the inclusive signal rates of the CMS data are suppressed compared to the SM values, so a relatively large exotic branching ratio is more favored. On the contrary, samples for the ATLAS data have smaller exotic branching ratios and, very interestingly, there are no 1σ samples at all. (FIG. 8: Scatter plots of the samples satisfying constraints from the ATLAS data (left), the CMS data (middle) and the LHC+Tevatron data (right), respectively, projected on the plane of BR(h → ss) versus ∆χ² = χ² − χ²_min. All the samples have passed the constraints from the EWPD and the Higgs searches at LEP, the Tevatron and the LHC. Samples with ∆χ² < 1 (red bullets), 1 < ∆χ² < 4 (blue triangles) and 4 < ∆χ² < 9 (green squares) are called 1σ, 2σ and 3σ samples, respectively.) The branching ratio for the combined data, however, is at most about 32% at the 3σ level, which is slightly larger than the value allowed in the SM. To study the light dilaton scenario in more depth and compare it with the heavy dilaton scenario, in Fig. 9 we again project the samples onto the plane of η^{-1} ≡ f/v versus tan θ_S. We emphasize once more that the classification standard here differs from that in the heavy dilaton scenario. Combining Fig. 9 with Fig. 1, Fig. 2 and Fig. 8, we can see that: • In the light dilaton scenario, the surviving samples are mostly confined to |η tan θ_S| < 0.2 and there are no samples with η tan θ_S ≃ −2. Compared with the corresponding results of the heavy dilaton scenario, the Higgs data place stronger constraints on the parameter space in the light dilaton scenario. • Due to the narrower range of |η tan θ_S|, the various Higgs couplings in the light dilaton scenario cannot deviate much from their SM values. For example, most of the 2σ samples for the ATLAS and combined data have |(C_hgg/SM)/(C_hff/SM) − 1| < 0.1. • The CMS data place interesting constraints on the light dilaton scenario. The region of 1σ samples has two separate parts, with the broader one characterized by a negative tan θ_S and the narrower one corresponding to η^{-1} ≳ 4. Since the samples for the CMS data can accommodate a larger branching ratio of the exotic Higgs decay in the light dilaton scenario, we concentrate on them from now on. We perform three further scans with fixed η^{-1} = 1, 2.5 and 5, respectively. Then, for the samples satisfying the EWPD and light Higgs search data, we calculate the χ² using the CMS data only and classify the samples into the 1σ, 2σ and 3σ regions as described above. In Fig. 10 we show the samples as a function of the light dilaton mass m_s; the exotic decay rate shows no obvious dependence on m_s. The reason is that m_s, as an input parameter of the model, is scarcely limited by the constraints we considered, and meanwhile it affects the rate only through phase space. As a consequence, the rare decay rate is mainly determined by C_hss or, basically, by η^{-1} and tan θ_S (see Eqs. (5) and (7)). In Fig. 11, we project the samples onto the plane of BR(h → ss) versus tan θ_S. Combining with Fig. 9, we can see that: • For a small η^{-1} ≡ f/v, e.g., η^{-1} = 1, the h → ss branching ratio can reach 60% while θ_S is confined to a narrow region of |tan θ_S| ≲ 0.2. This is because a large |tan θ_S| with a large η would modify the Higgs couplings and signal rates too much, which is disfavored by the current Higgs data.
• For a moderate η^{-1} ≡ f/v, e.g., η^{-1} = 2.5, all of the 1σ samples have negative tan θ_S and small BR(h → ss), while most of the samples with large BR(h → ss) have positive tan θ_S. Both features can be understood from (C_hgg/SM)/(C_hff/SM) = 1 + η tan θ_S, considering that the CMS data favor suppressed signal rates, i.e., this ratio should be around 1 or even less. If η tan θ_S is sizable and positive, a large exotic decay width is needed to enhance the total width in order to suppress the various signal rates; if η tan θ_S is negative, however, a small BR(h → ss) is enough. • For a large η^{-1} ≡ f/v, e.g., η^{-1} = 5, 1σ samples begin to appear in the tan θ_S > 0 region, which results from the interplay between the exotic branching ratio and the Higgs couplings. The inclusive pp → h → XX signal rate in the tan θ_S > 0 region can be estimated as approximately (C_hgg/SM)²/(C_hff/SM)² × (C_hXX/SM)² × [1 − BR(h → ss)]. We can see that the tan θ_S > 0 region can accommodate 1σ samples for the CMS data, for which the inclusive signal rates of γγ and ZZ* → 4ℓ are 0.77 ± 0.27 and 0.92 ± 0.28, respectively [4]. B. Detection of the light dilaton at hadron and lepton colliders From the above analysis, we see that in the light dilaton scenario the Higgs couplings to SM particles cannot deviate much from their SM values; consequently, the dilaton couplings to SM particles should be very small according to Eqs. (10) and (11). It will thus be very hard to detect the light dilaton through its couplings to SM particles, such as through its associated production with a Z boson at hadron or lepton colliders. Nevertheless, since the Higgs exotic decay h → ss can have a large branching ratio according to the CMS data at the 3σ level, it may be possible to detect the light dilaton at the LHC14 or at a future Higgs factory. To detect the light dilaton through pp → hZ, h → ss → 4b at the LHC14, we use the result of the MC simulation performed at the LHC14 in our previous work [7] with a luminosity of 300 fb⁻¹. The settings of σ × Br and the acceptances are treated as in the heavy dilaton scenario. The η-fixed samples for the CMS data are projected onto the plane of C²_4b versus m_s in Fig. 12, where the significance curves of S/√B = 2, 3, 5 are taken from [7]. The signal rate C²_4b is defined as the pp → hZ → 4bZ event rate normalized to the corresponding SM-like reference rate (a compact numerical transcription is sketched after this subsection). From this figure we can see that: • For η^{-1} = 2.5 and 5.0, basically only the 3σ samples can possibly be detected, and most of the other samples lie below the S/√B = 5 curve, meaning a larger luminosity is needed to detect the 1σ and 2σ samples. • For η^{-1} = 1, most samples have C²_4b ≲ 0.2, which is beyond the detection capability of the LHC14 with a luminosity of 300 fb⁻¹. This is because most of the η^{-1} = 1 samples have |tan θ_S| ≲ 0.2, which means the dilaton coupling to the b quark is limited to |C_sbb/SM| = |−sin θ_S| ≲ 0.2. Therefore the branching ratios of the light dilaton satisfy BR(s → bb̄) ≪ BR(s → gg) ≃ 1, resulting in a small C²_4b. We also checked that for samples with η^{-1} ≲ 1, C²_4b is always very small, which means a light dilaton with η^{-1} ≡ f/v ≲ 1 cannot be detected at the LHC14 through pp → hZ → 4bZ with a luminosity of 300 fb⁻¹. Samples with (i) small |tan θ_S| ≲ 0.2, (ii) small m_s ≲ 2m_b ≃ 10 GeV, or (iii) a small exotic branching ratio, e.g. BR(h → ss) ≲ 0.2, also have a rather small C²_4b and cannot be detected with a luminosity of 300 fb⁻¹. Similar to [7], we also investigated the detection of the light dilaton at the electron-positron collider through e+e− → hZ, h → ss → 4b.
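Before turning to the e+e− results, the two quantities driving the light dilaton phenomenology above can be sketched compactly; the two-body width formula is the standard scalar one, GAMMA_H_SM is an approximate SM total width, and the factorized form of C²_4b is our reading of the definition in ref. [7] rather than a verbatim transcription:

```python
import math

GAMMA_H_SM = 4.1e-3  # GeV, approximate SM total width of a ~125.6 GeV Higgs

def br_h_to_ss(c_hss, m_s, m_h=125.6):
    """BR(h -> ss) from the standard two-body width
    Gamma = c_hss^2 * beta / (32 pi m_h); m_s enters only via phase space."""
    if m_s >= m_h / 2.0:
        return 0.0
    beta = math.sqrt(1.0 - 4.0 * (m_s / m_h) ** 2)   # phase-space factor
    gamma_ss = c_hss ** 2 * beta / (32.0 * math.pi * m_h)
    return gamma_ss / (gamma_ss + GAMMA_H_SM)

def c2_4b(c_hvv_sm, br_h_ss, br_s_bb):
    """Factorized normalized pp -> hZ -> 4b Z rate (our reading of ref. [7])."""
    return c_hvv_sm ** 2 * br_h_ss * br_s_bb ** 2

# A BR(s -> bb) of only ~0.2 suppresses C^2_4b quadratically, matching the
# eta^-1 = 1 discussion above.
print(c2_4b(1.0, 0.4, 0.2))  # -> 0.016
```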
The e+e− → hZ → 4bZ cross sections are calculated for a collision energy of √S = 250 GeV, and the result is shown on the plane of σ(e+e− → hZ → 4bZ) versus m_s in Fig. 13. We can see that for samples with η^{-1} ≳ 2 and m_s ≳ 10 GeV, the cross section can reach 80 fb at the 3σ level, and for 1σ samples with η^{-1} = 5 it can still reach 30 fb. Thus we can expect that it would be easier to detect the light dilaton at the electron-positron collider. In general, a not-too-small C_sbb, or a large η^{-1} ≡ f/v together with a not-too-small |tan θ_S|, is favorable for the detection of the light dilaton at electron-positron colliders as well as at hadron colliders. V. CONCLUSION In this work, we consider the theoretical and experimental constraints on the MDM, such as vacuum stability, the absence of a Landau pole, the EWPD and the Higgs searches at LEP, the Tevatron and the LHC. Then we perform fits to the latest 125-GeV Higgs data in both the heavy dilaton scenario and the light dilaton scenario. Noting the apparent difference between the ATLAS and CMS data, in our fits we consider the ATLAS data, the CMS data and the combined data of the LHC and Tevatron separately. For each scenario, we consider the following aspects: (1) In the heavy dilaton scenario, we show the surviving parameter space and the various Higgs couplings to SM particles for the 1σ and 2σ samples in the fits. The modification of the Higgs triple self coupling is also discussed. For the 1σ samples fitted to the combined data, we calculate the Higgs pair production cross section at proton-proton colliders such as the LHC14 (√S = 14 TeV), TeV33 (√S = 33 TeV) and TeV100 (√S = 100 TeV), and discuss the deviations from the SM values. Based on the MC simulation result in [39], we also investigate the significance S/√B at these colliders. (2) In the light dilaton scenario, we first show the ranges of the exotic decay branching ratio BR(h → ss) in the different fits and compare the surviving parameter space with that of the heavy dilaton scenario. Then we fix η^{-1} and perform fits only to the CMS data, with particular attention paid to the dependence of BR(h → ss) on η^{-1} and tan θ_S. Based on these η^{-1}-fixed samples, we study the detection of the light dilaton through pp → hZ → 4bZ at the LHC14 with a luminosity of 300 fb⁻¹ using our previous MC simulation [7]. We also discuss the detection at an electron-positron collider with √S = 250 GeV. Finally, we have the following observations: • If one considers the ATLAS and CMS data separately, the MDM can explain each of them well, but the fits prefer different regions of parameter space due to the apparent difference between the two data sets. If one considers the combined data of the LHC and Tevatron, however, the explanation given by the MDM is not much better than the SM, and the dilaton component in the 125-GeV Higgs is less than about 20% at the 2σ level. • The current Higgs data place stronger constraints on the light dilaton scenario than on the heavy dilaton scenario. • The heavy dilaton scenario can produce a Higgs triple self coupling much larger than the SM value, and thus a significantly enhanced Higgs pair production cross section at hadron colliders. With a luminosity of 100 fb⁻¹ (10 fb⁻¹) at the 14-TeV LHC, a heavy dilaton of 400 GeV (500 GeV) can be probed. • In the light dilaton scenario, the Higgs exotic branching ratio can reach 43% (60%) at the 2σ (3σ) level when considering only the CMS data, which may be detected at the 14-TeV LHC with a luminosity of 300 fb⁻¹ and at a Higgs factory.
Multi-Omics-Based Discovery of Plant Signaling Molecules Plants produce numerous structurally and functionally diverse signaling metabolites, yet only a relatively small fraction of them has been discovered. Multi-omics has greatly expedited the discovery, as evidenced by an increasing number of recent works reporting new plant signaling molecules and their functions via integrated multi-omics techniques. The effective application of multi-omics tools is the key to uncovering unknown plant signaling molecules. This review covers the features of multi-omics in the context of plant signaling metabolite discovery, highlighting how multi-omics addresses the relevant aspects of the following challenges: (a) unknown functions of known metabolites; (b) unknown metabolites with known functions; (c) unknown metabolites with unknown functions. Based on this problem-oriented overview of the theoretical and application aspects of multi-omics, the current limitations and future development of multi-omics in discovering plant signaling metabolites are also discussed. Introduction Small molecules produced by plants play vastly diverse roles in nature, amongst which signaling and communication are two of the most important aspects. Plant metabolites are broadly classified into primary and secondary metabolites. Primary metabolites are ubiquitous to all plants, whereas secondary metabolites are specifically produced by certain plants, tissues and cells and, in most cases, elicited under certain conditions. It is estimated that there are over one million metabolites produced throughout the plant kingdom [1]. Secondary metabolites (including but not limited to terpenes, phenylpropanoids and alkaloids) are important signaling molecules that convey information in a spatial-temporal-specific manner [2]. We define signaling molecules as those small plant metabolites that can be perceived by living organisms and trigger or participate in signal transduction. These metabolites can serve as signaling molecules during plant growth and development, initiating and coordinating plant developmental programs. In the meantime, they can "liaise" with external environments and other living organisms, fulfilling the subtle demands of plant health and growth. Plant hormones including jasmonic acid [3], abscisic acid [4], brassinosteroids [5,6], auxin [7], gibberellins [8], strigolactones [9], ethylene [10] and salicylic acid [11] are well-known signaling molecules that participate in numerous aspects of plant growth, defense and plant-environment interactions. These compounds are essential for plant growth and development, yet their specific roles and the ways they function can vary drastically among different plant species and under specific environmental conditions. Secondary metabolites have long been known for their direct impacts on herbivores and pathogens in plant defense. More recently, their functions as signaling molecules that indirectly aid plants in overcoming stresses are gradually being unveiled. For instance, triterpenes including the oat antifungal avenacin precursor β-amyrin [12] and thalianol-derived triterpenes from Arabidopsis thaliana [13] were found to participate in plant root growth and development, with β-amyrin affecting oat root epidermal cell patterning and thalianol derivatives impacting A. thaliana root length, respectively.
The defense compound glucosinolate can also influence plant growth via its degradation product indole-3-carbinol, which inhibits root elongation by competing directly with auxin as a signaling molecule, thereby maintaining the balance between plant growth and plant defense [14]. Other plant metabolites, such as the flavones apigenin and luteolin, were recently found to promote maize growth and nitrogen acquisition by recruiting beneficial bacteria of the taxon Oxalobacteraceae [15]. Such indirect effects of secondary metabolites on plant performance were also observed in maize: maize roots exude the well-known defense compounds benzoxazinoids, which alter the root-associated microbiota in soils and, in turn, exert a prolonged impact on the growth and herbivore resistance of maize in the next generation [16]. The aforementioned examples demonstrate that even some of the best-known plant metabolites still have unknown functions awaiting discovery. Moreover, the majority of plant metabolites discovered so far have only been chemically/structurally characterized and have not yet been assigned a definite function in nature. The plant metabolites we have already discovered might actually represent only the tip of the iceberg of the metabolic diversity of plants, as implicated by the numerous uncharacterized predicted biosynthetic genes present in plant genomes [17]. Current research concerning the discovery of plant signaling metabolites can be broadly classified into three categories: (a) plant metabolites with known structures but unclear functions; (b) plant metabolites with unknown structures but implicated functions; (c) plant metabolites with yet to be determined structures and functions (Figure 1). The difficulties in discovering plant signaling metabolites under these three scenarios also vary. A few major challenges impede the discovery of plant signaling molecules: (i) the content of plant signaling metabolites is usually very low; (ii) plant signaling metabolites are often under dynamic metabolism (i.e., they are actively being synthesized as well as being catabolized and secreted); (iii) plant signaling metabolites normally have characteristic spatial-temporal distributions (they respond to upstream signal transduction cascades, including cues from the environment and from growth and developmental programs at specific stages); (iv) they have extremely diverse physical and chemical properties that demand customized analytical and assay methods; (v) they have diverse specialized functions that can only be captured under specific spatial and temporal conditions. Addressing these challenges requires interdisciplinary approaches. Multi-omics is a powerful and indispensable integrated technique that has greatly accelerated the discovery of plant signaling metabolites via systematic comparative analysis of large datasets (Figure 1). Experimental design and technical application are critical for the successful implementation of multi-omics in discovering plant signaling metabolites. This review synthesizes the technical features and limitations of multi-omics and discusses effective implementation strategies, with recent successful examples of discovering plant signaling metabolites, to provide guidance for the effective application of multi-omics technologies in uncovering the structures and functions of plant metabolites.
Multi-Omics as a Powerful Tool for Uncovering Plant Signaling Metabolites Biological networks are highly complex, interconnected and tightly regulated. Plant metabolites are the output of the Central Dogma, closely related to phenotypes and associated with various aspects of cellular processes, ranging from biosynthesis and catabolism to regulation, transport, mode of action and interactions with environmental changes. Each of these aspects provides an entry point for investigating plant signaling metabolites. These entry points correspond well with the different levels of omics (including genomics, epigenomics, transcriptomics, proteomics, metabolomics and microbiomics) that are currently available. Different levels of omics techniques have to be employed in a combinatorial fashion to reveal a relatively complete picture of plant signaling metabolites. Depending on the nature of the study and the current knowledge of the structures and functions of the metabolites of interest, one might design experiments with a specific focus on one or two omics techniques. Nevertheless, an in-depth grasp of the technical features and research aims is critical for the successful execution of multi-omics. Features of Multi-Omics, Including Genomics, Epigenomics, Transcriptomics, Proteomics, Metabolomics and Microbiomics Multi-omics refers to the integrated application of more than one type of large dataset analysis, including genomics, epigenomics, transcriptomics, proteomics, metabolomics and microbiomics. To understand biological activities at a system level, traditional single-omics research is rarely comprehensive enough; integrated multi-omics data are required for the global analysis of biological systems [18], and multidimensional as well as multi-stage developmental analyses are increasingly used to understand biological processes. Genomics involves the study of the complete DNA set of an organism, including all its genes, their sequences, arrangements and architecture, providing a perspective for looking into biological problems from the most basic code of life, DNA. DNA carries instructions for transcription (promoters, untranslated regulatory regions and splicing sites), translation (start and stop codons) and the specific functions of a gene (coding sequence) [19]. Genomic features underlying transcriptional and translational regulation, biosynthesis and the transport of plant metabolites can be utilized for systematic mining at single or multiple genome scales for discovering plant signaling metabolites. Driven by advances in high-throughput DNA sequencing technologies such as Illumina HiSeq, PacBio and Nanopore sequencing, more than 600 plant genomes have been sequenced and made publicly available [20]. The majority of the deposited genomes have been structurally and functionally annotated and can thus be exploited for mining biosynthetic genes and other genomic features concerning plant metabolites.
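As a minimal illustration of such genome-scale mining (anticipating the domain-colocalization example developed in the next paragraph), the sketch below flags candidate loci where a prenyltransferase-domain gene and a terpene-synthase-domain gene sit close together on a chromosome; the data model, identifiers and window size are hypothetical:

```python
from collections import namedtuple

# Hypothetical data model: one record per annotated gene, with its chromosome,
# start coordinate and a single Pfam domain label (a simplification).
Gene = namedtuple("Gene", "gene_id chrom start domain")

def colocalized_pairs(genes, dom_a="Polyprenyl_synt", dom_b="Terpene_synth_C",
                      window=50_000):
    """Report (PT, TPS) gene pairs lying within `window` bp on the same
    chromosome -- the genomic signature used to flag candidate terpene loci."""
    pt = [g for g in genes if g.domain == dom_a]
    tps = [g for g in genes if g.domain == dom_b]
    return [(a, b) for a in pt for b in tps
            if a.chrom == b.chrom and abs(a.start - b.start) <= window]

genes = [Gene("g1", "chr2", 100_000, "Polyprenyl_synt"),
         Gene("g2", "chr2", 130_000, "Terpene_synth_C"),
         Gene("g3", "chr5", 900_000, "Terpene_synth_C")]
print(colocalized_pairs(genes))   # -> [(g1, g2)]
```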
Protein family domains and the physical arrangement of the corresponding genes in a genome (e.g., whether or not they are colocalized) can be used to predict the types of enzymes and the potential metabolic products derived from them [21]. For instance, a rare type of terpene compound, the sesterterpenes, which are convergently synthesized by plants and fungi, were discovered by investigating the metabolic output of an interesting colocalization of genes containing the prenyltransferase (Polyprenyl_synt (PT)) and terpene synthase (Terpene_synth C (TPS)) domains in plant genomes [22,23]. Gene-guided approaches have also been employed to discover the precursor gene encoding a peptide with the BURP domain (Pfam 03181) and the core ribosomal peptide(s) for the biosynthesis of the bioactive compound lyciumin. The newly discovered genomic features underpinning lyciumin biosynthesis enabled customized tblastn searches of plant genomes for genes encoding BURP domain proteins, identifying candidate ribosomally synthesized and post-translationally modified peptides (RiPPs) in the Amaranthaceae, Fabaceae, Rosaceae and Solanaceae families [24]. Using plant genomic sequences, protein annotations and gene expression profiles, a few bioinformatic tools including plantiSMASH [21], phytocluster [25] and clusterfinder [26,27] have been developed to predict plant biosynthetic gene clusters (BGCs) from plant genomes, which will certainly facilitate the discovery of plant signaling metabolites. Epigenomics-The Gatekeeper for Plant Metabolite Biosynthesis DNA in cells is wrapped around the histone proteins H1, H2A, H2B, H3 and H4, which form spool-like structures that enable very long DNA molecules to be packed neatly into chromosomes inside the cell nucleus. DNA and histones can undergo reversible chemical modifications such as DNA methylation or histone methylation, acetylation, phosphorylation and adenylation; the complete set of such modifications in a cell, heritable without changes to the DNA sequence, is termed the epigenome. Epigenomics utilizes high-throughput technologies to decipher epigenome landscapes through comprehensive analyses. Epigenome landscapes are tightly associated with gene activity and expression, controlling the production of proteins and metabolites under specific conditions by altering chromatin conformation or transcription regulator recruitment. DNA methylation is one well-known epigenomic process in which methyl groups are added to the bases of a DNA molecule at specific sites, switching genes on or off by altering interactions between the DNA and methyl-group-reading proteins. Epigenomics technologies including chromatin immunoprecipitation sequencing (ChIP-seq) and the Assay for Transposase-Accessible Chromatin using sequencing (ATAC-seq) [28] enable the detection of global chemical modifications associated with various aspects of plant signaling metabolites, thereby providing another perspective on metabolite biosynthesis and regulation. Via ChIP-seq analysis, the plant triterpene thalianol and marneral biosynthetic gene clusters were found to be regulated by histone modifications, with histone 3 lysine 27 trimethylation (H3K27me3) and the histone H2A variant H2A.Z reported to repress and activate the thalianol and marneral gene clusters, respectively [29]. Besides triterpenes, camalexin biosynthesis genes were also found to carry epigenetic marks, with H3K18ac and H3K27me3 found to activate and repress gene expression, respectively [30].
Similarly, the diterpene gene cluster responsible for the biosynthesis of the antifungal diterpene ent-5,10-diketo-casbene was recently found to be under the regulation of epigenetic modifications as well, with H3K27me3 acting as a repressive mark [31]. Epigenomics can yield meaningful information for uncovering the regulatory mechanisms of plant signaling metabolite biosynthesis, especially when used together with other omics techniques. Apart from plants with complete genome sequences, as mentioned above, epigenomics may also be applied to unravel the regulatory mechanisms underlying plant signaling metabolite biosynthesis in non-model plants that lack a whole-genome sequence, using techniques such as epiGBS, a reference-free reduced-representation bisulfite sequencing method [32], for de novo exploration and comparative analysis of DNA methylation. This method could help to profile epigenetic regulation patterns and understand how epigenetic regulatory mechanisms affect metabolite biosynthesis in non-model plants. Transcriptomics-Snapshots of Gene Expression under Specific Spatial-Temporal Conditions Transcriptomics is used to study all types of RNA transcripts, including messenger RNAs (mRNAs), microRNAs (miRNAs) and long noncoding RNAs (lncRNAs), present in a sample under specific conditions. As one of the most widely used high-throughput sequencing methods, modern transcriptomics has developed from bulk RNA sequencing (RNA-seq) at the tissue or population level to single-cell RNA-seq at the individual cell level using nanopore sequencing and 10x Genomics single-cell sequencing [33]. In contrast to the high cost of plant genome sequencing, RNA-seq is a cost-efficient and facile approach for obtaining snapshots of gene expression in a cell, tissue or organ under the conditions being studied. Transcriptomics data can reveal information on many aspects of RNAs, including expression levels, functions, locations, trafficking, degradation, and the structures of transcripts and their parent genes with regard to start sites, 5′ and 3′ untranslated regions (UTRs), splicing patterns, alternative polyadenylation profiles and post-transcriptional modifications [34]. Transcriptomics is particularly useful when no genome information is available for the plant under study, as RNA-seq data can be assembled de novo to retrieve the coding sequences (CDS) of biosynthetic genes. This has proven to be a powerful tool for discovering biosynthetic genes responsible for the synthesis of metabolites in medicinal plants (e.g., the alkaloid colchicine [35] and protolimonoids [36]). Furthermore, signaling metabolite-associated genes display similar expression patterns during certain biological events [37]. RNA-seq is extremely powerful for uncovering the expression patterns of genes relevant to biological (e.g., developmental and environmental) events, exposing links between metabolite biosynthesis and genes related to their functions and, hence, facilitating the discovery of metabolite structures and functions. Using the gene expression matrix from RNA-seq data analysis, various co-expression analysis approaches, including weighted gene co-expression network analysis (WGCNA) [38], hierarchical clustering [39], the Pearson Correlation Coefficient (PCC) [40], the Highest Reciprocal Rank (HRR) [41], the Mutual Rank (MR) [42] and the Self-Organizing Map (SOM) [43], have been successfully applied to identify candidate genes involved in plant specialized metabolic pathways by utilizing known biosynthetic genes as bait [44] (a minimal bait-gene ranking sketch is given below).
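A minimal sketch of the bait-gene ranking step, assuming a genes x samples expression matrix (e.g., TPM values from RNA-seq) and function names of our own:

```python
import numpy as np

def rank_by_pcc(expr, gene_ids, bait_id, top_n=20):
    """Rank genes by Pearson correlation of expression with a bait gene.
    `expr` is a genes x samples numpy array; `gene_ids` lists the row names.
    This is a stand-in for the PCC-based co-expression step in the text."""
    bait = expr[gene_ids.index(bait_id)]
    bait_c = bait - bait.mean()
    mat_c = expr - expr.mean(axis=1, keepdims=True)
    pcc = (mat_c @ bait_c) / (np.linalg.norm(mat_c, axis=1)
                              * np.linalg.norm(bait_c) + 1e-12)
    order = np.argsort(-pcc)
    return [(gene_ids[i], float(pcc[i])) for i in order[:top_n]]
```

Genes that consistently rank at the top across tissues or conditions become candidates for the pathway anchored by the bait, which is exactly how the avenacin example below proceeds with SOM clustering instead of a simple PCC ranking.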
For instance, based on a SOM analysis of oat (Avena species) transcriptomic data for six tissues, six transcripts of the known antifungal avenacin biosynthetic pathway genes clustered to a single node of the self-organizing map, indicating the co-expression of these genes. Among the transcripts that clustered with the avenacin biosynthetic pathway genes in 100% of the self-organizing map runs, nine were identified as candidate avenacin glycosyltransferase (UGT) genes. Combining phylogenetic analysis of the predicted amino acid sequences of the nine new candidate UGTs with Agrobacterium-mediated transient expression assays, AsTG1 and AsUGT91G16 were shown to form part of the avenacin biosynthetic gene cluster [45]. Proteomics-The Yet-to-Flourish Tool for Plant Signaling Metabolite Discovery Proteins translated from mRNA are the effectors of biological functions, catalyzing reactions, transmitting signals and creating cellular support structures. Proteomics studies the complete set of protein abundances, structures, functions, post-translational modifications and protein-protein/metabolite interactions in a living organism under given conditions. Protein abundance is closely related to transcript abundance but is more dynamic due to the miscellaneous degradation and modification mechanisms present in plant cells. Some biosynthetic enzymes responsible for the synthesis of secondary metabolites are in fact regulated by post-translational modifications [46]. Proteomics can also be used to improve the functional annotation of genes in plant genomes, reducing difficulties for future bioinformatics analysis and cloning efforts [47]. Moreover, some properties of proteins (e.g., solubility/melting points) change systematically when they interact with other proteins or metabolites, providing opportunities to probe protein-protein and protein-metabolite interactions using methods such as the cellular thermal shift assay (CETSA) and photo-affinity-labeled chemical proteomics [48-50]. Therefore, proteomics can reveal differentially accumulated proteins and their modification patterns associated with signaling metabolite biosynthesis, regulation and function, aiding in disentangling the relevant complex biological events within cells [51]. One- or two-dimensional gel electrophoresis/mass spectrometry (MS) and liquid chromatography-MS (LC-MS) have been used for the quantification and identification of proteins and potential post-translational modifications [52-54]. For instance, using two-dimensional gel electrophoresis, Decker et al. constructed a two-dimensional protein map of the two main fractions of the latex isolated from Papaver somniferum: the cytosolic serum and the sedimented fraction containing the alkaloid-accumulating vesicles. Codeinone reductase, an enzyme involved in morphine biosynthesis, was detected in the cytosolic serum fraction following the analysis of 75 protein spots by internal peptide microsequencing and database matching [55]. Proteins annotated as tocopherol cyclase and prenyltransferases, potentially involved in the biosynthesis of orsellinic acid in Peperomia obtusifolia, could also be identified from the soluble proteins of different plant tissues using LC-MS-IT-TOF-based comparative proteomics coupled with transcriptomics analysis [56].
Furthermore, the recent success in discovering a FAD-dependent enzyme-catalyzed intramolecular [4 + 2] cycloaddition in the biosynthesis of natural plant products using chemical-probe-based proteomics analysis showcases the utility and applicability of chemical proteomics in secondary metabolite research [57]. Targeted proteomics can also help reveal the rate-limiting steps of certain biosynthetic pathways [58]. At present, proteomics is often not the first choice for discovering the structures and functions of secondary metabolites, due to its relatively higher cost compared with RNA-seq and its yet-to-be-established methodologies for studying secondary metabolism. Metabolomics-The Node of Multi-Omics for Discovering Signaling Metabolites The metabolome covers all small molecules, including primary and secondary metabolites, present in an organism or cell. Metabolomics refers to the systematic analysis of the metabolome of a living system using analytical instruments including liquid chromatography-mass spectrometry (LC-MS) [59], gas chromatography-mass spectrometry (GC-MS) [60] and nuclear magnetic resonance (NMR) [61]. Mass spectrometry (MS)-based metabolomics is the most prevalent method, as it can acquire sufficient structural information for compound identification whilst offering great sensitivity, resolution and compound coverage. Detecting all plant metabolites with one or two methods is impossible due to the enormously diverse chemical and physical properties of plant metabolites; metabolomics analyses have to be tailored properly to enable the detection of sufficient compounds for comparative analysis. A few methodological guides have recently been released to aid MS-based metabolomics analysis [62]. Targeted metabolomics approaches identify and quantify a specific subset of predefined small molecules, whilst untargeted metabolomics collects the signals of all metabolites (known and unknown) that the detectors can register for systematic analysis. Comparative metabolomics analysis across different samples allows for the detection of differentially accumulated metabolites, yielding insights into the biosynthetic and catabolic dynamics of particular small molecules or pathways. Metabolomics provides direct information on the status of metabolites and thus serves as a core node connecting with other omics technologies in discovering plant signaling molecules. It is an essential tool for discovering previously unknown signaling metabolites, especially when starting from plant phenotypes that could plausibly arise from metabolites. Microbiomics-Uncovering Metabolite and Microbe Interactions Recent studies have shown that the root microbiome, modulated by plant signaling metabolites such as coumarins, flavones and benzoxazinoids, improves plant stress resilience [63]. Microbiomics investigates all the microorganisms of a given community under various conditions. The main approaches for studying microbial composition are 16S ribosomal RNA (16S rRNA) gene sequencing and shotgun metagenomics sequencing. Bacterial 16S rRNA gene sequences contain species-specific hypervariable regions, which can be amplified, sequenced and then clustered into operational taxonomic units (OTUs) for the identification, classification and quantitation of microbes. 16S rRNA amplicon sequencing uses primers for a relatively short genomic region (e.g., the V5-V7 region); therefore, the sequencing results can often only be annotated to bacterial taxa of relatively high taxonomic rank.
Another microbial community profiling method is next-generation sequencing (NGS)-based shotgun metagenomics, in which the total DNA of all organisms present in a given complex mixture is sequenced. This technology compensates for the limitation of sequencing a restricted amplicon region in 16S rRNA sequencing, expanding the coverage of the microbial DNA being sequenced and thus capturing protein-coding DNA fragments for relatively more accurate functional annotation of the microbes present in a sample. The respective features of 16S rRNA amplicon sequencing and NGS-based shotgun metagenomics were nicely demonstrated in a recently published work reporting the identification of flavones that function in recruiting the beneficial rhizosphere microbe Oxalobacteraceae, which aided maize in acquiring nitrogen under nitrogen deprivation [15]. The different features of each single omics mentioned above can be synergistically oriented toward discovering signaling metabolites, in terms of both structures and functions [64-69]. We have seen a surge in plant signaling metabolites being discovered with the aid of multi-omics, particularly in the area of plant-microbe interactions [70-74]. Below, we illustrate in more detail how multi-omics techniques have been integrated to unveil plant signaling molecules at different levels of prior knowledge, using recent works as examples (Figure 2), to help improve the design of experiments and the application of multi-omics tools in future research. (Figure 2: Details of the content depicted in panels (a-f) can be found in references [75], [15], [76], [77], [78] and [79], respectively.) Multi-Omics-Based Discovery of New Functions of Known Molecules The integration of multi-omics analysis into studies designed to uncover the metabolic basis of certain phenotypes or traits can lead to the discovery of new functions for well-known molecules. Coumarins, a family of benzopyrones (1,2-benzopyrones or 2H-1-benzopyran-2-ones) well known as defense phytochemicals that protect plants from predation and pathogen infection [80], were recently found to act as signaling metabolites in plant-microbe interactions in response to iron deficiency [75,81]. Integrated analysis of 16S rRNA gene amplicon sequencing together with RNA-seq uncovered that coumarins help plants cope with iron limitation by recruiting beneficial soil microbiota. Employing culture-independent 16S rRNA gene amplicon sequencing allowed the impact of coumarins on the root microbiota to be systematically evaluated. Unconstrained principal coordinate analysis (PCoA) and constrained PCoA (CPCoA) of beta diversity, together with bacterial community profiling of the 16S rRNA sequencing data (a minimal PCoA sketch is given below), indicated that coumarin biosynthesis is important for plant growth and root microbiota assembly in naturally iron-limiting calcareous soil. Moreover, comparative analysis at the amplicon sequence variant (ASV) level between coumarin-deficient mutants and wild-type (WT) plants revealed that the coumarin fraxetin exerts variable antimicrobial activity on Burkholderiaceae strains in iron-limiting soil.
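For reference, the beta-diversity PCoA mentioned above reduces to two small steps, a pairwise dissimilarity matrix and classical multidimensional scaling; a minimal numpy sketch assuming a samples x taxa count table (function names are our own):

```python
import numpy as np

def bray_curtis(counts):
    """Pairwise Bray-Curtis dissimilarities for a samples x taxa count table."""
    n = counts.shape[0]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            den = (counts[i] + counts[j]).sum()
            d[i, j] = np.abs(counts[i] - counts[j]).sum() / den if den else 0.0
    return d

def pcoa(d, k=2):
    """Classical multidimensional scaling (principal coordinate analysis)."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    idx = np.argsort(vals)[::-1][:k]             # top-k principal coordinates
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

counts = np.array([[30, 5, 0], [28, 7, 1], [2, 20, 40]])  # toy 16S table
print(pcoa(bray_curtis(counts)))  # samples 1-2 cluster apart from sample 3
```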
Through further root transcriptional profiling and elemental analysis of Col-0 and coumarin-deficient f6'h1 mutant plants under media containing available iron or the unavailable form of FeCl3, with a live or heat-killed synthetic community (SynCom), the role of coumarins, especially fraxetin, in mediating the root microbiota to improve plant performance under iron-limiting conditions was uncovered [75] (Figure 2a). This new function of coumarins would not have been discovered had the integrated analysis of microbiomics and transcriptomics not been applied. Combined transcriptomics, metabolomics and microbiomics analyses were also employed to discover the hidden roles of other signaling molecules such as flavones [15], benzoxazinoids [82] and strigolactones [73] in mediating plant and microbe interactions. Flavones are phenolic compounds that have functions in plant signaling, defense and adaptation to stress conditions [83]. In a recent study designed to unlock mechanisms underlying beneficial interactions between plants and rhizosphere microorganisms, flavones synthesized in maize roots were found to be capable of recruiting rhizosphere Oxalobacteraceae bacteria to improve maize performance under nitrogen deprivation [15]. Hundreds of RNA-seq datasets, together with their corresponding rhizosphere microbiome data from three longitudinal zones of the crown roots of 20 inbred lines of maize with significantly different genetic backgrounds, were generated. Using WGCNA network analyses on root RNA-seq datasets, phylogenetic and genotype-specific gene modules that contained gene sets with similar expression patterns across all samples were identified. Correlation analysis of the expression modules with maize genotypes, phenotypes and microbiome data enabled the authors to target a specific module that displayed the highest correlation with Oxalobacteraceae enriched in the root of the high-performance inbred line of maize 787 under nitrogen deprivation. The fact that flavone synthase displayed the highest modular connectivity within this module further suggests that flavones might play a role in mediating the assembly of a beneficial root microbiota for the high-performance inbred line of maize 787. To further confirm whether flavones act as signaling molecules under nitrogen deprivation, targeted metabolite profiling of maize root extracts of the high- and low-performance maize genotypes, together with comparative phenotypic assays of wild type maize and chalcone synthesis mutants as well as complementation experiments with exogenous flavonoids, further identified the roles of root-secreted flavones, especially apigenin, in recruiting Oxalobacteraceae bacteria for promoting lateral root development and nitrogen uptake in maize [15] (Figure 2b). The new function of flavones would not have been identified without an in-depth correlation analysis of transcriptomics and microbiome 16S rRNA sequencing data. Therefore, new functions of known plant metabolites could potentially be uncovered from studies aiming to explore the mechanisms underpinning certain phenomena or traits. Multi-Omics-Based Discovery of Unknown Molecules with Known Functions An untargeted metabolomics approach is essential to uncover novel molecules that might have given rise to certain biological functions.
The comparative metabolic profiling of samples with and without biological activities can capture the chemical differences in these samples in an unbiased manner, enabling the design of experiments to further investigate the structures and biological activities of these chemicals. Many previously unknown molecules have recently been identified using an untargeted metabolomics approach [76,77,84,85]. One notable example is the discovery of N-hydroxy-pipecolic acid as a mobile signaling metabolite that induces systemic disease resistance in Arabidopsis [76] (Figure 2c). This metabolite was identified via comparative metabolic profiling of the Arabidopsis Flavin-Dependent Monooxygenase 1 (FMO1) mutant, which is deficient in systemic acquired resistance (SAR), against wild type Arabidopsis plants. Although FMO1 has been identified as a key component in mediating the SAR against pathogens for Arabidopsis [86], the chemical basis of FMO1 function remained elusive, primarily due to the unprecedented nature of the biosynthetic pathway. Untargeted metabolomics analysis nicely revealed a major mass signal present in wild type plants in response to Pseudomonas syringae treatment but absent from all fmo1 mutant plants. Further structural elucidation based on mass spectral fragmentation and synthetic standards confirmed the chemical identity of the mass signal as glycosylated N-hydroxy-pipecolic acid, suggesting that FMO1 could hydroxylate pipecolic acid to form N-hydroxy-pipecolic acid, which can be further glycosylated in planta [87]. Having mutant plants for genes involved in certain biological events would be very helpful for uncovering the chemical basis contributing to the biological activity of the gene under investigation. The discovery of isochorismate-9-glutamate as an important intermediate in the biosynthesis of salicylic acid exemplifies this strategy [77]. The disease-compromised Arabidopsis mutant npr1 (nonexpressor of pathogenesis-related genes, NPR1), with reduced salicylic acid content, and the snc2 (suppressor of npr1-1, constitutive 2) mutant, which displays an autoimmune phenotype with an excess of salicylic acid, were used to perform comparative untargeted metabolomics analysis, which successfully identified new intermediates in salicylic acid biosynthesis [77] (Figure 2d). MS-based untargeted metabolomics analysis provides ample structural information regarding the chemical signals being detected, enabling annotation of the metabolites with different levels of confidence, though it is still challenging to annotate most of the chemical signatures detected by untargeted MS [88]. Synthetic standards or the NMR spectra of purified chemicals are normally required to confirm the identity of unknown compounds. Nevertheless, advances in plant metabolomics, both technical and computational, will greatly facilitate the identification and delineation of chemical signals underlying gene functions [89], leading to the discovery of novel compounds that contribute to certain biological functions. Multi-Omics-Based Discovery of Unknown Molecules with Unknown Functions Discovering novel molecules with defined biological activities has been an ongoing task in natural product research. The advent and development of multi-omics technology, especially genomics, have revolutionized the way unknown natural products are discovered [17,90], shifting from phytochemistry-based isolation and functional evaluation to genome- and transcriptome-based structural and functional mining.
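Before turning to genomic features, the presence/absence comparison used in the FMO1 example above can be made concrete with a minimal sketch: flag mass features detected in all treated wild-type samples but absent from all mutant samples. The table layout, sample labels and detection threshold are illustrative assumptions only, not taken from the cited study.

import pandas as pd

def wt_specific_features(table, wt_cols, mutant_cols, detection_threshold=1e4):
    # table: mass features (rows) x samples (columns), values are peak intensities
    detected_in_all_wt = (table[wt_cols] >= detection_threshold).all(axis=1)
    absent_in_all_mut = (table[mutant_cols] < detection_threshold).all(axis=1)
    return table.index[detected_in_all_wt & absent_in_all_mut].tolist()

# toy feature table: one feature strongly present only in wild type
features = pd.DataFrame(
    {"wt_1": [2e5, 80.0], "wt_2": [1.5e5, 95.0], "fmo1_1": [50.0, 70.0], "fmo1_2": [40.0, 90.0]},
    index=["mz_130.0863_rt_4.2", "mz_205.0970_rt_6.1"],
)
print(wt_specific_features(features, ["wt_1", "wt_2"], ["fmo1_1", "fmo1_2"]))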
Genomic and transcriptomic features underlying the biosynthesis of plant natural products can enable the fast discovery of previously unknown plant metabolites when coupled with efficient heterologous expression systems. Alternatively, function oriented/guided studies of genes predicted to be involved in metabolite biosynthesis, regulation or transport can often unearth unknown metabolites with novel functions. Notable examples include the recent discovery of a previously unknown specialized triterpene biosynthetic network involved in selectively modulating Arabidopsis root microbiota [78], a new cyanogenic metabolite in Arabidopsis required for inducible pathogen defense [84] and hydroxylated diterpenoids involved in plant defense [79]. Gene clustering is increasingly demonstrated to be an important genomic feature that can be utilized for the facile discovery of plant signaling metabolites [91]. Plant biosynthetic gene clusters provide a great entry point to discover and elucidate previously unknown biosynthetic pathways as multiple biosynthetic genes functioning in the same pathway can be easily identified at the same time. The specialized triterpene biosynthetic network operating in Arabidopsis roots was recently discovered using this approach, starting with the heterologous functional characterization and untargeted metabolomics analysis of root-expressed triterpene biosynthetic cluster genes and their mutants to uncover novel triterpene chemical structures. This was followed by 16S rRNA microbiomics analysis of triterpene-deficient mutants and wild type Arabidopsis root microbial communities to delineate the function of the triterpene biosynthetic network in modulating Arabidopsis root microbiota [78] (Figure 2e). Transcriptomics data enabled the discovery of other co-expressed biosynthetic genes that are not clustered with the core cluster genes but function in the same thalianin and arabidin biosynthetic pathways. This provided hints on the functions of the triterpene biosynthetic network, leading to further microbiomics analysis [78]. Similar to genomic features, transcriptomic features underlying plant metabolite biosynthesis can serve as an entry point to probe unknown plant metabolisms. The fact that the synthesis of plant signaling metabolites is responsive to external stimuli allows investigation of the transcriptomic alteration of biosynthetic genes coding for the synthesis of cryptic metabolites. Using transcriptomics data as the entry point for discovering plant signaling metabolites offers many advantages: (i) relative low cost of RNA-seq sequencing and ease of transcriptome assembly as compared to genome sequencing and assembly; (ii) amplifiable signals of transcriptomic changes can be captured with high accuracy and relatively low amounts of plant materials in contrast to the relatively large quantity required for untargeted metabolomics analysis; (iii) functional annotation of transcriptomics sequences with higher prediction accuracy in comparison to untargeted metabolomics analysis. The new cyanogenic metabolite in Arabidopsis required for inducible pathogen defense was discovered based on the untargeted metabolomics analysis of mutants of genes involved in defense against pathogens as identified from the pathogen-induced transcriptomics data analysis [84]. It is clear that untargeted metabolomics analysis has to be carried out to correlate with transcriptomics data for the identification of structurally unknown compounds. 
In some cases, untargeted metabolomics analysis on the organism (e.g., insects) interacting with plants can also provide cues leading to the identification of unknown metabolites with novel functions [79,89]. In a recent study aiming to uncover the metabolic basis for the defense and autotoxicity of 17-hydroxygeranyllinalool diterpene glycosides (17-HGL-DTGs), the authors identified the ceramide synthase inhibition activity of modified diterpene glycosides via untargeted metabolomics analysis of the insect Manduca sexta fed with tobacco plants containing normal or compromised diterpene glycoside levels, as well as its frass [79,89]. Manduca sexta fed with tobacco plants containing normal diterpene glycoside levels accumulated significantly more long chain bases, which are substrates of the ceramide synthase inhibited by the diterpene glycosides, than those fed with compromised levels of the diterpene glycosides. Moreover, the frass of M. sexta fed with tobacco containing the normal level of diterpene glycosides also accumulated more modified 17-HGL-DTGs than that of insects fed with tobacco containing compromised levels of diterpene glycosides. The identification of modified 17-HGL-DTGs as novel compounds and their activities in inhibiting ceramide synthase led to the further discovery of the toxicity of modified diterpenes (i.e., hydroxylated hydroxygeranyllinalool diterpenes) on tobacco plants. It is interesting to note that the ceramide synthase inhibition activity of 17-HGL-DTGs on M. sexta was identified based on cues obtained from comparative transcriptomics analysis of wild type tobacco and autotoxic tobacco mutant plants [79] (Figure 2f). Therefore, an untargeted metabolomics approach is indispensable for the identification of unknown compounds, and when coupled with transcriptomics analysis it can often unearth unknown compounds with novel biological activities. Breaking the Limitation of Multi-Omics: Future Perspective for Accelerated Discovery of Plant Signaling Molecules Multi-omics technology has greatly facilitated the discovery of plant signaling metabolites in many aspects; however, technical limitations in individual omics techniques still pose challenges to their application. With regard to genomics, although long-read sequencing such as PacBio [92] and Nanopore [93] sequencing technologies have improved read length and, therefore, genome assembly to some extent, the high levels of heterozygosity, complex polyploidy and the unusually high repeat content of plant genomes remain challenges impeding accurate genome assembly and annotation [94,95]. An increasing number of plant genomes have been sequenced, yet a considerable number of them are poorly assembled and annotated (both structurally and functionally) or have low sequence quality. Functional genomics relies heavily on the sequence information of a genome; assembly errors create hurdles for the functional prediction of biosynthetic genes or gene clusters, leading to incorrect identification of plant biosynthetic gene clusters for functional validation using currently available bioinformatics tools. For instance, the discovery of plausible functional biosynthetic gene clusters would be undermined if the assembly is only at the scaffold level rather than the chromosome level, as the biosynthetic genes potentially forming a gene cluster might span multiple scaffolds.
Moreover, missing or incorrect sequence information for a gene in a plant genome can also result in cloning issues because appropriate primers cannot be designed. Therefore, further technical development is desired to improve the read length and accuracy of genome sequencing techniques. Similarly, sequencing read length and accuracy also affect de novo transcriptome assembly, functional annotation, gene cloning and functional validation, especially for those plant species without a sequenced genome. Currently, RNA-seq data are primarily generated using second-generation Illumina sequencing due to the low cost and relatively well-developed analysis pipeline [96]. Single-molecule Nanopore RNA and PacBio sequencing can significantly improve read length [97], yet the cost is still relatively high in comparison to Illumina sequencing. These problems are expected to be resolved in the near future with the development of long-read sequencing technologies and continuously decreasing sequencing costs. Another limitation associated with transcriptomics mining for signaling molecule discovery is the resolution of the data. RNA-seq data were previously generated from the bulk RNA of plant tissues, which inevitably includes much transcript noise from cells where the gene of interest is not expressed [98]. The development of single-cell sequencing techniques has enabled RNA sequencing at single-cell or cell-type levels, removing undesired transcript noise from unwanted cells, thereby yielding much finer data resolution for dissecting gene functions in specific cells and allowing better correlation of gene functions using co-expression analysis [99]. This will be particularly useful for dissecting the functions of known/unknown metabolites as well as uncovering their biosynthesis. Although single-cell RNA sequencing techniques have greatly expanded multi-omics application, yielding hidden and more complete mechanistic insights, the development of single-cell metabolomics, in comparison, still lags far behind. This is primarily due to the fact that metabolite signals cannot be amplified in the same way as DNA and RNA, and instrument sensitivity is not yet up to the point of detecting comprehensive sets of metabolites within a single cell [100]. MS-based metabolomics is the most prevalent metabolomics approach, yet the detection of metabolites with current instrument settings, including ionization methods, still faces many challenges, although sophisticated sensitive instruments such as Orbitrap and time-of-flight (ToF) mass spectrometers have been widely applied. Besides sensitivity issues, the annotation of the metabolite signals from MS-based metabolomics data also represents a significant problem for metabolomics analysis [59]. Currently, the compound identity of MS-based metabolomics is assigned primarily based on accurate mass and MS/MS fragmentation data available from various databases, including METLIN [101], PubChem [102] and mzCloud. The confidence level and accuracy for such annotation are still relatively low, especially for the numerous unknown metabolites present in a plant matrix.
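To illustrate the accurate-mass part of such annotation, the following minimal sketch matches an observed m/z against a small in-memory reference list within a ppm tolerance. Real annotation relies on curated databases and MS/MS spectra as discussed above; the reference masses and the assumed [M+H]+ adduct below are placeholders for illustration.

PROTON = 1.007276  # mass of a proton, used for [M+H]+ adducts

reference = {
    # neutral monoisotopic masses (Da), placeholder entries only
    "pipecolic acid": 129.078979,
    "salicylic acid": 138.031694,
}

def annotate(mz_observed, tol_ppm=5.0):
    hits = []
    for name, neutral_mass in reference.items():
        mz_theoretical = neutral_mass + PROTON          # assume [M+H]+ ionization
        ppm_error = (mz_observed - mz_theoretical) / mz_theoretical * 1e6
        if abs(ppm_error) <= tol_ppm:
            hits.append((name, round(ppm_error, 2)))
    return hits

print(annotate(130.0863))   # within 5 ppm of the [M+H]+ ion of pipecolic acid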
The annotation issue is expected to be alleviated by the expansion of characterized chemical entities in databases and the standardization of instrumentation parameters; newly developed artificial intelligence approaches, including machine learning algorithms, can also be incorporated to aid in the annotation of metabolites based on mass features, especially MS/MS fragmentation patterns from metabolomics experiments [103]. The power of extracting features from metabolomics data is already evident from the development of the molecular networking approach, which clusters mass spectra according to the similarity of their fragments to facilitate the annotation of mass spectral signals and has already found applications in many areas [86,104]. Integrating patterns and features from different levels of omics for machine learning may generate models that can streamline multi-omics analysis and speed up the discovery of plant signaling molecules [105]. It is foreseeable that the discovery of plant signaling molecules will accelerate in the near future with the increasing availability of omics tools. Novel entities and functions of plant signaling molecules at single-cell or cell-type levels will be an important research direction going forward. In addition, the discovery of plant signaling molecules involved in the interactions between plants and their environments or other living organisms will also be a growing trend in the field. With a better understanding of the functions of plant signaling molecules, their utility will be further exploited, increasing the potential of commercialization, especially in agriculture-related areas. This will also fuel the development of sustainable production technologies, including synthetic biology.
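As a closing illustration of the molecular networking idea mentioned above, the sketch below computes a simplified spectral cosine score between MS/MS spectra and links spectra whose similarity passes a threshold. Real tools handle precursor mass shifts, neutral losses and noise filtering; the spectra, tolerance and threshold here are invented assumptions.

import math

def cosine_score(spec_a, spec_b, tol=0.01):
    # spec: {fragment m/z: relative intensity}; match peaks within an m/z tolerance
    shared = []
    for mz_a, int_a in spec_a.items():
        for mz_b, int_b in spec_b.items():
            if abs(mz_a - mz_b) <= tol:
                shared.append(int_a * int_b)
    norm_a = math.sqrt(sum(i * i for i in spec_a.values()))
    norm_b = math.sqrt(sum(i * i for i in spec_b.values()))
    return sum(shared) / (norm_a * norm_b) if norm_a and norm_b else 0.0

def network_edges(spectra, min_score=0.7):
    # return pairs of spectrum ids (with their score) that would be connected
    ids = list(spectra)
    return [(a, b, round(cosine_score(spectra[a], spectra[b]), 2))
            for i, a in enumerate(ids) for b in ids[i + 1:]
            if cosine_score(spectra[a], spectra[b]) >= min_score]

spectra = {
    "feature_1": {85.03: 1.0, 129.08: 0.6, 173.12: 0.2},
    "feature_2": {85.03: 0.9, 129.08: 0.7, 201.15: 0.1},
    "feature_3": {77.04: 1.0, 105.07: 0.5},
}
print(network_edges(spectra))   # features 1 and 2 share fragments and are linked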
8,636.6
2022-01-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Assessing signatures of selection through variation in linkage disequilibrium between taurine and indicine cattle Background Signatures of selection are regions in the genome that have been preferentially increased in frequency and fixed in a population because of their functional importance in specific processes. These regions can be detected because of their lower genetic variability and specific regional linkage disequilibrium (LD) patterns. Methods By comparing the differences in regional LD variation between dairy and beef cattle types, and between indicine and taurine subspecies, we aim to find signatures of selection for production and adaptation in cattle breeds. The VarLD method was applied to compare the LD variation in the autosomal genome between breeds, including Angus and Brown Swiss, representing taurine breeds, and Nelore and Gir, representing indicine breeds. Genomic regions containing the top 0.01 and 0.1 percentile of signals were characterized using the UMD3.1 Bos taurus genome assembly to identify genes in those regions and compared with previously reported selection signatures and regions with copy number variation. Results For all comparisons, the top 0.01 and 0.1 percentile included 26 and 165 signals and 17 and 125 genes, respectively, including TECRL, BT.23182 or FPPS, CAST, MYOM1, UVRAG and DNAJA1. Conclusions The VarLD method is a powerful tool to identify differences in linkage disequilibrium between cattle populations and putative signatures of selection with potential adaptive and productive importance. Background When a part of the genome that confers enhanced fitness or productive ability is preferentially kept in a population by increasing the frequency of favorable alleles, neutral loci that surround this region and that are in linkage disequilibrium (LD) with it are also retained, thus driving the frequency of particular haplotypes in the region towards fixation in a pattern that decays progressively with distance from the causative location [1][2][3][4]. Such a selective sweep can be detected by reduced haplotype diversity and a different LD pattern when compared to those of the surrounding background [2,5]. Characterizing regions that are affected by selection may enable inferences on the functionality of a genomic region and possibly the effects of specific genes or gene combinations on specific traits [3][4][5][6]. Indicine (i.e., Bos primigenius indicus) cattle have been bred for adaptation to tropical and marginal production environments [7,8], while taurine (i.e., Bos primigenius taurus) cattle have been intensively selected for production in temperate regions of the world [5,7,9]. Analyzing differences between these two sub-species of cattle and comparing breeds selected for different purposes (milk or beef) within these subspecies may yield insights into genomic regions that are impacted by these differences in adaptation and productivity traits associated with these two groups of cattle [5]. The amount of LD that exists in genomic regions within a population is a key parameter to trace selective sweeps [2,3] and differences in decay of LD between bovine populations have been reported [9][10][11][12].
Analyses based on the study of regional variation of LD within a population compared to its background LD level, and the contrast of these regions with the same analyses in other populations, allow the assessment of signals of differential selection, also called signatures of selection (SS), in different cattle breeds. In addition, a high coincidence between SS and copy number variants (CNV) has been reported for the human HapMap populations [13], which suggests that selection mechanisms may possibly act through copy number differences [14]. Indeed, a study of the effects of CNV on gene expression in Drosophila identified several potential outcomes of gene copy number variation, including the possibility that gene expression increases, decreases or remains stable as copy number fluctuates [15]. Thus, it is of interest to compare SS obtained via analysis of LD variation with reported CNV for the bovine genome [16,17]. Data A total of 108 Nelore (NEL), 29 Gir (GIR), 33 Angus (ANG), and 85 Brown Swiss (BSW) individuals were genotyped using the Illumina BovineHD BeadChip (HD chip) [18]. The samples used were either derived from previous studies, approved by local ethical committees, or obtained from AI centers through their routine practice, so no further ethical approval was required for the present analysis. Only autosomal SNPs were included in the analysis, resulting in approximately 735 000 SNPs. Quality control measures were calculated using the PLINK software [19,20] separately for each breed; parameters and thresholds used were a SNP minor allele frequency of at least 5%, a genotype call rate of at least 90%, both per SNP and per animal, and a Hardy-Weinberg equilibrium z-test with p > 10^-6. In addition, the population was pruned for close relationships using the identity-by-state (IBS) relationship matrix, or in other words the pairwise genomic kinship coefficient as proposed by Leutenegger et al. [21], estimated with the GenABEL R package ibs function [22], and removing one of the individuals from a pair with an IBS > 0.8 (this limit was defined experimentally by assessing IBS relationships of 20 half-sibs). Final SNP counts and numbers of individuals used in the analyses are in Table 1. Grouping A total of six pair-wise comparisons between the four breeds were conducted. These comparisons included differences between the indicine and taurine (I/T) subspecies, differences between dairy and beef (D/B) breed types, and both subspecies and breed type differences (I/T, D/B). Specifically, the six comparisons were NEL/ANG (I/T), GIR/BSW (I/T), GIR/NEL (D/B), BSW/ANG (D/B), GIR/ANG (I/T, D/B), and BSW/NEL (I/T, D/B). Since the method applied here requires using common SNPs, i.e., SNPs that segregate in both breeds compared, for each comparison the coincident SNPs after quality control were extracted. The number of SNPs used in each analysis is in Table 2. Principal component analysis To have an overview of the population structure pertaining to the individuals and breeds included in the study, a Principal Component Analysis (PCA) was carried out using the IBS matrix generated with GenABEL [22], by converting the calculated genomic kinship coefficients to squared Euclidean distances that capture the differences between individuals via classical multidimensional scaling [23] in n-1 dimensional spaces (where n represents the number of samples) of n eigenvectors, by applying the cmdscale function from the R 'stats' package v.3.0.1 [24].
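The quality-control thresholds described above can be illustrated with a minimal Python sketch applied to a numeric genotype matrix (individuals x SNPs, coded 0/1/2, NaN for missing). The original analysis used PLINK and GenABEL; the simple normal-approximation HWE test and the crude IBS measure below are assumptions made only to show the logic of the filters.

import numpy as np
from scipy.stats import norm

def qc_mask(geno, maf_min=0.05, call_rate_min=0.90, hwe_p_min=1e-6):
    # boolean mask of SNPs (columns) passing MAF, call-rate and HWE thresholds
    nonmissing = ~np.isnan(geno)
    n_obs = nonmissing.sum(axis=0)
    call_rate = n_obs / geno.shape[0]
    p = np.nanmean(geno, axis=0) / 2.0                  # allele frequency
    maf = np.minimum(p, 1.0 - p)
    # simple z-test of Hardy-Weinberg equilibrium (observed vs expected heterozygosity)
    obs_het = np.where(nonmissing, geno == 1, False).sum(axis=0) / n_obs
    exp_het = 2.0 * p * (1.0 - p)
    se = np.sqrt(np.maximum(exp_het * (1.0 - exp_het) / n_obs, 1e-12))
    hwe_p = 2.0 * (1.0 - norm.cdf(np.abs(obs_het - exp_het) / se))
    return (maf >= maf_min) & (call_rate >= call_rate_min) & (hwe_p > hwe_p_min)

def prune_related(geno, ibs_max=0.8):
    # greedily drop one individual from each pair whose IBS similarity exceeds ibs_max
    n = geno.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        for j in range(i + 1, n):
            if not keep[j]:
                continue
            both = ~np.isnan(geno[i]) & ~np.isnan(geno[j])
            ibs = 1.0 - np.abs(geno[i, both] - geno[j, both]).mean() / 2.0
            if ibs > ibs_max:
                keep[j] = False
    return keep

# usage: snp_mask = qc_mask(genotypes); kept = prune_related(genotypes[:, snp_mask])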
Table 1 Individuals included and number of SNPs left for analysis in the final dataset after quality control was performed separately for each breed.
Table 2 Number of common SNPs in each breed comparison after quality control.
LD decay To provide insight into the overall levels of LD in the different breeds, genome-wide pairwise r² values of SNPs separated by a maximum distance of 100 kilobases (kb) (average SNP distance was 7.9 kb) were calculated and graphed using R [24] and PLINK [19,20] software. VarLD VarLD is a program for quantifying differences in genome-wide LD patterns between populations [13]. The software quantifies for each window of 50 SNPs the signed r² of all pairwise comparisons and a square matrix is built with the results, representing a correlation matrix between all SNPs [25]. Equality between the elements of the two matrices is estimated by comparing the extent of departure between their respective ranked eigenvalues after eigen-decomposition of each matrix [25]. A raw VarLD score is assigned for the window as the trace of the difference between the respective diagonal matrices with the sorted eigenvalues in descending order [25]. The magnitude of this score gives a measure of the degree of dissimilarity between the correlation matrices and is used to quantify the extent of regional LD differences between the populations [25,26]. Positive selection for genes in a genomic region from a specific population is likely to produce a different LD pattern in that region when compared to a non-selected population, which leads to the identification of the region [2,3,6,26]. The methodology used to calculate VarLD scores is described in more detail in [13,25,26]. In short, permutation is used to obtain a Monte Carlo statistical significance and the scores are standardized (Si') to center the distribution of the scores around a mean of zero and a standard deviation of one, helping to avoid bias in the raw VarLD scores due to differences in the size of the windows in terms of base-pairs (bp) and the populations being compared having different background LD levels. The software uses sliding windows and we applied windows containing 50 SNPs and a step-size of one SNP, following Teo et al. [13]. A window was flagged as a putative SS (SS region) if the associated score Si' was greater than or equal to the score at the 99.99th percentile and 99.9th percentile of all scores across the genome. The middle position of the first window in an identified SS was taken as the starting point of a signal, and the end position was the middle of the final window in the SS. Assigning signals to a breed To assess which breed showed a selective sweep in a particular region with an extreme Si', we graphed LD heatmaps of the r² between all SNPs from the identified SS region, using PLINK [19,20] and R [24]. Since the levels of r² differed greatly between the two breeds in each comparison group, it was relatively easy to determine the origin of a sweep by assigning it to the breed with the higher LD levels in the region. To have an additional evaluation of the LD differences between the breeds included in the identified SS regions, we estimated the average r² of SNPs in windows of 200 kb, with a step-size of 20 kb, discarding any windows that included fewer than 50 SNPs, and then graphed the results using R [24]. Only the graphs corresponding to the regions explored in detail in this publication are presented, together with graphical representation of the VarLD scores in these candidate SS regions.
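The per-window VarLD calculation described above can be sketched as follows: build a signed-r² correlation matrix for the same SNP window in each population, eigendecompose each matrix, sort the eigenvalues in descending order, and take the sum of absolute differences of the ranked eigenvalues as the raw score (the standard VarLD formulation of the "trace of the difference" wording above). The permutation-based significance and the genome-wide standardization to mean zero and unit standard deviation are omitted, and the data shapes are illustrative assumptions.

import numpy as np

def signed_r2_matrix(geno_window):
    # geno_window: individuals x SNPs (0/1/2 allele counts) for one window, one population
    r = np.corrcoef(geno_window, rowvar=False)          # SNP x SNP Pearson correlations
    return np.sign(r) * r ** 2                          # signed r^2

def raw_varld_score(window_pop1, window_pop2):
    eig1 = np.sort(np.linalg.eigvalsh(signed_r2_matrix(window_pop1)))[::-1]
    eig2 = np.sort(np.linalg.eigvalsh(signed_r2_matrix(window_pop2)))[::-1]
    return float(np.abs(eig1 - eig2).sum())

rng = np.random.default_rng(1)
breed_a = rng.integers(0, 3, size=(100, 50)).astype(float)   # 100 animals, 50-SNP window
breed_b = rng.integers(0, 3, size=(80, 50)).astype(float)
print(raw_varld_score(breed_a, breed_b))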
Gene identification After the SS candidate regions were defined, we extracted details on these regions, including comparison group, chromosome, and bp position (middle position of the starting and ending windows included in the peak). Then, the SS regions were sorted by chromosome and bp position, and common signals across comparison groups were highlighted. To identify genes possibly associated with the SS regions, we compared the bp position of the regions to the position of the genes listed from the Ensembl Biomart Tool [27] for the UMD3.1 Bos taurus genome assembly [28,29], and extracted a list of genes having a common position with the SS regions. Confounding flagged regions with CNV regions and previously reported SS Regions that were flagged by the above method were compared to the latest CNV reports by Bickhart et al. [16] and Hou et al. [17] on the bovine genome to detect common regions between VarLD SS and variations in copy number. The resulting signals were also compared to previously published SS using different methodologies and SNP densities. Information from the supplementary files of these publications was used for the comparisons. Results The PCA results ( Figure 1) show that the first Principal Component (PC) explaining 10.2% of the SNP variation clearly separates the taurine and indicine populations, while the second PC explaining 3.7% of the variance divides each subspecies separating the breeds correctly. The patterns of dispersion also indicate that the two indicine breeds are genetically closer to each other and have lower within breed variance as compared to the taurine breeds. The results of LD decay up to a distance of 100 kb for the four breeds are in Figure 2. The pattern of decay shows higher LD at short distances for the taurine than the indicine breeds, particularly for Angus, reaching an average r 2 of 0.3 at a distance of almost 40 kb, while both indicine breeds showed a faster decay, reaching an average r 2 of 0.3 at approximately 20 kb. The genome-wide distribution of standardized VarLD scores for the six comparison sets is in Figure 3. Strong SS were confined to narrow regions of the genome. The most distinct peaks were observed for the ANG/BSW and the GIR/NEL comparisons, which show that the largest VarLD scores are found when comparing different production types within a subspecies. This result is confirmed by the differences in the percentile distributions between the six comparison sets (Table 3), which shows higher 0.1 and 0.01 percentile scores for these two comparisons (ANG/BSW and GIR/NEL). For the top 0.01 percentile scores across all comparisons, 26 signals were found. Six SS were identified in more than one comparison and 17 genes were associated with these signals. For the top 0.1 percentile scores, 165 signals were detected, covering 10.76 Mb and representing 0.43% of the autosomal genome. Combining the SS shared across several comparison analyses, a total of 42 regions were identified with 125 genes related to these genomic positions (see Additional file 1). For the I/T comparisons, detailed results for a signal that was found on BTA6 at 81.5-81.7 Mb and was shared across the NEL/ANG, NEL/BSW, GIR/ANG and GIR/BSW analyses are shown in Figure 4. This signal lies within the annotated boundaries of the TECRL (trans-2,3-enoyl-CoA reductase-like) gene (ENSBTAG00000024826) [29]. For this region, the two taurine breeds showed sustained high levels of LD, indicating a selective sweep in both these breeds. 
In addition, a loss of CNV, a type of variation caused by loss of genetic material due to deletions, was observed in this region, between 81.46 and 81.58 Mb, encompassing 71 SNPs and 120 kb. The NEL/ANG and GIR/ANG comparisons identified a SS on BTA3 between 14.9 and 15.5 Mb, with a peak between 15.37 and 15.39 Mb ( Figure 5). When assessing the LD behavior of the three breeds involved in these comparisons, we found that the signal corresponded to a region with extended high LD in the ANG breed near the FPPS_BOVIN or BT.23182 (ENSBTAG00000003948) [29] gene. For the D/B comparisons, the strongest signals were observed within subspecies, and primarily from taurine breed comparisons. For the taurine D/B comparison, one of the signals with the highest VarLD score was located on BTA24, between 37.79 and 37.84 Mb. This region ( Figure 6) includes the annotated boundaries of the MYL9 (myosin, light chain 9, regulatory) and the MYL12B (myosin, light chain 12B, regulatory) genes [29], and a high LD level between the SNPs in this region indicates that the SS is associated with the ANG breed. For the indicine D/B comparison, a signal on BTA5 between 48.5 and 49.1 Mb overlapped with the LEMD3 (LEM domain containing 3) [29] gene (see Figure 7), and further analysis assigned this sweep to the Gir breed. Thirty-four signature signals from the top 0.1 percentile were found in regions that contained reported bovine CNV [16,17], and genes located in these regions are presented in Table 4. These 34 regions cover 1.84 Mb or 0.07% of the autosomal genome, and many of the CNV positions coincided between the two reports [16] and [17] even though the authors have used different type of data as source of information for CNV discovery, sequence and SNPs, respectively. Information about several other candidate genes identified in this study through the VarLD methodology and that were previously identified in other cattle SS studies are presented in Additional file 2. Discussion The VarLD method has the potential to capture recent strong selection because LD breaks down quickly over longer distances and, thus, high LD over an extended region is likely the result of recent selection. The human populations that have been analyzed [26,30,31] have very similar extents and patterns of LD and differ from each other only in limited regions. Cattle populations differ from human populations because they have experienced very strong recent selection caused by breed formation and use of advanced reproductive technologies. This makes the comparison of LD between cattle breeds worthwhile [9]. Differences such as those observed here between indicine and taurine breeds in the rate of decay of LD with increasing distance have been previously reported but with lower marker densities [9,10,32,33]. Our analysis clearly shows that the pattern of LD decay is faster in the indicine breeds compared to the taurine breeds. This supports the use of higher SNP densities in the indicine breeds, both for LD analysis and differences in LD patterns, in order to capture the nature of genomic events that affect narrow regions by having SNPs sufficiently close to the cause of an event to show significant LD. In this study, the regions with the highest VarLD scores that we identified were very narrow, with the largest signal covering 696.78 kb and the smallest signal involving single SNPs, which confirms the benefit of using a high-density SNP beadchip for this approach. 
The effect of ascertainment bias in the choice of SNPs for different SNP chip platforms has been discussed in the literature [34], but the HD chip was constructed using a larger number of indicine breeds and individuals in the reference population, and in general seems to perform better on Bos indicus individuals than the Illumina BovineSNP50 BeadChip [18]. When the analysis was replicated using the 50K chip SNPs, nine signals were found for the 0.1 percentile, covering 24.9 Mb of the autosomal genome, and ranging in size from 212.3 kb to 10.2 Mb (results not shown), with only four regions found in common with the analyses performed using the HD chip. This demonstrates a capacity for higher resolution analyses when using the HD chip. Highest scoring SS for different comparisons In the I/T comparisons, the strongest signal identified in all breed contrasts was created by unusually high LD in the taurine breeds. The TECRL [29] gene encodes an enzyme that has an oxidoreductase activity on the CH-CH group of donors and other acceptors, and is directly involved in chemical reactions and pathways involving lipids [35]. The SS region containing TECRL also overlaps with a region in which a particular type of CNV with loss of nucleotides is commonly observed, which suggests a possible role of copy number differences being causative in selection processes. Because the selection signature was found in the taurine breeds and is directly related to lipid production in the body, this is a suggestive signature of artificial selection for production purposes. The FPPS_BOVIN [29] gene, detected in a signal between ANG and both indicine breeds, is a gene involved in cholesterol (sterols) and steroid biosynthesis [35].
Table 4 The breed comparison where the signal came from, the chromosome number, the VarLD signal and CNV positions in bp, the author reporting the CNV, and the genes with a short description are all reported here; several VarLD signals that coincide with the same CNV are indicated in bold characters, while several CNV concurring with one signal are indicated in italic characters; *Gene ID source: ENSEMBL: http://www.ensembl.org/.
Lipid synthesis is a very important physiological function for both milk [36] and beef [37,38] production, and both taurine breeds have been selected intensively for these characteristics during the past decades [5,7,9]. The LEMD3 gene [29], detected as a selective sweep in the Gir breed, is a specific repressor of the transforming growth factor beta (TGF-beta) receptor, activin, and BMP signaling, and is involved in negative regulation of skeletal muscle cell differentiation, which might have been selected for in Gir, a breed developed for milk production [35]. In humans, mutations leading to loss of function of this gene are associated with diseases causing sclerosing bone lesions and increased bone density, such as osteopoikilosis [39,40]. This selection signature was reported by Ramey et al. [41], between 48.67 and 48.9 Mb on BTA5, using an approach based on sliding-window estimations of minor allele frequency (MAF). In the taurine D/B comparison, two genes possibly related to variation in muscle accretion were identified, i.e., MYL12A and MYL12B [29]. MYL12A encodes a nonsarcomeric myosin complex component with calcium ion binding regulatory functions that are involved in signal transduction mechanisms, cytoskeleton formation, cell division and chromosome partitioning [35]. MYL12B [29] encodes a component of the Z-disc and the myosin II complex.
Phosphorylation of MYL12B regulates the activity of non-muscle myosin II, resulting in higher MgATPase activity and the assembly of myosin II filaments. It is also involved in axon guidance processes, muscle contraction and regulation of muscle cell shape [35]. When extending detection of signatures of selection to the 0.1 percentile of VarLD scores, a third gene in this region overlapped with the signal: MYOM1 (myomesin 1) [29], which encodes an 85 kDa protein that is a structural constituent of muscle. Together with its associated proteins, the MYOM1 protein interconnects the major structure of sarcomeres, the M bands and the Z discs, and is involved in muscle contraction [35]. MYOM1 is one of the top 10 genes with preferential expression in muscle tissue [42] and has been associated with intramuscular fat content [43]. In addition, the most significant physiological and system development functions associated with genes involved in meat tenderness include skeletal and muscular system development and tissue morphology, both of which have been related to muscle contraction in the pig [43]. The CAST (calpastatin isoform I) gene (ENSBTAG00000000874) [29] identified on BTA7 between 98.44 and 98.58 Mb in the NEL/ANG comparison, with the signal originating from ANG, has been intensively studied in different breeds and selected to improve meat tenderness and other traits associated with beef quality [44][45][46][47][48][49][50]. The gene encodes an endogenous calpain (calcium-dependent cysteine protease) inhibitor that is involved in the proteolysis of amyloid precursor proteins. The calpain/calpastatin system is involved in numerous membrane fusion events, such as neural vesicle exocytosis and platelet and red-cell aggregation, and it is hypothesized that it affects the expression of genes encoding structural or regulatory proteins [35]. Due to its capacity to prevent proteolysis [35], some polymorphisms in this gene have been shown to be associated with increased meat tenderness in beef cattle breeds [49]. The protocadherin beta gene cluster (PCDHB4, PCDHB6, PCDHB7, PCDHB13, among others) [29] was identified as having a selection signature in the taurine D/B comparison. This cluster encodes neural cadherin-like cell adhesion proteins that are integral plasma membrane proteins and most likely play critical roles in the establishment and function of specific cell-cell neural connections [35]. In addition, these proteins are involved in nervous system development, synapse assembly, and synaptic transmission [35]. As reported by MacGregor [51], protocadherin II contains a high-affinity cell surface binding site for prion proteins and a number of protocadherin genes also function as tumor suppressor genes [52,53]. Three protocadherin genes, protocadherin-psi1, PCDHB4 and PCDHB6, were previously reported as a selection signature using an Fst approach [54] and were found to overlap with CNV regions. The UVRAG (UV radiation resistance associated) (ENSBTAG00000016355) [29] gene, located on BTA15 between 56.2 and 56.3 Mb, was found to have a selection signature in the comparison between the BSW and ANG breeds. This gene is associated with DNA repair and positive regulation of autophagy [35]. The human homologue of this gene [55] has been shown to complement the ultraviolet sensitivity of xeroderma pigmentosum group C cells [56] and encodes a protein with a C2 domain [57].
This protein activates a Beclin1 complex that promotes autophagy and suppresses the proliferation and tumorigenicity of human colon cancer cells [58]. The DNAJA1 (DnaJ (Hsp40) homolog, subfamily B, member 1) (ENSBTAG00000045858) gene [29], located on BTA7 between 53.8 and 54 Mb, was identified in the comparison between the BSW and ANG breeds. It encodes a heat shock protein binding protein [35], which is a co-chaperone of the 70 kDa heat shock protein (Hsp70), and the DNAJA1/Hsp70 complex directly inhibits apoptosis [59]. Because of its anti-apoptotic role, it has been considered as having an important role in meat tenderness in beef cattle. Association studies showed that this gene explained up to 63% of the phenotypic variability of tenderness in Charolais [60]. The selection signature identified in the DNAJA1 gene could be a good indicator of selection for meat tenderness in the ANG breed during the last decades. Comparison of VarLD with other methods to detect selection signatures Several genes previously reported using other methods to detect SS were also identified in our study. The first example is the MSRB3 (methionine sulfoxide reductase B3) gene (ENSBTAG00000044017) [29] located on BTA5 between 48.56 and 48.74 Mb, for which a SS was found in the NEL/GIR comparison between 48.65 and 49.35 Mb, which was also found by Ramey et al. in Brahman populations [41]. This gene has been associated with the 'long ear' phenotype, which characterizes the Gir breed, and against which the Nelore breed has been strongly selected; this reveals a clear sign of differential selection between the indicine breeds [41]. MSRB3 was first identified through a genome-wide association study as a candidate for a QTL involved in ear floppiness and morphology in dogs [61]; it is an indicator of strong artificial selection for a specific phenotype, and of the time at which the breed was formed. A second example is the PCSK4 (proprotein convertase subtilisin/kexin type 4) gene (ENSBTAG00000002305) [29] on BTA7. The SS was identified in the NEL/GIR comparison at 45.5 Mb. This gene was also reported in SS studies by [41] and [9] in Jersey and Santa Gertrudis breeds. It is responsible for serine-type endopeptidase activity, which is involved in the acrosome reaction, binding of sperm to the zona pellucida, sperm capacitation, and fertilization, which are all key functions of male fertility [35]. A third example is the TOX (thymocyte selection-associated high mobility group box) gene located on BTA14 between 26.6 and 26.9 Mb (ENSBTAG00000004954) [29]. This gene encodes a possible bovine blood group antigen transcript. Blood group antigens have been shown to be under balancing selection in humans [62], and this gene was also reported to be under positive selection in the Normande and Montbéliarde French dairy breeds by Flori et al. [63]. Surprisingly, although haplotype-based and LD methods are expected to perform similarly [64], when the Rsb method for detecting selection signatures [65], which evaluates differences between breeds by estimating extended haplotype homozygosity (EHH) for each SNP location, was applied to our data, the results were quite different, and only two regions shared a signal. Another method, ΔDAF [66], which is based on the difference in the derived allele frequency between populations, was also tested and no common signals were identified.
Considering the differences that the adopted LD method could have with other methods to identify SS, LD methods may detect regions that haplotype-based methods such as EHH [67], iHS [67], Voight's iHS [68], and Rsb [65] might not detect, because genomic processes such as insertion/deletion (in/del) and other CNV produce LD patterns that may not be accounted for in the haplotype construction. The fact that LD methods cannot deal with monomorphic SNPs makes VarLD less sensitive for regions with completely fixed SNPs or with many fixed SNPs in one population. Such signatures might be detected using methods like smoothed Fst [9,69] and MAF-based [41] approaches. Comparison of VarLD results with CNV regions Several of the identified selection signatures, especially for the indicine breeds, pointed to non-genic regions, including some CNV regions, such as (i) the signal on BTA6 between 66.75 and 66.78 Mb observed in the NEL/GIR comparison that coincides with a CNV on BTA6 between 66.75 and 66.76 Mb, and (ii) a CNV on BTA8 between 46.31 and 46.34 Mb that coincides with two overlapping SS observed in the GIR/BSW and GIR/ANG comparisons. These results suggest that different types of genomic variation, other than SNPs, may have a role in selection mechanisms. Given that CNV have been shown to influence gene expression through dosage-dependent interactions [15], it is possible that the identified VarLD regions correspond to selection for a specific gene copy number or for a certain duplicated paralog that is present in the duplication. Across the whole genome, most CNV have evolved under neutral evolutionary pressures and their frequency and sequence context have been shaped by demographic events, mutation, and genetic drift [14,15,17]. However, CNV that are located in functional regions of overlapping genes are mostly under purifying (negative) selection and only a few examples of positive selection on these CNV are known [15]. Regions that differ in copy number between subspecies can be informative about ancient adaptations that may have led to species-specific phenotypes. Recent copy number changes can be an indicator that artificial selection may have led to genetic and phenotypic differences between breeds. In previous studies using the VarLD method to analyze human data, a large fraction of the top signals corresponded to CNV for some of the populations compared [13]. Comparing our signals from the top 0.1% VarLD scores to recently published reports on the detection of bovine CNV [16,17], we found that 20.6% of our signals overlapped with reported CNV. Since these signals cover only 0.43% of the genome and the CNV discovery sets included 2.1 and 5.6% of the genome, respectively [16,17], it is hypothesized that CNV are associated with differences in LD between populations and with selection processes [13][14][15]. Conclusions VarLD is a powerful tool to identify differences in LD between cattle populations and possible signals of directional selection between them. The strongest signals differentiate LD patterns between breeds within subspecies and seem to point towards very recent selection. The narrow signatures of selection peaks that were detected in this analysis seem to indicate that both the methodology and the SNP density applied were appropriate to identify genes that underlie the identified selective sweeps.
Some of the genes found in the I/T comparisons indicate potential adaptive signatures, while the D/B comparisons point out genomic regions related to production of milk and beef. A high number of the genomic regions identified with the VarLD method were shown to be associated with physiological pathways of adaptation and production processes, and some of the genes present in these regions have also been reported to coincide with signatures of selection in other species. The fact that 20.6% of the top VarLD signals overlap with recently reported CNV regions, which cover less than 7.7% of the genome, is a strong indicator of the role of CNV in selection within a breed type. In contrast, it is surprising that results from previous studies using the same breed comparisons and partially overlapping data sets, which applied haplotype-based methods to detect signatures of selection, had almost no overlap with the signals we detected using the VarLD method. Additional files Additional file 1: Signals found for the top 0.01 and 0.1 percentile of VarLD scores. Table with the signals found on the top 0.1 percentile of the distribution of VarLD scores, organized by chromosome and bp position along the genome; the breed comparison where the signal came from, the chromosome number, the starting and ending bp position and information for the genes spanning the regions under the signals are given in the table; additionally, signals that contain the 0.01 percentile are highlighted in yellow, and in regions that cover several genes, the genes underlying the highest scoring window are highlighted in blue. Additional file 2: Common signals with other Signatures of Selection studies. Common signals found between our analysis and previous signatures of selection regions reported in the literature; the breed comparison where the signal came from, the chromosome number, the base pair spanning position, the position of the common region, the reference to the authors and the gene name are given in the Table; several VarLD signals that coincide with the same signal from other studies are highlighted in yellow, while several signals from other authors that concur with one VarLD Signal are highlighted in blue [5,7,9,41,54,63,64,[69][70][71][72][73].
7,357
2014-03-04T00:00:00.000
[ "Biology" ]
Gambling Motives: Do They Explain Cognitive Distortions in Male Poker Gamblers? Gambling behavior is partly the result of varied motivations leading individuals to participate in gambling activities. Specific motivational profiles are found in gamblers, and gambling motives are closely linked to the development of cognitive distortions. This cross-sectional study aimed to predict cognitive distortions from gambling motives in poker players. The population was recruited in online gambling forums. Participants reported gambling at least once a week. Data included sociodemographic characteristics, the South Oaks Gambling Screen, the Gambling Motives Questionnaire-Financial and the Gambling-Related Cognition Scale. This study was conducted on 259 male poker gamblers (aged 18–69 years, 14.3% probable pathological gamblers). Univariate analyses showed that cognitive distortions were independently predicted by overall gambling motives (34.8%) and problem gambling (22.4%) (p < .05). The multivariate model, including these two variables, explained 39.7% of cognitive distortions (p < .05). The results associated with the literature data highlight that cognitive distortions are a good discriminating factor of gambling problems, showing a close inter-relationship between gambling motives, cognitive distortions and the severity of gambling. These data are consistent with the following theoretical process model: gambling motives lead individuals to practice and repeat the gambling experience, which may lead them to develop cognitive distortions, which in turn favor problem gambling. This study opens up new research perspectives to understand better the mechanisms underlying gambling practice and has clinical implications in terms of prevention and treatment. For example, a coupled motivational and cognitive intervention focused on gambling motives/cognitive distortions could be beneficial for individuals with gambling problems. Introduction Gambling is characterized as an activity whose outcome is based mainly or totally on chance, which involves an irreversible provision of money or an object of value beforehand. Different gambling types include games of luck (lottery, slot machines, scratch-cards and roulette) and games of skill (poker, blackjack, sports and horse betting). The outcome of games of luck is due only to chance, while games of skill depend on luck, strategy, experience and knowledge parameters. To date, few epidemiological data are available on European populations, which can be explained by the lack of research on the prevalence of gamblers in the general population. Different sets of practices can be identified depending on the intensity of gambling, defined by its frequency or the amount spent on games. In France and Northern Europe, 1-2% of problem or pathological gambling occurs in the general population. A similar prevalence was found in Canada and New Zealand (Costes et al. 2011). However, these estimations are much lower than those found in the United States and Australia (around 5%). Differences in terms of prevalence between countries are still widely discussed. Accessibility to gambling and the materials used to obtain these data are some explanations regularly put forward. Among types of gambling, poker playing appears to have special features, particularly the involvement of both chance and strategy in the game's outcome (Barrault et al. 2014), which could influence players' perception of chance in the game (Barrault and Varescon 2013). 
Furthermore, poker players show specific gambling problems (Bjerg 2010; Barrault et al. 2014). Evidence from the literature thus suggests that excessive poker players may have a specific psychological profile. While the practice of gambling can be seen as a personal choice, it can be influenced by factors favoring it or not (Burlacu et al. 2013). Cognitive distortions are one of the variables that influence involvement in gambling activity. Inherent to gambling situations, all gamblers, including non-problem gamblers and gamblers who have a good knowledge of numerical and objective probabilities of winning, are susceptible to developing cognitive distortions. These are therefore not related to a lack of knowledge or information regarding the game (Lambos and Delfabbro 2007). According to Sévigny and Ladouceur (2003), pathological gamblers will experience two cognitive states: one focused on taking into account rational and objective probabilities, and another primarily focused on the activity and the expected results. Thus, even individuals with a good knowledge of probabilities and principles could present cognitive distortions that would be activated in gambling situations (Barrault and Varescon 2013). These irrational beliefs coexist with and oppose the rational beliefs of the gamblers. This cognitive change can be explained by the concept of heuristic control. In fact, in a particular class of situations, such as gambling, individuals are more likely to overestimate the amount of their perceived control over results, particularly when they are strongly committed to the task and/or they have a strong desire for results (Clark 2014; Delfabbro et al. 2006). The presence of such cognitive distortions in gamblers is widely documented in the literature (Clark 2014). Hence, a model representing the five major beliefs related to gambling may be clinically significant (Raylu and Oei 2004). Gambling-related expectancies refer more to the perception of the expected impact of the game itself than to the hope of happiness, pleasure, or other types of emotions with personal utility that can be derived from the game. Illusion of control corresponds to the perceived controllability over the results of a game, while predictive control is the perception of being able to predict the outcome of the game. The belief about the inability to stop gambling refers to the perception of being unable to resist an urge to gamble. Finally, interpretative bias is characterized by an attributional belief, that is to say, promoting the successful continuation of gambling and allocating losses to external factors. Furthermore, cognitive distortions increase with the severity of gambling: problem gamblers seem to show more of them than non-problem gamblers (Miller and Currie 2008). They tend to overestimate the role of skill in games and consider gambling a financially profitable activity. However, the desire to win money causes losses that can be fueled by cognitive distortions. Delfabbro et al. (2006) implicitly suggested that gambling motives could influence cognitive distortions, particularly the illusion of control. Although motivations vary according to the type of population studied (age and gender) (Dowling 2013; Sundqvist et al. 2016), the type of game, which is not studied here (Binde 2013; Lee et al. 2007), and the severity of gambling (Francis et al. 2014), some of them are found fairly consistently in the literature.
The motivational factor most often raised is financial gain, but others have also been highlighted: for fun or amusement, against boredom (Lam 2007; Neighbors et al. 2002), to escape from routine (Loroz 2004), to socialize, for excitement (Lee et al. 2007), for the challenge (Chantal et al. 1994), to escape from stress (Lee et al. 2007) or to escape depressive affects (Stewart and Zack 2008). These motives seem to reflect the expectations of gamblers about, on the one hand, the rewarding aspects of gambling behavior and, on the other hand, its potential to reduce negative affects. Several models have been proposed to explain what motivates and drives gamblers, notably those of Binde (2013) and of Milosevic and Ledgerwood (2010), which identify enhancement, coping and social motivations. Enhancement motivation refers to sensation-seeking, impulsiveness and a sense of excitement felt for the game. Coping motivation concerns the regulation of emotional states: gamblers with a high level of coping are characterized by a set of inappropriately used behaviors intended to escape a negative emotional state (Bonnaire et al. 2009; Vachon and Bagby 2009). Lastly, gamblers with a high level of social motivation play to establish social affiliation (for fun with friends, for example) and would not present psychopathological disorders. Added to this is the financial motivation, already mentioned, which cannot be dissociated from this type of gambling (Dechant 2013). To date, much empirical research has been carried out on cognitive distortions; however, no study has focused specifically on the link between these variables and gambling motives. Thus, our objectives were to assess gambling practice, gambling motives and cognitive distortions in terms of their presence and nature in gamblers, to compare them between the different types of gamblers, to determine the link between motivations and cognitions, and to explain the nature of the relationship between gambling motivations and cognitive distortions. In view of these objectives, it was hypothesized that (1) problem gamblers would show a higher level of gambling motives and a higher level of cognitive distortions than non-problem gamblers; (2) there would be strong correlations between these two variables and with gambling severity; and (3) gambling motives would be a strong predictor of cognitive distortions.

Participants

Gambler participants were recruited between June 2016 and October 2016 through online gambling forums (Bet Clever, Club Poker). After receiving authorization from the administrators, an announcement was posted on the 'presentation' tab of the gambler community. Members interested in participating in the study had to click on the LimeSurvey® link provided in order to access the study (information, consent forms and questionnaires). All participants were at least 18 years of age, fluent French speakers, and had a regular online or live gambling activity (at least once a week). This criterion of regularity has been used previously in the literature (Bonnaire et al. 2009; Barrault and Varescon 2012). Finally, 259 male poker gamblers responded to all the questionnaires: 24% non-problem gamblers (South Oaks Gambling Screen (SOGS) score of 2 or less), 61.7% at-risk problem gamblers (SOGS score of 3 or 4) and 14.3% probable pathological gamblers (SOGS score of 5 or higher).
The Ethics Committee approved the study (IRB number: 20162200001072) and informed consent was obtained from all participants, who took part freely and voluntarily.

Sociodemographic and Gaming Data

Age, gender, marital status, level of education, household composition, professional activity and socio-professional category were assessed. Gaming data were obtained using a questionnaire especially designed for this study, covering all types of gambling (live and online).

South Oaks Gambling Screen (SOGS)

The SOGS (Lesieur and Blume 1987) was validated in French by Lejoyeux (1999). It is a self-administered 20-item questionnaire that assesses lifetime and current (occurring in the last 12 months) gambling habits and problems. A score less than or equal to 2 corresponds to the absence of problem gambling, a score of 3 or 4 corresponds to risky or problematic gambling, and a score greater than or equal to 5 defines the individual as a probable pathological gambler. In our study, only the current period (the last 12 months) was relevant, so we chose to remove the lifetime assessment. The SOGS shows satisfactory reliability and validity in the general population and in problem gamblers (Stinchfield 2002). In the current sample, Cronbach's alpha was .72, indicating acceptable internal consistency.

Gambling Motives Questionnaire-Financial (GMQ-F)

The Gambling Motives Questionnaire (GMQ; Stewart and Zack 2008) is a 15-item questionnaire directly adapted from the Drinking Motives Questionnaire (Cooper et al. 1992) assessing three types of gambling motive: 'enhancement' (to increase positive emotions; 5 items), 'coping' (to reduce or avoid negative emotions; 5 items) and 'social' (to increase social affiliation; 5 items). Dechant (2013) then proposed adding 9 items to assess the financial motivation of gamblers, a non-negligible factor. The resulting GMQ-F was translated and validated in French (Devos et al. 2017). Items are rated on a 4-point Likert scale ranging from 1 (never or almost never) to 4 (almost always or always). The questionnaire has shown favorable psychometric properties in middle-aged adult gamblers (Dechant and Ellery 2011). In the present study, Cronbach's alpha for the total scale (α = .80) and for each subscale was satisfactory: enhancement (α = .80), coping (α = .76), social (α = .65) and financial (α = .67).

Gambling Related Cognitions Scale (GRCS)

The GRCS (Raylu and Oei 2004), validated in French (Grall-Bronnec et al. 2012), is a 23-item self-report that records common thoughts associated with problem gambling. It uses a seven-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree) and provides a total score (range 23-161). Items are grouped into five subscales: predictive control (6 items), illusion of control (4 items), interpretative bias (4 items), gambling expectancies (4 items) and inability to stop (5 items). The higher the score, the stronger the gambling-related cognitions. In our study, the internal consistency for the total score (α = .84) and for each subscore was adequate: gambling expectancies (α = .62), illusion of control (α = .75), predictive control (α = .54), perceived inability to stop (α = .84) and interpretative bias (α = .57).
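The reliability coefficients above come from SPSS; as a minimal illustration of how Cronbach's alpha is computed from an item-by-respondent score matrix, here is a short sketch with simulated item data (the study's raw responses are not available, so the resulting value is illustrative only).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated 5-item subscale on a 1-4 Likert scale for 259 respondents;
# a shared latent trait makes the items positively correlated.
rng = np.random.default_rng(0)
latent = rng.normal(size=(259, 1))
items = np.clip(np.round(2.5 + latent + rng.normal(scale=0.8, size=(259, 5))), 1, 4)
print(f"alpha = {cronbach_alpha(items):.2f}")
```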
Data Analyses

All data were analyzed using SPSS®, version 21, and were tested with a two-sided significance level of .05. Descriptive statistics for quantitative measures (mean, standard deviation) and for qualitative measures (percentage) were first calculated. To estimate the group effect on sociodemographic dimensions and on the questionnaire results, Chi-squared tests were used for categorical measures and ANOVA for continuous ones (non-problem gamblers, NPGs, vs. at-risk problem gamblers, RPGs, vs. probable pathological gamblers, PPGs). Next, Spearman correlations were computed between SOGS scores and the scores and subscores of the GMQ and GRCS. Then, linear regression models were fitted with gambling motives and problem gambling as independent variables; separate models were calculated for each scale (SOGS and GMQ) and subscale score. Finally, a multivariate linear regression analysis was run with all the predictors showing a significant effect in the previous independent models.

Sociodemographic Characteristics and Gambling

Two hundred and fifty-nine (259) male poker gamblers completed the online questionnaires. Sociodemographic data, SOGS scores and the characteristics of gambling practices are detailed in Table 1. For the entire sample, the average age was 33.9 years (SD = 9.3). Statistical analyses (ANOVA and Chi-squared tests) showed significant differences between NPGs (n = 62), RPGs (n = 160) and PPGs (n = 37) only for professional activity and socio-professional category among the sociodemographic variables (p < .05).

Scale and Questionnaire Results

The results from each scale and the comparisons between groups (ANOVA) are presented in Table 2. Table 3 shows the correlations between scales and subscales. Gambling Motives Questionnaire-Financial (GMQ-F): the ANOVA results revealed that PPGs had significantly higher scores than the two other groups for the GMQ overall score (p = .001) and all subscores (p = .012 for coping and p = .001 for the others) except for 'social' (p = .088). Gambling-Related Cognitions Scale (GRCS): the ANOVA showed that PPGs displayed significantly higher GRCS scores than the two other groups (p = .001), for the general scale and for all subscales (p values ranging from .001 to .18). In general, gambling motives (except social motivation) and cognitive distortions increased with the severity of the gambling practice (p < .05). Moreover, significant correlations between problem gambling, cognitive distortions and gambling motives were observed (p < .05). Gambling motives (overall GMQ-F) appeared to be significantly and strongly correlated with a specific dimension of cognitive distortions (i.e., gambling-related expectancies) (r = .51, p < .05).

Multiple Linear Regression

A three-step regression was conducted. First, a simple linear regression model was carried out to determine whether the components of gambling motives (independent variable) could predict problem gambling (SOGS score). It showed that overall motivations explained 22.0% of probable pathological gambling in our sample (adjusted R² = .217; t = 8.521; p = .001). Among gambling motives, coping appeared to be the main significant predictor (17.7%; R² = .174; t = 7.444; p = .001) of probable pathological gambling, followed closely by financial motivation (17.2%; R² = .169; t = 7.304; p = .001). The second step included, one by one, sociodemographic variables, gambling motives (scale and subscales) and the SOGS score as predictors of cognitive distortions. The results showed that no sociodemographic variables significantly predicted cognitive distortions (p > .05).
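As an aside, the three-step workflow just described can be sketched outside SPSS. The snippet below uses simulated data and hypothetical column names (gmq_total, sogs, grcs_total), so the printed numbers are illustrative, not the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 259
# Simulated scores loosely mimicking the scales (not the real data).
gmq_total = rng.normal(45, 10, n)                                  # gambling motives
sogs = np.clip(0.1 * gmq_total + rng.normal(0, 2, n), 0, 20)       # problem gambling
grcs_total = 0.9 * gmq_total + 2.0 * sogs + rng.normal(0, 12, n)   # distortions
df = pd.DataFrame({"gmq_total": gmq_total, "sogs": sogs, "grcs_total": grcs_total})

# Step 2: univariate models, one predictor of cognitive distortions at a time.
for pred in ["gmq_total", "sogs"]:
    fit = sm.OLS(df["grcs_total"], sm.add_constant(df[[pred]])).fit()
    print(pred, "adjusted R2 =", round(fit.rsquared_adj, 3))

# Step 3: multivariate model with the significant predictors entered together.
fit = sm.OLS(df["grcs_total"], sm.add_constant(df[["gmq_total", "sogs"]])).fit()
print("multivariate adjusted R2 =", round(fit.rsquared_adj, 3))
```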
However, on their own, gambling motives (overall GMQ-F) explained 34.8% of the cognitive distortions present in our sample (adjusted R² = .345; t = 11.712; p = .001), whereas problem gambling (overall SOGS) explained 22.4% of these cognitive distortions (adjusted R² = .221; t = 8.609; p = .001). Within these models, two dimensions of gambling motives, coping and financial, constituted significant predictors of cognitive distortions (22.3% and 21.4%, respectively; adjusted R² = .220, t = 8.588 for coping, and adjusted R² = .211, t = 8.373 for financial; p = .001 for both). Finally, a multivariate analysis was conducted (see Table 4): all the significant predictors identified above were entered simultaneously, with cognitive distortions as the dependent variable. This multivariate model accounted for 39.7% of the variance in cognitive distortions (adjusted R² = .393; p < .05). However, only three significant predictors were highlighted: overall gambling motives and problem gambling were positively related to cognitive distortions (t = 8.586 and t = 4.580, respectively; p = .001), while 'enhancement' motivation was negatively related to cognitive distortions (t = -2.400; p < .05).

Discussion

This research is the first to assess gambling motives and cognitive distortions jointly in poker players. One purpose of this study was to investigate whether gambling motives explain cognitive distortions. The main results show that gambling motives are a strong predictor of cognitive distortions. This close linkage enables us to hypothesize that gamblers with more motives to gamble (both in quality and in quantity) are more likely to develop and maintain cognitive distortions during gambling, such as the failure to appreciate the independence of turns (GRCS predictive control) and expectancies that gambling will be exciting and/or relieve negative affect (GRCS gambling-related expectancies). Cognitive distortions are mainly explained by financial and coping motivations. Greed and the tendency to escape from daily difficulties may explain the presence of more or fewer cognitive distortions. Indeed, gamblers may overestimate their prospects of succeeding and winning money even as they accumulate financial losses. Moreover, winning money probably enables some gamblers to escape their routine and their difficulties, which may reinforce cognitive distortions. In addition, the game's outcome modulates the gambler's mood: more depressive affects are observed after a loss, and more euphoria after a win. The link between gambling motives and gambling severity has already been studied in the literature (Francis et al. 2014). Our results confirm these findings and show that gambling motives are significantly correlated with the degree of gambling severity. Moreover, the differences between the three groups of gamblers (NPGs, RPGs and PPGs) are statistically significant for all dimensions of the GMQ-F scale except 'social' motivation. Earlier research showed that PPGs gamble more to escape problems compared to recreational gamblers (Burlacu et al. 2013; Clarke 2008; Platz and Millar 2001), which is supported in this study. Respondents with a gambling problem have significantly higher scores than RPGs and NPGs: first on coping, secondly on financial motivation and then on enhancement. The content of the coping subscale might mediate the negative affects found in pathological gamblers (Connor-Smith and Flachsbart 2007).
Furthermore, our study highlights the nature of the relationship between these two variables: gambling motives explain 22.0% of problem gambling. Individual differences, particularly in terms of motives, may be a vulnerability factor for problem gambling behavior. Multivariate analyses reveal that gambling severity combined with gambling motives accounts for 39.7% of cognitive distortions, with the enhancement motive negatively related to cognitive distortions. We can hypothesize that the acquisition of a body of knowledge and know-how, in order to progress in the game, opposes the potential installation of cognitive distortions. For example, gamblers motivated by enhancement are often well informed about the game (probabilities, gambling techniques, etc.). Nevertheless, our sample is mainly composed of male gamblers, who are more oriented towards, on the one hand, games of skill (poker and sports betting) and, on the other hand, enhancement (Gandolfo and Debonis 2014). Enhancement motives may reflect a desire to increase self-esteem; however, changes in self-perception are directly linked to broad trends in success and failure, themselves related to the acquired skills. Nevertheless, the literature also reports a link between the severity of gambling and cognitive distortions that puts forward cognitive distortions as a risk factor, and not the other way round (Barrault and Varescon 2013; Cunningham et al. 2014; Navas et al. 2016; Romo et al. 2016). In our study, all GRCS measures were higher in the PPG group than in the other two groups (NPGs and RPGs). The inability to stop gambling appears to be the main reported cognitive distortion; however, this component is also one of the criteria characterizing pathological gambling. This result highlights a certain awareness among problem gamblers of their difficulties in controlling their gambling behavior. Unlike results found previously in pathological gamblers (Michalczuk et al. 2011; Navas et al. 2016) and in pathological poker players (Barrault and Varescon 2013), the illusion of control does not appear to be an important belief here. It is possible that gamblers subjectively experience the illusion of control without it concretely affecting their way of gambling. Gamblers could also have underestimated their answers, as they were not in gambling conditions when filling out the questionnaires; this is consistent with the results found by Dannewitz and Weatherly (2007). Moreover, several factors may influence cognitive distortions, including the degree of involvement of the gambler and some features of the game (Ladouceur and Sévigny 2005). The literature highlights, on the one hand, a quantitative difference in cognitive distortions depending on game frequency and intensity (the level of irrational beliefs increases with the intensity of gambling practice) and, on the other hand, a qualitative difference (the role of skill in games and the perception of the game as financially profitable are more present in pathological gamblers) (Delfabbro et al. 2006; Källmen et al. 2008; Miller and Currie 2008; Moodie 2008; Raylu and Oei 2004). Different groupings by preferred gambling type, such as games of skill versus games of luck, could be another way to examine cognitive distortions. To our knowledge, few studies have focused on gambling motives; this is why our study aimed to provide additional information by identifying and comparing gambling motivations according to gambling practices.
However, there are several limitations to the interpretation and generalization of the results. First, the participants are only male poker gamblers, who are not representative of the gambler population due to their specific profile. To limit this bias, studies need to be carried out on groups of gamblers formed according to the type of game practiced. Moreover, we cannot generalize the results to women gamblers, since a gender effect has been demonstrated in previous studies: in both pathological and non-pathological gamblers, women appear to have less intense cognitive distortions than men (Dannewitz and Weatherly 2007; Moodie 2008; Raylu and Oei 2004). Nevertheless, we tried to constitute a sample as representative as possible by recruiting our participants in ecological settings (Internet forums). Secondly, we used the SOGS (Lesieur and Blume 1987) to screen for gambling problems: in our sample, we observed 14.3% probable pathological gamblers, which is similar to the results found by Barrault and Varescon (2013) using the SOGS and by Wood et al. (2007) using the DSM-IV-TR criteria (APA 2003). Although the SOGS is known to produce false positives in the general population (Stinchfield 2002), it remains the most widely used screening tool in research. Another limitation of this study is its cross-sectional design, which does not allow changes in motivations and/or involvement over time and across gambling practice to be captured; a longitudinal or qualitative study would give access to these evolving elements. Moreover, for practical reasons, our results are based on prevalence during the current period (the last year). Finally, we were unable to compare online and live gamblers, because the majority of our participants reported both types of practice. Another approach could be to compare, as far as feasible, exclusively online gamblers with live gamblers. Despite these limitations, the present study offers interesting results and research perspectives. The relationship between gambling problems, gambling motives and cognitive distortions in all gamblers merits further study.

Conclusions and Implications

In conclusion, gambling motives differ between gamblers depending on the intensity and severity of the gambling practice. The level of risky gambling is associated with motivations and should be considered during screening or assessment. Moreover, the presence of cognitive distortions is affected by the presence of gambling motives. A longitudinal study would show the evolution of motivations and cognitive distortions (in number and importance) in gamblers engaging in regular gambling activity. The differences in psychological functioning between non-problem gamblers, at-risk gamblers and probable pathological gamblers demonstrate the need for further research to better understand the factors involved in gambling behavior, with the objective of prevention and care for gamblers. On the basis of these results, we can hypothesize that a systematic assessment of gamblers' motives would make it possible to target the type of intervention required. Moreover, given the possible presence of varying levels of cognitive distortions, a combined motivational and cognitive intervention (such as cognitive remediation) would act on the two components of gambling directly linked to gambling problems.

Ethical Approval Ethical approval for this study was obtained from the CERES Committee (IRB number: 20162200001072).
All participants chose to participate by clicking on the study invitation link and provided their informed consent to participate prior to beginning the study. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
5,910.8
2017-06-17T00:00:00.000
[ "Economics", "Law", "Psychology" ]
Modeling the postmerger gravitational wave signal and extracting binary properties from future binary neutron star detections

Gravitational wave astronomy has established its role in measuring the equation of state governing cold supranuclear matter. To date and in the near future, gravitational wave measurements from neutron star binaries are likely to be restricted to the inspiral. However, future upgrades and the next generation of gravitational wave detectors will enable us to detect the gravitational wave signatures emitted after the merger of two stars, at times when densities beyond those in single neutron stars are reached. Therefore, the postmerger gravitational wave signal enables studies of supranuclear matter at its extreme limit. To support this line of research, we present new and updated phenomenological relations between the binary properties and characteristic features of the postmerger evolution. Most notably, we derive an updated relation connecting the mass-weighted tidal deformability and the maximum neutron star mass to the dominant emission frequency of the postmerger spectrum. With the help of a configuration-independent Bayesian analysis using simplified Lorentzian model functions, we find that the main emission frequency of the postmerger remnant, for signal-to-noise ratios of 8 and above, can be extracted within a 1-sigma uncertainty of about 100 Hz for Advanced LIGO and Advanced Virgo's design sensitivities. In some cases, even a postmerger signal-to-noise ratio of 4 can be sufficient to determine the main emission frequency. This will enable us to measure binary and equation-of-state properties from the postmerger, to perform a consistency check between different parts of the binary neutron star coalescence, and to put our physical interpretation of neutron star mergers to the test.

I. INTRODUCTION

The extreme densities and conditions inside neutron stars (NSs) cannot be reached in existing experiments. This makes NSs a unique laboratory to study the equation of state (EOS) governing cold, supranuclear-dense matter. Following the first detection of a gravitational wave (GW) signal originating from the coalescence of a binary neutron star (BNS) system, GW170817, by the Advanced LIGO [1] and Advanced Virgo detectors [2], it became possible to constrain the NS EOS by analyzing the measured GWs [3-7]. Because of the increasing sensitivity of GW interferometers, multiple detections of merging BNSs are expected in the near future [8]. This will make GW astronomy an indispensable tool within the nuclear physics community. In general, there are two ways to extract information about the EOS governing the NS interior from a GW detection. The first method relies on the modeling of the BNS inspiral [9-13] and on waveform approximants that include tidal effects, represent the system's properties accurately, and are of sufficiently low computational cost that they can be used in parameter estimation pipelines, e.g., [10,14,15]. The zero-temperature EOS is then constrained by measuring a mass-weighted combination of the quadrupolar tidal deformability, Λ̃, or similar parameters that characterize tidal interactions, e.g., [16,17].
To date, the advanced GW detectors have only been able to observe the inspiral of the two NSs [39,40], and no postmerger signal has been observed. This non-observation is caused by the higher emission frequency, at which current GW detectors are less sensitive. But the increasing sensitivity of the second generation of GW detectors (Advanced LIGO and Advanced Virgo) will not only increase the detection rate of BNS inspiral signals; there will also be the chance of observing the postmerger signal for a few 'loud' events. Ref. [41] finds that for sources similar to GW170817 but observed with Advanced LIGO and Advanced Virgo's design sensitivities, the postmerger part of the BNS coalescence might have an SNR of ∼2-3. The planned third generation of GW interferometers, e.g., the Einstein Telescope [42-44] or the Cosmic Explorer [45], have the capability to detect the postmerger signal of upcoming BNS mergers with SNRs up to ∼10. Unfortunately, the postmerger spectrum is influenced in a complicated way by thermal effects, magnetohydrodynamical instabilities, neutrino emission, phase transitions, and dissipative processes, e.g., [24,25,46-50]. Currently, any postmerger study relies heavily on expensive numerical relativity (NR) simulations, and to date it is not possible to perform simulations incorporating all necessary microphysical processes. Therefore, our current theoretical understanding of this part of the BNS coalescence is limited. In addition, no NR simulation has yet been able to show convergence of the GW phase in the postmerger. While this can generally be explained by the presence of shocks or discontinuities formed during the collision of the two stars, it also increases the uncertainty of any quantitative result. Nevertheless, the community has tried to construct postmerger approximants focusing on characteristic (robust) features present in NR simulations. The discovery of quasi-universal relations is a building block for most descriptions of the postmerger GW spectrum. Clark et al. [51] showed that a principal component analysis can be used to reduce the dimensionality of the spectrum for equal-mass binaries once the different spectra are normalized and aligned such that the main emission frequencies coincide. Effort has also been put into modeling the plus polarization in the time domain using a superposition of damped sinusoids incorporating quasi-universal relations [35]. Relying on a very accurate f_2 estimate for an accurate rescaling of the waveforms, Easter et al. [37] created a hierarchical model to estimate postmerger spectra. Here, we follow a similar path and try to describe the GW spectrum with a three- and a six-parameter model function with a Lorentzian-like shape. Comparing our ansatz with a set of 54 NR simulations, we find average mismatches of 0.18 for the three-parameter and 0.15 for the six-parameter model; cf. Tab. I. Our approximants do not directly incorporate quasi-universal relations, but are constructed to describe generic postmerger waveforms. Thus, our analysis is flexible and allows us to describe almost arbitrary configurations. Employing our model in standard parameter estimation pipelines [52,53] of the LIGO and Virgo Collaborations, we find that we can extract the dominant emission frequency of the postmerger for a number of tests. To our knowledge, this is the first time a model-based (but configuration-independent) method has been employed within a Bayesian analysis of the postmerger signal.
Once the individual parameters describing the postmerger spectra are extracted, we use fits for the peak frequency to connect the measured signal to the properties of the supranuclear EOS and the merging binary. This way, one can combine measurements from the inspiral and postmerger phases to provide a consistency test of our supranuclear matter description. Although not used here, we want to mention an alternative approach, which employs the morphology-independent burst search algorithm called BayesWave [54,55]. Ref. [36] showed that this approach is capable of reconstructing the postmerger signal and allows properties to be extracted from the measured GW signal. Even for a measured postmerger SNR of ∼5, the main emission frequency of the remnant could be determined to within a few tens of Hz. Compared to BayesWave, our simple model functions have the advantage that, without any modifications of the current codes for statistical inference, in particular the LALInference module [53] available in the LSC Algorithm Library (LAL) Suite, they can be added to existing frequency-domain inspiral-merger waveforms describing the first part of the BNS coalescence, e.g., [10,14,15,56-58], to construct a full inspiral-merger-postmerger (IMP) waveform directly employable for GW analysis. Such an IMP study can also be carried out within the BayesWave approach, but seems technically harder since one has to combine model-based and non-model-based algorithms. Our paper is structured as follows. In Sec. II, we discuss the general time domain and frequency domain morphologies of the postmerger signal as obtained from NR simulations. Based on this discussion, we derive new quasi-universal relations for the time between the merger and the first time domain amplitude minimum, and for the first time domain amplitude maximum. We also extend the existing quasi-universal relation for the main emission frequency f_2 of the GW postmerger spectrum and its amplitude in the frequency domain. In Sec. III A we discuss two different Lorentzian model functions and their performance in modeling NR simulations. In Sec. III B a full Bayesian analysis of a set of NR model waveforms is performed. In Sec. III C we show how our analysis can be used to constrain the EOS and how to test consistency between the inspiral and postmerger. We conclude in Sec. IV. We list in the appendix the NR data employed for the construction of the quasi-universal relations presented in the main text. Unless otherwise stated, this paper uses geometric units by setting G = c = 1. Throughout this work, we employ the NR simulations published in the Computational Relativity (CoRe) database [59]. In addition, where explicitly mentioned, we increase our dataset by adding results published in [31,33]. We refer the reader to Tab. I for further details about the individual data.

A. Time Domain

While the inspiral GW signal is characterized by a chirp, i.e., a monotonic increase of the GW amplitude and frequency, the postmerger emission shows a nonmonotonic amplitude and frequency evolution. Figure 1 presents one example of a possible postmerger waveform. In the following, we highlight some of the important features characterizing the signal. First postmerger minimum: By definition, the inspiral ends at the peak of the GW amplitude (merger), marked with a red circle in Fig. 1.
After the merger, the amplitude decreases, showing a clear minimum (red square marker) shortly afterwards; see [61,62] for further discussion. Around this intermediate and highly nonlinear regime, different frequencies are excited for a few milliseconds; see, e.g., [21,34] for further details. While it was already known that the merger frequency can be expressed by a quasi-universal relation, e.g., [20,63], we find that the time between merger and this amplitude minimum also follows a similar relation. In Fig. 2 we show the time between the merger and the first amplitude minimum, Δt_min/M, as a function of the mass-weighted tidal deformability

Λ̃ = (16/13) [(M_A + 12 M_B) M_A^4 Λ_A + (M_B + 12 M_A) M_B^4 Λ_B] / (M_A + M_B)^5, (1)

with the individual dimensionless tidal deformabilities Λ = (2/3) k_2 (R/M)^5, where k_2 labels the dimensionless ℓ = 2 Love number and R the radius of the isolated NS. We show with different colors the mass ratio of each setup, defined as q = M_A/M_B ≥ 1; cf. the colorbar of Fig. 2. We find a clear correlation between Δt_min/M and the mass-weighted tidal deformability Λ̃. A good phenomenological representation is given by Eq. (2), with the parameters α = 2.4681 × 10^1, β = 2.8477 × 10^-3, γ = 6.6798 × 10^-4 obtained by a least-squares fit for which the root-mean-square (RMS) error is 2.4608. Interestingly, the two highest-mass-ratio simulations do not follow Eq. (2). This is caused by the different postmerger evolution of these high-mass-ratio setups. While the amplitude minimum is produced when the two NS cores approach each other and potentially get repelled, configurations with a very high mass ratio show almost a disruption during the merger, i.e., the less massive NS deforms significantly under the strong external gravitational field of its companion. One possible application of the quasi-universal relation for Δt_min/M is the improvement of BNS waveform approximants, i.e., it might help to determine the amplitude evolution after the merger of the two NSs. In particular, incorporating an amplitude tapering after the merger with a width of Δt_min provides a natural ending condition for inspiral-only approximants, e.g., NRTidal [14,58,64] or tidal effective-one-body models [65-67]. Therefore, Eq. (2) might become a central criterion for connecting inspiral and postmerger models. First postmerger maximum: After the minimum of the GW amplitude, the amplitude grows and reaches a maximum, marked with a red diamond in Fig. 1. One finds that the main binary property determining the amplitude of this first postmerger GW amplitude maximum is the mass ratio q of the binary, cf. Fig. 3, with the amplitude well represented by the ratio

[1 - (5.3149 × 10^-1) q] / [1 - (2.3420 × 10^-1) q]. (3)

The qualitative behavior is again related to the possible tidal disruption of the binary close to the merger for unequal-mass systems. We note that even if the secondary star does not get disrupted, the maximum density in the remnant shows one peak rather than two independent cores [68,69], which leads to a smaller first postmerger peak and overall, on average, a smaller GW amplitude.
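As a quick numerical illustration of Eq. (1), here is a minimal sketch computing Λ̃ from the component masses and the individual deformabilities; the input values are arbitrary placeholders, not taken from the CoRe dataset.

```python
def lambda_tilde(m_a: float, m_b: float, lam_a: float, lam_b: float) -> float:
    """Mass-weighted tidal deformability, Eq. (1); masses in solar masses."""
    num = (m_a + 12.0 * m_b) * m_a**4 * lam_a + (m_b + 12.0 * m_a) * m_b**4 * lam_b
    return (16.0 / 13.0) * num / (m_a + m_b) ** 5

# Arbitrary example: a mildly unequal-mass binary with q = M_A/M_B >= 1.
# For equal masses and equal deformabilities the formula reduces to Lambda itself.
print(lambda_tilde(1.5, 1.3, 290.0, 590.0))
```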
B. Frequency domain

We obtain the frequency domain waveform by a fast Fourier transformation (FFT) of the time domain GW strain h(t). As before, we consider only the dominant 22-mode of the GW signal. In Fig. 4 we show the frequency domain GW signal (blue solid line) of THC:0001. The merger frequency of this particular configuration is 1638 Hz, and it is marked with a red circle. The main feature of the postmerger spectrum is the dominant peak characterizing the main emission frequency f_2, which for the setup shown in Figs. 1 and 4 is about 2354 Hz. For a better interpretation, we also present the frequency domain postmerger spectrum in green. Such postmerger-only waveforms are obtained by an FFT after applying a Tukey window [70] with a shape parameter of 0.05 at t_min (where the shape parameter represents the fraction of the window inside the cosine-tapered region), and they will be used for our injections to test our parameter estimation infrastructure.
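To make the windowing step concrete, here is a minimal sketch of extracting a postmerger-only amplitude spectrum from a sampled strain series. The strain here is a synthetic damped sinusoid standing in for an NR waveform, and t_min is set by hand; both are assumptions for illustration.

```python
import numpy as np
from scipy.signal.windows import tukey

fs = 16384.0                           # sampling rate in Hz
t = np.arange(0, 0.05, 1.0 / fs)       # 50 ms of data
t_min = 0.002                          # stand-in for the postmerger amplitude minimum

# Synthetic stand-in strain: damped sinusoid at a hypothetical f2 ~ 2354 Hz.
h = 1e-22 * np.exp(-t / 0.01) * np.sin(2 * np.pi * 2354.0 * t)

# Keep only the data after t_min and taper with a Tukey window (shape 0.05).
post = h[t >= t_min]
post *= tukey(post.size, alpha=0.05)

freqs = np.fft.rfftfreq(post.size, d=1.0 / fs)
spec = np.abs(np.fft.rfft(post)) / fs  # discrete approximation of |h~(f)|
print(freqs[np.argmax(spec)])          # peak recovered near the injected 2354 Hz
```

The recovered peak sits at the injected frequency to within the frequency resolution of the windowed segment.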
f_2-frequency: The dominant feature in the postmerger frequency spectrum is the dominant emission mode of the merger remnant at a frequency f_2. As mentioned before, a number of works, e.g., [18-22], have discussed possible EOS-insensitive, quasi-universal relations for the f_2-frequency. Building mostly on the work of Bernuzzi et al. [22], we derive a new relation for the f_2-frequency. First, we extend the dataset of 99 NR simulations employed in Ref. [22] and use a set of 121 data by incorporating additional setups published as part of the CoRe database [59]; cf. Tab. I. Second, we switch the fitting variable, which yields a tiny improvement (∼0.1%) in the RMS error against the NR data but, more notably, relates directly to the mass-weighted tidal deformability Λ̃ measured most accurately from the inspiral part of the signal. In addition to the dependence on the mass-weighted tidal deformability Λ̃, the postmerger evolution also depends on the stability of the formed remnant and how close it is to black hole formation. This information is in part encoded in the ratio between the total mass M and the maximum allowed mass of a single non-rotating NS, M_TOV. By incorporating an additional M/M_TOV dependence, we are able to reduce the RMS error by ≈28%. Therefore, we define a parameter ζ by a linear combination of κ^T_eff and M/M_TOV (see also [71-73] for a similar approach),

ζ = κ^T_eff + a (M/M_TOV), (7)

where the free parameter a = -131.7010 is determined by minimizing the RMS error. Finally, the dimensionless frequency M f_2 is fitted against ζ using a Padé approximant, Eq. (8), with α = 3.4285 × 10^-2, A = 2.0796 × 10^-3 and B = 3.9588 × 10^-3. We present Eq. (8) together with our NR dataset and a one-sigma uncertainty region (shaded area) in Fig. 5. Frequency domain amplitude of f_2: Finally, we want to briefly discuss the dependence of the f_2-peak amplitude on the binary properties. While the f_2-frequency correlates clearly with ζ, we have not been able to find a similarly tight relation between any combination of the binary parameters and the amplitude |h̃_22(f_2)|. The only noticeable imprint which we have been able to extract comes from the mass ratio q: generally, higher mass ratios lead to a smaller amplitude |r h̃_22(f_2)/M|, as shown in Fig. 6 and Eq. (9). We note that, because of the large uncertainty, we see Eq. (9) more as a qualitative rather than a quantitative statement about the postmerger spectrum. However, the decrease of the overall amplitude with increasing mass ratio seems to be a robust feature and might help to interpret future GW observations.

III. MODEL FUNCTIONS, f2 MEASUREMENT, AND INSPIRAL-POSTMERGER CONSISTENCY

A. Lorentzian Approximants

Based on our previous discussion and the dominance of the characteristic f_2 frequency, we start our consideration with a simple damped sinusoidal time domain waveform to model the postmerger waveform. The Fourier transform of a damped sinusoid is a Lorentzian function, Eq. (10). In the simplest case which we consider, we use 3 unknown coefficients (c_0, c_1, c_2), corresponding to the amplitude, the dominant emission frequency and the inverse of the damping time, respectively, and write the frequency-domain signal as Eq. (10). Equation (10) suggests that the amplitude peak of the GW postmerger spectrum and also the main postmerger phase evolution are connected to the same frequency, characterized by c_1. Maximizing over (c_0, c_1, c_2), we compute the mismatches between the NR data used from the CoRe database (Tab. I) and the model function, Eq. (10). Figure 7 shows all mismatches, which on average are ∼0.18. The mismatches can be further decreased by adding three additional coefficients, Eq. (11). For Eq. (11) the amplitude and phase evolutions are independent of each other, and we obtain average mismatches of 0.15, i.e., about 17% better than for the three-parameter model. While one might argue that the additionally introduced degrees of freedom hinder the extraction of individual parameters in a full Bayesian analysis, it is also possible that the more flexible six-parameter model recovers signals with smaller SNRs. Thus, we continue our study with both model functions, Eqs. (10) and (11). Finally, one obtains the plus and cross polarizations from Eq. (10) and Eq. (11) by incorporating the inclination (ι) dependence; h_p and h_c can be employed directly to infer information from the postmerger part of a GW signal or to construct a full IMP waveform for BNSs.
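Since the explicit forms of Eqs. (10) and (11) are not reproduced above, the following sketch fits a generic three-parameter Lorentzian amplitude profile, c_0/[1 + ((f - c_1)/c_2)^2], to a synthetic spectrum; this particular profile is an assumed stand-in for the paper's Eq. (10), chosen only to illustrate the peak-extraction idea.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, c0, c1, c2):
    """Generic Lorentzian amplitude: peak c0 at frequency c1, half-width c2."""
    return c0 / (1.0 + ((f - c1) / c2) ** 2)

rng = np.random.default_rng(2)
freqs = np.linspace(1500.0, 4096.0, 600)
# Synthetic 'measured' spectrum: Lorentzian peak at 2354 Hz plus noise.
spec = lorentzian(freqs, 1e-24, 2354.0, 120.0) + rng.normal(0, 2e-26, freqs.size)

p0 = [spec.max(), freqs[np.argmax(spec)], 100.0]   # crude starting guess
popt, _ = curve_fit(lorentzian, freqs, spec, p0=p0)
print(f"recovered f2 ~ {popt[1]:.0f} Hz, width ~ {abs(popt[2]):.0f} Hz")
```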
B. Validating the Parameter Estimation Pipeline

In this section, we present for four selected cases the performance of the three- and six-parameter models. We inject the NR waveforms immersed in the same simulated Gaussian noise with total network SNRs ranging from 0 to 10, assuming that the Advanced LIGO and Advanced Virgo detectors run at design sensitivity [8] (the corresponding power spectral density (PSD) files, LIGO-P1200087-v18-aLIGO_DESIGN.txt and LIGO-P1200087-v18-AdV_DESIGN.txt, are available under the LALSimulation module in the LALSuite package). Fig. 8 shows the injection of THC:0021 with SNR 8 in both the time (top panel) and frequency (bottom panel) domains. For each injected waveform, a Tukey window with shape parameter 0.05 is applied at t_min to isolate the postmerger signal and avoid the Gibbs phenomenon. All simulated signals are injected with zero inclination angle, zero polarization angle ψ, and sky location (α, δ) = (0, 0). We estimate parameters using Bayesian inference with the LALInference module [53] available in the LALSuite package. Sampling is done on 9 (12) parameters with the nested sampling algorithm lalinferencenest [74]: the model coefficients c_i, where i runs from 0 to 2 (5) for the three- (six-) parameter model, the sky location (α, δ), the polarization angle ψ, the inclination ι, and the reference time and phase, t_c and φ_c. The priors are chosen to be uniform in [0, 10^-20] s^-1 on c_0, uniform in [1500, 4096] Hz on c_1 and c_5, uniform in [1, 400] Hz on c_2 and c_4, uniform in [0, 6] on c_3, uniform in [0, 2π] on α, ψ and φ_c, uniform in [-1, 1] on cos(ι) and sin(δ), and uniform in [trigger time - 0.05 s, trigger time + 0.05 s] on t_c, where the trigger time is the signal arrival time at the geocentric frame. Fig. 9 shows the posterior for c_1, i.e., our best estimate of the f_2-frequency, for SNRs up to 8 for our four examples, which we mark in Tab. I. We present the recovery with the three- and the six-parameter model in the top and bottom panels, respectively. The solid vertical line represents the injected f_2-frequency and the dashed line represents the estimate according to the quasi-universal relation, Eq. (8), together with a one-sigma uncertainty (gray shaded region).

FIG. 9. Posteriors for the parameter c_1 in Eq. (10) (top panels) and Eq. (11) (bottom panels) for a variety of SNRs. c_1 can be directly related to the peak in the frequency domain spectrum and therefore relates to the f_2 frequency. The IDs of the four injected waveforms are #18, 20, 32, 34, i.e., THC:0021, THC:0031, BAM:0048, BAM:0057 of [59]. The chosen set covers various EOSs, mass ratios and masses and is therefore used as a testbed for our new algorithm.

FIG. 10. Posteriors for the parameters c_1 and c_5 in Eq. (11) for two SNRs. c_5 peaks at a frequency close to f_2 but is significantly less constrained than c_1.

We summarize the main findings below: (i) The three-parameter and six-parameter approximants perform similarly. (ii) Depending on the exact setting (e.g., intrinsic source properties, noise realization, sky location), one can recover the f_2 frequency with an SNR of ∼4 in the best and ∼8 in the worst considered scenarios. (iii) Interestingly, one finds that c_5 also relates to a frequency which is close to the f_2 frequency; however, c_5 is significantly less constrained than c_1 (Fig. 10). (iv) Once third-generation detectors are available and postmerger SNRs of ∼10 are obtained, the systematic uncertainties of the quasi-universal relations become larger than the statistical uncertainties; cf. the dashed and solid vertical black lines.

C. Inspiral and Postmerger consistency

Finally, we want to illustrate how a future detection of a postmerger GW signal will help to constrain the source properties and the internal composition of NSs. As shown before, the f_2-frequency can be extracted through a simple waveform model (or, alternatively, by using BayesWave, e.g., [36]). To connect the f_2-frequency with the source parameters, one needs to employ quasi-universal relations as presented in Sec. II and some information obtained from the analysis of the inspiral GW signal. In particular, the total mass M can be measured precisely using state-of-the-art BNS inspiral waveforms, e.g., [10,14,15,77]. For GW170817 the uncertainty on M could be reduced to ±0.04 M☉ once EM information had been included [71,78]; thus, we will use an uncertainty of ΔM = ±0.04 M☉ as a conservative estimate. In addition, we have to know the maximum TOV mass, M_TOV. Current estimates for M_TOV are based on the observation of J0740+6620 [79], with M = 2.17 (+0.11/-0.10) M☉, and on the assumption that GW170817's endstate was a black hole [80-83], such that M_TOV ≲ 2.17-2.35 M☉. Due to the increasing number of BNS detections in the future, we expect that the uncertainty on M_TOV can be considerably reduced, so we will use an uncertainty of ±0.04 M☉. From this information, we can compute the ζ-interval consistent with the observed inspiral signal from the Λ̃ posteriors; cf. the vertical red shaded region in Fig. 11. We then connect the ζ estimate obtained from the inspiral with Eq. (8) and the f_2-measurement of the postmerger signal. This consistency analysis is somewhat analogous to the inspiral-merger-ringdown consistency test for binary black holes [84], but one has to assume not only the correctness of general relativity but also that our understanding of supranuclear matter and the EOS-insensitive quasi-universal relations are valid.
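To illustrate the consistency test numerically, the sketch below maps a measured f_2 value to ζ by inverting a quasi-universal relation. Because the exact Padé form of Eq. (8) is not reproduced above, the fit function used here is a hypothetical Padé-type stand-in built from the quoted coefficients, and the measured f_2 and total mass are invented; the snippet demonstrates the inversion technique, not the paper's actual numbers.

```python
import numpy as np
from scipy.optimize import brentq

MSUN_S = 4.925490947e-6          # solar mass in seconds (G = c = 1)
ALPHA, A, B = 3.4285e-2, 2.0796e-3, 3.9588e-3

def mf2_of_zeta(zeta):
    """Hypothetical Padé-type stand-in for Eq. (8): M f2 as a function of zeta."""
    return ALPHA * (1.0 + A * zeta) / (1.0 + B * zeta)

def zeta_from_f2(f2_hz, m_total_msun):
    """Numerically invert the relation for a measured f2 and total mass."""
    target = m_total_msun * MSUN_S * f2_hz            # dimensionless M f2
    return brentq(lambda z: mf2_of_zeta(z) - target, -250.0, 250.0)

# Invented example: f2 = 2354 +/- 100 Hz and M = 2.7 solar masses.
for f2 in (2254.0, 2354.0, 2454.0):
    print(f"f2 = {f2:.0f} Hz  ->  zeta = {zeta_from_f2(f2, 2.7):.1f}")
```

Mapping the lower and upper ends of the f_2 posterior in this way yields the ζ-interval that can then be compared against the inspiral-based estimate.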
Figure 11 summarizes our main results. Generally, the GW measurements can be considered consistent between the inspiral and postmerger observations, as well as with the quasi-universal relation relating the main postmerger frequency to the binary properties. We find that for all cases, the quasi-universal relation and its 1-sigma uncertainty region lie within the intersection of the red shaded region (inspiral) and the blue/orange/green horizontal regions (postmerger). Thus, (as expected) all simulations are consistent with (i) general relativity and (ii) the nuclear physics descriptions used as a basis for the NR simulations from which the quasi-universal relations are derived. For future events this approach will allow us to probe our understanding of physical processes under extreme conditions and, in cases where the quasi-universal relation seems violated, even to derive new relations based on GW measurements (under the assumption that general relativity is correct). We note that even in the case where either general relativity or the quasi-universal relations were violated, we might not be able to determine the violation reliably based on one individual event, but stronger constraints can be obtained by combining multiple BNS events.

IV. SUMMARY

In this work, we discussed the general morphology of a BNS postmerger in both the time and frequency domains. We presented quasi-universal relations for the time at which the first postmerger amplitude minimum occurs and for the strength of the first postmerger amplitude maximum. In general, the time between the merger and the amplitude minimum increases with increasing Λ̃, while the amplitude of the first postmerger maximum decreases with increasing mass ratio. In the frequency domain, we improved the existing quasi-universal relations for M f_2 by extending the employed NR dataset (121 simulations in total) and adding an extra dependence on M/M_TOV. The extra term M/M_TOV characterizes how close the setup is to black hole formation. We find that a three- (six-) parameter Lorentzian can model the postmerger waveform with an average mismatch of 0.18 (0.15). To test these model functions, we performed an injection study in which we simulated the detector strains for four different BNS configurations immersed in the same simulated Gaussian noise, assuming Advanced LIGO and Advanced Virgo at design sensitivities. We find that in the best cases the Lorentzian models could measure the dominant emission frequency f_2 once the signal has an SNR of 4 or above; however, for most scenarios higher SNRs of ∼8 were required. Employing the new quasi-universal relation for M f_2 described in this work, we presented consistency tests between the inspiral and the postmerger signal; cf. Fig. 11.

FIG. 3. A scatter plot of the first peak in the postmerger spectra after merger versus q. The color shows the mass-weighted tidal deformability Λ̃. The shaded region indicates the 1-sigma (±1.7915 × 10^-2) uncertainty.

FIG. 5. M f_2 as a function of ζ. The color shows the mass-weighted tidal deformability Λ̃. The shaded region indicates the 1-sigma (±1.1025 × 10^-3) uncertainty. In addition to the CoRe dataset employed to derive the previously shown quasi-universal relations, we include here the published results of [31,33].

FIG. 7. Mismatch between a subset of the NR data listed in Tab. I and the three- and six-parameter models.
FIG. 11. Schematic plot showing how one can constrain ζ from f_2 measurements, where the spread in f_2 is given by one standard deviation. The top panel is for the three-parameter model and the bottom panel for the six-parameter model. The vertical red shaded region corresponds to the ζ-interval consistent with a hypothetical inspiral signal assuming an uncertainty of ±0.04 M☉ for M and M_TOV, and ±30 for κ^T_eff. The exact value is marked as a vertical red-dashed line.
6,161.4
2019-07-04T00:00:00.000
[ "Physics" ]
Supply Chain Finance Analysis

Supply Chain Finance, currently one of the most viable and plausible financing instruments, is not a new conceptual framework. It has been widely noted and acclaimed as an essential aspect of supply chain management and trade finance. The global economic crises have necessitated the urgent consideration and eventual adoption of Supply Chain Finance (SCF). SCF is noted for its capability to coordinate trade partners and procedures in order to increase trade transparency, to support the shift from paper-laden business documentation to a sophisticated, automated process of concise, detailed information exchange, and to drive the ultimate dematerialization of the entire supply chain process. The processes involved in a comprehensive SCF scheme also reduce trading costs and the risks shouldered by all the parties involved. This study attempts to delineate the diverse, not always obvious, trading efficiencies and value enhancements enjoyed by the users of SCF, and the substantial improvement in working capital accessibility and utilization afforded by SCF across the entire supply chain process. It also seeks to highlight some of the major challenges of the SCF system and to propose innovative, workable solutions to streamline SCF mechanisms and strengthen their capability to eradicate the trade finance problems faced by various trade partners and companies. SCF is an innovative method of leveraging working capital accessibility and substantially enhancing the credit ratings and value of the companies using it to optimize trade efficiency, predictability and, ultimately, profitability.

INTRODUCTION

It has been recognized that trade credit granted by trade partners (suppliers) is a very essential catalyst and source of trade finance. Existing findings indicate that about 80% of business transactions carried out in the UK rely on trade credit (Summers and Wilson, 2002). Even buyers with strong credit ratings prefer trade credit to bank loans, as this improves their net working capital (Petersen and Rajan, 1997). Supply Chain Finance (SCF), an emergent but viable trade finance instrument now available and accessible to companies working with the banking industry, is geared towards diversifying funding sources and offers a credible means of eliminating the financial and credit constraints faced by all the parties involved in a supply chain, thereby substantially enhancing the financial and operational efficiency of the entire chain. Today, trade credit, i.e., the credit that a seller grants to its buyer for the purchase of goods and services, is common in both developed economies and less developed financial markets (Steele, 2006). Existing evidence shows that trade credit is a more effective screening device than bank credit, since it helps to measure the creditworthiness of all the parties at the receiving end of the trade continuum. The information advantage theory asserts that the manufacturer must gather useful information about a potential buyer in the course of business in order to ascertain the buyer's credit rating and worthiness.
However, financial institutions must equally reduce the substantial barriers to hands-on operational and financial information; in an SCF approach, this is made possible by a sophisticated, highly predictable automated system giving prompt and reliable feedback on trade transactions. Ferris (1981) also asserts that trade credit may control the transaction costs between trading partners, thereby enhancing trade efficiency and effectiveness. Compared to traditional supply chain management, SCF is less expensive and less resource-intensive. Due to this window of opportunity, today's buyers mostly prefer to use SCF to extend their payment terms conveniently and thus access trade credit and improve their working capital (Rafik and Jamal, 2005), since SCF provides the maneuverability needed to access funds. With SCF, a supplier delivers to a buyer and provides trade credit by allowing payment at a mutually agreed date. This ensures trade satisfaction among the various parties in an efficient SCF scheme, thereby sustaining trade continuity and profitability in the long run.

LITERATURE REVIEW

For well over a decade now, competition in almost all industries has intensified and become virtually global in the wake of globalization and internet technologies. Therefore, to remain vibrant and relevant, and to benefit from the advantages and opportunities currently available, many companies have been pushed to search for and lead efficiency-driven innovative solutions, and to streamline and strengthen their organizational systems by ensuring that their operational and financial activities are cost-effective and underpinned by efficiency. In the wake of such needs and developments, SCF emerges as a viable paradigm capable of meeting the major trade requirements, helping to bring about trade standardization and satisfaction among the respective trade partners. Now, the emphasis is on harnessing SCF solutions and monetizing them. Essentially, new innovative attempts are positioned to trigger and sustain major trade improvements, making both financial and operational flows in the entire supply chain faster, more reliable, more predictable and more cost-efficient (Herring, 2011). Cost and financial optimization have heightened the urgency, in every facet of business, to generate and sustain a competitive edge. Since global competition is keener than ever before with the virtual removal of trade barriers, companies must outperform their peers, and innovations in working capital optimization are now vital within a company's entire supply chain. The existing literature indicates that traditional trade finance solutions are expensive and that, mostly, just one party in the supply chain continuum benefits, to the detriment and disadvantage of the other partners involved.
The SCF approach has become more popular and plausible in part due to the opportunity it provides to finance the exchange of goods and services as they move from their original manufacturer to their final destination along a comprehensive, highly transparent and predictable scheme. Working capital is demonstrably one of the most important indicators of efficiency in a supply chain (Farris and Hutchison, 2003). To optimize working capital, the cash-to-cash cycle advanced by Richards and Laughlin (1980) is a good and credible index (a simple calculation of this index is sketched at the end of this section). However, many companies need significant amounts of working capital to streamline their productive endeavors and to deal effectively with the unstable and, to a notable extent, unpredictable financial inflows and outflows arising from their interactions with other key players in the supply chain, and this often leads them to hold more working capital than necessary (Hofmann and Kotzab, 2010). Traditionally, several single-company-oriented methods have been rolled out with the cardinal objective of ensuring an optimum level of working capital (Farris and Hutchison, 2003). These traditional solutions have been inadequate, since they are almost invariably one-sided in how they distribute advantages: the buyer's attempt to defer or postpone payment works against the seller's interest in increasing the rate of cash collection, leading to unnecessary distortions in buyer-seller interactions and creating a lopsided discord between the interested parties as they work against each other's interests in the supply chain continuum. Therefore, there is a need to understand the workings of the bigger supply chain picture and to coordinate effectively with supply chain partners, in tandem with their respective interests, to yield the maximum trade benefits and returns for all the parties involved. The optimal way to lessen and ultimately eliminate capital-access bottlenecks and exposure between buyers and sellers in the supply chain is to ensure proper coordination and cooperation between the parties involved. According to the existing literature, when the burden of the supply chain is unduly shifted and skewed towards one party in the process, it can lead to unnecessary risks to the supply chain, such as the possibility of losing customers and ruining business relationships and continuity. SCF provides an unsurpassed opportunity for both buyers and sellers to collaborate, create benefits for each side of the transaction and improve working capital (PricewaterhouseCoopers, 2009). The term 'supply chain' denotes a network of partners that supplies raw materials, assembles and manufactures products, and eventually distributes them via single or multiple distribution channels to the end customer/user.
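As referenced above, here is a minimal sketch of the cash-to-cash cycle index in the spirit of Richards and Laughlin (1980). The component definitions (DIO, DSO, DPO) are the standard textbook ones, and the input figures are invented purely for illustration.

```python
def cash_to_cash_days(inventory, cogs, receivables, revenue, payables,
                      period_days=365):
    """Cash-to-cash cycle = DIO + DSO - DPO (all in days)."""
    dio = inventory / cogs * period_days        # days inventory outstanding
    dso = receivables / revenue * period_days   # days sales outstanding
    dpo = payables / cogs * period_days         # days payables outstanding
    return dio + dso - dpo

# Invented example: extending payment terms (higher payables) shortens the cycle.
before = cash_to_cash_days(inventory=2.0e6, cogs=12.0e6, receivables=1.5e6,
                           revenue=15.0e6, payables=1.0e6)
after = cash_to_cash_days(inventory=2.0e6, cogs=12.0e6, receivables=1.5e6,
                          revenue=15.0e6, payables=2.0e6)
print(f"C2C before SCF: {before:.0f} days, after extended terms: {after:.0f} days")
```

Extending payment terms raises DPO and therefore shortens the buyer's cycle, which is precisely the lever that SCF formalizes while the financing partner keeps the supplier paid on time.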
Even buyers with strong credit ratings prefer trade credit to bank loans, as this improves their net working capital (Petersen, Rajan, 1997). Supply Chain Finance (SCF), an emergent but viable trade finance instrument now available and accessible to companies working with the banking industry, is geared towards the diversification of funding sources and offers a credible means of easing the financial and credit constraints faced by all the parties involved in a supply chain, thereby substantially enhancing the financial and operational efficiency of the entire chain. Today, trade credit, i.e., the credit that a seller grants to its buyer for the purchase of goods and services, is common in both developed economies and less developed financial markets (Steele, 2006). Existing evidence shows that trade credit is a more effective screening device than bank credit, since it helps to gauge the creditworthiness of the parties at the receiving end of the trade continuum. The information advantage theory asserts that the manufacturer gathers useful information about a potential buyer in the course of business in order to ascertain its credit rating and worthiness. Financial institutions, however, must also reduce the substantial barriers to hands-on operational and financial information; this is made possible via a sophisticated but highly predictable automated system giving prompt and reliable feedback on trade transactions within a Supply Chain Finance approach to operations and financing. Ferris (1981) also asserts that trade credit can reduce the transaction costs between trading partners, thereby enhancing trade efficiency and effectiveness. Compared to traditional supply chain management, SCF is less expensive and does not require additional resources. Because of this window of opportunity, today's buyers mostly prefer to use SCF to extend their payment terms conveniently and thus access trade credit and improve their working capital (Rafik, Jamal, 2005), since SCF provides the manoeuvrability needed to access funds. With SCF, a supplier delivers to a buyer and provides trade credit by allowing payment at a mutually agreed date. This ensures trade satisfaction among the various parties in the supply chain within a highly efficient Supply Chain Finance scheme, thereby sustaining trade continuity and profitability in the long run. LITERATURE REVIEW For well over a decade now, competition in almost all industries has intensified and become virtually global in the wake of globalization and cyberspace technologies. Therefore, to remain vibrant and relevant, and to benefit from the ensuing advantages and opportunities, many companies have been compelled to search for and lead innovative, efficiency-driven solutions, and to streamline and strengthen their organizational systems by ensuring that their operational and financial activities are highly cost effective and underpinned by efficiency. Against this backdrop, SCF suffices as a viable paradigm that can meet the major trade requirements, bringing standardization and satisfaction to the various trade partners.
Now, the emphasis is on harnessing SCF solutions and making them profitable. Essentially, new innovative attempts are positioned to trigger and sustain major trade improvements, making both financial and operational flows in the entire supply chain faster, more reliable, more predictable and more cost efficient (Herring, 2011). Cost and financial optimization have heightened the urgency, on every facet of business, to generate and sustain competitive advantage. Since global competition is keener than ever before with the virtual removal of all trade barriers, companies must outperform their peers, and innovative working capital optimization solutions are now vital within a company's entire supply chain. The existing literature indicates that traditional trade finance solutions are expensive and that, mostly, just one party in the supply chain continuum benefits, to the detriment and disadvantage of the other partners involved. The SCF approach has become more popular and plausible in part because of the opportunity it offers to exchange goods and services as they move from their original manufacturer to their final destination along a comprehensive, transparent and predictable SCF scheme. Working capital is seen as one of the most important indicators of efficiency in a supply chain (Farris, Hutchison, 2003). To optimize working capital, the cash-to-cash cycle advanced by Richards and Laughlin (1980) is a good and credible index. However, many companies need significant amounts of working capital to streamline their productive endeavours and to deal effectively with the unstable and, to a notable extent, unpredictable financial inflows and outflows arising from their interactions with other key players in the supply chain finance process, and this often requires more working capital than necessary (Hofmann, Kotzab, 2010). In an attempt to improve working capital, several single-company-oriented methods have traditionally been rolled out with the cardinal objective of ensuring an optimum level of working capital (Farris, Hutchison, 2003). The traditional solutions prescribed for improving working capital have been inadequate, since they are almost invariably one-sided in how they distribute advantages: the buyer's attempt to defer payment or the seller's push to accelerate cash collection is, to that effect, somewhat inhibited, leading to unnecessary distortions in buyer-seller interactions and a lopsided discord between the interested parties as they work against each other's interests in the supply chain continuum. Therefore, there is a need to understand the workings of the bigger supply chain picture and to coordinate effectively with supply chain partners, in tandem with their respective interests, to yield the maximum trade benefits and returns for all the parties involved. The surest way to lessen and ultimately eliminate capital-access bottlenecks and exposure between buyers and sellers in the supply chain is to ensure proper coordination and cooperation between the parties involved.
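For reference, the cash-to-cash (cash conversion) cycle mentioned above is conventionally computed from three day-count measures; the expression below is the standard textbook formulation rather than a direct quotation of Richards and Laughlin (1980): C2C = DIO + DSO - DPO, where DIO is days inventory outstanding, DSO is days sales outstanding (receivables) and DPO is days payables outstanding. A shorter cycle means less working capital is locked up between paying suppliers and collecting from customers, which is precisely the lever that SCF programmes aim to pull.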
According to the existing relevant literature, when the burden of the supply chain is unduly shifted and skewed towards one party in the process, it can lead to unnecessary risks to the supply chain, such as the possibility of losing customers and ruining business relationships and continuity. SCF provides an unsurpassed opportunity for both buyers and sellers to collaborate, create benefits for each side of the transaction and improve working capital (PricewaterhouseCoopers, 2009). The term 'supply chain' denotes a network of partners that supplies raw materials, assembles and manufactures products, and eventually distributes them via single or multiple distribution channels to the end customer/user. Along this supply chain there are three essential, parallel flows: goods and services, information and finance (Lambert and Pohlen, 2001). The flow of goods and services concerns the services or products that move through the supply chain network between suppliers and buyers. Information flows involve the availability of vital information on the delivery of products and services as well as on payment flows within the chain. Until recently, information and financial flows were treated separately. However, with the innovative changes in, and subsequent comprehensive automation of, SCF, payment solutions are now available which provide detailed transaction information specifying the date and time of receipt of payments, the quantity of goods received and the like. This gives prompt feedback on business transactions, enabling effective and efficient exchange of vital transaction information and aiding quick, informed decision making across the supply chain. Last but not least, the financial flows in SCF handle the multiple invoices and payments between the various market players in an efficient and predictable manner. In an efficient, comprehensive SCF system these three elements are intertwined and coexist, each playing a vital role in supporting the others so that maximum trade benefit accrues to all parties involved. The existing literature establishes that at the centre of SCF is the management of working capital and financial flows, but equally important is the effective management of the relevant information across the supply chain, together with the documents and data that support these flows and the payment approval processes (UN, 2015). Viable SCF solutions tend to possess certain essential features, such as the elimination of paperwork leading to the eventual automation of all trade transactions, and the transparency and predictability inherent in a comprehensive automated process, which makes a wealth of important information readily available and enables both internal and external trade partners to exchange data on their transactions. This consistent outflow and inflow of relevant business transaction information leads to risk reduction and prompt awareness of business transactions for the various participants in the supply chain process.
Definition of SCF Many have opined that SCF is about managing capital, whereas for others it is simply the flow of cash between corporations along the supply chain, either in the form of a payment between a vendor and a buyer or in the form of finance. The latter can come from a bank or financial institution, or from a supply chain partner willing to lend in the form of an early or extended payment (Erik Hofmann, Oliver Belin, 2001). Others have put forward that this definition is too limited in scope, since it fails to encompass the exchange of assets and liabilities within the entire supply chain. It is well documented in the available literature that SCF is the reverse flow to the flow of goods within the field of supply chain management (Bansod and Borade, 2007). By extension, SCF includes the flow of information along the production and distribution process. SCF solutions are therefore noted for certain enhancing features: with the advent of new innovative technologies the entire supply chain process can be comprehensively automated, which enables prompt mutual feedback on business transactions between trade partners, supports quick but informed decision making, and provides transparency through the automation of the entire process, facilitating the flow of ongoing business transaction information. This enables the efficient flow and exchange of transaction data across the entire supply chain. With such visibility and accessibility of transaction events in the supply chain process, risk is greatly reduced as all parties involved seek to satisfy their respective but mutually important interests. There is also a high degree of predictability owing to the new technologies that have led to the thorough automation of the supply chain process. Moreover, there is a strong emphasis on cooperation and collaboration among the various market partners, ensuring trust-based, mutually informing and rewarding relationships among the participants in the supply chain and its managerial and financial processes, to yield the maximum trade benefits for all the parties involved. Supply Chain Finance on a diagram There are several observed challenges within the supply chain process, such as slow processing of business transactions (since it takes time for the parties involved to make decisions), unreliable and unpredictable financial flows, and expensive processes. However, when such challenges are mitigated through viable and innovative measures and procedures, SCF serves as a highly advantageous paradigm for leveraging the trade capital of many companies both tactically and strategically. But in order to alleviate these challenges exhaustively, there must be a diagnosis of their causes to put companies in a position to deal with them effectively. First, there is a significant delay in information accessibility due to the lack of informational detail attached to financial flows, in both manual and automated systems; this must be corrected using innovative measures and methods. Second, inadequate working capital is also a cause of delays and inefficiencies in invoice reconciliation. Inadequate informative detail is a challenge inherent in the supply chain process, since the references attached to payments are mostly insufficient.
The existing evidence shows that collection errors take an average of four weeks to rectify once they are identified. Also, most companies devote all their attention to the bigger transactional accounts to the detriment of those deemed insignificant, thereby losing substantial amounts in the process. Furthermore, most companies are plagued by an inadequate capacity to thoroughly and effectively assess, address and set appropriate parameters for the credit limits and creditworthiness of their customers, and most of them end up at the losing end of the chain process. PROSPECTS OF SUPPLY CHAIN FINANCE With supply chain finance as a viable working capital optimization paradigm, there is a corresponding need to address and eventually eliminate the major challenges inherent in the model. First and foremost, the globalization of the supply chain has triggered many forces aiding the viability of SCF. The rapid growth of global trade and business transactions, through the outsourcing of capital-intensive labour as well as the production and distribution of goods and services to global partners, has accelerated and is predicted to continue over the long haul. It has been observed that world trade in goods and services grew by 83%, from $2,093 billion in 1999 to $3,823 billion in 2008 (OECD, 2009). However, all these opportunities come with the increased complexities, risks and costs associated with a long-distance supply chain (Erik Hofmann, Oliver Belin, 2001). To minimize the risks involved, many companies have cut down capital exposure in the supply chain. Moreover, global competition is getting keener than ever as the various trade boundaries give way to a unified global marketplace. Because of this, there is also little differentiation among vendors and their products. This has increased demand for new ways of handling business transactions, such as extended payment terms for customers and reduced payment terms for suppliers, in order to increase market share (Jones, 2008). In addition, supply chain management with optimal operational efficiency and effectiveness places many companies in an advantageous position through better collaboration between the internal departmental units concerned and external trading partners (Mentzer, DeWitt, et al., 2001). Many companies have benefited greatly from supply chain management, and most are now turning their attention to the financial supply chain to attain similar benefits (Erik Hofmann, Oliver Belin, 2001). There has been a remarkable change in trade paradigms in recent years. Many have shifted from the traditional documentary letter of credit (L/C) to open account (O/A). According to estimates, the use of an L/C as a payment method in a supply chain has drastically declined to about 20%, leaving a margin of about 80% to O/A; currently, more than 80% of global trade is in the form of O/A (Erik Hofmann, Oliver Belin, 2001). Many buyers and sellers have come to appreciate the utility and benefits of O/A trade as the preferred payment method in their cross-border supply chains.
O/A is even earmarked to retain its lure, as it remains a viable way to streamline processes by eliminating the multiple parties involved in the flows while reducing the amount of documentation required in global trade transactions (Erik Hofmann, Oliver Belin, 2001). Furthermore, new viable and innovative technologies have made way for the penetration of SCF into the global market. With the complete automation and transparency afforded by new technological devices and systems, SCF is destined to entrench itself in the global market. All business-related information can be transmitted electronically and, moreover, digital signatures and identification have become increasingly standardized in global trade, which has facilitated SCF (Ali, 2016). Also, with the application of web-based instruments that lead to the successful implementation of SCF programs, transparency, speed and predictability are assured across the entire supply chain. CONCLUSIONS The SCF approach helps to improve the working capital required in supply chains. Even though certain inimical challenges are observed within the SCF approach, there are effective solutions to eradicate them, as discussed above. Therefore, with the various innovative drivers and enablers of SCF working effectively in the global market, companies must also be circumspect about their risk exposure in the manner in which they deal with the entire supply chain, whether in management or finance. Research Significance Supply Chain Finance, an emergent and currently one of the most viable and plausible financing instruments, is not a new conceptual framework. It has been widely noted and acclaimed as an essential aspect of supply chain management and trade finance. The global economic crises have necessitated the urgent consideration and eventual adoption of Supply Chain Finance (SCF). SCF is noted for its capability to foster collaboration and coordination among trade partners and procedures. The study would also inform policy makers and lead to improvements of service on the global supply chain network. This study brought together a wide range of research to present a newer view on Supply Chain Finance. It presented an innovative method of leveraging working capital accessibility and substantially enhancing the credit ratings and values of the various companies using the SCF system, in order to optimize trade efficiency, predictability and, ultimately, profitability. Finally, the study aims to generate interest among researchers and to engender further studies into Supply Chain Finance issues, using similar or other research designs in the study area, and eventually to contribute to the body of knowledge. Data Availability The data and other research documents (papers) used to support the findings of this study are included and cited in this article.
6,419
2018-12-26T00:00:00.000
[ "Business", "Economics" ]
Enantioselective reductive multicomponent coupling reactions between isatins and aldehydes A reductive coupling of two different carbonyls via a polar two-electron reaction mechanism was developed and the stereochemical outcome of this multicomponent process is precisely controlled by a chiral triaminoiminophosphorane. At the outset, we envisaged the possibility of catalytic generation of an α-oxycarbanion from a carbonyl substrate and its rapid and selective trapping with another carbonyl compound to form 1,2-diols. For substantiating this hypothesis, polarity reversal of a particular carbonyl group is of critical importance and we sought to take advantage of the phosphonate-phosphate (phospha-Brook) rearrangement to achieve this requisite process. Thus, a base-catalyzed sequence of Pudovik addition and phosphonate-phosphate rearrangement between ketone 1 and dialkyl phosphite was projected to lead to carbanion 2. The interception of this key intermediate by aldehyde 3 would afford mono-protected diol 4 through dialkoxyphosphinyl migration (Figure 1b).9 A crucial departure from prior art is the fully intermolecular nature of the coupling and the need for the phosphite to exhibit complete selectivity between the two carbonyl reactants. We reasoned that the crucial chemoselectivity issue underlying this mechanistic framework, viz. the selective generation of α-oxycarbanion 2 from ketone 1, would be ensured by the inherent reversibility of the Pudovik reaction and the reluctance of the aldehyde Pudovik product to undergo phospha-Brook rearrangement. In addition, absolute stereochemical guidance in the C-C bond-forming event could be provided by the conjugate acid of a suitable chiral base. In providing the conceptual blueprint for this scenario, we focused our attention on the exceptional electrophilicity and utility of α-dicarbonyls.9d-g,10 Steps were initially taken to assess the feasibility of the proposed reaction in a racemic sense using achiral bases such as potassium tert-butoxide (KOtBu). Initial trials with diethyl phosphite as the stoichiometric reductant indicated that the reaction proceeds most cleanly and efficiently when a protecting group is used on the isatin. Benzyl, allyl, and methyl protecting groups were examined using 20 mol% KOtBu in THF at 0 °C (Table 1, (±)-4a-(±)-4c). Under these conditions, the reactions were complete in minutes with no observable intermediates (if the aldehyde is omitted from the reaction, the Pudovik-phospha-Brook product can be observed, however).9f These experiments revealed that the benzyl protecting group provided the highest isolated yield and diastereoselectivity. We subsequently verified that para-tolualdehyde is not capable of phospha-Brook rearrangement when treated with diethyl phosphite and 20 mol% KOtBu: only the Pudovik adduct was observed, implying that it is the isatin that is undergoing polarity reversal as we expected. We then briefly studied the scope of the racemic reaction. The reaction gives consistently good yields for various aryl aldehydes incorporating substituents of different electronic properties (Table 1, (±)-4d-(±)-4g). At the current level of optimization, alkyl aldehydes and Boc-protected imine electrophiles were not well tolerated and only provided messy reactions.11 The substitution pattern of the isatin was also examined; we found that the racemic reaction is reasonably flexible in terms of isatin electronics ((±)-4h-(±)-4k). Efforts were next directed to the development of the enantioselective variant.
12 We were encouraged to nd that when we used the chiral iminophosphorane (C1), we obtained the secondary phosphate 4a with appreciable enantioenrichment (er 89.5 : 10.5), although the diastereoselectivity was poor (Table 2, entry 1). Gratifyingly, we found that upon lowering the temperature to À78 C, phosphate 4a was obtained in 82% yield, 15 : 1 diastereoselectivity and an er of 96.5 : 3.5 (entry 2). Using the same temperature, we proceeded to evaluate the effect of the catalyst structure (entries 3 to 6), but ultimately concluded that a-branching in ligand substituent R is essential for promoting the desired transformations and the valinederived iminophosphorane C1 was optimal in terms of stereoselectivity and chemical yield. The disparity between the stereoselectivities at 0 C and À78 C prompted us to investigate the reversibility of the carboncarbon bond formation via crossover experiments in that temperature range (Table 3). When racemic phosphate (AE)-4a was subjected to standard conditions in the presence of 4-uorobenzaldehyde, signicant incorporation of that component in the form of phosphate 4a-F was observed at 0 C and À40 C, but no crossover was observed at À78 C. These data support the hypothesis that the increase in enantioselectivity at À78 C is not only a consequence of more rigorous facial discrimination of both substrates but also shutting down a stereoablative retro-aldol process that is operative at higher temperatures. Using the optimized conditions, we evaluated the scope of the asymmetric reaction by initially looking at various isatins. Table 1 Three component reductive coupling: racemic a a All reactions were run on 0.2 mmol scale, using 1.1 equiv. of dialkylphosphite and 5.0 equiv. of aldehyde. % Yields refer to isolated yields. All d.r. and % yield values are the averages of two trials. Reactions were run until complete as adjudged by TLC. b % Yield determined by crude 1 H NMR using mesitylene as an internal standard. Products derived from apparent retro-reaction signicantly diminished the isolated yield; therefore, this substrate was not selected for further study. While electron-decient 5-halogenated isatins were well accommodated under the optimized conditions, use of dimethyl phosphite was indispensable for completion of the reactions with 5-methyl and methoxy isatins probably because of the slow phospha-Brook rearrangement ( Table 4, 4h-4m). 13 6-Chloro and 7-uoro isatins were also smoothly converted into the reductive coupling products of high stereochemical purity using appropriate phosphite (4n and 4o). The absolute stereochemistry was determined at this stage by an X-ray diffraction study of phosphate 4j (Fig. 2). 14 For exploration of aldehyde generality, we selected 5-bromo isatin as a coupling partner in consideration of its high reactivity and advantage of having an additional functional handle at the aromatic nuclei. As included in Table 4, various para- substituted aromatic aldehydes were tolerated and relatively electron rich aldehydes exhibited higher reactivity and selectivity (4p-4t). Hetero-substituents at the meta-position slightly affected the stereochemical outcome (4u-4w). For sterically demanding ortho-substituted aldehydes, dimethyl phosphite was needed to accelerate the reaction and virtually complete stereocontrol could be achieved (4x-4z). In summary, we have developed a highly stereoselective, fully organic multicomponent coupling reaction between isatins and aldehydes with dialkyl phosphite as an economical reductant. 
The advantages of extending the reductive coupling into a two-electron manifold are manifest, and the mechanistic framework established herein may be applicable to other stereoselective reductive carbon-carbon bond constructions. Efforts to exploit this reaction paradigm in other systems are ongoing in our laboratories. Fig. 2 ORTEP diagram of 4j (ellipsoids displayed at 50% probability. Calculated hydrogen atoms except for that attached to the stereogenic carbon atom are omitted for clarity. Black: carbon, red: oxygen, purple: phosphorus, blue: nitrogen, vermilion: bromine, white: hydrogen).
1,555.4
2015-07-30T00:00:00.000
[ "Chemistry" ]
Keeping Pathologists in the Loop and an Adaptive F1-Score Threshold Method for Mitosis Detection in Canine Perivascular Wall Tumours Simple Summary Performing a mitosis count (MC) is essential in grading canine Soft Tissue Sarcoma (cSTS) and canine Perivascular Wall Tumours (cPWTs), although it is subject to inter- and intra-observer variability. To enhance standardisation, an artificial intelligence mitosis detection approach was investigated. A two-step annotation process was utilised with a pre-trained Faster R-CNN model, refined through veterinary pathologists' reviews of false positives, and subsequently optimised using an F1-score thresholding method to maximise accuracy measures. The study achieved a best F1-score of 0.75, demonstrating competitiveness in the field of canine mitosis detection. Abstract Performing a mitosis count (MC) is a key diagnostic task in histologically grading canine Soft Tissue Sarcoma (cSTS). However, the mitosis count is subject to inter- and intra-observer variability. Deep learning models can offer standardisation of the MC process used to histologically grade canine Soft Tissue Sarcomas. Accordingly, the focus of this study was mitosis detection in canine Perivascular Wall Tumours (cPWTs). Generating mitosis annotations is a long and arduous process open to inter-observer variability. Therefore, by keeping pathologists in the loop, a two-step annotation process was performed in which a pre-trained Faster R-CNN model was trained on initial annotations provided by veterinary pathologists. The pathologists reviewed the output false positive mitosis candidates and determined whether these were overlooked candidates, thus updating the dataset. Faster R-CNN was then trained on this updated dataset. An optimal decision threshold, predetermined on the validation set to maximise the F1-score, was applied and produced our best F1-score of 0.75, which is competitive with the state of the art in the canine mitosis domain. Introduction Canine Soft Tissue Sarcoma (cSTS) is a heterogeneous group of mesenchymal neoplasms (tumours) that arise in connective tissue [1][2][3][4][5][6]. cSTS is more prevalent in middle-aged to older and medium to large-sized breeds, with the median reported age of diagnosis between 10 and 11 years old [3,[7][8][9][10]. The anatomical site of cSTS can vary considerably, but it is mostly found in the cutaneous and subcutaneous tissues [9]. In human Soft Tissue Sarcoma (STS), histological grade is an important prognostic factor and one of the most validated criteria to predict outcome following surgery in canines [10][11][12][13]. General treatment consists of surgically removing these cutaneous and subcutaneous sarcomas. Nevertheless, it is the higher-grade tumours that can be problematic, as their aggressiveness can reduce treatment options and result in a poorer prognosis. The focus of this study was on one common subtype found in dogs: canine Perivascular Wall Tumours (cPWTs). Canine Perivascular Wall Tumours (cPWTs) arise from vascular mural cells and are often recognisable from their vascular growth patterns [14,15].
The scoring for cSTS grading is broken down into three major criteria: the mitotic count, differentiation and the level of necrosis [9]. Mitosis counting can be exposed to high inter-observer variability [16], depending on the expertise of the pathologist; however, the counting of mitotic figures is considered the most objective factor in comparison to tumour necrosis and cellular differentiation when grading cSTS [16]. It is routine practice to investigate mitosis using 40× magnification; however, manual investigation at such high-powered fields (HPFs) is a laborious task that is prone to error, thus leading to the previously discussed inter-observer variability phenomenon. For the purposes of this study, the focus was on creating a mitosis detection model, as it is a significant criterion in the cSTS histological grading system [13], and the density of mitotic figures is also considered highly correlated with tumour proliferation [17]. Mitosis detection has been pursued in the computer vision domain since the 1980s [18]. Before 2010, relatively few studies aimed to automate mitosis detection [19][20][21]. However, since the MITOS 2012 challenge [22], there has been a resurgence of interest. Mitosis detection can often be considered an object detection problem [23]. Rather than categorising entire images as in image classification tasks, object detection algorithms identify object categories inside the image along with an axis-aligned bounding box, which in turn indicates the position and scale of each instance of the object category. In the case of mitosis detection, the considered objects are mitotic figures. As a result, several approaches have used object detection-related algorithms for mitosis detection. An example of an object detection algorithm is the regions-based convolutional neural network (R-CNN) [24]. At first, a selective search is performed on the input image to propose candidate regions, and then the CNN is used for feature extraction. These feature vectors are used for training in bounding box regression. There have been many developments on this type of architecture, such as Fast R-CNN [25] and Faster R-CNN [26], which is the primary object detection model used in this work. One set of authors detected mitosis using a variant of the Faster R-CNN (MITOS-RCNN), achieving an F-measure score of 0.955 [27]. Several challenges have been held in order to find novel and improved approaches for mitosis detection [17,22,23,28,29]. Some of these challenges and research on mitosis detection methods have also been conducted using tissue from the canine domain [30][31][32][33]. It was made apparent by the collaborating pathologists that AI approaches for grading tasks in cSTS were desirable, and so this study aims to tackle one criterion, which is to develop methods for mitosis detection in a subtype of cSTS: cPWT. To the best of our knowledge, this is the first work in the automated detection of mitoses in cPWTs.
Data Description and Annotation Process A set of canine Perivascular Wall Tumour (cPWT) slides were obtained from the Department of Microbiology, Immunology and Pathology, Colorado State University. A senior veterinary pathologist at the University of Surrey confirmed the grade of each case (patient) and chose a representative histological slide for each patient. These histological slides were digitised using a Hamamatsu NDP Nanozoomer 2.0 HT slide scanner. A digital Whole Slide Image (WSI) was created via scanning under 40× magnification (0.23 µm/pixel) with a scanning speed of approximately 150 s at 40× mode (15 mm × 15 mm). Veterinary pathologists independently annotated the WSIs for mitosis using the open-source Automated Slide Analysis Platform (ASAP) software (https://www.computationalpathologygroup.eu/software/asap/, accessed on 28 January 2024) [34]. The pathologists used different magnifications (ranging from 10× to 40×) to analyse the mitosis before creating mitosis annotations. These annotations were centroid coordinates, which were centered on the suspected mitotic candidate. Centroid coordinate annotations can be considered weak annotations, as they are simply coordinates placed in the centre of a mitotic figure and not fine-grained pixel-wise annotations around the mitosis. In order to categorise a mitotic figure, both pathologist annotators needed to form an agreement on the mitotic candidate. As these were centroid coordinates, an agreement was determined when two independent centroid annotations from each annotator were overlaid on one another. Any centroid annotations without agreement were dismissed from being considered as a mitotic figure. Table 1 shows the differences between the two annotators for both training and validation when counting mitotic figures in our cPWT dataset. Table 1. The differences between the two annotators in regard to mitosis annotations for the training/validation set. The "Slide" column represents the anonymised set of slides annotated. "Anno 1" and "Anno 2" show the number of mitoses annotated per slide for each annotator. "Agreement" represents the number of agreed mitoses between each annotator. The "% agreement" for each annotator represents the percentage of the agreed mitotic count against the respective annotator's mitotic count. "Avg" is the average of every WSI % agreement, which is computed for each annotator. For patch extraction, downsized binary image masks (by a factor of 32) were generated, depicting tissue from the biopsy samples against background slide glass. A tissue threshold of 0.75 was applied to 512 × 512 patches for final patch extraction. Therefore, if a patch contained less than 75% of any tissue, it was dismissed from the dataset. This was to ensure that the patches contained relevant information for mitosis object detection.
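As an illustration of the patch-filtering rule just described, the following minimal Python sketch keeps only 512 × 512 full-resolution patches whose corresponding window in the 32×-downsampled binary tissue mask contains at least 75% tissue. It is not the authors' code; the function names, the assumption that the mask holds 0/1 values and the non-overlapping tiling are ours.

import numpy as np

DOWNSAMPLE = 32               # mask is 32x smaller than the full-resolution WSI (per the text)
PATCH = 512                   # full-resolution patch size in pixels
MIN_TISSUE_FRACTION = 0.75    # patches with less tissue are discarded

def keep_patch(tissue_mask: np.ndarray, x: int, y: int) -> bool:
    """Return True if the full-resolution patch with top-left corner (x, y)
    covers at least 75% tissue according to the downsampled 0/1 mask."""
    mx, my = x // DOWNSAMPLE, y // DOWNSAMPLE
    mp = PATCH // DOWNSAMPLE                      # patch size in mask pixels (16)
    window = tissue_mask[my:my + mp, mx:mx + mp]
    if window.size == 0:
        return False
    return float(window.mean()) >= MIN_TISSUE_FRACTION

def candidate_patch_origins(tissue_mask: np.ndarray):
    """Yield (x, y) origins of non-overlapping 512 x 512 patches that pass the filter."""
    mp = PATCH // DOWNSAMPLE
    h, w = tissue_mask.shape
    for my in range(0, h - mp + 1, mp):
        for mx in range(0, w - mp + 1, mp):
            x, y = mx * DOWNSAMPLE, my * DOWNSAMPLE
            if keep_patch(tissue_mask, x, y):
                yield x, y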
The test set consisted of patches extracted from high-powered fields (HPFs) determined by the pathologist annotators. To replicate real-world test data, our collaborating pathologists selected 10 continuous non-overlapping HPFs from each WSI. The size of this area was determined by loosely following the Elston and Ellis [35] criteria of an area size of 2.0 mm². For 20× magnification (level 1 in the WSI pyramid), the width of the 10 HPFs was 4096 pixels and the height was 2560 pixels. This produced 40 non-overlapping patches of 512 × 512 pixels, thus producing a dataset of 440 patch images from the 11 hold-out test WSIs at 20× magnification. Only patches containing mitosis were used for training and validation, whereas for testing, all extracted patches were evaluated. Details on the number of mitoses per slide in the training/validation and test sets are provided in Appendix Tables A1 and A2, respectively. Details on the number of patches used for training/validation and testing at 40× magnification are provided in Appendix Table A3, and at 20× magnification in Appendix Table A4. Object Detection and Keeping the Pathologist in the Loop for Dataset Refinement Mitosis detection is generally considered an object detection problem [23]; for this study, we used a Faster R-CNN model [26]. We initialised a Faster R-CNN model with pre-trained COCO [36] weights, with the ResNet-50 head pre-trained on ImageNet. The model was fine-tuned, updating all parameters of the model using our dataset. Preliminary experiments suggested using a learning rate of 0.01 and SGD as the optimiser. A batch size of 4 was also used for these experiments. Training was implemented for 30 epochs, where the model with the lowest validation loss was saved for final evaluation. Faster R-CNN is jointly trained with four different losses: two for the RPN and two for the Fast R-CNN. These losses are the RPN classification loss (for distinguishing between foreground and background), the RPN regression loss (for determining differences between the regression of the foreground bounding box and the ground truth bounding box), the Fast R-CNN classification loss (for object classes) and the Fast R-CNN bounding box regression loss (used to refine the bounding box coordinates). Therefore, in our implementation of determining the lowest validation loss, at every epoch each loss type was considered equally. We implemented 3-fold cross-validation at the patient (WSI) level to test the veracity and robustness of our approach, with the training data split into three folds for training and validation. We also used an unseen hold-out test set for final evaluation and for a fair comparison of all three folds. The training, validation and hold-out test splits for each fold are depicted in Appendix Table A5.
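The training configuration described above (a COCO-pretrained Faster R-CNN with a ResNet-50 backbone, all parameters fine-tuned with SGD at a learning rate of 0.01 and batch size 4, keeping the checkpoint with the lowest summed validation loss) can be sketched roughly as below. This is an illustrative reconstruction rather than the study's code: the use of torchvision's fasterrcnn_resnet50_fpn, the momentum value and the validation-loss loop are our assumptions.

import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_model(num_classes: int = 2):
    """Faster R-CNN with a ResNet-50 FPN backbone, COCO-pretrained, re-headed for
    background + mitosis. Default anchor sizes/aspect ratios are kept, as in the study."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def train(model, train_loader, val_loader, device, epochs: int = 30, lr: float = 0.01):
    """Fine-tune all parameters with SGD; keep the checkpoint with the lowest summed
    validation loss, weighting the four Faster R-CNN loss terms equally."""
    model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)  # momentum assumed
    best_val, best_state = float("inf"), None
    for _ in range(epochs):
        model.train()
        for images, targets in train_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss = sum(model(images, targets).values())   # four losses, equal weight
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # torchvision returns losses only in train mode, so keep train mode but no gradients
        val_loss = 0.0
        with torch.no_grad():
            for images, targets in val_loader:
                images = [img.to(device) for img in images]
                targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
                val_loss += sum(model(images, targets).values()).item()
        if val_loss < best_val:
            best_val = val_loss
            best_state = {k: v.cpu().clone() for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)
    return model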
Furthermore, as most mitotic figures from the same tissue type are generally of a similar size (dependent on the stage of mitosis, staining techniques, and slide quality), we opted to use the default anchor generator sizes provided by the PyTorch implementation of Faster R-CNN. These sizes were 32, 64, 128, 256 and 512, with aspect ratios of 0.5, 1.0 and 2.0. See Figure 1 for a depiction of the Faster R-CNN applied to the cPWT mitosis detection problem. During evaluation inference, non-maximum suppression (NMS) with an IoU value of 0.1 was applied as a post-processing step to remove low-scoring, otherwise redundant, overlapping bounding boxes. This post-processing method is also consistent with other mitosis detection methods in the literature [38,39]. In object detection, mean average precision (mAP) is typically used to evaluate the performance of a model depending on the task or dataset [40][41][42][43]. However, we opted to use the F1-score in order to compare our results to mitosis detection approaches in the literature. The F1-score was computed globally for each fold; thus, it was applied and determined for the entire dataset of interest. True positive (TP) detections were counted when there was an IoU of >= 0.5 between the ground truth and proposed candidate detections. Anything that did not meet the IoU threshold was considered a false positive (FP) detection. Any missed ground truth detections were considered false negatives (FNs). As a result, we could also generate the F1-score. The F1-score can be considered the harmonic mean between the precision and recall (sensitivity). Both precision (Equation (1)) and sensitivity (Equation (2)) contribute equally to the F1-score (Equation (3)): precision = TP / (TP + FP) (1); sensitivity = TP / (TP + FN) (2); F1 = 2 × precision × sensitivity / (precision + sensitivity) (3), where TP, FP and FN are true positives, false positives and false negatives, respectively. The models were implemented in Python, using the PyTorch deep learning framework. The hardware and resources available for implementation used a Dell T630 system, which included 2 Intel Xeon E5 v4 series 8-Core CPUs with 3.2 GHz, 128 GB of RAM (Dell Corporation Limited, London, UK), and 4 nVidia Titan X (Pascal, Compute 6.1, Single Precision) GPUs. The mitosis annotation process is an exhaustive and arduous process, and thus the initial annotation process may be suboptimal due to the vast area annotators needed to examine for mitotic candidates. Taking inspiration from Bertram et al. [33], we used our deep learning object detection models from these experiments to refine the dataset (see Figure 2). We hypothesised that many of the FP candidates may have been incorrectly labelled. Our collaborating pathologists reviewed all the FP candidates (regardless of class score) from each validation fold and the hold-out test set and determined which candidates were mislabelled. As a result, we were able to formulate additional ground truth mitoses for use in the final set of experiments.
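A simplified re-implementation of the evaluation rule described above (greedy one-to-one matching at IoU >= 0.5, with unmatched predictions counted as FPs and unmatched ground truths as FNs, followed by precision, sensitivity and F1) might look like the sketch below. NMS with an IoU of 0.1 (e.g. via torchvision.ops.nms) would be applied to the raw detections before this step; the helper names and the greedy matching order are our assumptions rather than the authors' exact procedure.

import numpy as np

def iou(box_a, box_b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def count_matches(pred_boxes, gt_boxes, iou_thr=0.5):
    """Greedy one-to-one matching: a prediction is a TP if it overlaps an unmatched
    ground-truth box with IoU >= iou_thr, otherwise an FP; unmatched ground truths are FNs."""
    matched, tp, fp = set(), 0, 0
    for p in pred_boxes:                       # ideally sorted by descending score
        best_j, best_iou = -1, 0.0
        for j, g in enumerate(gt_boxes):
            if j in matched:
                continue
            o = iou(p, g)
            if o > best_iou:
                best_j, best_iou = j, o
        if best_iou >= iou_thr:
            matched.add(best_j)
            tp += 1
        else:
            fp += 1
    fn = len(gt_boxes) - len(matched)
    return tp, fp, fn

def f1_from_counts(tp, fp, fn):
    """Precision, sensitivity and their harmonic mean (Equations (1)-(3))."""
    precision = tp / (tp + fp + 1e-9)
    sensitivity = tp / (tp + fn + 1e-9)
    f1 = 2 * precision * sensitivity / (precision + sensitivity + 1e-9)
    return f1, precision, sensitivity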
Adaptive F1-Score Threshold For this method, the Faster R-CNN object detector was trained to detect mitotic candidates using the refined (updated) dataset. The same training hyperparameters as described earlier were applied; however, we lowered the number of epochs. It was observed that the models reached their optimal validation loss by epoch 7 across all three folds in the initial experiment runs. Therefore, to ensure optimality, we chose 12 epochs for training, again using the lowest validation loss to determine the "best" model. The trained Faster R-CNN model outputs potential mitosis candidates, but it also outputs probability scores relating to the strength of the object prediction. These scores ranged from 0 to 1, where 1 would indicate that the model is 100% certain that the candidate is a mitosis and 0.01 would describe a prediction that is very low in confidence. We optimised our models based on the F1-score [44][45][46]. The probability thresholds t ranged from 0.01 to 1, and so choosing the optimal threshold T for the F1-score F1 can be represented formally as: T = argmax_{t in [0.01, 1]} F1(t) (4). We determined the optimal F1-score threshold value using the validation set and applied this threshold value to the final evaluation on the hold-out test set. Figure 3 demonstrates the entire workflow of this method, from the creation of the updated mitosis dataset, where the pathologists reviewed all the FP candidates, all the way to the adaptive F1-score thresholds applied to the mitosis candidate predictions. Results The pathologists-in-the-loop approach for dataset refinement was first applied as demonstrated by Figure 2. In a preliminary investigation, two magnifications (40× and 20×) were used to determine the best resolution for our task (see Table 2).
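Equation (4) amounts to a simple sweep over candidate probability thresholds on the validation detections. The sketch below assumes each validation detection has already been matched once against ground truth (so it carries a score and a TP/FP flag) and ignores re-matching at every threshold; it is an illustrative simplification, not the authors' implementation.

import numpy as np

def best_f1_threshold(det_scores, det_is_tp, n_ground_truth, step=0.01):
    """Sweep thresholds t in [0.01, 1.0] and return the one maximising F1 on the
    validation set (Equation (4)). det_scores / det_is_tp describe every detection
    kept at the lowest threshold, matched once against ground truth."""
    det_scores = np.asarray(det_scores, dtype=float)
    det_is_tp = np.asarray(det_is_tp, dtype=bool)
    best_t, best_f1 = None, -1.0
    for t in np.arange(step, 1.0 + 1e-9, step):
        keep = det_scores >= t
        tp = int(np.sum(det_is_tp & keep))
        fp = int(np.sum(~det_is_tp & keep))
        fn = n_ground_truth - tp
        precision = tp / (tp + fp + 1e-9)
        sensitivity = tp / (tp + fn + 1e-9)
        f1 = 2 * precision * sensitivity / (precision + sensitivity + 1e-9)
        if f1 > best_f1:
            best_t, best_f1 = float(t), f1
    return best_t, best_f1

# The threshold chosen on each validation fold (e.g. 0.96, 0.84 and 0.91 in the paper)
# is then applied unchanged to the hold-out test detections.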
Tables A6 and A7 show the differences in mitotic candidate numbers before and after refinement (second review) for the training/validation and test sets, respectively. The first set of results from the optimised Faster R-CNN approach is depicted in Table 3. This shows a comparison of the performance of the Faster R-CNN trained on the initial mitosis dataset and on the updated refined mitosis dataset. It is apparent that sensitivities have improved for all folds when using the updated refined dataset; however, in some cases, such as in fold-1 validation, fold-3 validation and fold-3 test, we can see that the F1-score is lower due to a decrease in precision scores. This could be because the updated refined dataset contains more difficult examples for effective mitosis object detection training. The previous initial dataset may have contained more obvious mitosis examples and the models may thus have been predicting detections that closely resembled these obvious examples. Table 4 shows the Faster R-CNN results before and after F1-score thresholding was applied to the models trained using the updated mitosis dataset. The thresholds were predetermined on the validation set for each fold using Equation (4) (see Figure 4). When applying the optimal thresholds, we saw large improvements in the F1-score, largely due to an improvement in precision because of a reduction in FPs. This was seen on the test set, where the F1-score rose from 0.402 to 0.750. However, this increase in precision came at the expense of some sensitivity across all three folds; for example, on the test set the mean sensitivity for all three folds reduced from 0.952 to 0.803. Nevertheless, the loss in sensitivity is outweighed by the increase in precision, with sensitivity decreasing by 14.9% while precision increased by 45.2%. This suggests that the majority of TP detections prior to the adaptive F1-score thresholding are of high probability confidence compared to the FP detections. Table 2. Initial mitosis object detection results for the 40× and 20× magnification patch datasets. As the difference in performance between the two resolution datasets was of interest, we first present the initial results for 20× and 40× magnifications for the validation and test sets and for all three folds. Interestingly, although the 40× magnification trained models appeared to produce better F1-scores for validation, the 20× magnification models performed better across all three folds when applied to the hold-out test set. It appears that, with our experimental set-up, the models trained on 20× magnification generalise better across the two evaluation datasets. As a consequence, and also to reduce computational requirements, we proceeded further with the 20× magnification extracted dataset. Results for these initial experiments also suggested that the object detector was highly sensitive for the test set (at a mean average of 0.918) but not as precise (at a mean average of 0.249 for the precision measure). Table 4.
Results of the models trained on the updated dataset. The "Thresholds" column depicts whether models were optimised using the adaptive F1-score threshold metric described in Equation (4); filled-in values state the probability threshold. It is apparent that the models with optimised thresholds produced the highest F1-scores across all folds, producing a mean average F1-score of 0.750 on the test set compared to 0.402. Discussion This study has demonstrated a method for mitosis detection in cPWT WSIs using a Faster R-CNN object detection model, an adaptive F1-score thresholding feature on output probabilities and the refinement of a mitotic figures dataset by keeping pathologists in the loop. Many approaches in the literature use the highest resolution images for their object detection methods (typically at 40× objective); however, we preliminarily found that 20× magnification was beneficial for our task and the dataset provided, as shown in Table 2. Nevertheless, this warrants further investigation and additional discussion with the collaborating pathologists, who may provide reasoning as to why certain candidates were classed as mitosis at different resolutions. Initially, solely using the outputs from a Faster R-CNN model produced promising results with high sensitivities; however, these outputs required further post-processing to improve precision. Applying adaptive F1-score thresholds, where the optimal values were predetermined on the validation set and applied to the test set, demonstrated an effective method of reducing the number of FP predictions. This ultimately resulted in a dramatic increase in the F1-score due to a stark increase in precision. However, this came at a small expense of sensitivity. Nevertheless, the rates of change of the sensitivity and the precision are not equal, with the latter vastly improving. This suggests that the majority of FP detections are of lower probability confidence compared to TP detections.
Multi-stage (typically dual-stage) approaches have also become increasingly prevalent over the years; they typically take the form of selecting mitotic candidates in the first stage and then applying another classifier in the second stage [32,33,[47][48][49]. Although not reflected in the main findings of this study, we attempted to use a second-stage classifier (Figure A1) on mitotic candidates to classify between TPs and hard FPs, to no avail (see results of the two-stage approach in Table A8 and its subsequent ROC curves in Figure A2). Most machine learning methods require large datasets for effective training, which in this case were not available once optimisation was applied using the adaptive F1-score threshold method. One could train models using the non-thresholded detections; however, this would result in a model that is able to distinguish between true positive mitoses and mostly obvious FP candidates. By applying the adaptive F1-score thresholding method, we constrained the dataset and attempted to learn differences between TP and high-confidence hard false positive detections, but we did not provide an adequately large dataset for training. Figure 5 depicts a 512 × 512 pixel image in the test set, highlighting an FN and an FP detection. Different phases and other biological phenomena could influence the size of the mitosis region of interest. Going forward, it may also be worth labelling mitoses according to their phases, thus creating a multi-class problem rather than the binary one shown in this study. As a consequence, the size of the ground truth bounding boxes could also be varied depending on the target phase being classified. Nonetheless, the models were still able to predict the vast majority of mitoses in these phases. It must further be noted that the methodology was applied only to patches from HPFs containing mitoses that were annotated by the collaborating pathologists. Therefore, we propose expanding our dataset to include a broader range of sections, including those not initially marked by pathologists, to evaluate and enhance our model's generalisability. The data should include labels for areas containing tumour and non-tumour tissue to fully consider the overall impact of this mitosis detection method. Our focus for this study was on cPWT; however, we could potentially adapt this method to other cSTS subtypes as well as to other tumour types. An additional study might explore the application of cPWT-trained models to different cSTS subtypes to assess whether comparable outcomes are achieved. Nevertheless, given that tumour types from various domains exhibit unique challenges due to their specific histological characteristics, it may be necessary to train or fine-tune models using tumour-specific datasets to evaluate the efficacy of this approach. While our F1-score demonstrates competitive performance for detecting mitosis in the canine domain, the clinical relevance and applicability of this metric should be taken into account. Future work should focus on employing this method as a supportive tool, assessing its practical effectiveness and reliability in a veterinary clinical setting. To conclude, using our experimental set-up, the optimised Faster R-CNN model was a suitable method for determining mitosis in cPWT WSIs. To the best of our knowledge, this is the first mitosis detection model applied solely to cPWT data, and thus we consider this a baseline three-fold cross-validation mean F1-score of 0.750 for mitosis detection in cPWT.
Figure 1. Image is inspired by Mahmood et al.'s depiction of Faster R-CNN [37]. A Faster R-CNN object detection model applied to the cPWT mitosis dataset. An input image of size 512 × 512 pixels is passed through the model, where the feature map is extracted using the ResNet-50 feature-extraction network. This is then followed by generating region proposals in the Region Proposal Network (RPN) and finally mitosis detection in the classifier. Figure 2. Keeping humans in the loop: (a) Two pathologist annotators independently review canine Perivascular Wall Tumour (cPWT) Whole Slide Images (WSIs) and apply centroid annotations to mitotic figures. (b) After initial agreement on mitoses, this formed the initial dataset of patch images with annotations. (c) A Faster R-CNN object detector was trained and tested on these data. (d) Thereafter, false positive (FP) candidates are reviewed again by the pathologist annotators, where misclassified candidates are reassigned as true positives (TPs). (e) These new TPs are added to the updated dataset. (20× magnification images). Figure 3. We used 20× magnification images and annotations from the updated mitosis dataset to train the Faster R-CNN object detection model (details of the Faster R-CNN model are also shown in Figure 1). Optimal thresholds using Equation (4) were applied on the output candidates determined from the validation set. Figure 4. Line graphs that show the sensitivity, precision and F1-score calculated for each probability threshold for the three validation folds. To determine the optimal probability threshold, we chose the threshold with the highest F1-score as determined via Equation (4). In the above plots, these are denoted as "best threshold". For fold 1, this threshold was 0.96; for fold 2, it was 0.84; and for fold 3, it was 0.91. Figure 5. An example 512 × 512 pixel image from the test set with a false negative (FN) shown in the red bounding box and a false positive (FP) detection shown in the yellow bounding box (32 × 32 pixels). The FP detection has a probability confidence score of 5.3% and so would typically be dismissed as a mitosis candidate once the adaptive F1-score threshold is applied. Figure A1. A depiction of the two-stage mitosis detection approach. On the top, in stage 1, 20× magnification images and annotations from the updated refined mitosis dataset are used to train a Faster R-CNN model (the model is also presented in Figure 1). Optimal probability thresholds are applied on the output candidates, which are determined from the validation set (based on Equation (4)). These selected candidates are then extracted (size 64 × 64 pixels) at 40× magnification from the original Whole Slide Images (WSIs) and passed into the second stage. The bottom shows stage 2, where the extracted patches are fed into a DenseNet-161 ImageNet pre-trained feature extractor, whose outputs are fed into a logistic regression classifier to determine whether the candidates are mitoses or difficult false positives. Figure A2. Receiver operating characteristic (ROC) curve plots from the second-stage logistic regression model results for each cross-validation fold. For each fold, it is evident that the models do not effectively learn the differences between true positive (TP) and false positive (FP) mitosis detections. Table 3.
A comparison of results of the models trained on the initial annotated dataset and the updated dataset. Results are shown for both the validation and test sets for folds 1, 2 and 3. Table A2. The number of mitosis annotations in 10 continuous high-powered fields (HPFs) from each Whole Slide Image (WSI) for both 40× and 20× magnifications in the hold-out test set. Table A3. The number of patches per Whole Slide Image (WSI) in the train/validation and test sets for patches extracted from level 0 (40× magnification) of the WSI. Table A4. The number of patches per Whole Slide Image (WSI) in the train/validation and test sets for patches extracted from level 1 (20× magnification) of the WSI. Table A5. The training, validation and hold-out test splits for each fold in the dataset. Table A6. The updated agreed mitoses between annotators 1 and 2 for the training/validation sets. The "Agreement" column shows the number of ground truth agreed mitosis annotations for the 20× magnification dataset before refinement. "Updated Agreement" shows the number of mitoses after refinement. "Missed Mitosis" shows the difference in the numbers of mitoses before and after refinement. Lastly, "% Missed Mitosis" shows the difference, as a percentage, of mitoses before and after refinement against the updated agreed mitotic count. Table A7. The updated agreed mitoses between annotators 1 and 2 for the hold-out test set. The "Agreement" column shows the number of ground truth agreed mitosis annotations for the 20× magnification dataset before refinement. "Updated Agreement" shows the number of mitoses after refinement. "Missed Mitosis" shows the difference in the numbers of mitoses before and after refinement. Lastly, "% Missed Mitosis" shows the difference, as a percentage, of mitoses before and after refinement against the updated agreed mitotic count. Table A8. Results from the stage 2 logistic regression model. Across all fold datasets, the sensitivity has dramatically decreased, offset by a large increase in precision when compared to the results in Table 4. The mean average F1-scores for the validation and test sets are 0.654 and 0.611, respectively.
Resistance of blended concrete containing an industrial petrochemical residue to chloride ion penetration and carbonation In this study, the resistance of blended concrete containing catalytic cracking residue (FCC) to chloride ion penetration and carbonation was examined. FCC was added at levels of 10%, 20%, and 30% as a partial replacement for cement. Concretes with 10% silica fume (SF), 10% metakaolin (MK), and without additives were evaluated as reference materials. The rapid chloride permeability test (RCPT), performed according to ASTM C1202, and an accelerated carbonation test in a climatic chamber under controlled conditions (23 °C, 60% RH, and 4.0% CO2) were used to evaluate the performance of these concretes. Additionally, their compressive strength was determined. The results indicate that binary blends with 10% FCC had compressive strength similar to that of concrete without additives and lower chloride permeability. The 10% SF and 10% MK blends exhibited better mechanical behavior and a more significant decrease in chloride penetration than 10% FCC. Concrete carbonation increased when FCC or MK was used as an additive. It was also observed that, with longer curing times, the samples with and without additives presented higher resistance to carbonation. An accelerated corrosion test by impressed voltage was also performed to verify the findings and to investigate the characteristics of corrosion, using a 3.5% NaCl solution as electrolyte. The mixtures that contained 10% FCC were highly resistant to chloride ion penetration and did not crack within the testing period.

Introduction The factors primarily contributing to reinforcing steel corrosion are carbonation and chloride attack; these are therefore responsible for a large share of the costs involved in the rehabilitation of concrete structures.
Concrete carbonation is a process by which atmospheric carbon dioxide reacts with the calcium in calcium hydroxide and calcium silicate hydrate to form calcium carbonate. Once carbonation has taken place, the high pH of the concrete pore solution begins to drop, to as low as 9 or 8 for fully carbonated concrete (Pacheco-Torgal et al., 2012). As a result, the passive layer surrounding the steel reinforcement begins to deteriorate (Parrot, 1987). One of the principal factors that directly impact the process of carbonation is the water-to-cement ratio (w/c); it is well known that an increase in the w/c increases the depth of carbonation (Tsuyoshi et al., 2003). The relative humidity (RH) and the concentration of CO2 are the most important environmental factors (Aguirre and Mejía de Gutiérrez, 2013). It has been reported that RH values between 50% and 75% promote the diffusion of CO2 (Roberts, 1981) and that the CO2 concentration in the atmosphere may fluctuate from 0.03% in rural environments to over 0.3% in cities, where the incidence of carbonation is higher (Gruyaert et al., 2013; Pacheco-Torgal, 2012). For this reason, in tropical non-marine environments carbonation may be the main deterioration mechanism in reinforced concrete. Several researchers have attempted to correlate the mechanism of carbonation with concrete properties, particularly mechanical strength and porosity, and have detected inconsistencies in the results, especially when blended concretes are used (Pacheco-Torgal, 2012). It should be noted that some authors agree that concretes containing supplementary cementitious materials are more susceptible to carbonation than ordinary Portland cement concrete (Ho and Lewis, 1987; Sideris et al., 2006; Papadakis, 2000; Gruyaert et al., 2013).

Chloride ions, from deicing salts or seawater, are the primary cause of reinforcing steel corrosion in highways and marine or coastal structures. Chlorides are transported through the concrete pore network, where they may be present as free ions, bound to cement hydration products in the form of Friedel's salt, or physically adsorbed to calcium silicate hydrates (Loser et al., 2010). Consequently, the chloride resistance mainly depends on the type of binder, the water-to-binder ratio, and the hydration degree of the cement and of the supplementary materials present in the mixture. Mejía de Gutiérrez et al. (2009) studied blended concretes with different types of MK and SF and made behavioral comparisons regarding carbonation and chloride ion penetration, finding a greater depth of carbonation in blended concretes than in control concrete. This tendency decreased as the curing age increased. On the other hand, these same materials exhibit better performance in the presence of chlorides.

A residue of the petrochemical industry called spent fluid catalytic cracking catalyst (FCC) has been studied in the last few years. This material presents pozzolanic characteristics comparable to those of metakaolin (Payá et al., 2001; Borrachero et al., 2002; Soriano et al., 2008; Trochez et al., 2010; Torres et al., 2012, 2012a, 2013). Zornoza et al.
(2009) measured the chloride ingress (penetration) rate in mortars with FCC by means of thermogravimetric analysis and found that the pozzolanic reaction of FCC increases the hydrated calcium aluminate and silicoaluminate content, so that the chloride binding capacity of the mortars was greatly improved and, consequently, the resistance to chloride-ion penetration increased. This article presents the evaluation of the resistance against chloride penetration and carbonation of concretes with a partial substitution of cement by catalytic cracking residue (FCC) in proportions of 0, 10, 20 and 30%. The results are compared with blended concretes containing 20% Metakaolin (MK) and 10% Silica Fume (SF).

Materials and Experimental Procedure An ordinary Portland cement (OPC) Type I was used for concrete preparation. Spent fluid catalytic cracking catalyst (FCC), Metakaolin (MK) and Silica Fume (SF) were used as supplementary materials. FCC was supplied by a Colombian petroleum company (Ecopetrol, Cartagena). The chemical and physical characteristics of these raw materials are shown in Table 1. The average particle size was determined by laser granulometry in a Mastersizer 2000 particle size analyzer (Malvern Instruments). The pozzolanic activity was determined according to ASTM C618. As shown in Table 1, FCC is composed almost entirely of silica and alumina, with a composition similar to that of Metakaolin. The ASTM C618 standard requires a minimum pozzolanic activity index of 75% at 28 days of curing to consider a material a pozzolan; as shown in Table 1, all the supplementary materials used comply with this requirement. X-ray diffractometry was carried out using a Rigaku RINT 2200 diffractometer with a copper lamp. Figure 1 shows the diffractograms (XRD) for FCC, MK and SF. The catalyst residue contains different crystalline phases, such as zeolite-type faujasite (F), Na2[Al2Si10O24]·nH2O, with peaks located at 2θ = 6.19°, 15.6° and 23.58°, kaolinite (K) and quartz (Q). The FCC catalyst also has a high content of amorphous (glassy) aluminosilicate phases because of the partial destruction of the zeolite structures in service. In the case of MK, its amorphous character is shown by the rise of the baseline in the region 2θ = 15 to 30° and the disappearance of the peaks corresponding to kaolinite. The intense broad peak observed for SF indicates that this material is totally amorphous.
Concrete mix design and tests The concrete mix design resulted in a total binder content of 380 kg/m3 and a total aggregate content of 1727 kg/m3 (55% coarse and 45% fine). The aggregates used are of alluvial origin. Coarse aggregate with a maximum nominal size of 12.7 mm, nominal density of 2668 kg/m³, unit weight of 1542 kg/m³ and absorption of 3.0%, and sand with nominal density of 2679 kg/m³, unit weight of 1667 kg/m³, absorption of 2.1% and a fineness modulus of 2.84 were used. Six concrete mixes, including the control mix (without addition), were produced. Two mixtures were used as references, containing 20% MK and 10% SF. The other mixtures were made with partial substitution of OPC by FCC (10, 20 and 30%). A constant effective water-to-binder ratio (w/b) of 0.5 was selected (aggregate moisture corrections were included in the calculations); this value is based on the durability requirements specified in the Colombian Earthquake Resistant Construction Code NSR-10, item C.4.2. In order to maintain workability, a superplasticizer was used. The samples were cured in water saturated with Ca(OH)2 at room temperature for up to 360 days. Compressive strength and durability-related properties such as water absorption, chloride permeability and resistance to carbonation were evaluated at different ages. The water absorption and capillary absorptivity tests were carried out according to ASTM C642 (Standard Test Method for Density, Absorption and Voids in Hardened Concrete) and ASTM C1585 (Standard Test Method for Measurement of Rate of Absorption of Water for Hydraulic-Cement Concretes). The first test provides a measure of the total water-permeable pore space, and the second represents effective porosity, that is, porosity that is accessible to water and therefore to aggressive environmental agents.

The chloride ion penetration was evaluated by means of the rapid chloride permeability test (RCPT), performed according to ASTM C1202 (Standard Test Method for Electrical Indication of Concrete's Ability to Resist Chloride Ion Penetration). The test consists of measuring the electrical current that passes through a concrete sample of 100 mm diameter and 50 mm thickness while the system is maintained at 60 V for a period of 6 hours. Additionally, an accelerated corrosion test by impressed voltage was performed to verify the findings and to investigate the characteristics of corrosion in terms of the initial current and the time of crack initiation. The specimens used for the accelerated corrosion test were cylinders of 50 mm in diameter and 100 mm in height with steel bars of 10 mm diameter placed in the middle of the specimens. Specimens were water cured and then immersed in the corrosion cells. The electrolyte was a 3.5% by weight NaCl solution. A constant voltage of 5 V was applied between the anode (steel reinforcement) and the cathode, and the current intensity was recorded up to the appearance of a longitudinal crack or for a maximum period of 60 h. This test was carried out on mortar samples (OPC and blended with FCC and MK), and the reinforcement was protected near the mortar surface in order to avoid corrosion at this point.

The study of carbonation was performed by an accelerated test using a climatic chamber under controlled conditions (23 °C, 60% RH,
and 4.0% CO2) and cylindrical specimens of 75 mm in diameter and 150 mm in height. The flat ends of these specimens were waterproofed, leaving the lateral faces of the cylinders uncovered and therefore exposed to CO2. The evaluation was done by cutting a 20 mm thick slice from specimens exposed to CO2 and applying an indicator, phenolphthalein solution, over the flat surface. Since the color of the indicator turns purple at a pH above 9.0, non-carbonated zones are colored while carbonated zones remain colorless. Six radial measurements of the zones that did not present coloration were taken for each specimen. The flat face of the same specimen was sealed again with epoxy, and the specimen was returned to the chamber to measure the progression of the carbonation depth at later ages (3, 6 and 9 weeks) for each of the curing ages.

Compressive strength Figure 2 presents the evolution of compressive strength for each of the mixtures after curing for 1, 3, 7, 28, 56, 90, 120 and 180 days. The testing was conducted according to the ASTM C39 standard, using three cylinders of 100 mm in diameter and 200 mm in height as samples. As expected, regardless of the type or percentage of addition, strength increases with age. It can be seen that at 1 day the strength of the mixtures with FCC is higher than that of the mixture with MK, and the strength of the 10% FCC mixture is slightly higher than that of the control mixture. This may be due to the high reactivity of FCC reported at early ages, which is in line with other researchers' findings (Soriano et al., 2008; Payá et al., 2001, 2003; Antiohos et al., 2006). Figure 2 also shows that the strength decreases with FCC additions higher than 20%; these results are consistent with those reported by Antiohos et al. (2006). It is worth highlighting that the value at 28 days for all the mixes is higher than 40 MPa, allowing the concrete to be considered structural concrete. In general, the optimal percentage of FCC is 10%. The maximum compressive strength of the SF mixture (10%) was about 13.4% higher than that of the control mixture at 180 days and comparable to that of the concrete containing 10% MK at the same curing age.
Water absorption, capillary absorptivity and chloride ion permeability ASTM C642 test results are presented in Figure 3. It can be seen that the concrete samples tested exhibit a water absorption of less than 5% and a porosity of less than 10%. The absorptivity coefficient of the blended concretes ranges between 0.10 × 10^-2 and 0.15 × 10^-2 kg/m^2·s^(1/2). A longer curing period contributes to the improvement of cement hydration and pozzolanic reactions and creates a denser microstructure with a smaller volume of capillary pores, resulting in a concrete with lower permeability. The chloride ion penetration was evaluated at different curing ages (28, 56, 90 and 180 days). The total electric charge (TEC, in coulombs) passed through the samples is calculated from the results obtained in the RCPT test. Figure 4 shows an index of reduction of the TEC, calculated as the percentage ratio between the TEC of the blended material (OPC + addition %) and the TEC of the control sample for different curing ages. This figure shows that blended concretes present a higher resistance to chloride penetration than the control mixes and that this resistance increases with curing time. This can be attributed to the refinement of the matrix pore network resulting from the pozzolanic reaction of the addition. In general, the addition of FCC can reduce the penetration of chloride by up to 54% at 28 days of curing. Under natural exposure, especially in chloride environments, the presence of a high-alumina pozzolan additionally contributes to the formation of Friedel's salt. This compound acts as a barrier to chloride ingress through the cementitious matrix (Mejía de Gutiérrez et al., 2000; Torres et al., 2007; Morozov et al., 2013). Zornoza et al. (2009) and Mejía de Gutiérrez et al. (2000), using XRD and DTG (differential thermogravimetric analysis) techniques, confirmed the formation of Friedel's salt in mortars containing FCC and MK. It should be noted that the RCPT test takes only 6 h, which is a very short period for allowing significant chloride binding. Therefore, this test is highly related to concrete electrical conductivity, which may be the parameter mainly affected by the additives used.
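The TEC reduction index plotted in Figure 4 is just a percentage ratio; a minimal sketch of the calculation is shown below, with hypothetical charge values (the paper reports the index graphically, so these numbers are illustrative only).

```python
def tec_reduction_index(tec_blend: float, tec_control: float) -> float:
    """Index of reduction of the total electric charge (TEC, coulombs) passed
    in the RCPT: percentage ratio of the blended mix to the control mix.
    Lower values mean higher resistance to chloride penetration."""
    return 100.0 * tec_blend / tec_control

# Hypothetical RCPT charges at 28 days of curing (coulombs) -- illustrative only.
index = tec_reduction_index(tec_blend=1840.0, tec_control=4000.0)
print(f"TEC index: {index:.0f}%")  # 46% of the control, i.e., a ~54% reduction
```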
Figure 5 presents the current intensity-time relationships for the reinforced mortar samples (control and blended with FCC and MK) evaluated by the impressed voltage test. These results are the average of two samples per mix. In general, the current intensity increases with time. This increase is directly related to the progress of steel corrosion. Corrosion products generate tensile stresses in the hardened matrix, leading to cracking of the material and to a significant increase of the current intensity. It should be noted that the current passed through the samples in this test is also an indication of the volume of corrosion products. The current intensity of the control mortar was higher than that of the other, blended mixes. In general, the current intensity of the blended mortars was very low, indicating that these mixes were highly resistant to chloride penetration. Additionally, it can be observed that the use of FCC at 20% has a significant effect on the time of first crack appearance, indicating good corrosion resistance. The crack size was 1.053 mm for the control mortar (100% OPC) and 0.496 mm for the blended mortar with 10% FCC (Figure 5). These results confirm those obtained in the RCPT test. The results also indicate that concretes with FCC are durable; therefore, they can be used in reinforced concrete structures exposed to chloride-contaminated environments. According to these results, the recommended proportion is 20% FCC.

Resistance to concrete carbonation The carbonation tests were performed at different curing ages (28, 56, 90 and 180 days). Figure 6 shows photographs of specimens cured for 28 days and exposed for 9 weeks to CO2 inside the chamber after the application of phenolphthalein. Figures 7 to 9 show that, as the curing time increases, the carbonation depth decreases for all of the samples. The best behavior was presented by the control specimen and the blended samples with 10% SF. As regards FCC concrete, the best behavior was presented by the sample with 10% FCC, which behaved better than the specimen blended with MK. It can be seen that as the percentage of FCC increases, the susceptibility to carbonation increases. These results confirm that concretes containing supplementary cementitious materials such as FCC, MK or SF are more susceptible to carbonation than OPC concrete. It should also be noted that there is a wide range of published literature related to the carbonation of blended concrete and that, in some cases, contradictory findings are reported (Pacheco-Torgal et al., 2012). Some researchers report that accelerated tests may underestimate the carbonation depths of specimens moist cured for 28 days; for this reason, some standards recommend using specimens cured for 55 days (Gruyaert et al., 2013).

From these data, the carbonation coefficient (KC) can be calculated using equation (1), where x represents the penetration depth (mm), t the exposure time, and C the CO2 concentration used (4%):

x = KC √t (1)

The accelerated test method can provide an indication of the long-term carbonation resistance of concrete. Considering that under actual exposure conditions the level of CO2 in the atmosphere varies from 0.03% to 0.3%, the carbonation depth of a concrete element after a service life of 25 or 50 years can be estimated using equation (2) (Castro et al., 2004), in which KC and KN represent the carbonation coefficients corresponding to two different concentrations of CO2, named C and N.
KC / KN = √(C / N) (2)

Using data from the accelerated test for specimens cured for 28 days, estimations of carbonation depths at 25 and 50 years were made (Table 2). The results show that, in general, for environments containing 0.03% CO2, the carbonation depth of blended concretes with SF, MK and 10% FCC after 25 and 50 years is lower than 20 mm. Taking into account the results presented in Table 2, concrete mixes with 20 and 30% of FCC would not perform well in a 0.3% CO2 environment.

Conclusions Given the results of this study, the following conclusions can be made: the catalyst residue obtained from the Colombian petroleum company plant is a pozzolan that substantially improves the mechanical properties and durability of concrete. It is a pozzolan of high reactivity at early ages, which leads to low chloride permeability. The behavior of FCC is comparable to that of metakaolin, a pozzolanic material well known worldwide. In general, blended concretes have greater susceptibility to carbonation, so it is recommended that they undergo an initial curing process. The optimal percentage of FCC is 10%, but in chloride environments it is recommendable to use 20%. The results obtained in this study allow us to conclude that FCC is an additive that could compete in the market with other types of pozzolans, contributing positively to the strength and durability of concrete as well as to environmental sustainability.

Figure 1. X-ray diffractograms of FCC, MK and SF (F: faujasite, Q: quartz, K: kaolinite).
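To make the service-life estimation based on equations (1) and (2) concrete, the sketch below derives the accelerated carbonation coefficient from a measured depth and extrapolates the expected depth after 25 and 50 years at atmospheric CO2 levels. The input depth and exposure time are placeholders, not values from Table 2.

```python
import math

def carbonation_coefficient(depth_mm: float, t_weeks: float) -> float:
    """KC from equation (1): x = KC * sqrt(t), fitted to accelerated-test data."""
    return depth_mm / math.sqrt(t_weeks)

def natural_coefficient(k_accel: float, c_accel: float, c_natural: float) -> float:
    """KN from equation (2): KC / KN = sqrt(C / N)."""
    return k_accel * math.sqrt(c_natural / c_accel)

def depth_after(k: float, years: float, weeks_per_year: float = 52.18) -> float:
    """Carbonation depth after a given service life, keeping time in weeks."""
    return k * math.sqrt(years * weeks_per_year)

# Placeholder measurement: 8 mm depth after 9 weeks in the chamber at 4.0% CO2.
k_c = carbonation_coefficient(8.0, 9.0)
for co2 in (0.03, 0.3):  # rural vs. urban atmospheric CO2 concentrations (%)
    k_n = natural_coefficient(k_c, c_accel=4.0, c_natural=co2)
    print(f"{co2}% CO2: {depth_after(k_n, 25):.1f} mm at 25 y, "
          f"{depth_after(k_n, 50):.1f} mm at 50 y")
```

With these placeholder inputs, the 0.03% CO2 depths stay below 20 mm, consistent with the trend reported for the SF, MK and 10% FCC blends.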
Three regions of the NIP5;1 promoter are required for expression in different cell types in Arabidopsis thaliana root ABSTRACT Arabidopsis thaliana NIP5;1, a boric acid diffusion facilitator, is involved in the acquisition of boron (B) from soil for growth under B limitation. AtNIP5;1 is expressed mainly in roots, where its expression is highest in the root cap and elongation zone. Here, we studied the role of the AtNIP5;1 promoter in the expression of this gene in roots. We fused a series of AtNIP5;1 promoter variants with deleted 5′-fragments to the GUS reporter gene and investigated the expression patterns by histochemical staining. We found that three regions of the AtNIP5;1 promoter are required for specific expression in the root cap and elongation zone (−880 to −863 bp from the translation start site), the distal side of the differentiation zone (−747 to −722 bp), and the basal side of the differentiation zone (−661 to −621 bp). The results suggest that at least three regions of the AtNIP5;1 promoter each confer different cell-type-specific expression.

Introduction A promoter is a sequence upstream of the transcription start site that often contains specific sequence motifs, including cis-acting elements, involved in time- and space-dependent expression, tissue- and organ-specific expression, or regulation by environmental changes. Trans-acting factors (transcription factors) bind to the cis-acting elements to regulate gene expression. Since the cis-acting elements are common to all cells of an organism, the specificity of gene expression is determined by the activity of transcription factors in specific organs and tissues. Information on the positions of cis-acting elements and on the transcription factors is invaluable in elucidating their biological functions. In crops, this regulation has the potential to determine agronomically important traits, placing this topic at the center of attention of plant biologists. 1

Since boron (B) was proven to be an essential micronutrient for plants in 1923, 2 evidence has accumulated that it is required for normal growth not only of vascular plants, but also of diatoms, cyanobacteria, and a number of species of marine algal flagellates. 2,3 It has also been reported that excess B is toxic. 4 In plants, B is important for maintaining cell wall structures for normal growth. Cross-linking of B with rhamnogalacturonan-II (RG-II), a complex pectic polysaccharide in the cell wall, is required for normal expansion of leaves. 5,6 Since B is transported along the transpiration stream and is supposed to be phloem-immobile in many plants, it accumulates at the end of the transpiration stream. 7 Thus, B deficiency and toxicity symptoms in plants are often observed in the growth of apical meristems, affecting root elongation, leaf expansion, and fertilization. [8][9][10] Consequently, both B deficiency and B toxicity decrease crop yield and quality. 11

Nodulin 26-like intrinsic proteins (NIPs) are diffusion facilitators of water and small uncharged molecules such as boric acid, silicic acid, glycerol, lactic acid, urea, and formamide. 12 In A. thaliana, NIP5;1, NIP6;1, and NIP7;1 are boric acid diffusion facilitators and are mainly expressed in roots, nodes, and anthers, respectively. [13][14][15] NIPs are well conserved among plant species. Rice NIP3;1 and maize NIP3;1 are orthologues of AtNIP5;1 and are boric acid diffusion facilitators.
16,17 OsNIP3;1 is expressed in the roots as well as in the shoot and is involved in B uptake from the soil and B transport in the shoots, while ZmNIP3;1 is expressed mainly in silk and is involved in inflorescence development under limiting-B conditions.

Among these, the mechanism of B-dependent regulation is best understood for AtNIP5;1, which is regulated at the posttranscriptional level, including via mRNA degradation and translation efficiency. 18,19 We previously demonstrated that a minimum upstream open reading frame (ORF), AUGUAA, which contains only the start and stop codons and is present in the AtNIP5;1 5′ untranslated region (5′-UTR), is required for B-dependent AtNIP5;1 expression. 19 During translation, ribosomes are likely to stall at AUGUAA under high-B conditions, reducing the translation efficiency of the main ORF and inducing mRNA degradation. 19 The sequence of the 5′-UTR is well conserved among species, especially AUGUAA and its surrounding sequences. 19 It is therefore likely that B-dependent regulation via AUGUAA is conserved among species.

Histochemical analysis has shown that the AtNIP5;1 expression pattern in roots differs among cell types: AtNIP5;1 expression is stronger in the elongation zone than in other root zones. 13 This finding implies that AtNIP5;1 has different cis-acting elements regulating root-cell-type-specific expression. To obtain insights into this regulation, here we conducted deletion analysis of the AtNIP5;1 promoter and found that basal levels of AtNIP5;1 expression in each root cell type are regulated by distinct promoter regions.

Plant growth conditions A. thaliana 4-, 5-, and 11-day-old seedlings were grown on plates with solid medium 20 containing 1% (w/v) sucrose, 1.5% (w/v) gellan gum (Wako Pure Chemicals, Osaka, Japan), and different concentrations of boric acid (Wako Pure Chemicals). Surface-sterilized seeds were sown on the plates and incubated for 1-2 days at 4 °C. The plates were then placed vertically at 22 °C in a growth chamber under long-day conditions (16/8-h light/dark cycle). Twenty-one-day-old plants were grown on plates with solid medium containing 1% (w/v) sucrose, 0.1% (w/v) gellan gum, and 0.3 µM boric acid. These plates were placed horizontally at 22 °C in a growth chamber under long-day conditions.

Plasmid construction and plant transformation The AtNIP5;1 promoter with serial deletions (Supplemental Table 1) from the 5′ end was fused to the β-glucuronidase (GUS) reporter gene. The P−2492-GUS construct carried a fragment from −2492 to +1 bp (nucleotide numbering from the translation start site) of the wild-type AtNIP5;1 gene and the GUS reporter gene ("P−2180-GUS" 18). The P−2492∆UTR558-313-GUS construct carried a deletion of −558 to −313 bp in the 5′-UTR of AtNIP5;1 ("P−2180∆UTR312-GUS" 18). In brief, to construct P−2492-GUS, the region from −2492 to +1 bp was amplified from the bacterial artificial chromosome (BAC) clone F24G24, which harbors the AtNIP5;1 locus, obtained from the Arabidopsis Biological Resource Center (http://abrc.osu.edu/). All primers mentioned in this subsection are shown in Supplemental Table 1. The amplified fragment was digested with BamHI and NcoI and subcloned into pTF456 (a derivative of pBI221 21 containing a GUS ORF with an NcoI site at its 5′ end). The resulting plasmid was named pMW1. 13 The BamHI-NotI fragment of pMW1 (the −2492 to +1 bp fragment and the GUS gene) was subcloned into BamHI-Bsp120I-digested pTkan+ (provided by K. Schumacher, University of Heidelberg).
To construct P−2492∆UTR558-313-GUS, the region from −2492 to −313 bp was amplified by PCR from pMW1. A. thaliana (L.) Heynh. ecotype Columbia (Col-0) plants were transformed using the floral-dip method. 22 At least three independent homozygous T3 plants were established for each transgenic line and used for analysis.

Root cell-type-specific AtNIP5;1 expression and its B-dependent expression are regulated by different pathways GUS expression driven by the AtNIP5;1 promoter with the 5′-UTR, which starts 2492 bp upstream of the main ORF (referred to as P−2492-GUS), is reportedly strongly induced by B deficiency in roots and is high in the elongation zone. 13 To investigate the GUS expression patterns in different root regions, the same transgenic plants carrying P−2492-GUS were grown under low (0.3 µM) or high (100 µM) B conditions (Figure 1). Under low B, although GUS staining was observed throughout the roots, its pattern differed among regions: it was strongest in the root elongation zone and the root cap, followed by the differentiation zone, and weak in the apical meristem region. Under high B, the overall GUS expression was weaker, but the pattern was similar to that under low B. To investigate whether the AtNIP5;1 expression pattern in roots is altered when the responsiveness to B is abolished, we examined the GUS expression pattern in transgenic plants carrying the AtNIP5;1 promoter without a portion of the 5′-UTR (referred to as P−2492∆UTR558-313-GUS); these plants show no B response in the GUS activity assay. 18 The GUS expression patterns in roots were similar between the two B conditions and similar to those in plants carrying the AtNIP5;1 promoter with the wild-type 5′-UTR under low B (Figure 1b). This result confirms that the 5′-UTR is involved in B-dependent expression in roots and indicates that it is not involved in cell-type-specific AtNIP5;1 expression. Thus, specific promoter regions might confer distinct cell-type-specific AtNIP5;1 expression in roots and might control basal levels of AtNIP5;1 expression in each root region.

Distinct promoter regions confer different cell-type-specific expression in roots To investigate cell-type-specific expression in roots, we used a 5′-deletion series. 18 The 5′-end positions are −1559, −880, −762, −700, and −600 (numbered from the translation start site, +1) of the AtNIP5;1 promoter. These were fused to the GUS reporter gene (referred to as P−1559-GUS, P−880-GUS, P−762-GUS, P−700-GUS, and P−600-GUS) 18 (Figure 2, Supplemental Figures S1 and S2). Deletion of these promoter regions, except in P−600-GUS, reportedly does not affect the B response in a GUS activity assay, while GUS activity in transgenic plants carrying P−600-GUS is close to the detection limit. 18 For each construct, GUS expression was observed in at least three independent lines; representative images are shown in Figure 2. The GUS staining patterns in roots differed among the lines. In those carrying P−1559-GUS and P−880-GUS, the expression patterns were almost identical to those in lines carrying P−2492-GUS (Figures 2b, c). No GUS expression was observed in the apical meristem in the lines carrying P−880-GUS, unlike in those carrying P−2492-GUS and P−1559-GUS, presumably owing to a difference in the level of GUS expression among lines. On the other hand, GUS expression in lines carrying P−762-GUS was undetectable in the root cap and elongation zone, but was detectable from the region where the xylem emerges (Figure 2c).
GUS expression in lines carrying P−700-GUS was observed only in the upper part of the differentiation zone (i.e., in the basal root region). No GUS expression was detectable in lines carrying P−600-GUS. These trends were also observed in 11-day and 21-day-old plants (Supplemental Figure S2). Interestingly, GUS expression in lines carrying P−700-GUS appeared where the lateral roots started to emerge (Supplemental Figure S2A). This suggests that the difference between the upper and lower parts of the differentiation zone reflects whether or not lateral roots can emerge in that area. In addition, as in the main root, the expression pattern in the lateral roots was also regulated by the promoter region (Supplemental Figures S2C and S2D). These observations suggest that the AtNIP5;1 promoter region contains different elements for root-cell-type-specific AtNIP5;1 expression, including expression in the root cap and elongation zone, and in the distal and basal regions of the differentiation zone.

Identification of AtNIP5;1 promoter regions required for cell-type-specific expression in roots To investigate AtNIP5;1 expression specific to the root cap and elongation zone, we made transgenic plants with promoters starting before or after position −880, namely at positions −900, −863, or −819 of the AtNIP5;1 promoter. These regions were fused with GUS (referred to as P−900-GUS, P−863-GUS, and P−819-GUS; Figure 3a). We made constructs with deletions at roughly 20-bp intervals. GUS expression was observed in the root cap in lines carrying P−900-GUS and P−880-GUS, but not in those carrying P−863-GUS or P−819-GUS (Figure 3b). In addition, in lines carrying P−880-GUS the expression was observed in the elongation zone (Figure 2b), whereas in lines carrying P−863-GUS and P−819-GUS it was observed in and above the region where the xylem appears (Figures 3c, d). These data indicate that the sequence from −880 to −863 is necessary for AtNIP5;1 expression specific to the root cap and elongation zone. Next, to examine AtNIP5;1 expression specific to the differentiation zone, we made transgenic plants with promoters starting before or after the boundaries of the −747 to −722 bp and −661 to −621 bp regions.

Discussion Here we show that three distinct promoter regions are required for cell-type-specific AtNIP5;1 expression in roots, namely in the root cap and elongation zone (−880 to −863 bp from the translation start site), the distal part of the differentiation zone (−747 to −722 bp), and the basal part of the differentiation zone (−661 to −621 bp) (Figure 5). B-unresponsive transgenic plants carrying a partial deletion of the 5′-UTR had higher GUS activity in the root cap and elongation zone than in the other root zones under both B conditions (Figure 1). Thus, whereas B-dependent AtNIP5;1 expression is regulated by ribosome stalling at AUGUAA in the 5′-UTR, 19 cell-type-specific expression in roots is governed by distinct promoter regions. For regulatory elements in the AtNIP5;1 promoter, we searched the Plant Promoter Database (Ppbd) 23 and identified three such elements, at −844 to −837 bp, −768 to −760 bp, and −646 to −639 bp. These positions are very close or almost identical to those we identified in this study. The −844 to −837 bp sequence is a W-box motif recognized by WRKY transcription factors in response to salicylic acid. 24 The −767 to −761 bp sequence is the site YAACKG or CNGTTR recognized by MYB transcription factors in response to dehydration.
25,26 The W-box motif and the MYB recognition site are slightly out of alignment with the regions we identified, and their functions contribute to stress responses. Thus, they are unlikely to be involved in root-cell-type-specific expression, and other transcription factors might be involved. In Ppbd, the −646 to −639 bp sequence is listed as a regulatory element whose function is unknown. This sequence lies within our identified region and may be essential for specific expression in the basal part of the differentiation zone.

We compared the promoter regions of AtNIP5;1 and its rice ortholog OsNIP3;1, but the similarity was low, and the root-specific sequences found in AtNIP5;1 were not detected in OsNIP3;1. The latter is also a boric acid facilitator and is expressed in both shoots and roots. 16 In A. thaliana, AtNIP6;1, a paralog of AtNIP5;1, is expressed mainly in shoots, especially in the nodal region. 14 The expression pattern of OsNIP3;1 corresponds to the combined patterns of AtNIP5;1 and AtNIP6;1. Rice, sorghum, and maize have only AtNIP5;1 orthologs, whereas soybean, citrus, grape, and poplar have orthologs of both AtNIP5;1 and AtNIP6;1. 27 It seems likely that NIP5;1 and NIP6;1 diversified during evolution, and that their promoter regions were altered and gained tissue-specific expression. We hypothesize that plants have evolved to adjust to a variety of environments through the species-specific evolution of NIP genes. It seems likely that root-specific expression of AtNIP5;1 became regulated differently in the three root portions, achieving further functional differentiation to fulfill the demand for B in each portion of the root.

The physiological roles of AtNIP5;1 may differ in different portions of the root. One possible role for AtNIP5;1 is B transport for RG-II-B dimer formation for cell expansion, and another is B transport to the shoots. In radish root caps, RG-II is present mainly on the inner surface of the primary cell wall, very close to the plasma membrane. 28 AtBOR2, an efflux-type B transporter, is involved in RG-II-B dimer formation and is expressed in the root cap and elongation zone. 29 According to the coexpression database Atted II, 30 AtNIP5;1 is coexpressed with AtBOR2. Given that B is relatively immobile in the phloem, 31 B required for cell expansion must be transported from the root cap or elongation zone; thus, region-specific AtNIP5;1 expression would be necessary to fulfill the B requirement for RG-II-B dimer formation. AtNIP5;1 expressed in the differentiation zone may be involved in B transport to the xylem and shoots. We found that specific AtNIP5;1 expression in the basal and distal parts of the root differentiation zone is regulated by different promoter regions. The basal part of the roots is attached to the leaves and is where the lateral roots emerge. The presence or absence of B transported from the lateral roots may be responsible for the differential expression between the regions of the differentiation zone. Since the requirements for B differ among root regions, the possible need to control the basal level of AtNIP5;1 expression in each root tissue may have led to the development of regulation by different promoter regions.

In conclusion, our analysis indicates that root-specific expression of A. thaliana AtNIP5;1 is governed by three distinct root-cell-type-specific elements, which are responsible for expression in the root cap and elongation zone, in the distal part of the differentiation zone, and in its basal part.
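The deletion-mapping inference used throughout this analysis (a promoter interval is implicated in a given cell type when expression in that cell type is lost between two consecutive 5′-deletion constructs) can be written down compactly. Below is a minimal sketch of that logic; the observations for the −880 and −863 constructs are taken from the text, while those for the −747, −722, −661, and −621 constructs are assumptions consistent with the regions reported in the abstract, and all names are illustrative.

```python
# Map each 5'-deletion construct (promoter end, bp from translation start, +1)
# to the root domains where GUS expression was observed.
observations = {
    -880: {"root cap", "elongation zone", "distal differentiation", "basal differentiation"},
    -863: {"distal differentiation", "basal differentiation"},  # from the text
    -747: {"distal differentiation", "basal differentiation"},  # assumed from the abstract
    -722: {"basal differentiation"},                            # assumed from the abstract
    -661: {"basal differentiation"},                            # assumed from the abstract
    -621: set(),                                                # no detectable expression
}

def required_regions(obs: dict) -> dict:
    """For each expression domain, report the promoter interval between the
    consecutive constructs across which expression in that domain is lost."""
    regions = {}
    ends = sorted(obs)  # ascending, i.e., longest promoter (-880) first
    for longer, shorter in zip(ends, ends[1:]):
        for domain in obs[longer] - obs[shorter]:
            regions[domain] = (longer, shorter)
    return regions

for domain, (a, b) in sorted(required_regions(observations).items()):
    print(f"{domain}: element required between {a} and {b} bp")
```

With these inputs, the sketch reproduces the paper's three regions: root cap/elongation zone between −880 and −863 bp, distal differentiation between −747 and −722 bp, and basal differentiation between −661 and −621 bp.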
The First Metal Complexes of 4,6-diamino-1-hydro-5-hydroxy-pyrimidine-2-thione: Preparation, Physical and Spectroscopic Studies, and Preliminary Antimicrobial Properties The new complexes [M2O5L2(H2O)2] · H2O (M = Mo, 1; M = W, 2), [RuL2(H2O)2] · H2O (3), [ML3] · xH2O (M = Rh, x = 2, 4; M = Ir, x = 1, 5), [RhL2(PPh3)2](ClO4) · 2H2O (6), [PdL2] · 2H2O (7), [PdL(phen)]Cl · H2O (8), [ReOL2(PPh3)]Cl (9) and [UO2L2] (10) are reported, where LH is 4,6-diamino-1-hydro-5-hydroxy-pyrimidine-2-thione. The complexes were characterized by elemental analyses, physical techniques (molar conductivity, room-temperature magnetic susceptibility), and spectroscopic (IR, Raman, UV/VIS/ligand field, NMR, mass) methods. The ligand L− is in its thione form and behaves as a bidentate chelate with the deprotonated (hydroxyl) oxygen and the nitrogen of one amino group as donor atoms. Oxo-bridged dinuclear (1, 2) and various mononuclear (3-10) structures are assigned to the complexes in the solid state. The metal ion coordination geometries are octahedral (1-6, 9, 10) or square planar (7, 8). The free ligand LH and complexes 1, 4, 7, and 8 were assayed in vitro for antimicrobial activity against two bacterial and two fungal cultures.

2-Mercaptopyrimidine nucleotides have been detected in Escherichia coli sRNA and yeast tRNA; it has been found that they inhibit the synthesis of tRNA, thus acting as antitumour and antithyroid agents [1]. A similar inhibitory effect has been observed for pyrimidine-2-thione (I in Scheme 1) and its derivatives, which also show pronounced in vitro bacteriostatic activity [1]. Metal complexes of pyrimidine-2-thione, or its pyrimidine-2-thiol tautomeric form [1,2], and of its amino [2,3] or hydroxy [4][5][6] derivatives have been prepared and studied (for representative ligands see Scheme 1). Such complexes exhibit rich structural chemistry, and interesting thermal, magnetic, sorptive, and biological properties. However, the coordination chemistry of ligands based on the 2-mercaptopyrimidine moiety and containing both hydroxy and amino substituents on the pyrimidine ring is completely unknown. We now describe here the preparation and characterization of the first metal complexes of 4,6-diamino-5-hydroxy-2-mercaptopyrimidine (LH, Scheme 2). We also report the antimicrobial activity of the free ligand and four representative complexes against two bacteria and two fungi. This work can be considered a continuation of our interest in the coordination chemistry of derivatized pyrimidines [7].

[…] in H2O/EtOH. [ReOCl3(PPh3)2] was synthesized as previously reported [8]. DMSO used in conductivity measurements was dried over molecular sieves. The DMSO-d6 proton spectra (NMR) were referenced using TMS. Warning: perchlorate salts are potentially explosive; such compounds should be used in small quantities and treated with the utmost care at all times. Elemental analyses (C, H, N, S) were performed by the University of Ioannina (Greece) Microanalytical Unit with an EA 1108 Carlo-Erba analyzer. The water content of the complexes was also confirmed by TG/DTG measurements performed on a Shimadzu TGA-50 thermogravimetric analyzer. IR spectra were recorded on a Matson 5000 FT-IR spectrometer with samples prepared as KBr pellets. Far-IR spectra were recorded on a Bruker IFS 113v FT spectrometer with samples prepared as polyethylene pellets. FT-Raman data were collected on a Bruker IFS 66v interferometer with an FRA 106 Raman module, a CW Nd:YAG laser source, and a liquid-nitrogen-cooled Ge detector.
Solution electronic spectra were recorded using a Unicam UV 2-100 spectrophotometer. Solid-state (diffuse reflectance, DRS) electronic spectra in the 300-800 nm range were recorded on a Varian Cary 3 spectrometer equipped with an integration sphere. Room-temperature magnetic susceptibilities were measured; diamagnetic corrections were estimated using Pascal's constants.

Preparation of the complexes An aqueous solution (5 cm3) of (NH4)2[MoO4] (0.24 g, 1.0 mmol) was added to a solution of LH (0.16 g, 1.0 mmol) in EtOH (25 cm3). The obtained slurry was heated, and the resulting orange solution was refluxed for 4 hours, during which time an orange precipitate formed. The solid was collected by filtration, washed with ethanol (2 cm3) and diethyl ether (2 × 5 cm3), and dried in vacuo. The yield was 35% (based on the metal). Elemental analytical calculation for C8 …

Solid RuCl3 · 3H2O (0.12 g, 0.46 mmol) was added to a solution of NaO2CMe (0.62 g, 7.5 mmol) in water (30 cm3). Solid LH (0.24 g, 1.5 mmol) was then added and the resulting reaction mixture was refluxed for 12 hours. The deep brown solid formed was collected by filtration while the reaction mixture was hot, washed with hot water, and dried in vacuo. The yield was 30% (based on the metal). Elemental analytical calculation for C8 …

A hot ethanolic solution (20 cm3) of LH (0.25 g, 1.6 mmol) was added to a solution of RhCl3 · 3H2O (0.21 g, 0.8 mmol) in 6 M HClO4 (15 cm3). The resulting orange solution was refluxed for 4 hours, and to this was added a solution of PPh3 (0.43 g, 1.6 mmol) in hot ethanol (15 cm3). The new solution was refluxed for a further 3 hours and filtered, and its volume was decreased in vacuo to give a red-brown solid. The solid was collected by filtration, washed with hot water (2 × 2 cm3) and hot ethanol (2 × 3 cm3), and dried in vacuo. The yield was 25% (based on the metal). Elemental analytical calculation for C44 …

To a stirred slurry of LH (0.16 g, 1.0 mmol) in methanol (15 cm3) was added an aqueous solution (15 cm3) of K2[PdCl4] (0.16 g, 0.5 mmol). The resulting suspension was stirred at 40 °C for 60 hours, and the brown solid formed was collected by filtration, washed with water (5 × 3 cm3) and cold methanol (2 × 5 cm3), and dried in air. The yield (based on the metal) was 50%. Elemental analytical calculation for C8 …

[PdL(phen)]Cl · H2O (8) To a stirred yellow slurry of [PdCl2(phen)] (0.18 g, 0.5 mmol) in a methanol/benzene solvent mixture (15 cm3, 3:2 v/v) was added a solution of KOH (0.055 g, 1.0 mmol) in methanol (15 cm3). Solid LH (0.08 g, 0.5 mmol) was added to the reaction mixture and soon dissolved. The solution was filtered and stirred for 48 hours at room temperature. During this time, a brown precipitate formed, which was collected by filtration, washed with water (1 cm3) and methanol (2 × 3 cm3), and dried in air. The yield was 40% (based on the ligand). Elemental analytical calculation for C8 …

[UO2L2] (10) Solid LH (0.08 g, 0.5 mmol) was added to a stirred solution of [UO2(NO3)2] · 6H2O (0.25 g, 0.5 mmol) in methanol (10 cm3). The solid soon dissolved. The resulting yellow solution was filtered and refluxed for 4 hours, during which time a red microcrystalline solid precipitated. The product was collected by filtration, washed with methanol (5 cm3) and diethyl ether (2 × 5 cm3), and dried in vacuo. The yield was 55% (based on the metal). Elemental analytical calculation for C8 …

Antimicrobial activity The bacterial strains (S. aureus and P.
aeruginosa) were grown on Nutrient agar slants, and the fungal strains (A. niger and C. albicans) were grown on Sabouraud dextrose agar slants. The viable bacterial cells were swabbed onto Nutrient agar plates, and the fungal spores onto Sabouraud dextrose agar plates. The free ligand and complexes 1, 4, and 7 were dissolved in DMSO, while complex 8 was dissolved in H2O, at concentrations of 10, 20, 50, and 100 mg/mL. The blank was DMSO in saline buffer. The bacterial and fungal plates were incubated for 36 and 72 hours, respectively, and the activity of the compounds was estimated by measuring the diameter of the inhibition zone (the zone affected by the compounds) around the application site in the agar. The incubation temperature was 27 ± 0.5 °C.

Synthetic comments and physical characterization The preparative reactions for selected complexes can be represented by stoichiometric equations (1)-(7); no attempts were made to optimize the yields. The metal is reduced (RuIII → RuII, IrIV → IrIII) during the preparation of complexes 3 and 5, although the reactions are performed in air. The redox reaction may be facilitated by the reducing character of LH, with the products of the oxidation of the ligand remaining in solution. Thus, LH possibly plays two roles in these reactions, that of the ligand and that of the reducing agent. It is well known that Ru(III) can undergo reduction reactions and that the [IrIVCl6]2− ion is a convenient one-electron oxidant [9]. The use of a base (KOH) in the preparation of 8 is necessary to obtain the complex in pure form; otherwise, the aqueous HCl produced decomposes the compound. Complexes 1-5, 7 and 10 are non-electrolytes in DMSO [10]. Complexes 7 and 10 exhibit slightly increased molar conductivity values in DMSO. Since DMSO is a good donor solvent, this may be due to the partial displacement of one L− ligand by two DMSO molecules. Assuming an equilibrium between the neutral complex and the resulting cationic complex, this displacement changes the electrolyte type of the compound, explaining the increased ΛM value [10]. From the molar conductivities in DMSO (complexes 6 and 9) and DMF (complex 8), it is concluded that compounds 6, 8, and 9 behave as 1:1 electrolytes, supporting their ionic formulation [10]. All the complexes are diamagnetic, as expected [9]. It should be mentioned at this point that the π bonding in the {ReV=O}3+ unit of 9 causes sufficient splitting of the t2g (in Oh) set (5dxz, 5dyz, 5dxy) that diamagnetism occurs through the (5dxy)2 configuration. Complexes 1-10 are microcrystalline or powder-like, stable in the normal laboratory atmosphere, and soluble only in DMF and DMSO. We had hoped to structurally characterize one or two complexes by single-crystal X-ray crystallography (working mainly with DMF or DMF/MeCN), but were thwarted on numerous occasions by twinning problems or a lack of single crystals. Thus, the characterization of the complexes is based on spectroscopic methods.

Electronic spectra The band at 335 nm in the DRS spectrum of 1 is assigned to an O2− → MoVI p-d LMCT transition and is characteristic of the {MoO2}2+ moiety [11] in octahedral complexes. The transition appears at 337 nm as a shoulder in solution (DMSO). The DRS spectrum of 3 is indicative of its low-spin octahedral structure.
The ground term is 1A1g, and the two spin-allowed transitions to 1T1g and 1T2g are observed at 565 and 420 nm, respectively [12]; the corresponding bands in DMSO are at 560 and 430 nm. The DRS spectra of the Rh(III) complexes 4 and 6 both exhibit bands at ∼470 and ∼380 nm; the spectra resemble those of other six-coordinate Rh(III) compounds, and the bands are assigned as transitions from the 1A1g ground state to the 1T1g and 1T2g upper states in octahedral symmetry, in decreasing order of wavelength [12]. The lower-wavelength band may also have charge-transfer character. Both complexes exhibit an additional band in the blue region of the spectrum (∼520 nm), which is responsible for their red-brown colors; a possible origin of this band is the spin-forbidden singlet-triplet transition 1A1g → 3T2g [12]. The spectrum of the Ir(III) complex 5 shows two bands, at 380 and 335 nm, which have a similar interpretation; the 1A1g → 3T2g transition is not observed. A weak shoulder in the spectrum of 9 is assigned to the 3T1g(F) → 3T2g transition in a d2 octahedral environment, while an intense band at 375 nm most probably has an LMCT origin [12]. The ligand-field spectra of 7 and 8 are typical of a square planar environment around PdII with mixed N,O-ligation; the bands at 480, 375, and 330 nm are assigned [12] to the 1A1g → 1A2g, 1A1g → 1Eg, and 1A1g → 1B1g transitions, respectively, under D4h symmetry. The spectra in DMSO exhibit only two bands, at 480 and 330 nm.

NMR studies Diagnostic 1H NMR assignments (in DMSO-d6) for representative complexes are presented in Table 1. The study was based on comparison with the data obtained for diamagnetic complexes with similar ligands [7,13,14]. In all the spectra studied, the integration ratio of the signals is consistent with the assignments. The spectrum of LH exhibits two singlets at δ 6.07 and 6.18, assigned to the -N(4)H2/-N(6)H2 amino hydrogens, respectively (for the numbering scheme see Scheme 2), and two relatively broad singlets at δ 7.43 and 9.13 due to the amide and hydroxyl protons, -N(1)H- and -O(5)H, respectively. The appearance of these four peaks is consistent with the exclusive presence of the thione form of LH (Scheme 2) in solution. The proton of the hydroxyl group is not observed in the spectra of the complexes, confirming its deprotonation and coordination to the metal ions. In the spectra of 1, 3, 4, and 6-8, the -N(1)H- signal undergoes only a marginal shift, indicating the noninvolvement of this group in coordination; a relatively large downfield shift would be expected if coordination had occurred. In the same spectra, two signals appear for the -NH2 protons, as expected. The most pronounced variation in chemical shift is the downfield shift of one of these signals. Since more specific assignments of these two signals seem impossible, it is difficult to conclude which amino nitrogen is coordinated. NMR evidence for the presence of thione-thiol tautomerism in the metal complexes in solution was not found. The 1H NMR spectrum of 4 confirms that the three N,O-bidentate (vide infra) ligands are equivalent (C3 symmetry) and, therefore, that the complex has the fac stereochemistry [15]. The 31P{1H} NMR spectrum of the Re(V) complex 9 in DMSO-d6 consists of a sharp singlet at δ −16.89, a value typical for PPh3-containing oxorhenium(V) species [17].

Vibrational spectra Tentative assignments of selected IR ligand bands for complexes 1-10 and free LH are listed in Table 2.
The assignments have been made by studying literature reports [3,13,14], by comparing the spectrum of LH with the spectra of the complexes, and, in a few cases, by performing deuterium isotopic substitution experiments. As a general remark, we must emphasize that some stretching and deformation modes are coupled, so the proposed assignments should be regarded as approximate descriptions of the vibrations. In the v(OH) water region, the spectra of complexes 1-3 show one medium-intensity band at ∼3420 cm−1, attributed to the presence of coordinated water [13]. The same spectra exhibit, in addition to the relatively sharp band of coordinated water, a weaker broad continuous absorption covering the 3400-3200 cm−1 region; this is apparently due to the simultaneous presence of crystal and coordinated water in these complexes [14]. In the spectra of 4-8, a medium broad absorption indicates the presence of exclusively crystal (lattice) water. The absence of an IR or Raman band at ∼2600 cm−1 in the spectrum of free LH suggests that the ligand exists in its thione form (see Scheme 2) [18]. This is corroborated by the appearance of the medium v(C=S) band at 1177 cm−1 (this vibration appears as a strong peak at 1160 cm−1 in the Raman spectrum) and the strong IR v(N-H) band at 2970 cm−1 (this vibration appears as a medium peak at ∼3000 cm−1 in the Raman spectrum); the broadness and low frequency of the latter IR band are both indicative of the involvement of the -NH- group in strong hydrogen bonding. The medium IR band at 3305 cm−1 in the spectrum of free LH is assigned to the v(OH) vibration. This band does not appear in the spectra of the complexes, indicating deprotonation of the -OH group and suggesting coordination of the resulting, negatively charged oxygen atom. The absence of large systematic shifts of the δ(N-H), δ(NH), v(C=N), v(C2-N1)/v(C2-N3), and v(C=S) bands in the spectra of the complexes implies that there is no interaction between the ring nitrogen atoms or the exocyclic sulfur atom and the metal ions. The vas(NH2) and vs(NH2) bands are doubled in the spectra of the complexes. One band of each mode appears at almost the same wavenumber as the corresponding band in the spectrum of free LH, whereas the other band of each pair is significantly shifted to lower wavenumbers. This is strong evidence for the presence of one coordinated and one "free" (i.e., uncoordinated) amino group per L− in the complexes [7]. The presence of coordinated PPh3 groups in 6 and 9 is manifested by the strong IR bands at ∼1100 and ∼750 cm−1, attributed to the v(P-C) and δ(CCH) vibrations, respectively [17]; the former band overlaps with the ClO4− stretching vibration in the spectrum of the Rh(III) complex 6. In the spectrum of 8, the bands at 1627, 1591, 1510, 1485, and 1423 cm−1 are due to the phen stretching vibrations [16]; these bands are at higher wavenumbers than in free phen, indicating chelation. The bands at 854, 841, 743, and 725 cm−1 are assigned to the γ(CH) vibrations of the coordinated phen [16]. The vibrational spectra of the inorganic "parts" of complexes 1, 2, 6, 9, and 10 are also diagnostic. The IR spectrum of 6 exhibits a strong band at ∼1100 cm−1 and a medium band at 624 cm−1, due to the v3(F2) and v4(F2) modes of the uncoordinated Td ClO4− ion [19], respectively, the former also having v(P-C) character [17].
In the 1000-750 cm−1 region, the spectra of 1 and 2 show bands characteristic of the cis-MO22+ units and the {O2M-O-MO2}2+ core (M = Mo, W) [20,21]. The IR bands at 930 and 912 cm−1 in 1 are assigned to the vs(MoO2) and vas(MoO2) modes, respectively [19,20]; the corresponding Raman bands appear at 910 and 896 cm−1. As expected [19], the symmetric mode is weak in the IR spectrum and strong in the Raman spectrum, while the opposite applies to the asymmetric mode. The appearance of two stretching bands is indicative of the cis configuration [19]. The strong IR band at 745 cm−1 is assigned to the vas(Mo-O-Mo) mode [20], indicating the presence of a μ-O2− group. The vs(WO2), vas(WO2), and vas(W-O-W) bands appear at 945, 922, and 755 cm−1, respectively, in the IR spectrum of complex 2 [19,21]; the vs(WO2) and vas(WO2) Raman bands are at 940 and 917 cm−1, respectively. The vs(WO2) and vas(WO2) modes are at higher wavenumbers than those of the analogous Mo(VI) complex 1, suggesting that the cis-WO22+ group has some "triple" bond character [21]. In the spectra of 9, the band attributed to v(Re=O) appears at 956 (IR) and 968 (Raman) cm−1 [17,19]. The IR spectrum of the uranyl complex 10 exhibits only one U=O stretching band, vas(UO2), at 940 cm−1 (not observed in the Raman spectrum), indicating a linear trans-dioxo configuration [19]. The vs(UO2) mode appears as a strong Raman peak at 905 cm−1 and, as expected, the corresponding IR band is very weak. The bands at 345 and 298 cm−1 in the far-IR spectrum of 7 are assigned to the v(Pd-NH2) and v(Pd-O) vibrations, respectively. The appearance of one band for each mode (B3u and B2u under D2h) is consistent with a trans structure [19].

Antimicrobial activity studies The free ligand LH and its complexes 1, 4, 7, and 8 were assayed in vitro for antimicrobial activity against two bacterial (S. aureus and P. aeruginosa) and two fungal (A. niger and C. albicans) cultures. The hot plate diffusion method was adopted for the activity measurements [22]. The results are listed in Tables 3 and 4. In general, the Pd(II) complexes 7 and 8 were found to have higher efficacy than 1, 4, and LH at the concentrations measured. The water-soluble complex 8 is the most active against the pathogens studied. It is remarkable that the antifungal activity of 8 is comparable with, or even better than, that of the antifungal drug nystatin; this may be due to the simultaneous presence of phen and L− in the complex. The activity of the Pd(II) complexes 7 and 8 is tentatively attributed to their inhibition of DNA replication (by interacting with enzyme prosthetic groups and altering the microbial metabolism) and to their ability to form hydrogen bonds with the cell wall and cell constituents [23]. The weaker activity of 4 is noteworthy; the reason for this is not clear.

CONCLUSIONS The M/LH general reaction system fulfilled its promise as a source of interesting complexes. From the overall evidence presented above, it seems that the ligand L− behaves as a bidentate chelate in all the prepared complexes, with the deprotonated oxygen and, most probably, the amino nitrogen at position 6 of the pyrimidine ring as the donor atoms (see Scheme 3). However, the participation of the amino nitrogen at position 4 of the ring cannot be ruled out.
The nonparticipation of the sulfur atom in coordination in complexes 7 and 8 may be seen as unusual given the soft character of Pd(II) in the context of the HSAB concept. The chelate effect (a stable chelating ring with the participation of the sulfur atom cannot be formed due to the geometry of L − ) seems to govern the thermodynamic stability of these complexes. The proposed gross schematic structures for 1-10 are shown in Figure 1. Since single-crystal X-ray crystallographic studies are not available, a few structural features (e.g., the symmetric structures of 1-3, 6, 7, and 10) remain tentative. The metal ions adopt octahedral (1-6, 9, 10) or square planar (7, 8) stereochemistries. Finally, complexes 1, 4, 7, and 8 are welcome new additions to the growing family of metal complexes with antimicrobial activity. The results described in this report represent the initial study of the coordination chemistry of LH and the biological activity of its complexes. Further studies with 3d-metal ions are in progress.
5,425
2009-03-23T00:00:00.000
[ "Chemistry" ]
Neutrino predictions from a left-right symmetric flavored extension of the standard model We propose a left-right symmetric electroweak extension of the Standard Model based on the $\Delta \left( 27\right)$ family symmetry. The masses of all electrically charged Standard Model fermions lighter than the top quark are induced by a Universal Seesaw mechanism mediated by exotic fermions. The top quark is the only Standard Model fermion to get mass directly from a tree level renormalizable Yukawa interaction, while neutrinos are unique in that they get calculable radiative masses through a low-scale seesaw mechanism. The scheme has generalized $\mu-\tau$ symmetry and leads to a restricted range of neutrino oscillations parameters, with a nonzero neutrinoless double beta decay amplitude lying at the upper ranges generically associated to normal and inverted neutrino mass ordering. I. INTRODUCTION The SU(3) C ⊗ SU(2) L ⊗ U(1) Y gauge theory provides a remarkable description of the interactions of quarks and leptons as mediated by intermediate vector bosons associated to the Standard Model gauge structure. However, it is well-known to suffer from a number of drawbacks. Most noticeably, it fails to account for neutrino masses, needed to describe the current oscillation data [1]. Likewise, it does not provide a dynamical understanding of the origin of parity violation in the weak interaction. Last, but not least, it also fails in providing an understanding of charged lepton and quark mass hierarchies and mixing angles from first principles. Left-right symmetric electroweak extensions of the Weinberg-Salam theory address the origin of parity violation [2,3], while models based on non-Abelian flavor symmetries [4] address the flavor issues [5,6]. Combining these features is a desirable step forward. Indeed, a predictive Pati-Salam theory of fermion masses and mixing combining both approaches has been suggested recently [7]. In this paper we propose a less restrictive left-right symmetric electroweak extension of the Standard Model based on the SU(3) ⊗ SU(2) L ⊗ SU(2) R ⊗ U(1) B−L gauge group and the ∆ (27) family symmetry. Of the Standard Model fermions only the top quark acquires mass through a tree level renormalizable Yukawa interaction. Exotic charged fermions acquire mass from their corresponding tree level mass terms, while gauge singlet fermions can also have gauge-invariant tree level Majorana mass terms. The masses for the other electrically charged Standard Model fermions, namely quarks lighter than the top, as well as charged leptons, are all induced by a Universal Seesaw mechanism mediated by the exotic fermions. The mass hierarchies as well as the quark mixing angles arise from the spontaneous breaking of the ∆ (27) ⊗ Z 6 ⊗ Z 12 discrete family group, and the radiative nature of the inverse seesaw mechanism is guaranteed by spontaneously broken Z 4 and Z 12 symmetries, with Z 12 spontaneously broken down to a preserved Z 2 symmetry. The Cabibbo mixing arises from the up-type quark sector, whereas the down-type quark sector contributes to the remaining CKM mixing angles. On the other hand, the lepton mixing matrix receives its main contributions from the light active neutrino mass matrix, while the Standard Model charged lepton mass matrix provides Cabibbo-like corrections to these parameters. Finally, the masses for the light active neutrinos emerge from a low-scale inverse/linear seesaw mechanism [8][9][10][11][12] with one-loop-induced seed mass parameters [13,14]. II. 
THE MODEL The model is based on the left-right gauge symmetry SU(3) ⊗ SU(2) L ⊗ SU(2) R ⊗ U(1) B−L supplemented by the discrete ∆ (27) ⊗ Z 4 ⊗ Z 6 ⊗ Z 12 family group. The particle content and gauge quantum numbers are summarized in table I, while the transformation properties of the fields under the discrete symmetries are presented in tables II, III and IV. Here ω = e Notice that the fermion sector of the original left-right symmetric model has been extended with two vectorlike up-type quarks T k , k = 1, 2, three vectorlike down type quarks B i , three vectorlike charged leptons E i and six neutral Majorana singlets S i , Ω i , with i = 1, 2, 3. The role of the new exotic vectorlike fermions is to generate the masses for Standard Model charged fermions from a Universal Seesaw mechanism. Neutrino masses are in turn produced by an inverse seesaw mechanism, triggered by a one loop generated mass scale [13,14] from the interplay of the gauge singlet fermions S i and Ω i . Let us note that the neutrino Yukawa terms given in Eq. (6) have accidental U symmetries described in Table V. These are spontaneously broken by the VEVs of the scalar fields charged under these symmetries. As a result, massless Goldstone bosons are expected to arise from the spontaneous breakdown of these symmetries. However, these can acquire masses from scalar interactions like λρ 2 ξ 2 and M χ † 2L Φχ 2R , invariant under the symmetry group G of our model, but not under the accidental U symmetries, respectively. We now explain the different group factors of the model. In the present model, the ∆ (27) group is responsible for the generation of a neutrino mass matrix texture compatible with the experimentally observed deviation of the tribimaximal mixing pattern. In addition it allows for renormalizable Yukawa terms only for the top quark, the gauge singlet Majorana fermions Ω i (i = 1, 2, 3) and tree level mass terms for the exotic charged fermions. This allows for their masses to appear at the tree level. Let us note that the ∆(27) discrete group is a non trivial group of the type ∆(3n 2 ), isomorphic to the semi-direct product group (Z 3 × Z 3 ) × Z 3 [4]. This group was proposed for the first time in Ref. [15] and it has been employed in order to construct the Pati-Salam electroweak extension proposed in [7]. This group has also been used in multiscalar singlet models [16], multi-Higgs doublet models [17,18], Higgs triplet models [19] SO(10) models [20][21][22], warped extra dimensional models [23], and models based on the SU (3) C ⊗SU (3) L ⊗U (1) X gauge symmetry [24][25][26]. The auxiliary Z 6 and Z 12 symmetries select the allowed entries of the charged fermion mass matrices and shape their hierarchical structure, so as to get realistic SM charged lepton masses as well as quark mixing out of order one parameters. We assume that the Z 12 symmetry is broken down to a preserved Z 2 symmetry, which allows the implementation of an inverse/linear seesaw mechanism [8][9][10][11][12] for the generation of the light active neutrino masses. This is triggered by one-loop-induced seed mass parameters, in a manner analogous to the models discussed in [13,14]. The spontaneously broken Z 4 symmetry also ensures the radiative nature of the inverse seessaw mechanism. This group was previously used in some other flavor models and proved to be helpful, in particular, in the context of Grand Unification [27][28][29], models with extended SU (3) C ⊗ SU (3) L ⊗ U (1) X gauge symmetry [30,31] and warped extradimensional models [32]. 
It is worth mentioning that one or both of the Z 6 and Z 12 discrete groups were previously used in some other flavor models and proved to be useful in describing the SM fermion mass and mixing pattern, in particular in the context of three Higgs doublet models [33], models with extended SU (3) C ⊗ SU (3) L ⊗ U (1) X gauge symmetry [14,30,34], Grand Unified theories [28] and models with strongly coupled heavy vector resonances [35]. Quark masses and mixing parameters are modeled with the help of the scalar singlets σ and η. We assume that these scalars acquire vacuum expectation values of order λΛ, where λ = 0.225 is the Cabibbo angle and Λ is the cutoff of our model. Consequently, we set the VEVs of the scalar fields to satisfy the following hierarchy: Here v = 246 GeV is the electroweak symmetry breaking scale and v kR ∼ few TeV (k = 1, 2) the scale of breaking of the left-right symmetry. The resulting mixing angles of ξ with ρ, η and τ are very tiny since they are suppressed by the ratios of their VEVs, which is a consequence of the method of recursive expansion proposed in Ref. [36]. Thus, the scalar potential for ξ can be studied independently from the corresponding one for ρ, η, τ . As shown in detail in Ref. [7], the following VEV alignments for the ∆(27) scalar triplets are consistent with the scalar potential minimization equations for a large region of parameter space: Summarizing, the full symmetry of the model exhibits the following spontaneous breaking pattern A. Charged lepton sector From the charged lepton Yukawa terms in Eq. (5), we find that the mass matrix containing the charged leptons in the basis (l 1L , l 2L , l 3L , E 1L , E 2L , E 3L ) versus (l 1R , l 2R , l 3R , E 1R , E 2R , E 3R ) takes the form: Given that the exotic charged lepton masses m Ei (i = 1, 2, 3) are much larger than v L and v R , it follows that the SM charged leptons get their masses from an Universal seesaw mechanism mediated by the three charged exotic leptons E i (i = 1, 2, 3). Then, the SM charged lepton mass matrix becomes where the effective Yukawas e ij are naturally expected to be of order one. The Standard Model charged lepton mass matrix is diagonalized by a unitary matrix through . In order to illustrate how the charged lepton mass spectrum arises from Eq. (10) B. Neutrino sector From the neutrino Yukawa interactions in Eq. (6), we obtain the following mass terms: where the neutrino mass matrix M ν is given in block form as: with m R = m Re ϕ and m I = m Im ϕ and the loop function [37]: The one-loop Feynman diagrams contributing to the entries of the Majorana neutrino mass submatrix µ are shown in Fig. 1. The splitting between the masses m Re ϕ and m Im ϕ arises from the term κ For the sake of simplicity, we assume that the singlet scalar field ϕ is heavier than the right-handed Majorana neutrinos Ω i (i = 1, 2, 3), so that we can restrict to the scenario and m 2 R − m 2 I m 2 R + m 2 I , for which the submatrix µ takes the form The structure of the resulting neutrino mass is a particular case of that in Ref. [38] (see below). Besides the three active Majorana neutrinos the physical states include the six heavy exotic neutrinos. After seesaw blockdiagonalization [39] we obtain where we have simplified our analysis setting γ i = κ i (i = 1, 2, 3). Here M (1) ν corresponds to the effective active neutrino mass matrix resulting from seesaw diagonalization, whereas M corresponds to the inverse seesaw piece [8,9], while the latter comes from the linear seesaw contribution [10][11][12]. 
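Since the explicit mass-matrix blocks are not reproduced in this text, it may help to recall the generic structure of the two contributions just mentioned. The following LaTeX sketch uses standard, textbook-level notation (m_D for the Dirac block, M for the heavy block, μ for the small lepton-number-violating block, and m_L for the linear-seesaw coupling); these symbols are illustrative and are not necessarily the exact matrices of this model.

```latex
% Generic inverse-plus-linear seesaw structure (illustrative notation)
M_{\nu} \;\simeq\;
\underbrace{m_{D}\,(M^{T})^{-1}\,\mu\,M^{-1}\,m_{D}^{T}}_{\text{inverse seesaw}}
\;+\;
\underbrace{m_{D}\,M^{-1}\,m_{L}^{T} \;+\; m_{L}\,(M^{T})^{-1}\,m_{D}^{T}}_{\text{linear seesaw}} .
```

In this generic form the smallness of the light neutrino masses is controlled by μ and m_L rather than by 1/M alone, which is what allows the seesaw scale to be low.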
Finally, the light active neutrino mass spectrum is in terms of a common mass scale m 0 C. Lepton mixing matrix The lepton mixing matrix is thus given by where we take V l to be approximated as with η 1 and η 2 of the same order as the Cabibbo parameter λ, as indicated by our estimate in Eq. (12). In the fully "symmetrical" presentation of the lepton mixing matrix [43,44] with c ij = cos θ ij and s ij = sin θ ij , we find the relation between mixing angles and the entries of U to be given as The Jarlskog invariant J CP = Im (U * 11 U * 23 U 13 U 21 ), takes the form [44] J CP = 1 8 sin 2θ 12 sin 2θ 23 sin 2θ 13 cos θ 13 sin(φ 13 This is the CP phase relevant for the description of neutrino oscillations. The two additional Majorana-type rephasing invariants I 1 = Im U 2 12 U * 2 11 and I 2 = Im U 2 13 U * 2 11 are given as I 1 = 1 4 sin 2 2θ 12 cos 4 θ 13 sin(−2φ 12 ) and I 2 = 1 4 sin 2 2θ 13 cos 2 θ 12 sin(−2φ 13 ) . In Figure (2) we show the allowed values for the leptonic Dirac CP violating phase δ versus the atmospheric mixing parameter sin 2 θ 23 , for both normal and inverted neutrino mass orderings. These values were generated by randomly varying the model parameters ω 13 , ω 23 , ψ, |η 1 | and |η 2 | within a range that covers reactor and solar mixing angles inside the 2σ experimentally allowed range. In particular, we varied |η 1 | and |η 2 | in the range [0.5λ, 3λ]. Furthermore, the light active neutrino mass scale was randomly varied in the range 10 −4 eV < m 0 < 1eV, consistent with 2σ allowed values for the neutrino mass squared splittings. To close this section we note that, in contrast to the Left-right symmetric model of Ref. [45], where the µ − τ symmetry is broken softly, our departure from µ − τ symmetry is induced by the mixing in the charged lepton sector, parameterized by the η 1 and η 2 angles, assumed to be of the same order as the Cabibbo angle λ. IV. NEUTRINOLESS DOUBLE BETA DECAY In this section we present the model predictions for neutrinoless double beta (0νββ) decay. The effective Majorana neutrino mass parameter is where m νi are the light active neutrino masses and U 2 ei are the squared lepton mixing matrix elements, respectively. The current experimental sensitivity on the Majorana neutrino mass parameter is illustrated by the horizontal band in Fig. (3) and comes from the KamLAND-Zen limit on the 136 Xe 0νββ decay half-life T 0νββ 1/2 ( 136 Xe) ≥ 1.07 × 10 26 yr [46], which translates into a corresponding upper bound on |m ββ | ≤ (61 − 165) meV at 90% C.L. as indicated by the horizontal band in Fig. (3). For those of other experiments see Ref. [47][48][49][50][51]. The "expected" regions for the effective Majorana neutrino mass parameter |m ββ | consistent with the constraints from the current neutrino oscillation data at the 2σ level are indicated by the other broad shaded bands. There are two cases, corresponding to normal and inverted neutrino mass orderings. These are generic, arising only by imposing current oscillation data. In contrast, the thinner (darker) bands include also the model predictions described in the previous section. These regions are obtained from our generated model points by imposing current neutrino oscillation constraints at the 2σ level. One sees that our "predicted" ranges for the effective Majorana neutrino mass parameter have lower bounds in both cases, of normal and inverted mass orderings, indicating that a complete destructive interference amongst the three light neutrinos is always prevented in our model. 
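As a concrete illustration of how such |m ββ | bands can be scanned, the sketch below evaluates the effective Majorana mass in the symmetrical parametrization quoted above, for normal ordering. The numerical inputs are illustrative placeholders (roughly best-fit-like oscillation parameters and arbitrary Majorana phases), not the model's fitted values.

```python
import numpy as np

def m_bb_normal_ordering(m1, dm21_sq, dm31_sq, s12_sq, s13_sq, phi12, phi13):
    """|m_bb| = |sum_i U_ei^2 m_i| for normal ordering in the symmetrical
    parametrization, where the Majorana phases phi12, phi13 enter U_e2, U_e3."""
    m2 = np.sqrt(m1**2 + dm21_sq)
    m3 = np.sqrt(m1**2 + dm31_sq)
    c12_sq, c13_sq = 1.0 - s12_sq, 1.0 - s13_sq
    Ue1_sq = c12_sq * c13_sq                       # |U_e1|^2, taken real by convention
    Ue2_sq = s12_sq * c13_sq * np.exp(-2j * phi12)
    Ue3_sq = s13_sq * np.exp(-2j * phi13)
    return abs(Ue1_sq * m1 + Ue2_sq * m2 + Ue3_sq * m3)

# Illustrative inputs (masses in eV, phases in rad); placeholders, not the model's fit
print(m_bb_normal_ordering(m1=0.01, dm21_sq=7.5e-5, dm31_sq=2.5e-3,
                           s12_sq=0.31, s13_sq=0.022, phi12=0.3, phi13=1.0))
```

Scanning the lightest mass and the phases φ 12 , φ 13 over their allowed ranges reproduces the familiar generic bands; the additional flavor structure of the model is what narrows these into the thinner, darker regions described above.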
These lower bounds for the 0νββ amplitude are general predictions of the present model, and can easily be understood. In fact, as mentioned above, the structure of our µ − τ symmetric neutrino mass matrix is a particular case of that in Ref. [38]. Comparing with the results of Ref. [41] one sees that, indeed, the possible destructive interference amongst the three light neutrinos is prevented (as lim η 1 →0 I 1,2 = 0), thus explaining the absolute lower bound we obtain. The experimental sensitivity of 0νββ searches is expected to improve in the near future. For our model, the predicted 0νββ decay rates may be tested by the next-generation bolometric CUORE experiment [52], as well as the next-to-next-generation ton-scale 0νββ-decay experiments [46,[53][54][55]. V. QUARK MASSES AND MIXINGS In this section, we illustrate how the model is capable of reproducing the correct masses and mixings in the quark sector. From the quark Yukawa interactions, we find that the up-type mass matrix in the basis while the down type quark mass matrix is given as: Assuming the exotic quark masses to be sufficiently larger than v L and v R , it follows that the SM quarks lighter than the top quark all get their masses from a Universal seesaw mechanism mediated by the three exotic up-type and down-type quarks U i and D i (i = 1, 2, 3). It is worth mentioning that the top quark does not mix with the remaining up-type quarks. As a result, the SM quark mass matrices take the form: a22 + κλ 2 λ 6 a 12 λ 5 0 a 12 λ 5 y 2 11 + y 2 13 λ 2 λ 4 y 13 y 23 λ 5 y 13 y 33 λ 3 y 13 y 23 λ 5 m B 3 m B 2 y 2 22 + y 2 23 λ 2 λ 2 y 23 y 33 λ 2 y 13 y 33 λ 3 y 23 y 33 λ 2 y 2 where we have set Let us note that in our model, the dominant contribution to the Cabbibo mixing arises from the up-type quark sector, whereas the down-type quark sector contributes to the remaining CKM mixing angles. In order to recover the low energy quark flavor data, we assume that all dimensionless parameters of the SM quark mass matrices are real, except for b 13 , taken to be complex. Starting from the following benchmark point: invariant [58] are indeed consistent with the experimental data, as shown in Table VI. This establishes the viability of our model also for the quark sector. Note that the dimensionless parameters of the benchmark point (48) are all ∼ O(1) in absolute value. This means that our model reproduces the quark mass and mixing hierarchy by its symmetries resulting in certain distribution of the powers of λ among the entries of the mass matrices (46), (47). VI. FEATURES OF THE MODEL We now sum up the main theoretical features of our model. 2. The masses for the SM charged fermions lighter than the top quark arise from a Universal Seesaw mechanism mediated by charged exotic fermions. The quark mixing angles and the hierarchy between quark masses arise from the spontaneous breaking of the Z 6 ⊗ Z 12 discrete group. 3. The Cabibbo mixing arises from the up-type quark sector, whereas the down-type quark sector induces the remaining CKM mixing angles. On the other hand, the leptonic mixing parameters receive their dominant contributions from the light active neutrino mass matrix, whereas the charged lepton mass matrix provides Cabibbo-sized corrections to these parameters. 4. The masses for the light active neutrinos emerge from a one loop level inverse seesaw mechanism, whose radiative nature is guaranteed by the spontaneously broken Z 4 and Z 12 symmetries, with Z 12 spontaneously broken down to a preserved Z 2 symmetry. 5. 
The mass terms for the gauge singlet sterile neutrinos S i (i = 1, 2, 3) are generated from at one loop level, mediated by the real and imaginary components of the electrically neutral gauge singlet scalar ϕ as well as by the gauge singlet Majorana neutrinos Ω i (i = 1, 2, 3). These mass terms break lepton number by two units, triggering the one loop level inverse seesaw mechanism responsible for the light active neutrino masses. VII. DISCUSSION AND CONCLUSIONS In summary, we have built a viable extension of the left-right symmetric electroweak extension of the Standard Model capable of explaining the current pattern of SM fermion masses and mixings. Our model is based on the ∆(27) discrete symmetry, supplemented by the Z 4 ⊗ Z 6 ⊗ Z 12 discrete family group. In our model, the masses of the light active neutrinos emerge from a one loop level inverse seesaw mechanism, whereas the masses of the Standard Model charged fermions lighter than the top quark are produced by a Universal Seesaw mechanism. Of the Standard Model fermions only the top quark acquires mass through a tree level renormalizable Yukawa interaction. In our model the Cabibbo mixing arises from the up-type quark sector whereas the down-type quark sector contributes to the other CKM mixing angles. On the other hand, the leptonic mixing parameters receive their dominant contributions from the light active neutrino mass matrix, whereas the SM charged lepton mass matrix provide Cabibbo sized corrections. The observed hierarchy of SM charged fermion masses and mixing angles is caused by the spontaneous breaking of the ∆ (27) ⊗ Z 6 ⊗ Z 12 discrete flavor group, whereas the radiative nature of the inverse seessaw mechanism is guaranteed by spontaneously broken Z 4 and Z 12 symmetries, having Z 12 spontaneously broken down to a preserved Z 2 symmetry. Our model features a generalized µ−τ symmetry and predicts a restricted range of neutrino oscillations parameters, with the neutrinoless double beta decay amplitude lying at the upper ranges associated to normal and inverted neutrino mass ordering. Notice also that our low-scale left-right symmetric radiative seesaw scheme not only accounts for the light neutrino masses and mixings that lead to oscillations and 0νββ-decay, but can also lead to signatures that can make it testable at collider experiments such as the LHC. For example, the heavy quasi Dirac neutrinos can be produced in pairs at the LHC, via a Drell-Yan mechanism mediated by a heavy non Standard Model neutral gauge boson Z . These heavy quasi Dirac neutrinos can decay into a Standard Model charged lepton and W gauge boson, due to their mixings with the light active neutrinos. Thus, the observation of an excess of events in the dilepton final states with respect to the SM background, would be a signal supporting this model at the LHC. Moreover, lepton flavor violation is expected in these decays, even if suppressed at low energies [59,60]. A detailed study of the collider phenomenology of this model is beyond the scope of the present paper and is left for future studies.
5,149.2
2018-11-07T00:00:00.000
[ "Physics" ]
Alternative Methods of the Largest Lyapunov Exponent Estimation with Applications to the Stability Analyses Based on the Dynamical Maps—Introduction to the Method Controlling stability of dynamical systems is one of the most important challenges in science and engineering. Hence, there appears to be continuous need to study and develop numerical algorithms of control methods. One of the most frequently applied invariants characterizing systems’ stability are Lyapunov exponents (LE). When information about the stability of a system is demanded, it can be determined based on the value of the largest Lyapunov exponent (LLE). Recently, we have shown that LLE can be estimated from the vector field properties by means of the most basic mathematical operations. The present article introduces new methods of LLE estimation for continuous systems and maps. We have shown that application of our approaches will introduce significant improvement of the efficiency. We have also proved that our approach is simpler and more efficient than commonly applied algorithms. Moreover, as our approach works in the case of dynamical maps, it also enables an easy application of this method in noncontinuous systems. We show comparisons of efficiencies of algorithms based our approach. In the last paragraph, we discuss a possibility of the estimation of LLE from maps and for noncontinuous systems and present results of our initial investigations. Introduction Lyapunov exponents are invariants characterizing numerous aspects of nonlinear systems' dynamics from complexity, stability, loss of information about a system's dynamical state, the type and structure of attractor-manifold to which the solution tends. The full spectrum of LE consists of a number of indicators equal to the analyzed system's dimension. As Lyapunov exponents contain information about the limit of an exponential change of initial perturbation for infinite time range, procedures of LE estimation are very time intensive. Therefore, new methods which could increase the efficiency of LE estimation are still being developed. Even a comparatively minor improvement of a method means huge time savings. As far as investigations into the stability of dynamical systems are concerned, an application of the largest LE is warranted. Since analyzing the stability of dynamical systems is one of the most important challenges in science and engineering, we decided to attempt a development of the LLE estimation method. In the article, we try to demonstrate that our method is both simple and efficient. Additionally, we present the basics for its development, allowing further increase of the efficiency and potential for application for maps and systems with discontinuities. Recently, we have studied different aspects of the nonlinear systems' control with the use of different new nonlinear methods. We investigated the stability of continuous systems [51] and systems with discontinuities [9,28], control system's optimization [52], synchronization phenomena of energy flow [48,53,54] and chaos-based control of energy flow [55][56][57]. We have also investigated efficiency of our novel method of Lyapunov spectrum estimation in [58] and showed that it allows for significant computation time savings. As far as investigations into the stability of dynamical systems are concerned, application of the largest LE is warranted. Aproximately 60% of scientific research utilizes this simpler and faster indicator. 
In view of the above, we decided to extend studies of LLE's properties and present the results of our new investigations. The Method Assume that a dynamical system is described by a set of differential equations in the form: where x is a state vector, t is time and f is a vector field that (in general) depends on x and t. Consider a situation in which the state vector x is disturbed by an infinitesimal perturbation z (Figure 1). Evolution of the perturbation z can be determined by linearization of Equation (1): where U(x, t) = ∂f ∂x (x, t) is the Jacobi matrix obtained by differentiation of f with respect to x. If the Jacobi matrix was constant, then the evolution of the perturbation z in directions of subsequent eigenvectors would be specified by corresponding eigenvalues of that matrix. However, as long as the system (1) is nonlinear, the Jacobi matrix varies along the trajectory meaning that the evolution of the perturbation z cannot be directly predicted from properties of the Jacobi matrix. In such a case, Lyapunov exponents are applied to describe an average rate of expansion or contraction of a perturbation. Consequently, Lyapunov exponents can be treated as generalization of eigenvalues [59] of the Jacobi matrix. Moreover, according to [59] during an evolution of the system, eigenvectors connected with the largest eigenvalue spans the linear subspace which tends to align with the direction of the perturbation z(t). As such, all the analyses of LLE can be focused on this direction. Following this eigenvalue idea, during numerical integration for each istep of n integration steps, Equation (2) in the actual z i direction can be presented in the following scalar form: where λ i tends to the largest eigenvalue of U(x, t), and its average value is equal to LLE. Equation (3) can be expressed in the form: In the case where the perturbation is normalized before each integration step, z i = 1: For n numerical integration steps, from Equation (5), averaged perturbation: Finally, ∑ n 1 dz n t = LLE From formula (7), one can see that LE can be treated as a dimensionless perturbation change averaged per time unit. It constitutes the basis for the first of the new methods (M1). As perturbation change dz is the scalar obtained from the differences between norms of z before and after each integration step, the method can be applied in the estimation of LLE from any given map. Additionally, it can be also applied for all the systems with any given discontinuities. Moreover, following Equation (3), when the perturbation is normalized before each integration step: As the value of λ has to be averaged during evolution of the system to obtain LLE, from Equation (8), one can see that LLE equals the averaged speed v of perturbation changes: The above constitutes a basis for the second of the new methods (M2). Incidentally, both of the methods can be treated as identical. They differ only in the way the computed values are averaged during numerical integration. In the first one (Equation (7)), the values of dz are summed up and then averaged by division by time t of calculations. Additionally, regarding the averaged speed (Equation (9)), the actual speed is computed and summed up and the final value of LLE is obtained by division by number n of times of integration. As n·dt = t, both of the methods are equivalent. Methodology All the programs for conducting numerical simulations have been written in C++ by means of the Code: Blocks environment. 
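The chain of relations behind M1 and M2, reconstructed from the derivation above using the same symbols, can be restated compactly as follows:

```latex
\frac{\mathrm{d}\lVert \mathbf{z} \rVert}{\mathrm{d}t} = \lambda_{i}\,\lVert \mathbf{z} \rVert ,
\qquad
\lVert \mathbf{z}_{i} \rVert = 1 \;\Rightarrow\; \mathrm{d}z_{i} = \lambda_{i}\,\mathrm{d}t ,
\qquad
\mathrm{LLE} = \frac{1}{t}\sum_{i=1}^{n} \mathrm{d}z_{i}
             = \frac{1}{n}\sum_{i=1}^{n} \lambda_{i} ,
\qquad t = n\,\mathrm{d}t ,
```

so summing the per-step length changes of the normalized perturbation and dividing by the elapsed time (M1) is equivalent to averaging the instantaneous growth rate (M2).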
The Runge-Kutta method of the fourth order (RK4) has been used to solve ordinary differential equations. The integration step has been adjusted for each analyzed system separately, based on its own time scale. We have studied perturbation change averaged per time unit and averaged speed v of perturbation change and compared them with three other methods. All of the considered algorithms of the LLE estimation require integration of the system (1) along with the Equation (2) in order to obtain the state vector x(t) and the perturbation z(t) in subsequent moments of time. Depending on the method, the vector z(t) was either normalized before each integration step or normalized only in the case of excessively high or low values of perturbation length z(t). All the programs for estimation of the LLE share the same code for integration of the systems (1), (2). The only difference between these programs is the method of the LLE calculation. Method 1 (M1) In the first one, the value given by the Equation (7) is calculated after each integration step. Value dz was obtained from the differences between norms of z before and after each integration step. Vector z was normalized before each integration step. Method 2 (M2) In the second case, the value given by Equation (9) is calculated from projection of the vector dz dt onto the direction of normalized vector z according to formula: As vector z was normalized before each integration step, The third case involves application of the classical method [59] for vector z normalized in the case of excessively high or low values of perturbation length z(t): Method 4 (M4) The fourth case is application of the classical method [59] for vector z normalized before each integration step: Method 5 (M5) The last case is the application of our effective method presented in [58]. In this case, the value given by Equation (9) is calculated from the projection of the vector dz dt onto the direction of not normalized vector z according to: In simulation algorithms, conditions for termination of calculations have to be selected. It seems reasonable to finish the estimation procedure if the obtained value of the LLE stabilizes at some fixed value and does not display any relevant fluctuations. In order to measure stabilization of the LLE value, the authors propose to define a buffer of a fixed size. In this research, the buffer capacity equal to 100 was selected. After each calculation step, the current value of the LLE was saved to the buffer. When the buffer was full, the standard deviation of all the LLE values in the buffer was calculated. If the standard deviation related to actual average LLE was below a specified threshold, the value of the LLE was considered as stable and the calculations could be terminated. Failing that, the buffer was cleared and the procedure repeated. The value of the selected threshold corresponded to the desired accuracy of estimation. Lowering the threshold meant higher accuracy, but, consequently, a longer estimation time. Considering the standard deviation threshold, two methods, with relative and not relative deviation value, can be applied. They differ in accuracy of LLE estimation depending on the dynamical state of the system. In the regions of higher absolute LLE values for high accuracy, it proves advantageous to use nonrelative deviation; in the case of quasiperiodic regions, relative value will produce more accurate results. 
As insignificant differences in values in the periodic and chaotic regions are not of considerable importance, and conversely, detection of the exact bifurcation point is one of the most important considerations in nonlinear systems investigations, relative deviation was applied in our simulations. As regards the threshold of excessively high or low values of perturbation's vector z length, the normalization condition was associated with the product of the first two coordinates of vector z. It allowed for introducing a condition, which does not burden simulation procedures much. Results of Numerical Simulations In order to verify the presented methods of the LLE estimation, two typical nonlinear systems have been analyzed. What follows are the results obtained for Duffing and Van der Pol systems with external forcing. Since the details that follow are organized in the same manner, in order to avoid repeating the same description, specification of the graphs is provided only once below. The first type of the graphs that follow provides the obtained values of the LLE along with computation time lengths for all the investigated methods. Ratios of the program execution times t 1 , . . . , t 5 for all of the five methods represent the execution time of LLE estimation for the specified bifurcation parameter and method, respectively. In the article, we have associated uniform color schemes and types of curves with respective methods. Subsequently, efficiency analysis is presented. Special efficiency indicators are introduced. Let T 1 , . . . , T 5 be sums of t i values, presenting the time measured from the beginning of simulations to the moment a specified bifurcation parameter for each of the five methods has been reached. Let us use these values to introduce efficiency indicators: Relations of η i , with respect to bifurcation parameter of the investigated algorithms M1, M2, M4, M5 as compared to the classical method M3 are presented on subsequent charts. The efficiency gain of the four investigated methods in comparison to the commonly applied method M3 is appreciable. In the following charts, dependence of LLE on bifurcation parameter is presented along with focused analyses presenting the accuracy of LE estimation for three different dynamical states: periodic, quasiperiodic and chaotic. Duffing Oscillator The Duffing oscillator can be described by the following set of differential equations: Based on Equation (2), the Jacobi matrix is necessary to observe evolution of a perturbation. For the Duffing oscillator, the Jacobi matrix is defined as follows: The plot of the LLE for different values of the parameter q and graphs depicting computation time ratios are presented in Figure 2. It is evident that the longest times occur in chaotic regions, and in instances when the system is approaching bifurcation points. This is related to a longer time which is required to stabilize LLE in a chaotic regime in the first case. In the second case, the main reason was given above and is connected with computing relative or not relative standard deviation in the procedure concluding LLE computations. Since minor differences in values in the periodic and chaotic regions are not highly important, and, conversely, the detection of the exact bifurcation point is one of the most important issues in nonlinear systems investigations, relative deviation was applied in our simulations. Obviously, this increased the time needed to satisfy the required LLE value stability condition. 
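For concreteness, the following minimal Python sketch implements method M1 for a forced Duffing oscillator of this type (the study itself used C++ with RK4). The parameter values and the fixed-length run are illustrative simplifications, not the settings or the buffer-based stopping criterion used in the simulations reported here.

```python
import numpy as np

def duffing_rhs(t, s, a=0.5, q=7.5, omega=1.0):
    """State s = (x, v, zx, zv): forced Duffing oscillator plus its linearized
    perturbation. Parameter values are illustrative, not those of the paper."""
    x, v, zx, zv = s
    dx = v
    dv = -a * v + x - x**3 + q * np.cos(omega * t)
    # Jacobian of (dx, dv) with respect to (x, v): [[0, 1], [1 - 3x^2, -a]]
    dzx = zv
    dzv = (1.0 - 3.0 * x**2) * zx - a * zv
    return np.array([dx, dv, dzx, dzv])

def rk4_step(f, t, s, dt):
    k1 = f(t, s)
    k2 = f(t + 0.5 * dt, s + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, s + 0.5 * dt * k2)
    k4 = f(t + dt, s + dt * k3)
    return s + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def lle_m1(n_steps=200_000, dt=0.01, n_transient=20_000):
    s = np.array([0.1, 0.0, 1.0, 0.0])    # initial state plus unit perturbation
    t, dz_sum = 0.0, 0.0
    for i in range(n_steps):
        s = rk4_step(duffing_rhs, t, s, dt)
        t += dt
        norm = np.hypot(s[2], s[3])        # |z| after the step (|z| was 1 before it)
        if i >= n_transient:
            dz_sum += norm - 1.0           # Eq. (7): dimensionless growth per step
        s[2:] /= norm                      # renormalize z before the next step
    return dz_sum / ((n_steps - n_transient) * dt)

print(lle_m1())
```

Replacing the accumulated dz with the projected instantaneous growth rate of Equation (9) turns the same loop into method M2.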
The efficiency analysis of four methods M1, M2, M4, M5 with respect to M3 is presented in Figure 3. From the method of construction of the η i indicators, one can deduce that these values for the specified bifurcation parameter q show the average efficiencies of computations of each method from the beginning of calculations until the parameter q is reached. Therefore, the values η i corresponding to the last values of bifurcation parameter present all the average efficiencies of each of the methods. As is evident from Figure 3, only method M4 has efficiency which is not superior to M3. This is to be expected, as M4 is based on M3 and utilizes normalization of the perturbation vector in each integration step, while, in M3, the normalization is carried out only in the cases of excessively high or low values of the perturbation vector z length. The final efficiency η 4 is equal to 0.997. Both of the newly presented methods, M1 and M2, offer better efficiency than M3. Method M1, which has the potential to be applied in non-continuous systems, offers the final efficiency η 1 equal to 0.927. Therefore, on average, M1 saves about 7% of the computation time. The effect will be even more pronounced when applied to maps. Method M2 has the final η 2 equal to 0.853, so it saves on average about 14% of the computation time. Finally, method M5 has the best average efficiency η 5 equal to 0.780, so M5 saves on average about 22% of the computation time. The results for M1, M2, and M5 will be marginally inferior for more complex systems, as shown in [58]. However, they will be invariably superior to M3. Accuracy comparison of LLE estimation is presented in the bottom part of Figure 3, where the LLE dependence on bifurcation parameter q on a low scale is shown. It can be seen that there exists correspondence between the results of all five of the methods. Higher scale results for the three different dynamical states of the system are presented in the upper part of Figure 4. There is good agreement for the M2 . . . M5 methods and only minor differences exist for the M1 method in the periodic and quasiperiodic regions. As these are fourth-order-level differences, they do not disqualify the M1 method, especially as its efficiency will be considerably higher when applied in LLE estimation from maps. The Van der Pol Oscillator The Van der Pol oscillator can be described by the following set of differential equations: The Jacobi matrix was used to simulate evolution of a perturbation according to Equation (2). For the Van der Pol oscillator, the Jacobi matrix is defined as follows: The plot of the LLE for different values of the parameter µ, together with computation times, is presented in Figure 5, whereas graphs depicting computation time ratios are presented in Figure 6. For the same reasons as in the case of the Duffing system, the longest computations appear in chaotic regions and when the system is approaching bifurcation points. These effects connected with the application of the relative deviation can also be observed in the regions with high absolute values of negative LLE. As the demanded accuracy in these regions decreases together with the values of LLE, one can see short computation times in these areas. Importantly, as is also evident from the lower part of Figure 5, the influence of such variable accuracy on the estimated values of LLE is negligible; no significant noise caused by this effect can be observed on the LLE graph. 
It can also be seen in Figure 5 that, even in the Van der Pol system, its divergence varies during the oscillations; as its damping is nonlinear, less time is needed to stabilize LLE in chaotic and quasiperiodic regions than in the case of the Duffing system. Maximal values of time for Van der Pol are approximately 120 [s], while, for the Duffing system, they are approximately 160 [s]. As the divergence of the system is equal to the sum of the Lyapunov exponents, it would appear that the varying divergence could disrupt the stabilization process of LLE values. Apparently, rather than disrupting the process, it actually speeds it up. The effects of variable divergence can also be observed in the values of LLE in periodic regions. In the case of the Duffing system, the values of LLE are constant and equal to half of the divergence (the second Lyapunov exponent is equal to LLE). In the case of Van der Pol, there exist no regions of constant LLE. Efficiency analysis of the four methods M1, M2, M4, M5 with respect to M3 is presented in Figure 6. For the same reasons as in the case of the Duffing system, it is only method M4 that has no superior efficiency compared to M3. Both of the newly presented methods, M1 and M2, have better efficiency than M3. Method M1, which has the potential to be applied in non-continuous systems, has the final efficiency η 1 equal to 0.928 (Duffing 0.927). Therefore, on average, M1 saves about 7% of the computation time. The effect will be even more pronounced when applied to maps. Method M2 has the final η 2 equal to 0.867 (Duffing 0.853), so it saves on average about 14% of the computation time. Finally, method M5 has the best average efficiency η 5 equal to 0.837 (Duffing 0.780), which translates into an average of approximately 16% savings of the computation time. These results confirm the conclusions for the Duffing system. Accuracy comparison of LLE estimation is presented in Figure 7. In the bottom section, one can see the LLE dependence on bifurcation parameter µ on a low scale. Similarly to the Duffing system, there exists a good agreement between the results of all five methods. Higher scale results for the three different dynamical states of the system are presented in the upper part of Figure 7. It can be appreciated that, unlike for the Duffing system, the results of the different methods merge and cannot be individually distinguished. Largest Lyapunov Exponent (LLE) from Maps As was proved in [60], with the use of our method, the behavior of a perturbation can be reconstructed based on the time series of the dynamical system, without reconstruction of the Jacobi matrix. It can be combined with the approach presented above and then applied to dynamical maps. The first approach comes directly from application of the method M1. In this case, the value of the sum of perturbations was averaged while the trajectory x(t) crossed the hyperplane π (see Figure 8). A time series comparison with the method M1 is shown in Figure 9. It can be seen that the estimation error is within the same range as for the method M1. The second approach requires an extended analysis. In Figure 8, a trajectory x(t) of a dynamical system and the perturbed system trajectory y(t) can be seen. While these trajectories cross the hyperplane π, one obtains the perturbation z and then the next perturbation z 1 from the subsequent points at which the trajectories cross the hyperplane π. After projection of the difference of the vectors z 1 − z onto the direction of the perturbation z, one obtains a differential dz. 
It allows for substituting the lengths z and dz into Equation (4) to find λ value. Alternatively, dz can be calculated from the difference of the norms of vectors |z 1 | − |z|. However, in this case, the estimation error is expected to be higher. During the evolution of the system, obtained values have to be averaged and then recalculated according to the error correction analysis presented below. Error Correction Analysis Between the trajectory crossing the hyperplane π, there were calculated i steps of numerical integration. During numerical calculation of LLE, in each integration step, values λ i are obtained, and then averaged in time in order to obtain LLE. Following reasoning that justified scalar notation of Equation (3), we can continue in the same vein in the case of maps. Then, the value of the proposed indicator for a map is: Consider where δ is LLE estimation error. While the final value of LLE is an average of λ i : where T is the time from one to the next crossing the map. To simplify the analysis of the correction error, let us start with i = 5. Then, Finally, n-th power of dt is connected with i n − 1 combinations of products of (n + 1) of λ j , where: j = 0 . . . i−1. Obviously, λ j are unknown while calculating λ map . In order to estimate the value of the correction error, we have assumed that λ j equals the average value λ av . Then: As λ av = LLE and for nondimensional T = 1 i = 1 dt , we obtain the final correction error CE: Finally, LLE can be estimated from the following dependence: The presented approach was applied to estimate LLE of the Duffing system (Equations (16) and (17)). Time series of LLE obtained from numerical simulations can be seen in Figure 10. As is evident, the estimated value of LLE = −0.0237. As the dumping coefficient b = −0.05 and the system remains within the range of the periodic regime LLE = b/2 = −0.025. Thus, the error of the estimated value is 0.0013. From Figure 11, it can be seen that the correction error is within the range of 0.0025. After correction of the obtained LLE value, finally LLE = −0.0262. It means that the error of the presented LLE estimation is about 5%. However, as the value of LLE is computed only while the trajectory intersects the map, the method is expected to be much faster than the continuous ones. In our next article, we will present an extended study of the presented method. Conclusions The present article introduces new methods of LLE estimation for continuous systems and maps. We have proved that the sum of dimensionless perturbations, averaged per time unit of measuring the evolution of the system, constitutes the value of the LLE. We have shown that this approach works also in the case of dynamical maps. Additionally, we have proved that LLE can be also equated to the average speed of perturbation change. The basic background of the methods was presented. The results were compared with other methods. Investigations were carried out for two typical nonlinear systems. We have shown a good agreement of the results obtained with the use of the new approaches with respect to the other methods. In the case of continuous systems, we have also compared efficiencies of algorithms based on these methods. We have shown that the new presented methods have better efficiency than the commonly applied M3. We have shown that M1 can save about 7% of the computation time. Method M2 is faster and saves on average about 14% of the computation time. 
We have also shown that the fastest method, M5, saves on average about 16-22% of the computation time. We have also discussed basic aspects of the application of the presented methods in estimation of LLE from maps and for noncontinuous systems and showed the initial results of our approach. An extended study of this section of the article will be presented in the next publication.
5,759.6
2021-11-25T00:00:00.000
[ "Computer Science", "Mathematics" ]
TSML: A New Pig Behavior Recognition Method Based on Two-Stream Mutual Learning Network Changes in pig behavior are crucial information in the livestock breeding process, and automatic pig behavior recognition is a vital method for improving pig welfare. However, most methods for pig behavior recognition rely on human observation and deep learning. Human observation is often time-consuming and labor-intensive, while deep learning models with a large number of parameters can result in slow training times and low efficiency. To address these issues, this paper proposes a novel deep mutual learning enhanced two-stream pig behavior recognition approach. The proposed model consists of two mutual learning networks, which include the red–green–blue color model (RGB) and flow streams. Additionally, each branch contains two student networks that learn collaboratively to effectively achieve robust and rich appearance or motion features, ultimately leading to improved recognition performance of pig behaviors. Finally, the results of RGB and flow branches are weighted and fused to further improve the performance of pig behavior recognition. Experimental results demonstrate the effectiveness of the proposed model, which achieves state-of-the-art recognition performance with an accuracy of 96.52%, surpassing other models by 2.71%. Introduction Behavior changes play a crucial role in the pig breeding process. Accurately monitoring and understanding pig behavior is essential for improving pig welfare [1], predicting their health status, and facilitating the development of intelligent farming. To achieve promising pig behavior recognition performance, numerous researchers have conducted extensive studies. These studies can be broadly classified into two categories: sensor-based and computer vision-based approaches. The first group of techniques relies on sensor-based monitoring of pig behavior. Several researchers have designed automatic monitoring systems that use sensors, such as infraredsensitive cameras for real-time monitoring of pig activities [2] and behavior measurement. Other methods employ high-frequency radiofrequency identification (HF RFID) systems for monitoring individual drinking behavior [3] or pressure pads to track lame behavior in pigs [4]. However, these techniques involve physical contact with the pigs that can lead to stress and inaccurate measurements. The second group of methods is based on computer vision. For instance, Zhang et al. [5] proposed a two-stream convolutional neural network for pig behavior recognition, where the feature extraction network is either a residual network (ResNet) or an inception network. Zhuang et al. [6] developed a pig feeding and drinking behavior recognition model based on three models: VGG19, Xception, and MobileNetV2. They also designed two systems to monitor pig behaviors. Their final results demonstrated that the MobileNetV2trained model had a significant advantage in pig behavior recognition, with a recall rate above 97%. Wang et al. [7] implemented an improved HRNet-based method for joint point detection in pigs. By employing CenterNet to determine the posture of pigs (whether they are lying or standing), and then implementing the HRST approach for joint point detection in standing pigs, they achieved an average detection accuracy of 77.4%. Luo et al. [8] proposed a channel-based attention model for real-time detection of pig posture. 
They compared their model with other popular network models, such as ResNet50, DarkNet53, and MobileNetV3, and showed that their proposed model outperformed the other models in terms of accuracy. They proved that the channel-based attention model is a promising approach for real-time pig posture detection [9]. Zhang et al. [10] presented an SBDA-DL, which is a deep learning-based real-time behavior-detection algorithm for sows. They designed it to detect three typical behaviors of sows: drinking, urinating, and sitting. The algorithm utilizes a combination of convolutional neural networks (CNN) and recurrent neural networks (RNN), along with a transfer learning approach, to achieve a high level of accuracy in behavior detection. The experimental results showed that the average detection accuracy, measured by mean average precision (mAP), reached 93.4%, indicating the effectiveness of the proposed approach. The SBDA-DL algorithm provides a non-invasive method for monitoring sow behavior, which can reduce labor costs and enhance animal welfare in pig farming. Li et al. [11] proposed a multi-behavioral spatio-temporal network model for pigs. By comparing it with a single-stream 3D convolutional model, the proposed model achieved a top-one accuracy of 97.63% on the test set. This multi-behavioral spatio-temporal network model provides a new approach for recognizing pig behaviors [12]. It has the potential to improve the efficiency of pig farming and to ensure animal welfare [13]. In summary, sensor-based methods are vulnerable to collision damage, resulting in inaccurate recognition and causing stress to the pigs both mentally and physically. Meanwhile, although deep-learning-based methods have achieved successful recognition results, their large parameter sizes lead to lengthy training and testing times, limiting their practical deployment on low-memory and low-capacity devices. To overcome these challenges, we propose a novel two-stream mutual-learning (TSML) model for pig behavior recognition, aiming at improving the efficiency of pig farming and ensuring animal welfare. In comparison to other methods, TSML is more accurate and efficient in recognizing pig behavior. Our method is characterized by the cooperation between the RGB and flow streams that enables it to extract both appearance and temporal information efficiently. It also allows the model to extract critical feature information while avoiding irrelevant interference. Moreover, the mutual learning strategy improves the accuracy of behavior recognition by enabling the two student networks in each stream to learn collaboratively, gaining more robust and richer features in a shorter time. Compared with other methods that use either single-stream convolutional networks or multi-stream networks, our proposed model outperforms them in terms of accuracy, while being more efficient with a smaller number of parameters. This makes it more feasible to deploy on low-memory and low-capacity devices. Additionally, our unique dataset of pig behavior videos allows for more precise and reliable behavior detection and analysis, making our method practical for use in pig farming applications. Overall, our proposed two-stream mutual-learning method offers significant improvements over existing methods in terms of accuracy and efficiency while being practical for real-world applications. The impact of our research on pig breeding is significant. 
Efficient monitoring of pig behavior is essential for improving pig welfare and for increasing the economic benefits of pig farms. Accurately monitoring and understanding pig behavior also allows for the prediction of their health status and facilitates the development of intelligent farming. The proposed TSML model offers a non-invasive and efficient method for monitoring pig behavior. In addition, by utilizing our unique dataset of pig behavior videos, future pig farming can be modernized with more precise and reliable behavior detection and analysis. Overall, our proposed method and dataset could significantly impact the pig breeding industry and enhance animal welfare. Overall, the contributions of this paper can be summarised below: • We established a novel dataset of pig behavior recognition dataset, which contains six categories. To provide a comprehensive understanding of pig behavior recognition, we have included six categories in our dataset, with each category consisting of roughly 600 videos. Each video varies in length from 5 to 10 s, providing sufficient footage to detect and analyze behavior patterns in pigs. These videos were collected over a period of one month utilizing six Hikvision cameras capturing over 85 pigs on a farm. All of the factors mentioned above have contributed to the creation of a unique and diverse dataset, collected on this farm, that exhibits better diversity in terms of illumination, angles, and other variables. This approach ensures that the dataset accurately represents the various scenarios and environments in which pigs behave, thereby resulting in more precise and reliable behavior detection and analysis. • We first propose a novel pig behavior recognition method based on a two-stream mutual-learning framework. This model can efficiently extract more robust and richer features via mutual learning in RGB and flow paths separately and will extract both appearance and temporal information. Simultaneously, the decisions of the RGB and flow branches can be merged to gain improved pig behavior recognition performance. Specifically, our model achieves the best performance for pig behavior recognition task, with about a 2.71% improvement in the existing model. • Several experiments were conducted to validate the superiority of the proposed model. The experiments included evaluating the performance of the proposed models, evaluating the behavior recognition performance of different models with or without mutual learning, evaluating the performance of the proposed model based on two identical networks, and evaluating the performance of the proposed model based on two different networks. The rest of this paper can be organized as follows: Section 2 provides a detailed description of the methods and dataset used in the study. Section 3 presents the experimental results and analysis. In Section 4, we discuss the findings of our research. Finally, we conclude the paper in Section 5. Datasets The video data were collected from a pig farm located in Xiangfen County Agricultural Green Park Agricultural Company Limited, Linfen City, Shanxi Province. The farm encompasses 20 pig barns, each of which contains drinking water and feeding equipment as shown in Figure 1. For this study, six barns were selected, housing a total of 85 threeyuan fattening pigs. To ensure effective data collection, one camera was installed on each of the six barns at a height of approximately 3 m from the ground. 
The cameras were angled at 45 degrees diagonally toward the aisle and recorded videos at 25 fps with a resolution of 1920 × 1080 pixels. The specific camera used in this research was the Hikvision DS-2DE3Q120MY-T/GLSE, and the whole data collection process lasted for 45 consecutive days, from 12 August 2022 to 25 September 2022. The final pig behavior recognition dataset contains six categories: fighting, drinking, eating [14], investigating, lying, and walking (as shown in Figure 2). Specifically, each category consists of approximately 600 videos, each lasting between 5 and 8 s. In total, the dataset contains 3606 videos, of which 80% (2886 samples) were used for training and 20% (720 samples) for testing. The detailed distribution of the collected videos over the behavioral categories is shown in Table 1. Problem Definition This paper presents a novel TSML approach for pig behavior recognition. The model comprises two branches, spatial and temporal, each of which contains two student networks that perform mutual learning. The spatial branch extracts appearance features from still image frames, while the temporal branch focuses on the optical flow motion in the video frames. The results of the two branches are subsequently weighted and fused to yield the final recognition result for pig behavior. The two-stream strategy effectively captures the complementary nature of the appearance and motion information underlying the video [15], while the mutual-learning design further enhances the efficiency and accuracy of the model in recognizing pig behavior [16]. The framework of the proposed TSML is presented in Figure 3. The probability, $p^{c}_{s1}(x^{s}_{i})$, that the RGB image $x^{s}_{i}$ from the $i$th video $v_{i}$ belongs to class $c$ according to the first student network of the spatial stream is obtained from a softmax over the network logits: $p^{c}_{s1}(x^{s}_{i}) = \exp(S^{c}_{s1}(x^{s}_{i})) / \sum_{c'=1}^{C} \exp(S^{c'}_{s1}(x^{s}_{i}))$, where $S^{c}_{s1}(x^{s}_{i})$ is the logit fed to the softmax layer of the first student network in the spatial stream for input $x^{s}_{i}$ and $C$ is the number of behavior classes. The probability $p^{c}_{s2}(x^{s}_{i})$ from the second student network of the spatial stream is defined analogously with logits $S^{c}_{s2}(x^{s}_{i})$. Likewise, the probabilities $p^{c}_{t1}(x^{t}_{i})$ and $p^{c}_{t2}(x^{t}_{i})$ that the flow image $x^{t}_{i}$ corresponding to $x^{s}_{i}$ belongs to class $c$ are computed from the logits $S^{c}_{t1}(x^{t}_{i})$ and $S^{c}_{t2}(x^{t}_{i})$ of the two student networks in the temporal stream. The loss functions for the spatial and temporal branches of the TSML are defined as $L_{s} = L_{s1} + L_{s2}$ and $L_{t} = L_{t1} + L_{t2}$, and a hyperparameter $\alpha$ controls the balance between these two stream losses when they are combined into the overall training objective, $L = \alpha L_{s} + (1-\alpha) L_{t}$. Here, $L_{s}$ and $L_{t}$ represent the losses of the spatial stream and the temporal stream, respectively, $L_{s1}$ and $L_{s2}$ denote the losses for the two student networks in the spatial stream, and $L_{t1}$ and $L_{t2}$ denote the losses for the two student networks in the temporal stream. The formulations for $L_{s1}$ and $L_{t1}$ are $L_{s1} = L_{c\_s1} + D_{KL}(p_{s2}\,\|\,p_{s1})$ and $L_{t1} = L_{c\_t1} + D_{KL}(p_{t2}\,\|\,p_{t1})$. Here, $L_{c\_s1}$ and $L_{c\_t1}$ are the cross-entropy losses that measure the difference between the predicted and the actual labels, and $D_{KL}(p_{s2}\,\|\,p_{s1})$ is the Kullback-Leibler (KL) divergence between the probability distributions $p_{s2}$ and $p_{s1}$. The cross-entropy terms are computed as $L_{c\_s1} = -\sum_{i=1}^{N}\sum_{c=1}^{C} y^{c}_{i}\,\log p^{c}_{s1}(x^{s}_{i})$ (and analogously for $L_{c\_t1}$), where $y^{c}_{i}$ is an indicator: $y^{c}_{i}=1$ if $y_{i}=c$ and $y^{c}_{i}=0$ otherwise. In the spatial stream, to enhance the generalization capacity of the first student network on testing samples, the peer network provides training experience via its posterior probability $p_{s2}$, and the KL divergence quantifies how well the prediction $p_{s1}$ matches $p_{s2}$. The KL distance from $p_{s1}$ to $p_{s2}$ is $D_{KL}(p_{s2}\,\|\,p_{s1}) = \sum_{i=1}^{N}\sum_{c=1}^{C} p^{c}_{s2}(x^{s}_{i})\,\log\frac{p^{c}_{s2}(x^{s}_{i})}{p^{c}_{s1}(x^{s}_{i})}$, where $p_{s1}$ and $p_{s2}$ are the predicted probability distributions from the first and second student networks in the spatial stream. In the temporal stream, $D_{KL}(p_{t2}\,\|\,p_{t1})$ denotes the KL distance from $p_{t1}$ to $p_{t2}$ and has the same form. The losses $L_{s2}$ and $L_{t2}$ for the second student networks are defined symmetrically to $L_{s1}$ and $L_{t1}$, with the roles of the two students exchanged.
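To make the mutual-learning objective concrete, the following minimal PyTorch-style sketch shows how the per-student loss (cross-entropy plus the KL term from the peer network) and the per-stream loss could be computed; the tensor names and the detaching of the peer prediction are illustrative assumptions, not the authors' exact implementation.

import torch.nn.functional as F

def student_loss(logits_a, logits_b, labels):
    # supervised cross-entropy term L_c for student a
    ce = F.cross_entropy(logits_a, labels)
    # KL(p_b || p_a): the peer's softmax output acts as a soft target
    log_p_a = F.log_softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1).detach()
    kl = F.kl_div(log_p_a, p_b, reduction="batchmean")
    return ce + kl

def stream_loss(logits_s1, logits_s2, labels):
    # total mutual-learning loss for one stream: both students learn from each other
    return student_loss(logits_s1, logits_s2, labels) + student_loss(logits_s2, logits_s1, labels)

The same function is applied to the RGB logits for the spatial stream and to the optical-flow logits for the temporal stream before the two stream losses are combined.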
Figure 3. The structure diagram of the two-stream network model based on the idea of mutual learning. Implementation Details The software and hardware system settings used in this paper are presented in Table 2. For a fair comparison, we optimized all experimental models with a gradient descent algorithm using a momentum of 0.9, a batch size of 16, a learning rate of 0.001, and an α value of 0.5, and we trained them for 500 epochs. Evaluation Criteria In order to compare the performance of different models, several evaluation criteria were used, including accuracy, number of parameters, FLOPs (floating-point operations), and loss. The accuracy reflects how well the model performs, while the number of parameters indicates the efficiency of the model: a smaller number of parameters is generally better. The FLOPs metric also indicates efficiency; it counts the number of floating-point operations required for a forward pass, so again a smaller number is better. All experiments were conducted on TITAN X GPUs. Experimental Results and Analysis In this section, we provide a detailed report on the experimental results and analysis. The overall experiment consists of several parts, including evaluating the superiority of the proposed model, evaluating the efficiency of two-stream mutual learning based on two identical networks, and evaluating the efficiency of two-stream mutual learning based on two different networks. Evaluating the Superiority of the Proposed Model To validate the superiority of the proposed TSML, several models were used for comparison, including ResNet18, ResNet34, ResNet50, Vgg16 [17], and MobileNetv2 [18]. The results are shown in Table 3. Table 3 shows that the proposed model outperforms other common models in pig behavior recognition. Specifically, the proposed model achieves 96.52% accuracy, which is 4.51%, 0.87%, 2.19%, and 2.18% better than the accuracy rates of ResNet18 [19], ResNet50 [19], MobileNetV2, and VGG16, respectively. These results demonstrate the superiority of the proposed model.
Furthermore, to provide readers with a more intuitive understanding of the superiority of TSML in pig behavior recognition, we report the accuracies and losses of the different comparison models over training epochs in Figure 4. Figure 4a shows the accuracy of the different models at different epochs, while Figure 4b shows the corresponding loss. The results in Figure 4 demonstrate that TSML reaches higher accuracy and lower loss than the other models, which further validates the effectiveness of the proposed TSML. The outstanding performance of the TSML model can be attributed to its ability to effectively capture richer appearance and motion features [20], resulting in improved accuracy in pig behavior recognition tasks [21]. Evaluating the Efficiency of the Two-Stream Network in Pig Behavior Recognition To validate the effectiveness of the two-stream network setting in the pig behavior recognition framework [22], we compared the single RGB stream, the single flow stream, and the fusion of the two streams for several models [23], including ResNet18, ResNet34, ResNet50, Vgg16, and MobileNetv2. The results of the comparison are displayed in Table 4. Table 4 clearly demonstrates that the two-stream network setting consistently outperforms the single RGB and flow networks by a significant margin. To be more specific, the two-stream version of the ResNet18 model achieved an accuracy of 92.35%, which is 1.22% and 50.23% higher than its corresponding RGB and flow versions, respectively. The two-stream version of the ResNet50 model achieved an accuracy of 95.69%, which is 2.70% and 10.79% better than its corresponding RGB and flow versions, respectively. The two-stream version of the MobileNetv2 model achieved an accuracy of 94.45%, which is 0.60% and 9.52% better than its corresponding RGB and flow streams. The two-stream version of the Vgg16 model achieved an accuracy of 94.44%, which is 0.45% and 13.36% better than its corresponding RGB and flow versions, respectively. Furthermore, to provide readers with a more intuitive understanding of the two-stream network, we include Figure 5. As depicted in Figure 5, the fusion of the RGB and flow streams consistently achieved better results than using the RGB or flow stream alone. These results clearly demonstrate the superiority of the two-stream setting in the pig behavior recognition task. The use of both streams provides complementary information, allowing for more accurate and robust recognition of pig behaviors [24]. The fusion of multiple modalities has been a popular trend in many computer vision tasks, and our results provide evidence supporting this trend in the field of pig behavior recognition. The results of these comparisons provide evidence of the superiority of the two-stream network in the pig behavior recognition task. The reason is that the two-stream network is capable of capturing both the appearance and the motion information in the video, so that effective spatiotemporal features can be extracted, ultimately facilitating improved performance in pig behavior recognition. The RGB stream captures appearance features such as color and texture, while the flow stream focuses on motion features such as the intensity and direction of movement. By combining both streams, our proposed two-stream network can effectively capture the complex spatiotemporal information needed for more precise and reliable recognition of pig behavior.
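As a simple illustration of how the decisions of the two streams can be merged, the short sketch below performs weighted late fusion of the per-class probabilities produced by the RGB and flow branches; the fusion weight of 0.6 and the probability values are purely illustrative and are not taken from the paper.

import numpy as np

def fuse_two_streams(probs_rgb, probs_flow, w_rgb=0.6):
    # weighted average of the class probabilities from the two streams
    fused = w_rgb * probs_rgb + (1.0 - w_rgb) * probs_flow
    return int(np.argmax(fused))  # index of the predicted behavior class

# six behavior classes: fighting, drinking, eating, investigating, lying, walking
rgb = np.array([0.05, 0.10, 0.60, 0.10, 0.10, 0.05])
flow = np.array([0.10, 0.05, 0.55, 0.15, 0.10, 0.05])
print(fuse_two_streams(rgb, flow))  # -> 2 (eating)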
Compared with traditional single-stream convolutional networks [25], using two streams allows for more efficient extraction of information. This approach reduces noise and irrelevant information while improving the accuracy of the recognition process. As a result, our proposed two-stream network provides a practical and viable approach for reliable pig behavior recognition in real-world applications. In summary, the two-stream network is considered superior for pig behavior recognition tasks due to its ability to capture both appearance and motion information effectively. By processing this information jointly, our TSML model can generate more robust and accurate feature representations, making it a promising choice for pig behavior recognition. Evaluating the Efficiency of TSML Based on Two Identical Networks In this section, we evaluate the performance of our proposed TSML approach based on two identical student networks. Specifically, TSML was instantiated with different backbone architectures, including ResNet18, ResNet50, and MobileNetV2, to validate the generalization of the proposed approach. To simplify the explanation, we refer to these models as Res18, Res50, and Mobilev2, respectively. The comparison results are shown in Table 5. In Table 5, SigRes18 refers to the RGB and flow two-stream network in which each stream comprises a single Res18 network. MulRes18(Res18) indicates that both the RGB and the flow streams consist of two student networks that perform mutual learning, with each student network based on the Res18 architecture. The other single models (SigRes50 and SigMobv2) and the other mutual models (MulRes50 and MulMobv2) share the corresponding meanings. Furthermore, the suffix in MulRes18(Res18)-i denotes the index of the two mutual-learning [26] student models. Table 5 illustrates that TSML with two identical networks achieves significantly and consistently better performance than the corresponding single networks. Specifically, MulRes18(Res18)-1/MulRes18(Res18)-2 obtain 2.26%/2.41% better accuracy than SigRes18; MulRes50(Res50)-1/MulRes50(Res50)-2 obtain 0.86%/0.57% better accuracy than SigRes50; and MulMobv2(Mobilev2)-1/MulMobv2(Mobilev2)-2 obtain 0.29%/0.15% better accuracy than SigMobv2. These results validate the superiority of the TSML approach based on two identical student networks for both the RGB and optical flow branches. Additionally, in order to provide readers with a more intuitive understanding and visualization of the superiority of TSML based on two identical networks, we include Figures 6 and 7. These figures demonstrate that MulRes18(Res18) and MulMobv2(Mobilev2) achieve higher accuracy and lower loss than SigRes18 and SigMobv2, which validates the effectiveness of TSML based on two identical student networks. The reason why the TSML model based on two identical student networks achieves better performance is as follows. Although the two student networks in the TSML model have the same network structure, their initial parameter values differ, so they acquire different knowledge. During training they can therefore obtain diverse knowledge and experience from each other, leading to better and more efficient behavior recognition performance.
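The following sketch shows one way the student networks could be instantiated with either identical or different backbones, assuming a recent torchvision is available; the backbone names, the six-class output layer, and the pairing below are illustrative rather than the authors' exact configuration.

import torch.nn as nn
from torchvision import models

def make_student(backbone, num_classes=6):
    # build one student network and replace its classifier head for six pig behaviors
    if backbone == "resnet18":
        net = models.resnet18(weights=None)
        net.fc = nn.Linear(net.fc.in_features, num_classes)
    elif backbone == "resnet50":
        net = models.resnet50(weights=None)
        net.fc = nn.Linear(net.fc.in_features, num_classes)
    elif backbone == "mobilenet_v2":
        net = models.mobilenet_v2(weights=None)
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, num_classes)
    else:
        raise ValueError(f"unknown backbone: {backbone}")
    return net

# identical pair, e.g. MulRes18(Res18), or a mixed pair, e.g. MulRes18(Res50)
student_a, student_b = make_student("resnet18"), make_student("resnet50")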
Evaluating the Efficiency of TSML Based on Two Different Networks In this section, we evaluate the performance of the proposed TSML approach using two different student networks. TSML was instantiated with different backbone architectures, including ResNet18, ResNet34, and ResNet50; for ease of reference, we refer to these models as Res18, Res34, and Res50. The comparison results are shown in Table 6. In Table 6, the SigRes18 model refers to both the RGB and the optical flow streams of TSML comprising a single Res18 network, and the other single models share the corresponding meaning. MulRes18(Res34) and MulRes34(Res18) indicate that the two mutual-learning student networks in each stream of TSML adopt the ResNet18 and ResNet34 structures, respectively. Other mutual-learning models, such as MulRes18(Res50)/MulRes50(Res18) and MulRes34(Res50)/MulRes50(Res34), share similar meanings. Table 6 demonstrates that TSML with two different networks consistently achieves significantly better performance than the corresponding single networks. Specifically, MulRes18(Res34)/MulRes34(Res18) achieve 2.41%/1.61% better accuracy than SigRes18/SigRes34; MulRes18(Res50)/MulRes50(Res18) achieve 2.71%/0.52% better accuracy than SigRes18/SigRes50; and MulRes34(Res50)/MulRes50(Res34) achieve 1.77%/0.72% better accuracy than SigRes34/SigRes50. These results highlight the superiority of the TSML approach that employs two different student networks for both the RGB and optical flow branches. In some cases, smaller student networks trained with mutual learning can even outperform larger single neural networks. The above experimental results indicate that the TSML model based on different student networks is superior. This is attributed to the fact that in this model the two student networks have different network structures and initial parameter values, and therefore acquire different knowledge. Their collaborative learning allows them to obtain different knowledge and experience from their peers, thereby achieving superior performance. Discussions The proposed TSML approach leverages both the mutual-learning and the two-stream network strategies to gather enhanced appearance and motion information underlying the video in an interactive manner. The cooperation between the RGB and flow streams enables TSML to achieve promising accuracy and efficiency. The mutual-learning strategy allows the two student networks in each stream to learn collaboratively, gaining more robust and richer features in a shorter time, which further enhances the accuracy of pig behavior recognition. Our approach not only improves the accuracy of pig behavior recognition, but also enhances the efficiency of the recognition process. To validate the superiority of TSML, several experiments were designed and conducted, including an evaluation of the superiority of the TSML model and evaluations of the TSML model based on two identical or two different student networks. The experiments demonstrated that our proposed TSML model outperforms other models for pig behavior recognition, achieving an improvement of about 2.71% in accuracy. Specifically, the TSML model achieved 96.52% accuracy, which is 4.51%, 0.87%, 2.19%, and 2.18% better than ResNet18, ResNet50, MobileNetV2, and Vgg16, respectively. To sum up, the experimental results demonstrate that our TSML model outperforms the competing methods in terms of accuracy when applied to the pig behavior recognition task.
The outstanding performance of the TSML model can be attributed to its ability to effectively capture richer appearance and motion features. By leveraging the twostream mutual-learning framework, the model can efficiently extract both appearance and temporal information, leading to enhanced feature representation and improved accuracy in pig behavior recognition tasks. The RGB stream captures appearance features such as color and texture, while the flow stream captures motion features such as the intensity and direction of movement. By combining both streams and by collaboratively learning between them, our TSML model is better able to capture the complex visual cues that are critical for pig behavior recognition. In contrast to other approaches, our TSML model is specifically designed to balance the performance and efficiency trade-off in pig behavior recognition tasks. By utilizing mutual-learning and two-stream network strategies, the model can capture more robust features with fewer parameters, making it more practical for real-world applications. This approach provides a comprehensive understanding of pig behavior and further insights on the creation of a robust deep network that can be applied to various tasks. Furthermore, our experimental results demonstrate that the TSML model with two different or same networks in both the RGB and flow streams consistently achieves significantly superior performance compared to their corresponding single network. This improvement can be attributed to several factors. Firstly, by using two student networks with unique initial parameter values or network structures, the TSML model can gain different knowledge and acquire a more comprehensive understanding of the appearance or flow of information in the videos. This approach allows the networks to learn from each other, leading to a more robust and comprehensive feature representation that enhances the accuracy of pig behavior recognition. Additionally, the collaborative learning of the student networks allows them to acquire different knowledge and experience from their peers. This approach enhances their ability to recognize pig behavior more accurately and efficiently. By combining these mechanisms, our proposed model achieves a high level of performance in pig behavior recognition. In summary, our experimental results suggest that using multiple student networks within the TSML model can significantly improve pig behavior recognition accuracy and efficiency. The benefit of mutual learning and information fusion between different networks provides a substantial gain that can be performance-driven in various domains. However, one potential disadvantage of our TSML model is that it requires a larger amount of training data to achieve optimal performance. Nonetheless, given the significant improvement in accuracy, this method is considered suitable for practical applications in pig farming. To further improve the accuracy and efficiency of the model, future work could explore the use of other advanced machine learning techniques such as reinforcement learning, transfer learning, and attention mechanisms. Additionally, future studies could apply our proposed approach to other domains such as wildlife conservation for animal behavior recognition. Conclusions This paper proposes a novel approach for pig behavior recognition, named TSML, which combines mutual learning with two stream neural networks that separately learn both appearance and motion information from videos. 
The mutual-learning strategy ensures that the basic student neural networks in the model update parameters collaboratively and gain information from each other throughout the training process. Furthermore, the two-stream network collects both appearance and motion information via its RGB and flow branches. Leveraging mutual learning and the two-stream network, the TSML model achieves excellent pig behavior recognition performance with higher efficiency and effectiveness. The experimental results show that the TSML model can greatly improve pig behavior recognition performance, delivering 2.71% higher accuracy in comparison to other models. In terms of future work, we will explore the application of the proposed model to behavior recognition tasks for other livestock such as cattle and sheep. Additionally, we will continue to investigate more efficient and effective network structures to enhance the accuracy and efficiency of pig behavior recognition. Lastly, we will explore effective methods for identifying complex group pig behaviors. Data Availability Statement: The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request. Conflicts of Interest: We declare that this paper has no conflict of interest. Furthermore, we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.
6,830.6
2023-05-26T00:00:00.000
[ "Computer Science" ]
Robust Visual Tracking Based on Adaptive Convolutional Features and Offline Siamese Tracker Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. The existing spatially regularized discriminative correlation filter (SRDCF) method learns partial-target information or background information when experiencing rotation, out of view, and heavy occlusion. In order to reduce the computational complexity by creating a novel method to enhance tracking ability, we first introduce an adaptive dimensionality reduction technique to extract the features from the image, based on pre-trained VGG-Net. We then propose an adaptive model update to assign weights during an update procedure depending on the peak-to-sidelobe ratio. Finally, we combine the online SRDCF-based tracker with the offline Siamese tracker to accomplish long term tracking. Experimental results demonstrate that the proposed tracker has satisfactory performance in a wide range of challenging tracking scenarios. Introduction Target tracking is a classical computer vision problem with many applications. In generic tracking, the goal is to estimate the trajectory and size of a target in an image sequence, given only its initial information [1]. Target tracking has significantly progressed, but challenges still remain due to appearance change, scale change, deformation, and occlusion. Researchers have been tackling these problems by using the learning discriminative appearance model of the target. This method describes the target and background appearance based on rich feature representation. As such, this paper investigates deep robust feature representations, adaptive model updates, and Siamese offline tracker for robust visual tracking. Danelljan et al. [2] proposed the spatial regularization correlation filter (SRDCF), which introduced learning to the penalize correlation filter coefficients depending on their spatial location. The SRDCF framework has been significantly improved by including scale estimation [3], non-linear kernels [4], long-term memory [5], and by removing the periodic effects of circular convolution [2,6,7]. However, three main problems limit the SRDCF formulation. Firstly, the dimension of the deep features significantly limits the tracking speed. Secondly, short-term target tracking algorithms cannot handle the out-of-view problem. Thirdly, online updates with fixed rate cause drift when suffering heavy occlusion. Advances in visual tracking have been made for the features learned from deep convolutional neural networks (DCNNs). However, the outperforming deep features rely heavily on training on large-scale datasets. Thus, most state-of-the-art trackers use pre-trained networks to extract deep features. However, these improvements in robustness cause significant reductions in tracking speed. Siamese Networks have also been used to solve the tracking problem. The matching mechanism in Siamese Network approaches prevent model contamination and achieve better tracking performance. To perform long-term tracking, some methods implement a failure detection mechanism to combine multiple detectors with complementary characteristics at the different tracking stages. However, these approaches only use online update tracking and cannot unite the Siamese Trackers. 
Based on the discussion above, we propose a novel SRDCF tracking framework that synthetically uses DCNN and failure detection combined with Siamese trackers. The main contributions of this paper are as follows: (1) We propose a method to obtain a specific feature map considering the tradeoffs between spatial information and semantic information though convolutional feature response, and use an adaptive projection matrix to obtain the principal component of the corresponding feature map, which reduces the computational complexity during feature extraction. (2) We propose a novel adaptive model updating method. First, we obtain the confidence of the target position based on the peak-to-sidelobe ratio (PSR), and then explore the confidence map to obtain the PSR, which is highly reliable. Finally, the weight is calculated by the given PSR and is used to achieve adaptive model updating. (3) We also combine the SRDCF frameworks with a Siamese Tracker by assigning the threshold; we infer the tracker status and warn of potential tracking failures in order to achieve long-term tracking by switching two different trackers. The rest of the paper is organized as follows: in Section 2, we review related research work. In Section 3, we present the proposed visual tracking framework in detail. Numerous experimental results and analyses are shown in Section 4. In Section 5, we provide the conclusions to our work. Tracker with Correlation Filter Discriminative Correlation Filters (DCFs) [2,8,9] have outstanding results for visual tracking. This approach uses the circular correlation properties to train a regressor using a sliding window. At first, DCF methods [8,10] were limited to a single feature channel. Some approaches have extended the DCF framework to multi-channel feature maps [11][12][13]. The high-dimensional features are exploited in multi-channel DCF for improved tracking. The combination of the DCF framework and deep convolutional features [14] has significantly improved tracking ability. Danelljan et al. [3] proposed scale estimation to achieve spatial evaluation. Danelljan et al. [2] also introduced spatial regularization in order to alleviate the boundary effect in SRDCF. Valmadre et al. [15] constructed a convolutional neural network (CNN) that contains a correlation filter as the part of the network and uses end-to-end representation learning based on the similarity between correlation and convolution operations. Tracker with Deep Features The introduction of CNNs has significantly progressed the field of computer vision, including visual tracking. Wang et al. [9] proposed a deep learning tracker (DLT) that is based on the combination of offline pre-training and online fine-tuning. Wang et al. [16] designed the structured output deep learning tracker (SO-DLT) within the particle filters framework. Trackers were introduced that learn target-specific CNNs without pre-training to prevent the problems caused by offline training, which treat the CNN as black box [17,18]. In order to learn multiple correlation filters, Ma et al. [19] extracted the hierarchical convolutional features (HCF) from three layers of related networks. Danelljan et al. [20] proposed a tracker by learning continuous convolution operators (CCOT) to interpolate discrete features and train spatial continuous convolution filters, which enabled the efficient integration of multi-resolution deep feature maps. Danelljan et al. 
[21] also designed an efficient convolution operator (ECO) for visual tracking using a factorized convolution operation to avoid the low computational efficiency caused by the CNN operations. Trackers with Feature Dimensionality Reduction Dimensionality reduction is widely used in visual tracking due to the computational complexity. Danelljan et al. [22] minimized the data term used in Principal Component Analysis (PCA) on the target appearance. In order to achieve sparse representation of the related target, Huang et al. [23] used sparse multi-manifold learning to achieve semi-supervised dimensionality reduction. Cai et al. [24] designed an adaptive dimensionality reduction method to handle the high-dimensional features extracted by deep convolutional networks. To model the mapping from a high-dimensional SPD manifold to a low-dimensional manifold with an orthonormal projection, Harandi et al. [25] proposed a dimensionality reduction method that handles high-dimensional SPD matrices by constructing a lower-dimensional SPD manifold. Trackers with Siamese Networks Siamese architectures have been exploited in the tracking field, performing impressively without any model update. Tao et al. [26] trained a Siamese network to identify candidate image locations that match the initial object appearance, and called their method the Siamese Instance Search Tracker (SINT). In this approach, many candidate patches are passed through the network, and the patch with the highest matching score is selected as the tracking output. Held et al. [27] introduced GOTURN, which avoids the need to score many candidate patches and runs at 100 fps. However, a disadvantage of their approach is that it does not possess intrinsic invariance to translations of the search image. Later, Bertinetto et al. [28] trained a similar Siamese network to locate an exemplar image within a larger search image. The network parameters were initialized by pre-training on the ILSVRC2012 (Large Scale Visual Recognition Challenge) [29] image classification problem, and then fine-tuned for the similarity learning problem in a second offline phase. Baseline The SRDCF tracker [2] is a spatially regularized correlation filter obtained by exploiting the sparse nature of the proposed regularization in the Fourier domain. The tracker effectively reduces the boundary effect and has achieved better tracking performance on the OTB2015 benchmark compared with other correlation filter tracking algorithms. In the learning stage, the SRDCF tracker introduces a spatial weight function $\omega$ to penalize the magnitude of the filter coefficients $f$. The regularization weights $\omega$ determine the importance of the correlation filter coefficients $f$ depending on their spatial locations. Coefficients in $f$ residing outside the target region are suppressed by assigning higher weights in $\omega$, and vice versa. The resulting optimization problem is expressed as $\varepsilon(f) = \sum_{k=1}^{t} \big\| \sum_{l=1}^{d} x_{k}^{l} \ast f^{l} - y_{k} \big\|^{2} + \sum_{l=1}^{d} \big\| \omega \cdot f^{l} \big\|^{2}$, (1) where $x_{k}^{l} \ast f^{l}$ represents the convolution response of the filter to sample $x_{k}$ and $l$ indexes the feature dimension. The desired output $y_{k}$ is a scalar-valued function over the domain that includes a label for each location in the sample, $k$ denotes the frame index, $t$ represents the total number of samples, and $d$ denotes the dimension of the feature map.
By applying Parseval's theorem to Equation (1), the filter $f$ can equivalently be obtained by minimizing the resulting loss function in Equation (2) over the DFT coefficients $\hat f$: $\varepsilon(\hat f) = \sum_{k=1}^{t} \big\| \sum_{l=1}^{d} \mathcal{D}(\hat x_{k}^{l}) \hat f^{l} - \hat y_{k} \big\|^{2} + \sum_{l=1}^{d} \big\| \tfrac{1}{MN} C(\hat \omega) \hat f^{l} \big\|^{2}$. (2) The hat symbol denotes the DFT, $M \times N$ is the sample size, $\mathcal{D}(\hat x_{k}^{l})$ denotes the diagonal matrix with the elements of the vector $\hat x_{k}^{l}$ on its diagonal, $C(\hat \omega)$ represents circular two-dimensional (2D) convolution with $\hat \omega$ (i.e., $C(\hat \omega)\hat f^{l} = \operatorname{vec}(\hat \omega \ast \hat f^{l})$), and $\operatorname{vec}(\cdot)$ is the vectorization operator. By applying a unitary $MN \times MN$ matrix $B$ to obtain the real-valued part of $\hat f^{l}$, we define $\tilde f^{l} = B \hat f^{l}$. The loss function is then simplified by stacking the fully vectorized real-valued filter $\tilde f = [(\tilde f^{1})^{T}, \ldots, (\tilde f^{d})^{T}]^{T}$ and defining $D_{k} = [D_{k}^{1}, \ldots, D_{k}^{d}]$ with $D_{k}^{l} = B \mathcal{D}(\hat x_{k}^{l}) B^{H}$, $\tilde y_{k} = B \hat y_{k}$, and $C = B C(\hat \omega) B^{H} / MN$. We define $W$ as the $dMN \times dMN$ block-diagonal matrix with each diagonal block equal to $C$. Finally, the regularized correlation filter is obtained by solving the normal equations $A_{t} \tilde f = \tilde b_{t}$, with $A_{t} = \sum_{k=1}^{t} D_{k}^{T} D_{k} + W^{T} W$ and $\tilde b_{t} = \sum_{k=1}^{t} D_{k}^{T} \tilde y_{k}$. (3) The SRDCF model is updated by first extracting a new training sample $x_{t}$ centered at the target location, where $t$ denotes the current frame number. We then update $A_{t}$ in Equation (4) and $b_{t}$ in Equation (5) with a learning rate $\gamma \geq 0$: $A_{t} = (1-\gamma) A_{t-1} + \gamma \left( D_{t}^{T} D_{t} + W^{T} W \right)$, (4) $\tilde b_{t} = (1-\gamma) \tilde b_{t-1} + \gamma D_{t}^{T} \tilde y_{t}$. (5)
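To illustrate the linear update of Equations (4) and (5), the sketch below maintains the running normal-equation terms with learning rate gamma and recovers the filter by a direct solve; the direct dense solve is shown only for clarity (SRDCF uses an iterative Gauss-Seidel solver on sparse matrices in practice), and the construction of the per-frame data terms is omitted.

import numpy as np

def update_normal_equations(A_prev, b_prev, A_frame, b_frame, gamma=0.025):
    # running average of the normal-equation terms, Eqs. (4)-(5)
    A_t = (1.0 - gamma) * A_prev + gamma * A_frame
    b_t = (1.0 - gamma) * b_prev + gamma * b_frame
    return A_t, b_t

def solve_filter(A_t, b_t):
    # obtain the vectorized real-valued filter from A_t f = b_t
    return np.linalg.solve(A_t, b_t)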
Adaptive Convolutional Features By applying the convolutional features of the pre-trained VGG-Net [12], we used an adaptive dimensionality reduction method to construct the feature space, and then designed a peak-to-sidelobe-ratio criterion to choose more reliable results for updating the model. For long-term tracking, we designed a novel failure detection mechanism in the tracking procedure. By combining the online updating method and the offline tracker, we not only addressed the issues in the SRDCF framework, but also mitigated the occlusion, deformation, and out-of-view problems present in long-term tracking. The flow chart of the proposed tracking algorithm is shown in Figure 1. Convolutional Features Convolutional neural networks (CNNs) have been successfully applied to large-scale image classification and detection, either by extracting features or by directly performing the task, as with AlexNet [30], GoogleNet [31], ResNet [32], and VGG-Net [12]. VGG-Net was trained on 1.3 million images of the ImageNet dataset and achieved the best result in a classification challenge. Compared with most CNN models of only five to seven layers, VGG-Net has a deeper structure with up to 19 layers (16 convolutional and three fully-connected layers), which capture spatial information and semantic information, respectively, and can therefore identify deeper features. Research indicates that features extracted from the convolutional layers are better suited for tracking than those extracted from the fully-connected layers. As shown in Figure 2, the feature extracted by the Conv3-4 layer of the VGG-Net model maintains spatial details, in particular information that is useful for accurate localization (Figure 2b). Figure 2d illustrates the Conv5-4 layer of the VGG-Net model, which contains more semantic information. The semantic information yields more robust features when the target deforms during tracking. We chose the Conv3-4 feature in this paper considering the tradeoff between spatial information and semantic information. The feature map of Pool5 is only 7 × 7; achieving accurate localization at such a low resolution is impossible. Bilinear interpolation is typically used to solve this problem by upsampling in the feature mapping space, $\tilde x_{k}(h) = \sum_{i} \beta_{ki}\, x_{k}(h_{i})$, where the weight $\beta_{ki}$ depends on the position of $h$ in the $k$th frame relative to its $i$th adjacent feature vector $h_{i}$, and $h$ indexes locations in the feature space.
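As an illustration of this feature-extraction step, the following sketch pulls the conv3-4 activations from a pre-trained VGG-19 and bilinearly resizes them to the spatial resolution used by the correlation filter. It assumes a recent PyTorch/torchvision installation (the paper's implementation used MatConvNet), and the layer index 16 corresponds to conv3_4 in torchvision's vgg19.features layout.

import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg19(weights="IMAGENET1K_V1").features.eval()
CONV3_4_INDEX = 16  # conv3_4 in torchvision's vgg19.features (assumed layout)

def conv3_4_features(patch, out_size):
    # patch: 1 x 3 x H x W tensor, ImageNet-normalized; out_size: (H', W') of the filter grid
    x = patch
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i == CONV3_4_INDEX:
                break
    # bilinear upsampling of the low-resolution feature map (cf. the interpolation above)
    return F.interpolate(x, size=out_size, mode="bilinear", align_corners=False)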
Adaptive Dimensionality Reduction The feature dimension of the Conv3-4 layer is 56 × 56 × 256, which contains redundant information and increases the computation time. We therefore used an adaptive dimensionality reduction, based on principal component analysis (PCA) of that layer, to preserve the main components of Conv3-4. After applying this method, the feature dimension was reduced from 256 to 130. As shown in Figure 3, the retained components account for 98% of the feature variance in the sequence MotorRolling. Let $x_{t}$ denote the $D_{1}$-dimensional feature learned from frame $t$. Adaptive dimensionality reduction yields a projection matrix $P_{t}$ of size $D_{1} \times D_{2}$ with orthonormal columns, $P_{t}^{T} P_{t} = I$, and applying $P_{t}$ gives the new $D_{2}$-dimensional feature space $\tilde x_{t} = P_{t}^{T} x_{t}$, where the contribution of each frame to the reconstruction objective is weighted by $\eta_{1}, \ldots, \eta_{t}$. We use the singular value decomposition (SVD) of the matrix $R_{t}$ to solve Equation (9): the projection matrix is formed from the first $D_{2}$ singular vectors of $R_{t}$, where $G_{t}$ denotes the covariance matrix of the current appearance and $\Lambda_{t}$ is the $D_{2} \times D_{2}$ diagonal matrix containing the corresponding eigenvalues. The adaptive projection matrix is obtained with a fixed learning rate $\lambda$: the matrix $R_{t}$ and the variance matrix $Q_{t}$ are updated by linear interpolation at every time step, and a fixed learning rate $\gamma \geq 0$ is used to simultaneously update the appearance feature space $\hat x_{t}$ determined through Equation (8). Due to the pooling operations, this feature space contains more semantic information.
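A minimal sketch of this dimensionality-reduction step is given below: it computes a PCA projection from the Conv3-4 activations via SVD and keeps the leading components covering a target fraction of the variance (capped at 130 dimensions); the 98% energy threshold and the cap follow the figures quoted above, while the incremental update of the data matrix with learning rate lambda is omitted for brevity.

import numpy as np

def adaptive_projection(features, energy=0.98, d_max=130):
    # features: (n_locations, 256) Conv3-4 activations gathered from the current sample
    X = features - features.mean(axis=0, keepdims=True)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    ratio = np.cumsum(S ** 2) / np.sum(S ** 2)
    d2 = min(int(np.searchsorted(ratio, energy)) + 1, d_max)
    P = Vt[:d2].T                      # (256, d2) projection with orthonormal columns
    return P

def project(features, P):
    # map the 256-dimensional features into the reduced D2-dimensional space
    return features @ P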
Fast Sub-Grid Detection At the detection stage, the location of the target in a new frame $t$ is estimated by applying the filter $\hat f_{t-1}$ that was updated in the previous frame. The filter is applied at multiple resolutions to estimate changes in the target size. The correlation scores are evaluated on a continuous sub-grid by interpolating the DFT coefficients of the response with complex exponential basis functions, where $i$ denotes the imaginary unit. We iteratively maximize this score function in Equation (16) using Newton's method, starting at the location $(u^{(0)}, v^{(0)}) \in \Omega$; the gradient and Hessian in each iteration are computed by analytically differentiating Equation (16), and the iterations converge to the maximum score. Adaptive Model Update The SRDCF framework uses a fixed learning rate to update the tracking model. Once the target is occluded, the appearance model is negatively affected, which leads to tracking drift. The proposed method uses the peak-to-sidelobe ratio (PSR), $R_{PSR}$, to compute the confidence of the target position [33] and updates the model depending on this confidence. The PSR has been widely used in signal processing; the peak strength of the response is expressed as $R_{PSR,t} = \frac{\max S_{f}(x_{t}) - \phi_{t}}{\sigma_{t}}$, where $S_{f}(x_{t})$ represents the convolution response of the correlation filter to the sample, and $\phi_{t}$ and $\sigma_{t}$ denote the mean and standard deviation of the convolution response to the sample $x_{t}$, respectively. The PSR distribution on the David3 sequence is shown in Figure 4. The higher the PSR, the higher the confidence score of the target location. The target is completely occluded by a tree in the 84th frame, so the corresponding PSR drops to an extreme point, as seen at point A in Figure 4. The PSR gradually increases in the following frames. When the target is completely occluded by the trees again in the 188th frame, the corresponding PSR decreases to an extreme point once more, as shown by point B in Figure 4. The tracking results at points A and B are apparently unreliable and cannot be used to update the model. The experiments show that the tracking result is highly reliable when the PSR is around 10-18. Therefore, it is possible to determine whether the target is affected by occlusion according to the PSR and to assign a weight to the model update accordingly; the model is then updated using the learning rate $\eta$ scaled by this weight.
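The following small sketch shows how the PSR of a response map can be computed and turned into an update weight; the step-function weighting with a threshold of 10 mirrors the reliability range reported above, but the exact weighting rule used in the paper is not reproduced here.

import numpy as np

def peak_to_sidelobe_ratio(response):
    # PSR = (peak - mean) / std of the correlation response map
    return (response.max() - response.mean()) / (response.std() + 1e-12)

def update_weight(psr, psr_threshold=10.0):
    # skip the update (weight 0) when the response looks unreliable, e.g., under heavy occlusion
    return 1.0 if psr >= psr_threshold else 0.0

# effective learning rate for the current frame (base rate eta = 0.01)
response = np.random.rand(64, 64)          # stand-in for a real response map
eta_eff = 0.01 * update_weight(peak_to_sidelobe_ratio(response))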
Long-Term Tracking Mechanism Based on Siamese Offline Tracker Studies have shown the impressive performance of Siamese networks without any model update [26][27][28]. Compared with online trackers, these Siamese-network-based offline trackers are more robust to noisy model updates. Moreover, state-of-the-art tracking performance was achieved with a rich representation model learned from the large ILSVRC15 dataset [29]. However, these Siamese-network-based offline trackers are prone to drift in the presence of distractors that are similar to the target, or when the target appearance in the first frame is significantly different from that in the remaining frames. Motivated by the complementary traits of online and offline trackers, we equipped our online update method with an offline-trained fully convolutional Siamese network [28]; in this way, the stability-plasticity dilemma is balanced. For long-term tracking, tracking-learning-detection (TLD) [34] implements a long-term tracking mechanism in each frame of the image sequence. The proposed algorithm uses a threshold θ_re to activate the long-term tracking mechanism: when max(s_r) falls below θ_re, the tracker switches to the offline Siamese tracker to relocate the target, and this switch is executed once per such event. The implementation details of the fully convolutional Siamese network were provided in a previous study [28]. The ablation study in Section 4.2 shows that the proposed offline tracker avoids noisy model updates and achieves measurable improvements. The overall tracking algorithm is described in Algorithm 1. Algorithm 1: Proposed tracking algorithm.
Input: Image I; initial target position (u^(0), v^(0)) and scale a_r0; previous target position (u^(t-1), v^(t-1)) and scale a_r(t-1).
Output: Estimated object position (u^(t), v^(t)) and scale a_rt.
For each frame I_t:
Extract the deep feature space x̂_t through the pre-trained VGG-Net;
Update the matrices R_t and Q_t by linear interpolation using Equations (13) and (14); perform the SVD and obtain a new P_t;
Update the low-dimensional appearance feature space x̃_t using Equation (15);
Compute the confidence of the target position using Equation (18);
Update the tracking model A_t, b_t, and x̂_t using Equations (19)-(22);
Compute the estimated object position (u^(t), v^(t)) and scale a_rt using fast sub-grid detection;
If max(s_r) < θ_re, update the estimated object position and scale using the offline Siamese tracker;
Else, output the estimated object position and scale directly;
End
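To summarize the control flow of Algorithm 1 in code form, the sketch below wires the main steps together; the helper callables (feature extraction, sub-grid detection, model update, and the Siamese re-detector) are passed in as parameters and are purely illustrative placeholders for the components described above, not a runnable re-implementation of the tracker.

def track_frame(frame, state, extract_features, subgrid_detect,
                update_model, siamese_redetect, theta_re=0.5):
    # 1) deep features and adaptive projection (state carries P_t, R_t, Q_t, A_t, b_t, ...)
    feats = extract_features(frame, state)
    # 2) fast sub-grid detection with the filter from the previous frame
    pos, scale, response = subgrid_detect(state, feats)
    # 3) confidence-weighted (PSR-based) model update
    state = update_model(state, frame, pos, scale, response)
    # 4) long-term mechanism: fall back to the offline Siamese tracker on low scores
    if response.max() < theta_re:
        pos, scale = siamese_redetect(frame, state)
    state["position"], state["scale"] = pos, scale
    return pos, scale, state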
Experimental Results and Analysis This section presents a comprehensive experimental evaluation of the proposed tracker. Implementation Details The configuration used was a standard desktop with an Intel® Core™ i7-4790 CPU at 3.6 GHz, 16 GB RAM, and an NVIDIA Tesla K20m GPU. The weight function ω was constructed starting from the quadratic function $\omega(m, n) = \tau + \zeta\left((m/P)^{2} + (n/Q)^{2}\right)$. The minimum value of ω was set to τ = 0.1, and the impact of the regularizer was set to ζ = 3; P × Q denotes the target size. The number of scales was set to S = 7, with a scale incremental factor of a = 1.01. During adaptive dimensionality reduction, the feature dimension of Conv3-4 was D1 = 256, which was reduced to D2 = 130. For the linear interpolation, the learning rates were set to λ = 0.15 and γ = 0.025. θ_re = 0.5 was used to activate the offline Siamese tracker, which used the same parameters as in a previous study [20]. R_PSR,t was set to 10 during the model update, and the learning rate was set to η = 0.01. Our MATLAB implementation ran at 4.6 frames per second with MatConvNet [35]. Ablation Study An ablation study on VOT2016 was conducted to evaluate the contributions of the adaptive dimensionality reduction, the adaptive model update, and the Siamese tracker in the proposed method. The results for the VOT primary measure, expected average overlap (EAO), and the two supplementary measures, accuracy (A) and robustness (R), are summarized in Table 1. We provide the details of the performance measures and the evaluation protocol of VOT2016 in Section 4.4. The performance of the various modifications of the proposed method is discussed in the following. The adaptive dimensionality reduction is equivalent to extracting the principal components from the original image feature space; it not only reduces the computational complexity, but also improves the semantic representation during the procedure. Removing it led to a performance drop of 11% in EAO compared with the full method. Replacing the adaptive model update means that the corresponding variant (Ours Adr−) does not use the PSR (R_PSR) to compute the confidence of the target position, and therefore performs the update without confidence weighting. Since the updated filter drifts under the deformation and occlusion that affect the appearance of the tracked object, this version reduced our tracker's performance by over 22% in EAO. R_av remained unchanged in this experiment, whereas the A_av of this version dropped by over 40%. Removing the Siamese tracker from the proposed method mainly affected the performance of long-term tracking. The performance drop in EAO compared with the proposed method was around 10%, and the A_av dropped by 20% due to the lack of a failure detection mechanism. This clearly illustrates the importance of our combination of the online tracker and the Siamese tracker, as outlined in Section 3.3. OTB-2015 Benchmark The OTB100 [36] benchmark contains the results of 29 trackers evaluated on 100 sequences using a no-reset evaluation protocol. We measured the tracking quality using precision and success plots. The success plot shows the fraction of frames in which the overlap between the predicted and ground-truth bounding boxes exceeds a threshold, over all threshold values; the precision plot shows similar statistics for the center error. The results are summarized by the areas under the curve (AUC) of these plots. Here, we only show the results for top-performing recent baselines to avoid clutter, including Struck [8], TGPR [37], DSST [3], KCF [4], SAMF [38], RPT [39], LCT [5], and the recent top-performing state-of-the-art trackers SRDCF [2] and MUSTER [40]. The results are shown in Figure 5. The proposed method performed the best on OTB100 and outperformed the baseline tracker, SRDCF. The OTB success plots computed on these trajectories and summarized by the AUC values are equal to the average overlap [41].
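For reference, the success-plot statistic described above can be computed as in the short sketch below: per-frame overlaps are thresholded over a range of values and the resulting curve is summarized by its AUC (which equals the mean overlap); the box format and threshold sampling are illustrative.

import numpy as np

def iou(box_a, box_b):
    # intersection-over-union of two boxes given as (x, y, w, h)
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
    yb = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def success_curve(pred_boxes, gt_boxes, thresholds=np.linspace(0, 1, 101)):
    # fraction of frames whose overlap exceeds each threshold, plus the AUC summary
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    curve = np.array([(overlaps > t).mean() for t in thresholds])
    return curve, curve.mean()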
The proposed method performed the best in terms of size change, occlusion, camera motion, and unassigned. During the illumination change challenge, the proposed tracker did not perform better than four trackers, including ECO, CCOT, MLDF, and SSAT. Tracking Speed Analysis Speed measurements on a single CPU were computed using an Intel ® Core™ I74790 CPU, 3.6 GHz, 16 GB RAM, NVIDIA Tesla K20 m GPU standard desktop. Compared with the two best-performing methods, ECO and CCOT, the proposed method was slower than ECO, while being four times faster than CCOT. Compared with other trackers that apply deep ConvNets, such as DeepSRDCF [14] and SiamFC, the proposed tracker had better tracking results and was twice as fast as DeepSRDCF. The Tracking Speed Analysis Speed measurements on a single CPU were computed using an Intel ® Core™ I74790 CPU, 3.6 GHz, 16 GB RAM, NVIDIA Tesla K20 m GPU standard desktop. Compared with the two best-performing methods, ECO and CCOT, the proposed method was slower than ECO, while being four times faster than CCOT. Compared with other trackers that apply deep ConvNets, such as DeepSRDCF [14] and SiamFC, the proposed tracker had better tracking results and was twice as fast as DeepSRDCF. The proposed tracker performs nearly two times slower than the baseline SRDCF, but achieved better tracking results. Compared with baseline real-time trackers like KCF, DSST, and Staple, the proposed tracker performed poorly, but the tracking performance of the proposed tracker was much better. The speed of trackers in terms of frames per second is shown in Table 3. The average speed of the proposed tracker measured on the VOT 2016 dataset was approximately 4.6 fps or 217 ms/frame. Figure 7 shows the processing time required by each step of the proposed method. Among them, the Fast Sub-Grid Detection process required 173 ms, the Adaptive Model Update required 67 ms, and the offline Siamese Tracker required 136 ms. The condition max(s r ) depends on whether or not the offline Siamese Tracker is employed. Due to the adaptive dimensionality reduction, the proposed tracker can save time than when directly using deep features. Qualitative Evaluation on the OTB Benchmark In this section, we focus on the tracking results for objects experiencing severe occlusion, illumination, and in-plane rotation on OTB100. The compared trackers included the baseline SRPDCF, MUSTER, LCT, RPT, and SAMF. The tracking results are shown in Figure 8. Given the rich representation of deep ConvNet, the proposed tracker outperformed other trackers given complex attributes. In sequence Car4 and CarDark, the illumination occurs in frames 205 and 333, respectively. In the sequence FaceOcc2, the target is occluded by a cap and book. In the Freeman sequence, the target is suffering from severe in-plane rotation. Due to the adaptive model update, the model is updated based on the peak-to-sidelobe ratio, which prevents the correlation filter from learning background information and tracking the object. Due to the deep ConvNet features, the proposed tracker contains rich representation that performs well when experiencing illumination change in the Car 4 and CarDark sequences. Notably, the proposed tracker succeeds in tracking the target until the In this section, we focus on the tracking results for objects experiencing severe occlusion, illumination, and in-plane rotation on OTB100. The compared trackers included the baseline SRPDCF, MUSTER, LCT, RPT, and SAMF. The tracking results are shown in Figure 8. 
Given the rich representation of deep ConvNet, the proposed tracker outperformed other trackers given complex attributes. In sequence Car4 and CarDark, the illumination occurs in frames 205 and 333, respectively. In the sequence FaceOcc2, the target is occluded by a cap and book. In the Freeman sequence, the target is suffering from severe in-plane rotation. Due to the adaptive model update, the model is updated based on the peak-to-sidelobe ratio, which prevents the correlation filter from learning background information and tracking the object. Due to the deep ConvNet features, the proposed tracker contains rich representation that performs well when experiencing illumination change in the Car 4 and CarDark sequences. Notably, the proposed tracker succeeds in tracking the target until the very end of the FaceOcc2 and Freeman sequences. The offline Siamese Tracker is activated to achieve long-term tracking to prevent tracking failure from the online model update. In this section, we focus on the tracking results for objects experiencing severe occlusion, illumination, and in-plane rotation on OTB100. The compared trackers included the baseline SRPDCF, MUSTER, LCT, RPT, and SAMF. The tracking results are shown in Figure 8. Given the rich representation of deep ConvNet, the proposed tracker outperformed other trackers given complex attributes. In sequence Car4 and CarDark, the illumination occurs in frames 205 and 333, respectively. In the sequence FaceOcc2, the target is occluded by a cap and book. In the Freeman sequence, the target is suffering from severe in-plane rotation. Due to the adaptive model update, the model is updated based on the peak-to-sidelobe ratio, which prevents the correlation filter from learning background information and tracking the object. Due to the deep ConvNet features, the proposed tracker contains rich representation that performs well when experiencing illumination change in the Car 4 and CarDark sequences. Notably, the proposed tracker succeeds in tracking the target until the very end of the FaceOcc2 and Freeman sequences. The offline Siamese Tracker is activated to achieve long-term tracking to prevent tracking failure from the online model update. Qualitative Evaluation on VOT Benchmark In this section, we focus on the tracking results of objects undergoing severe occlusion, scale change, and camera motion on VOT2016. The compared trackers included CCOT, ECO, Staple, SiamFC, and the baseline SRDCF. The tracking results are shown in Figure 9. The proposed tracker outperformed the other trackers in terms of occlusion, scale change, and camera change, which is illustrated in Section 4.5. In the Tiger sequence, the target is occluded frequently during the entire procedure. The tracker based on deep ConvNet performed well in this sequence, since the high number of layers retains rich semantics information. In the Bolt1 and Dinosaur sequence, the target experiences scale change. Compared with the other trackers, the proposed tracker performed well, due to the long-term mechanism of the offline Siamese tracker. In the Racing sequence, the camera changes throughout the sequence. Nearly all the trackers can track the target successfully, whereas the proposed tracker achieved the most accurate tracking, which can be seen in Figure 9d. (a) Tiger Qualitative Evaluation on VOT Benchmark In this section, we focus on the tracking results of objects undergoing severe occlusion, scale change, and camera motion on VOT2016. 
Conclusions

In this paper, we propose a visual tracking framework that combines deep ConvNet features, adaptive model updates, and an offline Siamese tracker. The adaptive dimensionality reduction provides low-dimensional features for the correlation filter, reducing the computational complexity. The adaptive model-updating method improves tracking performance under occlusion, and the offline Siamese tracker enables long-term tracking. Extensive experimental results demonstrate that the proposed tracker outperforms state-of-the-art trackers on sequences with complex attributes, highlighting the benefits of our method.
Efficient Task Offloading for 802.11p-Based Cloud-Aware Mobile Fog Computing System in Vehicular Networks

Jiangsu Key Laboratory of Traffic and Transportation Security, Huaiyin Institute of Technology, Huaian 223003, China; Jiangsu Laboratory of Lake Environment Remote Sensing Technologies, Huaiyin Institute of Technology, Huaian 223003, China; School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China; Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102, USA

Introduction

Intelligent connected vehicles can improve traffic safety and ensure passenger comfort by supporting various applications such as autonomous driving, safety early warning, natural language processing, advertisements, and entertainment in the vehicular environment [1-4]. These applications consist of numerous latency-sensitive or non-latency-sensitive, computation-intensive tasks [5]. However, the computational capability of vehicles is limited, which makes it difficult to support latency-sensitive applications. Vehicular fog computing (VFC) has emerged as an efficient approach to tackle this issue in vehicular networks, where computational resources are pushed to the network edge to satisfy the requirements of latency-sensitive tasks [6,7]. Nevertheless, the computational resources of the VFC system are insufficient because the number of tasks generated by vehicular applications is very large. Therefore, it is critical to propose a new computing paradigm for the vehicular network. The mobile fog computing (MFC) system has been proposed as such a paradigm: it extends the computational capability of the VFC system by working jointly with the remote cloud. In the MFC system, each vehicle is considered a computational resource unit (RU) with the same computational capability [8], and the vehicles adopt the IEEE 802.11p protocol to communicate with each other [9]. 802.11p employs the enhanced distributed channel access (EDCA) mechanism to provide differentiated quality-of-service (QoS) support. Specifically, the EDCA mechanism defines access categories (ACs) with different priorities to transmit different data traffic [10]. Since the vehicles in the MFC system generate numerous latency-sensitive/non-latency-sensitive, computation-intensive tasks with different delay requirements, we consider the latency-sensitive, computation-intensive tasks to be high priority, transmitted by a higher-priority AC to obtain higher-level QoS, while the non-latency-sensitive tasks are low priority and assigned a lower-priority AC. When a vehicle, i.e., a service requester, generates a high-priority task, it can offload the task to other vehicles or to the remote cloud; that is, the task is either accepted by the vehicular fog or transmitted to the remote cloud. Since the QoS requirement of a low-priority task is not stringent, the service requester can only offload low-priority tasks to other vehicles in the vehicular fog; otherwise, such tasks are rejected by the system. Once a task is accepted by the vehicular fog, the system needs to determine how many RUs to assign so as to obtain the maximal long-term expected reward. Note that the main goal of the MFC system is to reduce the executing time of tasks [11].
The MFC system in this paper includes the features proposed in previous studies: (1) vehicles arrive at and depart from the system; and (2) computing tasks arrive and depart, so the number of available resources in the system is variable. In addition, the MFC system has a unique feature: tasks of different priorities are transmitted by different ACs of the 802.11p EDCA mechanism, which makes it challenging to find an optimal task offloading strategy that maximizes the long-term expected system reward. To the best of our knowledge, although there are extensive studies on task offloading schemes in MFC systems for vehicular networks, no existing work considers this feature, i.e., tasks with different delay requirements transmitted by different ACs of the 802.11p EDCA mechanism, which poses a significant challenge for constructing a model to find the optimal task offloading policy. It is therefore necessary to propose an optimal offloading policy that accounts for high/low priority tasks transmitted by different ACs, which motivates this work. In this paper, we consider the different priorities of tasks transmitted by different ACs and propose an optimal task offloading policy for the MFC system. The main contributions of this paper are summarized as follows. (1) We propose an offloading strategy to obtain the maximal long-term expected reward in the MFC system for vehicular networks while jointly considering the impact of the computation requirements of tasks, vehicle mobility, and the arrival/departure of high/low priority tasks. Specifically, we transform the task offloading process into a semi-Markov decision process (SMDP) model in which the system state set, action set, state transition probabilities, and system reward function are all defined. To solve the problem efficiently, we adopt the relative value iteration algorithm to find the optimal offloading strategy for the MFC system. (2) To demonstrate the performance of the proposed optimal strategy, we perform extensive experiments for our strategy and the greedy algorithm (GA) under the same conditions and obtain numerical results. The results indicate that the performance of our strategy is significantly improved compared with the GA method. The rest of this paper is organized as follows. Section 2 discusses related work on task offloading strategies in vehicular networks. The MFC system is described in Section 3. We construct the SMDP model to formulate the task offloading problem in Section 4. The relative value iteration algorithm is introduced in Section 5. Section 6 provides the numerical results and the corresponding performance analysis. The conclusion is given in Section 7.

Related Work

The VFC and MFC computing paradigms are widely applied to vehicular networks. In this section, we first review related work on task offloading in the VFC system, and then discuss work on offloading in the MFC system. 2.1. Task Offloading in the VFC System. Zhu et al. [6] considered the fog node capacity and constraints on service latency and quality loss to propose a solution for task allocation in the VFC system. They first cast task allocation as a bi-objective joint optimization problem. To solve it, they proposed an event-triggered dynamic task allocation framework based on linear-programming-based optimization and binary particle swarm optimization.
Wu et al. [11] considered the transmission delay caused by 802.11p and proposed a task offloading strategy for the VFC system. They first transformed the task offloading problem into an SMDP model and then adopted an iterative algorithm to solve the SMDP and attain the optimal strategy. Zhou et al. [12] first proposed an efficient incentive mechanism based on contract-theoretic modelling, tailored to the unique characteristics of each vehicle type, to motivate vehicles to share their resources. They then formulated the task assignment problem as a two-sided matching problem, solved by a pricing-based stable matching algorithm to minimize the network delay. Zhao et al. [13] proposed a contract-based incentive mechanism that combines resource contribution and utilization. They then adopted distributed deep reinforcement learning (DRL) to reduce implementation complexity in the VFC system. Finally, they proposed a task offloading scheme based on a queuing model to avoid task offloading conflicts. Lin et al. [14] proposed a resource allocation management scheme to reduce the service time. They first introduced a serving model and then built a VFC system utility model, which they solved with a two-step method: all the suboptimal solutions were first obtained with a Lagrangian algorithm, and an optimal solution selection process was then applied. Xie et al. [15] jointly considered the effect of vehicle mobility and time-varying computation capability and proposed an effective resource-aware parallel offloading policy for the VFC system. 2.2. Task Offloading in the MFC System. Ning et al. [4] developed an energy-efficient task offloading scheme. They first formulated the optimization problem of minimizing energy consumption while jointly considering load balance and delay constraints. They then divided the optimization problem into two stages, flow redirection and offloading decision, and adopted the Edmonds-Karp and deep reinforcement learning-based energy-minimization algorithms to solve it. Zheng et al. [8] considered the variability of resources and proposed an optimal computation resource allocation strategy to maximize the long-term expected reward of the MFC system in terms of power and processing time. Specifically, they first transformed the optimization problem into an SMDP and then adopted an iteration algorithm to find the optimal scheme. Lin et al. [16] took into account heterogeneous vehicles and roadside units and formulated the resource allocation problem as an SMDP model, which they then solved with a proposed method. Zhao et al. [17] transformed a collaborative computation offloading problem into a constrained optimization by jointly optimizing the computation offloading decision and the computation resource allocation, and adopted a collaborative computation offloading and resource allocation optimization scheme to solve it. Wu et al. [18] considered the departure of vehicles that are still processing tasks and proposed a task offloading scheme to maximize the long-term system reward. They first formulated the offloading problem as an infinite-horizon SMDP and then employed the value iteration algorithm to solve it.
Wang et al. [19] jointly considered the heterogeneous delay requirements of vehicular applications and the variable computation resources to propose an efficient offloading policy. They first introduced a priority queuing system to model the MFC system and then found an application-aware offloading policy via an SMDP. Liu et al. [20] developed an offloading strategy for the MFC system to minimize the task offloading delay, which consists of transmission delay, computational delay, waiting delay, and handover delay. They first established the task offloading delay model and then developed pricing-based one-to-one and one-to-many matching algorithms to obtain the offloading strategy. From the above, we find that no existing work considers the different priorities of tasks transmitted by different ACs of 802.11p, which is the motivation of our work.

System Model

In this section, the MFC system is first described in detail, and then we introduce how vehicles employ the IEEE 802.11p EDCA mechanism to transmit high/low priority computing tasks. 3.1. System Description. The MFC system jointly considers the impact of task priority, vehicle mobility, and the arrival/departure of computing tasks. The scenario is shown in Figure 1; it consists of both the vehicular fog and the remote cloud. Once a high-priority task is generated, the system needs to decide which computing entity to assign it to, i.e., offloading the task to the vehicular fog or transmitting it to the remote cloud. Since low-priority tasks are not latency-sensitive, the system either executes them in the vehicular fog or drops them. In addition, if a task is accepted by the vehicular fog, the system further determines how many available vehicles, i.e., available RUs, are assigned to execute it. A simple example is shown in Figure 1: a task from vehicle C_1 arrives at the system and two RUs are assigned to handle it; afterwards, the service requester receives the computing results from the RUs. We assume that the maximal number of computational resources in the MFC system is M and that the computational service rate of each vehicle for high/low priority tasks is μ_t. Vehicles move into and depart from the MFC system according to Poisson processes with parameters λ_v and μ_v, respectively. Similarly, the arrivals of high and low priority computing tasks follow Poisson processes with parameters λ_1 and λ_2. 3.2. Data Transmission of the 802.11p EDCA Mechanism. The 802.11p EDCA defines four ACs with different priorities to transmit different types of tasks. Each AC queue employs its own parameters, i.e., the arbitration interframe space number (AIFSN) and the minimum and maximum contention windows [21]. Denote CW_i,min and CW_i,max as the minimum and maximum contention windows of AC_i (i = 1, 2), respectively. The maximal number of times that the contention window of AC_i can be doubled, R_i, is then expressed by Equation (1). In this paper, high-priority tasks are transmitted by the AC_1 queue and low-priority tasks by the AC_2 queue in broadcast mode. As in most related work [22-25], we assume that the channel is ideal.
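Since the body of Equation (1) is not reproduced in this extraction, the following sketch assumes the standard 802.11 doubling rule for the per-AC contention window; the numerical CW values in the example are illustrative only.

```python
def cw_schedule(cw_min, cw_max):
    """Contention-window size after each retransmission of one AC.

    Assumed standard 802.11 rule (Equations (1)/(14) are missing here):
    W_m = min(2**m * (cw_min + 1), cw_max + 1) - 1.
    Requires cw_min <= cw_max, with both of the usual 2**k - 1 form.
    The number of possible doublings R_i is len(schedule) - 1.
    """
    windows, m = [], 0
    while True:
        w = min((2 ** m) * (cw_min + 1), cw_max + 1) - 1
        windows.append(w)
        if w == cw_max:
            return windows
        m += 1

# Illustrative values for a high- and a low-priority AC:
print(cw_schedule(3, 7))      # -> [3, 7], so R_i = 1
print(cw_schedule(15, 1023))  # -> [15, 31, ..., 1023], so R_i = 6
```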
The transmission procedure for tasks in the MFC system is as follows. When an AC queue in a vehicle has a task to transmit and the channel has been idle for the duration of the arbitration interframe space (AIFS), a backoff process is initiated: a random value between zero and the minimum contention window is selected as the value of the backoff counter. Then, if the channel stays idle for one slot, the backoff counter is decreased by one; otherwise, the backoff counter is frozen until the channel is again idle for the duration of AIFS. When the backoff counter reaches zero, the packet is transmitted. If the two ACs in a vehicle attempt to transmit simultaneously, an internal collision happens: the high-priority task is transmitted, while the low-priority task is retransmitted with a new backoff procedure and a doubled contention window. If the number of retransmissions exceeds the retransmission limit L_i, the task is dropped.

SMDP Model

In this section, the task offloading process is transformed into an SMDP model. To clarify the problem, we define the system state set, action set, state transition probabilities, and system reward function. 4.1. System State Set. The system state x includes the number of vehicles K, the numbers of high/low priority computing tasks being executed by different numbers of RUs, and the event. The event is denoted as e, where e ∈ κ = {A_1, A_2, D_i,j, F_+1, F_-1}. Here, A_1 means that a service requester generates a high-priority computing task (arrival of a high-priority task); A_2 means that a service requester generates a low-priority computing task (arrival of a low-priority task); D_i,j means that a task with priority i processed by j RUs departs from the MFC system (completion of such a task); F_+1 means a vehicle moves into the MFC system (arrival of a vehicle); and F_-1 means an available vehicle leaves the system (departure of a vehicle). Thus, the system state set can be expressed as in Equation (2), where s_i,j (i = 1, 2; j = 1, 2, ..., N) is the number of tasks with priority i processed by j RUs and N is the maximal number of RUs for processing a task. Naturally, the number of busy vehicles must be smaller than K. 4.2. Action Set. The action a(x) is related to the current system state x and reflects the decision of the MFC system under the current event. The action belongs to the set ζ = {-1, 0, 1, ..., N}, where a(x) = -1 indicates that the system takes no action when the event is the completion of a task or a vehicle moving into or departing from the system; a(x) = 0 indicates that the system transmits a high-priority computing task to the remote cloud, or rejects a low-priority task due to the lack of computational resources; and a(x) = j indicates that j RUs are assigned to execute a task. The action set can thus be expressed as in Equation (3). 4.3. State Transition Probabilities. In the SMDP model, the next state is related to the current state and action, and the state transition probabilities represent the relationship between the current state and the next state. The state transition probabilities are therefore defined as the ratio between the arrival rate of the next event and the sum of the arrival rates of all events. Given the current state x = (K, s_1,1, ..., s_1,N, ..., s_2,N, e) and action a, denote P(k | x, a) as the transition probability from the current state x to the next state k after taking action a(x), and β(x, a) as the sum of the arrival rates of all events.
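Before turning to the transition probabilities, the following minimal sketch shows one way to encode the state x = (K, s_1,1, ..., s_2,N, e) and the feasible actions. The class and function names are our own illustration, not part of the paper.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class State:
    """System state x = (K, s_{1,1..N}, s_{2,1..N}, e).

    K      -- number of vehicles currently in the MFC system
    s      -- s[i-1][j-1] tasks of priority i are processed by j RUs
    e      -- triggering event: 'A1', 'A2', 'D_i,j', 'F+1', or 'F-1'
    """
    K: int
    s: Tuple[Tuple[int, ...], Tuple[int, ...]]
    e: str

    def busy_rus(self) -> int:
        # Busy RUs = sum over priorities i and sizes j of j * s_{i,j};
        # by construction this must not exceed K.
        return sum(j * n for row in self.s
                   for j, n in enumerate(row, start=1))

def feasible_actions(x: State, N: int) -> List[int]:
    """Action set zeta = {-1, 0, 1, ..., N} restricted to state x."""
    if x.e in ('A1', 'A2'):
        free = x.K - x.busy_rus()
        # 0 = cloud (high priority) or reject (low priority);
        # j >= 1 = assign j RUs, limited by the free RUs.
        return [0] + [j for j in range(1, N + 1) if j <= free]
    return [-1]  # completions and vehicle arrivals/departures
```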
To model the state transition probabilities, the arrival rate of the next event should be discussed first; it differs according to the current event and action. The detailed procedure is as follows. When a service requester generates a high-priority computing task and the MFC system transmits it to the remote cloud, the arrival rate of event A_i (i = 1, 2) is Kλ_i, and the number of busy RUs does not change because the task is executed in the remote cloud. The arrival rate of event D_i,j, i.e., the completion of a task with priority i processed by j RUs, is s_i,j · jμ_t. The arrival rates of events F_+1 and F_-1 are λ_v and μ_v, respectively. If a high/low priority task is generated and accepted by the vehicular fog, j RUs are assigned to execute it. In this case, when the next event is the arrival of a computing task, a vehicle moving into the system, or a vehicle departing from it, the arrival rate of the next event is Kλ_i, λ_v, or μ_v, respectively. Let E be the next event. When the next event is the completion of a task, the arrival rate is analyzed as follows. If the service requester has generated a computing task with priority i executed by j RUs, the number of tasks with priority i handled by j RUs increases, and the arrival rate of event D_i,j is (s_i,j + 1) · jμ_t. If a task with priority i processed by m RUs (m ≠ j) is accomplished, the arrival rate of event D_i,m is s_i,m · mμ_t. If a task with the other priority n processed by w RUs is accomplished, the arrival rate of event D_n,w is s_n,w · wμ_t. In conclusion, given the current state x = (K, s_1,1, ..., s_1,N, ..., s_2,N, A_i) and action a, the transition probabilities can be expressed as in Equations (4) and (5). Similarly, when the current event is the completion of a task with priority i processed by j RUs, or a vehicle moving into or departing from the system, the state transition probabilities can be calculated by Equations (6)-(8). Since β(x, a) is the sum of the arrival rates of all events, its expression is given by Equation (9).
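Reusing the `State` sketch above, the event-rate bookkeeping and the normalization by β(x, a) can be written compactly. This is a sketch of the rate rules stated in the text, not the paper's Equations (4)-(9) themselves.

```python
def event_rates(x, a, lam, lam_v, mu_v, mu_t):
    """Arrival rates of all possible next events given (x, a).

    lam = (lambda_1, lambda_2): per-vehicle task generation rates.
    Rules from the text: A_i occurs at K*lambda_i; D_{i,j} at
    s_{i,j}*j*mu_t (with s_{i,j} incremented when action a assigns
    j RUs to a newly arrived priority-i task); F+1 at lambda_v;
    F-1 at mu_v.
    """
    s = [list(row) for row in x.s]
    if x.e in ('A1', 'A2') and a >= 1:
        i = 0 if x.e == 'A1' else 1
        s[i][a - 1] += 1  # the new task now occupies a RUs
    rates = {'A1': x.K * lam[0], 'A2': x.K * lam[1],
             'F+1': lam_v, 'F-1': mu_v}
    for i in (0, 1):
        for j, sij in enumerate(s[i], start=1):
            if sij > 0:
                rates[f'D_{i + 1},{j}'] = sij * j * mu_t
    return rates

def transition_probs(x, a, **kw):
    """P(next event | x, a) = rate(next event) / beta(x, a)."""
    rates = event_rates(x, a, **kw)
    beta = sum(rates.values())  # beta(x, a), cf. Equation (9)
    return {e: r / beta for e, r in rates.items()}
```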
4.4. System Reward Function. Given the action, the system is rewarded when the MFC system changes from the current state to the next state. Denote r(x, a) as the system reward function, expressed as in Equation (10), where h(x, a) is the revenue of the system obtained by taking action a under state x and g(x, a) is the system cost during the period between the two states. We first discuss the revenue and then explain the system cost. 4.4.1. System Revenue. The main goal of the MFC system is to reduce the executing time of tasks. Let D_1 be the transmission delay from the vehicular fog to the remote cloud, T the processing time of a task when the requester executes it locally, Ts_i (i = 1, 2) the transmission time from the requester to the vehicular fog, and D_t(j) the executing time of a task processed by j RUs. As in previous studies [26], we do not take into account the feedback time of the analytical results. Note that the remote cloud is equipped with powerful computing capability, so its processing time is ignored. If a high-priority computing task arrives at the system and the computation resources are insufficient, the system transmits the task to the vehicular fog, which then transfers it to the remote cloud; in this case, the revenue of the system is η[T - Ts_1 - D_1], where η is the price per unit time. When a low-priority task reaches the system and the available RUs are insufficient, the task is rejected and the system is punished with parameter ϕ. If a high/low priority task is executed by the vehicular fog, the revenue is η[T - Ts_i - D_t(j)]. If the event is the completion of a task or a vehicle moving into or leaving the system, the system takes no action and the revenue is zero; however, when a busy vehicle moves out of the MFC system, the executing tasks fail and the system is punished with parameter ξ. The revenue of the system under the different events and actions is formulated in Equation (11). Since the computational service rate of each vehicle for high/low priority tasks is μ_t, the executing time of a task processed by j RUs is given by Equation (12). Tasks with different priorities are transmitted by different ACs, i.e., AC_1 and AC_2, with different transmission delays. Following [27], we model the whole task transmission process as a z-domain linear model and adopt the probability generating function (PGF) method to obtain the transmission delay of each AC. Let P^i_td(z) denote the PGF of the transmission delay, expressed by Equation (13), where TR(z) is the PGF of the average transmission time, G_i,m(z) is the PGF of the backoff time of AC_i when the number of retransmissions is m, and ϖ_i is the transmission probability of AC_i. W_i,m denotes the maximal contention window (CW) size of AC_i (i = 1, 2) when the number of retransmissions is m, which can be calculated by Equation (14). Let H_i(z) be the PGF of the average time for the backoff counter to decrease by one; the PGF of G_i,m(z) is then given by Equation (15). Let T_slot be the duration of a slot. The PGF of H_i(z) is expressed by Equation (16), where AIFS_i is the time duration of AIFS and p_bi is the backoff freezing probability, i.e., the probability that the requester vehicle senses other vehicles in the MFC system occupying the channel or other access categories attempting to transmit tasks. Since one AC must sense the channel for A more slots than the other, p_bi can be calculated by Equation (17), where A is computed from AIFSN[2] and AIFSN[1] as in Equation (18). Let SIFS be the duration of the short interframe spacing; AIFS_i can then be expressed by Equation (19) according to 802.11p. Assuming that all high/low priority tasks have the same size E(P), the average transmission time is given by Equation (20), where PHY_h and MAC_h are the header lengths of the physical and MAC layers, σ is the propagation time, and R_b and R_d are the basic rate and the data rate, respectively. The PGF of the average transmission time can thus be expressed by Equation (21). According to the Markov chain in [27], the transmission probability of AC_i can be expressed by Equation (22), where ρ_i is the utilization of AC_i and p_ai is the task arrival probability of the AC. Since the arrivals of all computing tasks follow Poisson processes, the arrival probability p_ai can be calculated by Equation (23). Initializing the utilization ρ_i, the quantities ϖ_i and p_bi can be computed iteratively from Equations (17) and (22). Substituting ϖ_i and p_bi into Equation (13) yields the PGF of the transmission time, from which the transmission time Ts_i is obtained via Equation (24).
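A minimal sketch of the revenue rules h(x, a) described above follows. Two points are assumptions on our part: the executing time D_t(j) = 1/(j·μ_t), since the body of Equation (12) is missing from this extraction, and the guard used for the busy-vehicle departure penalty, since the state tracks only counts.

```python
def revenue(x, a, eta, phi, xi, T, Ts, D1, mu_t):
    """System revenue h(x, a), following the rules of Equation (11).

    Ts = (Ts_1, Ts_2) are the AC-dependent transmission delays from
    the PGF analysis. D_t(j) = 1/(j*mu_t) is assumed here.
    """
    if x.e == 'A1' and a == 0:           # offload to the remote cloud
        return eta * (T - Ts[0] - D1)
    if x.e == 'A2' and a == 0:           # reject a low-priority task
        return -phi
    if x.e in ('A1', 'A2') and a >= 1:   # execute in the vehicular fog
        i = 0 if x.e == 'A1' else 1
        return eta * (T - Ts[i] - 1.0 / (a * mu_t))
    if x.e == 'F-1' and x.busy_rus() == x.K:
        # Illustrative guard: the departing vehicle must be busy.
        return -xi
    return 0.0                            # no-action events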
4.4.2. System Cost. Given the current state x and action a, the system cost is the cost caused by executing tasks during the period between the two states, expressed by Equation (25), where c(x, a) is the number of busy vehicles under the current event after taking the action, calculated by Equation (26), and τ(x, a) is the corresponding expected time. Denote α as the continuous-time discount factor. In this paper, we adopt the discounted reward model of [28], as in Equation (27). The system reward function can thus be rewritten as Equation (28).

Relative Value Iteration Algorithm

In this section, we adopt the relative value iteration algorithm to find the optimal task offloading strategy that maximizes the long-term expected reward in terms of reducing the executing time of tasks. Specifically, the relative value iteration algorithm is used to solve the Bellman optimality equation [29], expressed by Equation (29), where γ ∈ [0, 1] is the discount factor that determines the impact of future rewards on the current state. Since the continuous-time SMDP is hard to solve directly, while a discrete MDP can be solved by iterating the Bellman optimality equation, we transform the continuous-time SMDP into a discrete MDP by uniformizing the system revenue, the discount factor, and the state transition probabilities according to Equations (30)-(32). Substituting Equations (30)-(32) into Equation (29), the Bellman optimality equation can be written as Equation (33). The detailed description of the relative value iteration algorithm for solving Equation (33) is presented in Algorithm 1.
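As a sketch of Algorithm 1's core loop under the assumption that the model has already been uniformized into matrices, the following relative value iteration can be used. The data layout (one transition matrix and reward vector per action) is our illustration, not the paper's implementation.

```python
import numpy as np

def relative_value_iteration(P, r, gamma=0.95, tol=1e-6, max_iter=10_000):
    """Relative value iteration for a uniformized, discounted MDP.

    P[a] is an (S x S) transition matrix and r[a] an S-vector of
    one-step rewards for action a (both assumed uniformized as in
    Equations (30)-(32)). Infeasible (state, action) pairs can be
    encoded with reward -inf. Returns values and a greedy policy.
    """
    S = next(iter(P.values())).shape[0]
    v = np.zeros(S)
    for _ in range(max_iter):
        q = np.stack([r[a] + gamma * P[a] @ v for a in P])
        v_new = q.max(axis=0)
        v_new -= v_new[0]   # 'relative' step: pin a reference state
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    actions = list(P)
    policy = [actions[k] for k in q.argmax(axis=0)]
    return v, policy
```

With γ < 1 plain value iteration would also converge; the relative normalization is kept here to mirror the algorithm named in the paper and keeps the values bounded as γ approaches 1.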
Numerical Results and Analysis

In this section, we conduct extensive experiments in MATLAB 2010a and obtain numerical results to demonstrate the performance of the proposed offloading strategy by comparing it with the greedy algorithm. The GA method always allocates as many RUs as possible to execute tasks. The considered scenario is shown in Figure 1. In the experiments, we first initialize the system state set, action set, state transition probabilities, and system reward according to Equations (2)-(10). Then, the relative value iteration algorithm is used to solve the Bellman optimality equation and obtain the optimal offloading strategy. Finally, we compare the proposed strategy with the GA method in terms of the long-term expected reward. We assume that the maximal number of RUs for executing a task is 2. Actions b1 and b2 mean that the system assigns one or two RUs to execute a high-priority task; similarly, B1 and B2 indicate that the system allocates one or two RUs to process a low-priority task. b0 represents transmitting a high-priority task to the remote cloud, and B0 means rejecting a low-priority task. For simplicity, the retransmission limit is set to 2. The 802.11p parameters are adopted from the IEEE 802.11p standard [30], and the main experimental parameters are shown in Table 1. Figure 2 shows the transmission delay of high/low priority computing tasks as the maximal number of vehicles changes. When the maximal number of vehicles increases, the transmission delay of tasks keeps increasing: with more vehicles in the system, the packet collision probability grows, thus degrading the transmission delay. In addition, the transmission delay of high-priority tasks, Ts_1, differs from that of low-priority tasks, Ts_2, because the contention window of the AC_1 queue, which transmits high-priority tasks, is smaller than that of AC_2. Figure 3 shows the action probabilities of the MFC system as the maximal number of vehicles changes. The probabilities of b0 and B0 become smaller as the maximal number of vehicles increases, which can be explained as follows: when the maximal number of vehicles increases, the computational resources become sufficient, so the system tends to execute tasks in the vehicular fog. Since the system is inclined to process tasks in the vehicular fog, the probabilities of b1, B1, b2, and B2 become larger. As the number of vehicles increases further, the available resources become abundant and the system assigns as many RUs as possible to each task to maximize the system reward in terms of reduced executing time; therefore, the probabilities of b1 and B1 decrease, while those of b2 and B2 continue to increase. Since the difference in transmission delay between the two priorities is small and the computational rate is the same for both, the difference in system revenue between actions b2 and B2 is not obvious; thus the probabilities of b2 and B2 coincide. Figure 4 shows the long-term expected reward of the MFC system as the maximal number of vehicles changes. As the number of vehicles increases, the proposed strategy attains a significantly larger system reward than the greedy algorithm, because more tasks can be processed by the vehicular fog as the available computing resources grow. Moreover, the proposed offloading strategy outperforms the GA method because it considers the long-term reward when assigning RUs to execute tasks, whereas the GA method simply allocates as many RUs as possible without accounting for the long-term system reward. Next, we compare the proposed task offloading strategy for high-priority tasks with that for low-priority tasks, shown in Tables 2 and 3, respectively; blank entries indicate that the corresponding state does not exist. When the number of available resources is larger than the maximal number of RUs for processing a task, N, the system allocates as many RUs as possible to execute the high/low priority tasks to maximize the system reward. When the number of available RUs is smaller than N but larger than one, the system assigns one RU to process computing tasks. When the number of available resources is very small, the system adopts different actions for high/low priority tasks: it transmits the high-priority task to the remote cloud to obtain the maximal long-term expected reward, and it either allocates one RU to process the low-priority task or rejects it.
Conclusions

In this paper, we developed a task offloading strategy for the MFC system to maximize the system reward in terms of reducing the processing time of tasks, while considering the impact of the computation requirements of tasks transmitted by different ACs of the 802.11p EDCA mechanism, the mobility of vehicles, and the arrival/departure of high/low priority tasks. We first transformed the offloading problem into an SMDP model. Afterwards, the relative value iteration algorithm was used to solve the model and obtain the optimal strategy. Finally, we demonstrated the performance of the proposed scheme by comparing it with the GA method. In future work, we will consider the task queuing time and study the task offloading problem in vehicle platoons.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflicts of interest.
An optimal design of the broadcast ephemeris for LEO navigation augmentation systems

ABSTRACT With the deployment of large Low Earth Orbit (LEO) communication constellations, navigation from LEO satellites becomes an emerging opportunity to enhance the existing satellite navigation systems. LEO navigation augmentation (LEO-NA) systems require a broadcast ephemeris with centimeter- to decimeter-level accuracy to support high-accuracy positioning applications. Thus, how to design the broadcast ephemeris becomes the key issue for LEO-NA systems. In this paper, the temporal variation characteristics of the LEO orbit elements are analyzed via a spectrum analysis. A non-singular element set for orbit fitting is introduced to overcome the potential singularity problem of LEO orbits. Based on the orbit characteristics, a few new parameters are introduced into the classical 16-parameter ephemeris set to improve the LEO orbit fitting accuracy. In order to identify the optimal parameter set, different parameter sets are tested and compared, and the 21-parameter set is recommended as the optimal balance between orbit accuracy and bandwidth requirements. Considering the real-time broadcast ephemeris generation procedure, the performance of the LEO ephemeris based on the predicted orbit is also investigated. The performance of the proposed ephemeris set is evaluated with four in-orbit LEO satellites, and the results indicate that the proposed 21-parameter scheme improves the fitting accuracy by 87.4% with respect to the 16-parameter scheme. The accuracy of the predicted LEO ephemeris is strongly dependent on the orbit altitude: for LEO satellites operating higher than 500 km, a 10 cm signal-in-space ranging error (SISRE) is achievable for over 20 min of prediction.

Introduction

In recent years, the applications of Low Earth Orbiters (LEO) have attracted increasing attention in communication and navigation (Wang and El-Mowafy 2021; Parkinson 2014; Reid et al. 2020). Several mega LEO constellation plans have been proposed to provide global, low-latency communication services, which also bring new opportunities for navigation applications. Current satellite navigation systems are all based on Medium Earth Orbiters (MEO) or higher satellites, which have slow geometry variation and weak signals. The fast geometry change of LEO satellites can dramatically improve the convergence time of precise positioning applications. By integration with the communication satellites, corrections to Global Navigation Satellite System (GNSS) signals can be transmitted everywhere on Earth to provide precise positioning services (Tian, Zhang, and Bian 2019; Wang et al. 2020b; Sun et al. 2016; Ge et al. 2021). In addition, the integration of LEO communication and navigation services is a promising solution to the scarcity of frequency resources, and it improves the resilience of the existing Positioning, Navigation, and Timing (PNT) services (Li, Jiang, and Dong 2020; Chen et al. 2019). The LEO Navigation Augmentation (LEO-NA) system targets enhancing the service capacity of the existing GNSS, providing global, real-time precise positioning services as well as more resilient and secure PNT services (Yang 2016; Li et al. 2018, 2019). Recently, several in-orbit LEO-NA experiments have demonstrated the benefit of navigation augmentation from LEO satellites, such as the Luojia-1A satellite developed by Wuhan University (Wang et al. 2018, 2020a).
The precise broadcast ephemeris is a prerequisite for real-time precise positioning, while the dynamic conditions of most LEO satellites are more complex than those of GNSS satellites. Hence, it is a new challenge to design a LEO broadcast ephemeris that supports real-time precise positioning applications. GNSS broadcast ephemeris fitting algorithms have been extensively studied, and the 16-parameter scheme based on the Keplerian elements is the most popular broadcast ephemeris form owing to its efficiency. Xiao et al. used the Expectation-Maximization (EM) algorithm to investigate the impact of introducing two new parameters on the satellite ephemeris precision (Xiao 2013; Xiao et al. 2014, 2016). Huang et al. proposed a feasible solution to establish the broadcast ephemeris model in a hybrid constellation navigation system (Huang 2012). Wang et al. solved the singularity problem of the BeiDou GEO satellites using the Givens orthogonal transformation method (Wang 2014). Fu et al. proposed the Broadcast Ephemeris Parameter Set (BEPS) concept and presented an improved broadcast ephemeris scheme for different types of BeiDou satellites based on simulation (Fu and Wu 2012). Du et al. proposed an improved 18-parameter broadcast ephemeris that solves the orbit singularity problem of GEO satellites in orbit determination (Du et al. 2015). Jin et al. improved the GEO/IGSO navigation satellite user algorithm by introducing a few parameters, and they also analyzed the influence of the range error due to truncation (RET) (Choi et al. 2020). In contrast, the LEO broadcast ephemeris fitting problem has not attracted enough attention (Meng et al. 2021). Fang et al. analyzed the orbit dynamics of LEO satellites and designed a 16/18-parameter fitting algorithm that successfully solved the small-eccentricity singularity problem; they also attempted to design tabular-type and Keplerian-type broadcast ephemerides for the LEO broadcast ephemeris model (Fang 2017; Fang et al. 2019). Xie et al. attempted to improve the accuracy of the LEO broadcast ephemeris by introducing a few more parameters into the ephemeris (Xie et al. 2018). They tested three improved schemes, with 18, 20, and 22 parameters, and recommended the 20-parameter set based on simulation results. However, a few issues in LEO ephemeris fitting have never been addressed. First, the optimal parameter set has not been identified: existing research mostly introduces new parameters according to personal experience rather than orbit analysis, so it is difficult to identify the optimal parameter set for the LEO ephemeris design. Second, the practical issue of real-time broadcast ephemeris generation has not been fully addressed, since current research focuses only on orbit fitting. The LEO ephemeris must be fitted to a predicted orbit to provide real-time service, and the orbit prediction error is a major error source for the LEO ephemeris. Current fitting algorithms consider only the fitting error, which may lead to an over-optimistic evaluation of the ephemeris accuracy. This study selects the extended LEO broadcast ephemeris parameter set based on an analysis of the LEO satellite orbital characteristics, and the optimal Keplerian-type LEO broadcast ephemeris is proposed after a careful examination. The remainder of the paper is organized as follows. Section 2 addresses the extended parameter set for the LEO broadcast ephemeris based on the LEO orbit analysis.
Section 3 introduces the user algorithms for the extended ephemeris. Section 4 evaluates the performance of different parameter sets and discusses several issues in parameter fitting. Section 5 discusses the issues related to parameter fitting with the predicted orbit, and Section 6 gives the conclusions.

Characteristics of the LEO orbit

The classical Keplerian-type GNSS broadcast ephemeris comprises six basic Keplerian elements and nine perturbation parameters. Combined with the reference time t_oe, it is known as the 16-parameter ephemeris. The six Keplerian elements are the orbital semi-major axis a, the orbital eccentricity e, the orbital inclination i, the right ascension of the ascending node Ω, the argument of perigee ω, and the mean anomaly at the reference epoch M (Misra and Enge 2006). These elements not only describe the motion status of the satellite but also encode the orbit information. The satellite position can be simply expressed with four fundamental elements: the radial distance r, the orbit inclination i, the right ascension Ω, and the argument of latitude u. The relationship between these four elements is illustrated in Figure 1, and the satellite position in the ECEF coordinate system can be computed from them; the argument of latitude u itself is obtained from three Keplerian elements, namely the semi-major axis a, the eccentricity e, and the mean anomaly M. The reduced element set provides a more direct way to analyze the orbit characteristics, and the classical GPS ephemeris also employs the reduced 4-parameter set as an intermediate quantity in the satellite position calculation. In this study, the LEO orbit analysis is based on the reduced element set.

Harmonic analysis of LEO satellite orbit elements

The LEO orbit representation is similar to that of the navigation satellites, but its orbit characteristics are more complex. The GPS Legacy Navigation Message (LNAV) introduces six harmonic coefficients for the radial distance r, the argument of latitude u, and the orbital inclination i. In our experience, these parameters may not be enough for LEO satellites, since the force model of LEO is more complicated. The perturbation forces experienced by low-orbit satellites mainly include the Earth's non-spherical gravity, third-body gravity, atmospheric drag, solar radiation pressure, Earth albedo pressure, tidal deformation, and post-Newtonian effects (Dong et al. 2016). In order to find the best representation of the LEO orbit, a harmonic analysis of the four orbit elements is performed, and the results are presented in Figure 2. In the analysis, the variation of each element is characterized as the sum of a linear trend and harmonic terms, and a spectrum analysis further identifies the periods of the harmonic terms. The results indicate that all four elements have multiple harmonic components, so it is not reasonable to simply follow the GPS LNAV design for the LEO ephemeris. Moreover, the GPS LNAV does not consider the changing rates of the radial distance r and the argument of latitude u.
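The equation mapping the four reduced elements to a position is not reproduced in this extraction; the sketch below assumes the standard GPS-ICD style rotation, with Ω taken as the node longitude already corrected for Earth rotation when an ECEF position is wanted.

```python
import numpy as np

def reduced_elements_to_position(r, u, i, Omega):
    """Satellite position from the reduced set (r, u, i, Omega).

    Standard orbital-plane rotation (assumed, since the original
    equation body is missing): first place the satellite in the
    orbital plane, then rotate by inclination i and node Omega.
    """
    x_orb = r * np.cos(u)   # in-plane coordinates
    y_orb = r * np.sin(u)
    x = x_orb * np.cos(Omega) - y_orb * np.cos(i) * np.sin(Omega)
    y = x_orb * np.sin(Omega) + y_orb * np.cos(i) * np.cos(Omega)
    z = y_orb * np.sin(i)
    return np.array([x, y, z])
```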
According to the harmonic analysis results, 18 harmonic terms could be introduced into the LEO ephemeris if all the periodic terms were considered: C_rs1, C_rc1, C_rs2, C_rc2 are the amplitudes of the harmonic perturbations of the radial distance; C_us1, C_uc1, C_us2, C_uc2, C_us3, C_uc3 are the amplitudes of the harmonic perturbations of the argument of latitude; and C_Ωs1, C_Ωc1, C_Ωs3, C_Ωc3 are the amplitudes of the harmonic perturbations of the right ascension of the ascending node. However, it is not reasonable to introduce all the observed terms into the ephemeris in practice, for the following reasons: (1) some harmonic terms have a very small impact on the orbit precision, so they should not be considered in ephemeris fitting, to save bandwidth; (2) in the four basic element set, the right ascension Ω is not perpendicular to the other three elements, so introducing certain terms may affect other harmonic terms; (3) over-parameterization makes the fitting system numerically unstable; and (4) the ephemeris should be designed as concisely as possible to save bandwidth. Hence, optimally selecting a subset of the ephemeris parameters to fit the LEO orbit is challenging. First, the singularity issue should be considered in LEO ephemeris fitting, since many LEO satellites fly circular or near-circular orbits (Du et al. 2015). The classical Keplerian elements are designed for elliptical orbits, which leads to a singularity for circular or near-circular orbits: in the case of small eccentricity, the argument of perigee ω cannot be distinguished from the mean anomaly M, so it loses its original physical meaning and the two parameters become strongly correlated. This correlation may make the fitting algorithm ill-posed and the iteration divergent. To solve this problem, an equivalent singularity-free orbital element set (a, i, Ω, e_x, e_y, λ) can be introduced, in which the new parameters e_x, e_y, λ replace e, ω, and M. In this way, the GPS LNAV can be equivalently expressed with the ephemeris reference time t_oe, the square root of the semi-major axis, e_x, e_y, the inclination angle i at the reference time, Ω, λ, the mean motion difference Δn from the computed value, the rates of the right ascension and of the inclination angle, and the amplitudes of the second-order harmonic perturbations C_rs2, C_rc2, C_is2, C_ic2, C_us2, C_uc2. The singularity-free orbital elements can easily be converted to the standard broadcast ephemeris parameters: in the ephemeris fitting stage, the singularity-free elements are used and then converted to the classical elements, so the user algorithm does not need to change.

The extended LEO broadcast ephemeris parameter set

Based on the spectrum analysis, an extended parameter set for the LEO broadcast ephemeris is proposed. There are 14 new parameters observed in the LEO orbit analysis; combined with the two linear terms and six harmonic terms of the GPS LNAV, the full LEO broadcast ephemeris contains 28 parameters in total. The details of the full LEO orbit parameter set are given in Table 1. The full parameter set derived from the spectrum analysis is different from the set given by Xie et al., who considered the second-order rates of the Keplerian elements and all first- to third-order harmonic perturbations. According to our analysis, not all harmonic perturbations should be taken into consideration, and the first-order changing rates seem good enough to achieve high-accuracy ephemeris fitting. The remaining task is to select the optimal subset that balances precision and bandwidth.
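The defining equations of e_x, e_y, λ are missing from this extraction; the sketch below assumes the standard small-eccentricity substitution e_x = e·cos(ω), e_y = e·sin(ω), λ = ω + M, which matches the stated purpose of removing the ω/M ambiguity.

```python
import numpy as np

def to_nonsingular(a, e, i, Omega, omega, M):
    """Classical -> singularity-free elements (a, i, Omega, ex, ey, lam).

    Assumed standard substitution (the paper's equation body is not
    reproduced here): ex = e*cos(omega), ey = e*sin(omega),
    lam = omega + M.
    """
    return a, i, Omega, e * np.cos(omega), e * np.sin(omega), omega + M

def from_nonsingular(a, i, Omega, ex, ey, lam):
    """Inverse map back to the classical broadcast elements."""
    e = np.hypot(ex, ey)
    omega = np.arctan2(ey, ex)  # well-defined except for e == 0
    M = lam - omega
    return a, e, i, Omega, omega, M
```

Fitting in (e_x, e_y, λ) and converting back with `from_nonsingular` keeps the broadcast message format, and hence the user algorithm, unchanged.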
User algorithm for the extended LEO broadcast ephemeris

Based on the full LEO ephemeris parameter set, the corresponding user algorithm should also be examined. The parameters in the full ephemeris set can be divided into two groups: the GPS LNAV set and the extended 14-parameter set x_14. In the following, we simply assume that all 14 extended parameters are introduced; in practice, only a subset may be used, and the user algorithm can be simplified accordingly. The main computation process of the full LEO broadcast ephemeris is as follows. First, the time relative to the reference epoch is computed as t_k = t - t_oe. The argument of latitude u is then computed from the semi-major axis a, the eccentricity e, and the mean anomaly M (the details can be found in the GPS ICD-200): the uncorrected argument of latitude is ϕ_k = v_k + ω, where v_k is the true anomaly. Next, the harmonic corrections to the radial distance r, the orbit inclination i, and the argument of latitude u are computed and applied to obtain the corrected radial distance, inclination, and argument of latitude, from which the satellite position in the orbit plane and the corrected longitude of the ascending node follow. The extended parameters involved (Table 1) are: the radial distance rate and the harmonic perturbation amplitudes of the radial distance; the argument-of-latitude rate and the harmonic perturbation amplitudes C_us1, C_uc1, C_us2, C_uc2, C_us3, C_uc3 of the argument of latitude; the rate of the inclination angle and the harmonic perturbation amplitudes of the inclination angle; and the rate of the right ascension and the harmonic perturbation amplitudes of the right ascension of the ascending node. Finally, the satellite position is transformed from the orbit plane to the Earth-fixed coordinate system, in the same way as in the GPS LNAV algorithm.

LEO ephemeris fitting performance metrics

The Keplerian elements are defined in the Earth-Centered Inertial (ECI) frame, whereas the classical broadcast ephemeris user algorithm calculates the satellite coordinates in the Earth-Centered, Earth-Fixed (ECEF) frame (Remondi 2004). To facilitate the calculation of the partial derivatives, a least-squares algorithm is used for the ephemeris fitting in the ECI frame. The procedure of the LEO ephemeris fitting is illustrated in Figure 3. The initial values of the Keplerian elements are computed from the position and velocity at the reference epoch, while the trend terms and oscillating terms are initialized to zero. Then, the ultra-rapid precise orbits are used as the observations, and the orbit computed from the Keplerian elements via the user algorithm is used as the computed value. The design matrix is built from the partial derivatives with respect to the parameters, the observed-minus-computed vector is formed, and the standard least-squares algorithm estimates the parameter increments, which update the estimated ephemeris parameters iteratively. The iteration usually converges within four to five iterations.
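The Figure 3 workflow can be summarized in code. The following Gauss-Newton skeleton is a sketch under two simplifications of our own: the Jacobian is formed numerically rather than analytically, and the convergence test thresholds the RMSE itself rather than its change between iterations.

```python
import numpy as np

def fit_ephemeris(orbit_xyz, user_algorithm, p0, tol=0.1, max_iter=10):
    """Iterative least-squares ephemeris fit (cf. the Figure 3 workflow).

    orbit_xyz      -- (m, 3) precise-orbit positions (the observations)
    user_algorithm -- callable p -> (m, 3) positions from parameters p
    p0             -- initial parameter vector from the reference epoch
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        computed = user_algorithm(p)
        omc = (orbit_xyz - computed).ravel()   # observed minus computed
        J = np.empty((omc.size, p.size))
        for k in range(p.size):                # numerical Jacobian
            dp = np.zeros_like(p)
            dp[k] = 1e-6 * max(abs(p[k]), 1.0)
            J[:, k] = ((user_algorithm(p + dp) - computed) / dp[k]).ravel()
        dx, *_ = np.linalg.lstsq(J, omc, rcond=None)
        p += dx                                # update the parameters
        rmse = np.sqrt(np.mean(np.sum(
            (orbit_xyz - user_algorithm(p)) ** 2, axis=1)))
        if rmse < tol:                         # simplified stop rule
            break
    return p
```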
Since ephemeris fitting is a highly non-linear parameter estimation problem, several iterations are needed to obtain accurate estimated parameters. The convergence criterion of the iteration is defined by the root mean squared error (RMSE) of the ephemeris fitting in the ECI coordinate system:

$\mathrm{RMSE} = \sqrt{\dfrac{1}{m}\sum_{k=1}^{m}\left(\Delta x_k^2 + \Delta y_k^2 + \Delta z_k^2\right)}$,

where m is the number of epochs involved in the parameter estimation and Δx_k, Δy_k, Δz_k are the three-dimensional orbital errors in the ECI coordinate system. The iteration terminates when the change of the RMSE falls below a threshold δ, generally taken as 0.1. If the RMSE of the ephemeris fitting meets the termination condition, the iteration process ends. In order to evaluate the impact of the orbit and clock error on user positioning, the Signal-In-Space Ranging Error (SISRE) is adopted as the performance measure. In this study, only the orbit error contribution is considered, and the SISRE is calculated as (Montenbruck, Steigenberger, and Hauschild 2015):

$\mathrm{SISRE} = \sqrt{\dfrac{1}{m}\sum_{k=1}^{m}\left[\left(w_R\,\Delta R_k\right)^2 + w_{A,C}^2\left(\Delta A_k^2 + \Delta C_k^2\right)\right]}$,

where ΔR_k, ΔA_k, ΔC_k are the fitting errors along the radial, along-track, and cross-track directions, and w_R and w_{A,C} are the weights of the corresponding directions, which are closely related to the satellite orbit altitude. The details of the SISRE computation are given in Chen et al. (2013).
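A minimal sketch of the SISRE metric as defined above follows; the weight values are inputs because they depend on the orbit altitude, as the text notes.

```python
import numpy as np

def sisre(dR, dA, dC, wR, wAC):
    """Orbit-only SISRE over m epochs (cf. Montenbruck et al. 2015).

    dR, dA, dC -- arrays of radial / along-track / cross-track errors
    wR, wAC    -- altitude-dependent weights of the two contributions
    """
    return np.sqrt(np.mean((wR * dR) ** 2 + wAC ** 2 * (dA ** 2 + dC ** 2)))
```

For GPS-like MEO orbits wR is close to 1 and wAC is small; for LEO orbits the weights differ substantially with altitude, so they should be recomputed per satellite rather than reused from GNSS.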
The optimal parameter set for LEO ephemeris

Identifying the optimal subset of the full LEO ephemeris parameter set is challenging. To find the solution, we designed a series of schemes subject to a few empirical constraints: we added one to eight new parameters to the 16-parameter broadcast ephemeris, forming many new parameter combinations, and tried each of them. The numbers of parameter combinations tested for the different parameter counts are listed in Table 2. We selected the optimal scheme for each parameter count and then found the globally optimal solution based on the requirements and constraints. In this study, the harmonic parameters can only be introduced pairwise; the 23-parameter case, for example, contains 40 feasible schemes. Notably, the fitting accuracy does not always improve as the parameter count increases: introducing more parameters may lead to over-parameterization and an ill-posed parameter estimation. Schemes introducing more than eight parameters are therefore not considered, since they are often numerically unstable. This study selects four LEO satellites with orbit altitudes between 300 and 1400 km to evaluate the ephemeris fitting algorithm; detailed information on the four satellites is shown in Table 3. Their official precise orbit products are used as the reference values to evaluate the fitting accuracy. Generally, these precise orbit products achieve centimeter-level accuracy and can capture the subtle variation of the satellite trajectory. Their orbit characteristics closely resemble those of future LEO navigation augmentation satellites, so the results of our experiments can reflect the real accuracy in practice. The JASON-2 satellite is used as an example to identify the optimal ephemeris schemes. We fitted the orbit parameters of the JASON-2 satellite with all possible schemes in Table 2 and compared their fitting accuracy; the fitting window length was set to 20 min and the sampling interval to 15 s. The schemes with the optimal fitting precision in each group are listed in Table 4, and the fitting accuracy improvement with respect to the GPS LNAV is presented in Figure 4. The table indicates that the SISRE of all schemes is smaller than 10 cm for the JASON-2 satellite and that the fitting precision can be improved by introducing new parameters. However, the figure also shows that the fitting error is not always reduced as the parameter count increases. Introducing the radial distance rate contributes about 54.5% precision improvement; a more substantial improvement is achieved by additionally introducing the two harmonic terms C_us3 and C_uc3, which yields a 76.6% precision improvement with respect to the GPS LNAV. The 21-parameter scheme achieves a fitted-orbit SISRE of about 1.1 cm, and introducing more parameters brings no obvious further improvement; hence it is not necessary to introduce more parameters into the ephemeris set. The optimal 21-parameter set captures the first and third spectral peaks of r and u, which also confirms the reasonability of the selected parameter set. Table 4 further indicates that the optimal parameter sets include both linear trend terms and new harmonic terms, and that good fitting accuracy is achievable with only a few additional parameters rather than all 14 extended parameters. Among the three directions, the cross-track direction always achieves the best precision, and the along-track direction is slightly poorer than the other two. In most cases, the ephemeris fitting converges within four iterations; however, when the design matrix is ill-posed, it takes more iterations or even diverges.

Ill-posed problems in ephemeris fitting

The ill-posed problem is a troublesome issue in the fitting procedure, so new parameters must be introduced carefully. Sometimes the introduced parameters are strongly correlated with existing parameters, which degrades the fitting accuracy or makes the iteration diverge. The ill-posed problem can be diagnosed by examining the condition number of the normal equation matrix, κ = λ_max/λ_min, where λ_max and λ_min are the maximum and minimum eigenvalues of the matrix being examined (Xu 2003). In the ephemeris fitting problem, many parameter combinations are excluded because they are numerically unstable. Table 5 gives some examples demonstrating the impact of ill-posedness on the orbit fitting: the 20-parameter scheme has a larger SISRE than the 19-parameter one because the argument-of-latitude rate is strongly correlated with other elements, which results in a larger design matrix condition number and thus less accurate estimated parameters. For ephemeris designs with more than 24 parameters, the condition number becomes unacceptable for many combinations, which may lead to orbit fitting failure. This is also one of the reasons why we avoid introducing too many parameters in the orbit fitting procedure.
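The condition-number diagnostic is straightforward to implement; the rejection threshold below is an illustrative choice, not a value from the paper.

```python
import numpy as np

def is_well_posed(design_matrix, kappa_max=1e12):
    """Diagnose an ill-posed fit via the normal-matrix condition number.

    kappa = lambda_max / lambda_min of N = J^T J, as in the text;
    the threshold kappa_max is an illustrative choice.
    """
    N = design_matrix.T @ design_matrix
    eig = np.linalg.eigvalsh(N)  # ascending eigenvalues of symmetric N
    kappa = eig[-1] / max(eig[0], np.finfo(float).tiny)
    return kappa < kappa_max, kappa
```

Screening each candidate parameter combination with such a check before fitting explains why many of the larger combinations in Table 2 are excluded.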
This is also one of the reasons we avoid introducing too many parameters into the orbit fitting procedure.

Impact of fitting interval on the fitting accuracy

The fitting interval is an important parameter for the broadcast ephemeris. GNSS usually employs a 1–2 hour fitting interval to keep the broadcast ephemeris accurate. LEO satellites have more complicated dynamics and shorter orbital periods, so their fitting window must be much shorter than for GNSS satellites. To identify the optimal fitting interval, we tested the four satellites with fitting intervals varying from 10 to 60 min, using the optimal 21-parameter set identified above; the SISRE accuracy is presented in Figure 5. The figure indicates that the fitting error is strongly affected by both the orbit height and the fitting interval: generally, a satellite with a higher orbit achieves better fitting accuracy, and a smaller fitting interval means better fitting accuracy. A 10-min fitting interval achieves centimeter-level accuracy, whereas a 55-min interval yields only several decimeters or even worse than 1 m. Assuming an acceptable fitting SISRE criterion of 10 cm, the feasible fitting interval varies between 20 and 35 min.

The fitting error time series for the JASON-2 satellite is presented in Figure 6. It reveals that the fitting error varies slightly over time, but the residuals remain smaller than a few centimeters. There are no systematic biases in the fitting errors, so the fitted orbit describes the dynamics of the LEO orbit well. The distribution of the fitting residuals over a 20-min period was evaluated, and the results are presented in Figure 7. The mean values of the fitting residuals are 7 mm, 8 mm, and 5 mm in the radial, along-track, and cross-track directions, respectively, and the standard deviations are less than 1.3 cm in all three directions. Hence, the 21-parameter broadcast ephemeris describes the characteristics of the LEO orbit well.

The fitting accuracy of the proposed parameter set can also be validated against external models. For example, Xie et al. (2018) analyzed the LEO ephemeris fitting problem and recommended a 20-parameter set, introducing four extra parameters $(C_{rs3}, C_{rc3}, \dot{a}, \dot{n})$ to the 16-parameter set, while our model is a 21-parameter set. We compared our method with Xie's model, and the results are presented in Figure 8: our 21-parameter set achieves better ephemeris fitting accuracy for all four satellites, improving the fitting accuracy by 30% on average.

The procedure of real-time ephemeris generation

In practice, the broadcast ephemeris must be generated from the predicted orbit to support real-time navigation applications. Hence, the real-time ephemeris contains both the orbit prediction error and the orbit fitting error, whereas existing research has focused only on the orbit fitting error, which may lead to an over-optimistic expectation of LEO broadcast ephemeris accuracy. To investigate the performance of the real-time LEO broadcast ephemeris, we further tested the orbit fitting precision based on the predicted orbit. In this study, we assume the broadcast ephemeris is generated at the Ground Control Center (GCC). The procedure of real-time LEO broadcast ephemeris generation is illustrated in Figure 9.
The onboard GNSS observations are downloaded to the ground, and the predicted LEO broadcast ephemeris messages are uploaded periodically; the period is illustrated as t1–t3 in this figure. At epoch t0, all onboard GNSS observations before t0 have been downloaded to the ground center, which then starts LEO precise orbit determination and prediction. The LEO broadcast ephemeris is fitted to the predicted orbit and uploaded to the LEO satellite at t1, at which point the satellite starts broadcasting the updated ephemeris. The updated ephemeris is valid until the next period starts, marked as t3 in the figure. Before a new period starts, however, the onboard GNSS data up to t2 must be collected and the new ephemeris generated and uploaded to the satellite. In this workflow, the LEO prediction period is t0 to t3, while the valid ephemeris period is t1 to t3.

Based on the LEO precise orbit determination results, a short-term prediction of the LEO orbit can maintain centimeter-level accuracy (see Figure 10). Given the initial position and velocity, the LEO orbit can be propagated with high-precision numerical integration of the force models, and the prediction accuracy depends on the underlying force models. The dynamic models used in LEO orbit prediction in this study are listed in Table 6. The orbit parameters are then fitted to the predicted orbit to generate the real-time LEO broadcast ephemeris.

LEO predicted ephemeris accuracy evaluation

Three LEO satellites, SWARM-E, HY-2A, and JASON-2, are used to evaluate the accuracy of the predicted LEO ephemeris. The SISRE of the SWARM-E ephemeris error over 20-min, 30-min, and 40-min predictions is presented in Figure 11 (predicted ephemeris accuracy variation of the SWARM-E satellite with different prediction times); the 21-parameter scheme was used to fit the predicted orbit. The figure shows that a longer prediction period leads to a larger overall SISRE: it is less than 10 cm for the 20-min prediction, increases to around 20 cm for the 30-min prediction, and reaches only sub-meter level for the 40-min prediction. Given the 10-cm SISRE criterion, the feasible prediction period is less than 20 min.

We also tested the SISRE of the predicted ephemeris for LEO satellites at different orbit heights; the results are presented in Figure 12 and show that the SISRE is affected by the orbit height as well. The mean SISRE for the JASON-2 satellite is only 2 cm for the 20-min prediction, increasing to 19 cm for the 40-min prediction. Both HY-2A and JASON-2 can keep the prediction SISRE smaller than 10 cm for 30 min, while SWARM-E can maintain a 10-cm prediction SISRE for less than 20 min. A higher orbit is beneficial for maintaining LEO broadcast ephemeris accuracy.

Conclusions

The design of a precise broadcast ephemeris is indispensable for LEO-augmented real-time precise positioning applications. LEO satellites experience more complicated dynamic conditions, while LEO-NA applications require higher broadcast ephemeris precision. In this study, the characteristics of the LEO orbit were analyzed to identify candidate parameters to extend the ephemeris set. Multiple feasible schemes with different parameter numbers were tested, and the optimal parameter set in each parameter-number group was identified.
The global optimal LEO broadcast ephemeris set was then identified by trading off the orbit fitting accuracy against the parameter number. A 21-parameter set is proposed for the LEO broadcast ephemeris, which achieves an 87.3% fitting accuracy improvement relative to the GPS LNAV scheme. The fitting error characteristics and the optimal fitting window length were also examined; the results based on four in-orbit satellites indicate that a 20–50 min fitting interval can meet the 10-cm SISRE criterion. This paper also considered the generation of the LEO broadcast ephemeris in real time, so the orbit prediction error was included to avoid an over-optimistic evaluation of the real-time LEO broadcast ephemeris. The results indicate that the precision of the broadcast ephemeris fitted to the predicted orbit is related to the orbit height: for LEO satellites above 500 km, it is possible to achieve better than 10-cm SISRE with 20-min orbit prediction.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Funding

This study was supported by the National Natural Science Foundation of China [grant number 42074036] and the Fundamental Research Funds for the Central Universities.

Notes on contributors

Xueli Guo is a postgraduate student at Wuhan University. His main research interests are GNSS data processing and LEO orbit determination.
Information Dissemination Analysis of Different Media towards the Application for Disaster Pre-Warning

Knowing the information dissemination mechanisms of different media and having an efficient information dissemination plan for disaster pre-warning play a very important role in reducing losses and ensuring the safety of human beings. In this paper we established models of information dissemination for six typical information media, including short message service (SMS), microblogs, news portals, cell phones, television, and oral communication. Then, the information dissemination capability of each medium concerning individuals of different ages, genders, and residential areas was simulated, and the dissemination characteristics were studied. Finally, radar graphs were used to illustrate comprehensive assessments of the six media; these graphs directly show the information dissemination characteristics of all media. The models and the results are essential for improving the efficiency of information dissemination for the purpose of disaster pre-warning and for formulating emergency plans that help to reduce the possibility of injuries, deaths, and other losses in a disaster.

Introduction

Natural and man-made disasters seriously threaten human life and property. A more reliable and efficient pre-warning information dissemination system could improve public emergency responses and enable people to evacuate and take protective measures before and during a disaster [1]. On the Indian coast, for example, more than one hundred people were saved because a scientist, using his cell phone, managed to warn of an imminent serious tsunami caused by an 8.7-magnitude earthquake [2]. Moreover, victims survive more easily when they have more detailed information [3]. Therefore, research on information dissemination is of great theoretical and practical value. This paper focuses on information dissemination relating to disaster pre-warning; it does not concern itself with research subjects such as economic-geographic development [4].

Information media can be divided into social and traditional media. Social media, including short messages, microblogs, and news portals, are becoming increasingly popular and therefore critical tools of information dissemination [7] because of their high impact and coverage ratio made possible by developments in information technology [5][6]. For instance, they can enhance the decision-making process, since more data is provided than is the case with traditional media [8]. However, some traditional media, including cell phones, television, and oral communication, also play important roles in information dissemination. In some serious disaster cases, when all electronic networks are paralyzed, traditional media such as oral communication, albeit slower, can still be employed [9].

The different characteristics of each information dissemination medium have been studied in different fields. Sattler found that an effective warning message transmitted by cell phone and e-mail could be issued by a credible source and delivered in a quick and stable way [10]. Wei analyzed the optimal combination of television broadcasting sequences to ensure the best information dissemination to television viewers [11]. Whittaker researched the information management of emails and identified the habits of email users [12].
Furthermore, some studies have looked at information dissemination media in disasters and emergencies. Odeny found that short message services could improve attendance at postoperative clinic visits after adult male circumcision for HIV prevention [13]. Zhang established that different information media, including cell phones, television, and emails, have different information dissemination characteristics in disaster pre-warning [14]. Shim used wireless TV to improve disaster management and to provide communications for responders during a natural or man-made disaster [15]. Katada created a simulation model and built a general-purpose system for the efficient study of the dissemination of information concerning disasters and scenarios of information transmission [16]. Zhang used sound trucks to transmit information along an optimal path in the case of network paralysis caused by a serious disaster [17].

Analysis of recent studies reveals that most works focus on a single medium but neglect detailed comparisons of different media. In actual situations, however, a single information medium cannot ensure the dissemination of large amounts of information. Therefore, each medium should be analyzed and compared to the others to improve the overall efficiency of information dissemination. In this study, information dissemination models of six information media, including short message service (SMS), microblogs, news portals, cell phones, television, and oral communication, were developed, and their information dissemination characteristics were studied and compared. The capabilities and mechanisms of these media were also studied for people of different ages, genders, and residential areas. The developed models were applied to the city of Beijing. Based on the simulation and effectiveness analysis of all information media, optimized plans and suggestions were put forward to improve the effectiveness of information dissemination during emergencies. The results of this research are useful for developing a comprehensive information dissemination system that transmits emergency information in an effective way.

Method

In this study we use the following evaluation indices to compare the different information dissemination characteristics: total coverage of information reception, the time it takes for half of the population to believe the information, frequency of media usage and time, the degree of trust, total cost, and delay time. Among these indices, the degree of trust and the frequency of media usage and time can be obtained directly through questionnaires, and the total cost can be obtained from the internet: Taobao (the most famous electronic mall in China; URL: www.taobao.com). The other three indices need to be calculated by computational simulation based on information dissemination models. The models are established considering special information dissemination characteristics such as dissemination mechanisms and individual preferences for different media. Some required parameters in the models, such as average forwarding times, were also collected by questionnaires distributed on site. The simulation allows the calculation of the total coverage of information reception, the time it takes for half of the population to believe the information, and the delay time. All the parameters mentioned above are obtained from questionnaires and the internet.
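As a concrete illustration of how the simulated indices are read off a believer time series, the following sketch (our own illustration, not the authors' code; the logistic series and population figure are placeholders) computes the total coverage of information reception and the time at which half of the population believes the information:

```python
import numpy as np

def coverage_and_thb(t, n_believers, population):
    """t: minutes since release; n_believers: simulated believer counts
    (assumed non-decreasing)."""
    t, n_believers = np.asarray(t), np.asarray(n_believers)
    coverage = n_believers[-1] / population        # long-run coverage ratio
    half = population / 2
    if n_believers[-1] < half:                     # half is never reached
        return coverage, np.inf
    thb = t[np.searchsorted(n_believers, half)]    # first time N >= pop/2
    return coverage, thb

t = np.arange(0, 600, 10)
n = 16e6 / (1 + np.exp(-(t - 120) / 30))           # logistic-shaped example
print(coverage_and_thb(t, n, population=25e6))
```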
Different information dissemination models are described below.

1 Basic parameters in information dissemination

1.1 Brief introduction of basic parameters. In the information dissemination process, the effective information dissemination probability and the delay time are two important factors. The effective information dissemination probability expresses the probability of a recipient receiving and believing the information after a message is distributed from an information source. The delay time is the time difference between information reception by the media and by the recipients. Both are related to service usage and to the degree of trust, which reflects the probability that people believe the media. For service usage, media usage frequency, media coverage ratio, and forwarding number are considered the three important components. A questionnaire is a useful tool to obtain the above data.

1.2 Questionnaires. In this study 370 questionnaires were filled out (350 of them were usable) in May and June 2013. In the period of disaster pre-warning, individual differences are very obvious in information acquisition and dissemination. The questionnaires include six multiple-choice questions and twenty-nine fill-in questions. We collected the following data: age, gender, educational background, vocation, media usage number (N_use) and times per day (T_use), information forwarding number (n_fw) and probability (p_fw), and the degree of trust for each of the six information media. All respondents were between the ages of 10 and 80. About half of the questionnaires came from urban areas, while the rest were filled out in rural areas. The questionnaires allowed a detailed analysis of the degree of trust and service usage for short messages, microblogs, news portals, cell phones, television, and oral communication. The data are presented in Table 1.

The degree of trust in an information medium reflects its importance and is crucial in information dissemination [18]. The questionnaires suggest that television, news portals, and microblogs have the three highest degrees of trust (79.0%, 57.7%, and 48.3%, respectively); they are followed by cell phones (43.3%), short messages (41.3%), and oral communication (38.91%). Television and news portals have the highest degrees of trust among the six media because these two media are managed by the government.

The media coverage ratio determines whether a medium can be used in information dissemination for pre-warning of a disaster. Oral communication, cell phones, and short messages had the top three coverage ratios (100%, 99%, and 97%, respectively), reflecting our dependence on these media in daily life. Microblogs rank at the bottom of the six media types (66%), which is due to personal preference. The analysis of the data on media coverage ratios reveals that usage coverage is very high even where the degree of trust of a medium is low.

The frequency of media usage is an important factor: it reflects the popularity of the respective medium and determines the difficulty of information acquisition. Oral communication is the most frequently used medium in daily life. Cell phone and short messaging forms of communication come second, as they are used more than 10 times per day. Television and news portals, as mass media, have lower usage numbers but longer watching times. Microblogs are very popular as well (7.5 usage times per day).
The forwarding number indicates the speed of information dissemination from person to person; a strong forwarding capability leads to rapid information acquisition [19]. Microblogs had the highest average forwarding number (132), followed by short messaging (11.8), cell phones (9.7), and oral communication (3.9). Television and news portals, however, are one-way media transmitting information from a public organization to a wide audience and cannot be used for forwarding information person to person. The detailed results for the effective information dissemination probability and delay time are introduced and calculated below.

Model Establishment

The models of the six typical information media, including short messages, microblogs, news portals, cell phones, television, and oral communication, can be divided into three types. The first type is based on person-to-person dissemination without geographical limitation, including short messages and cell phones, whose initial stage may present an exponential growth in recipients. The second type is based on person-to-person dissemination with geographical limitation, such as oral communication, where information can be disseminated only within a limited distance. Finally, television and news portals transmit information from a mass medium to a person with a logarithmic growth in recipients, which is defined as the third type.

In order to establish the information dissemination model of each medium, we assigned three statuses to the people facing the disaster: "ignorance", "receiver", and "believer". "Ignorance" is assigned to people who have not received information about the disaster; "receiver" refers to people who get the information but do not believe it; "believer" designates people who have received the information and believe it. In this study the information dissemination models of microblogs, oral communication, and television are analyzed as the representative media. The effective information dissemination probability and delay time of each information medium were obtained through process analysis and data calculation.

2.1 Microblog. Due to their fast speed and convenient operation in information dissemination, microblogs have become increasingly popular in recent years. Fig. 1 shows the information dissemination process of a microblog and the probability of a person being assigned the status of ignorance, receiver, or believer. In Fig. 1, P_1(b) expresses the probability of microblog users using a microblog per minute, and n_blog is the average usage number of microblog users (16 hours are taken as the effective time per day). P_2(b) is the probability of ignorant users receiving information, and N_use(b) is the number of people using a microblog. N_sp(b) is the number of spreaders, N_sp(b) = N_bel(b) · p_fw(b), where N_bel(b) is the number of information believers and n_fw(b) is the average number of microblog fans (the parameters n_blog, N_use(b), n_fw(b), and p_fw(b) can be obtained directly from the questionnaires). P_3(b) is the probability that people who received the information believe it; it is related to the degree of trust of the microblog and the number of pieces of information received in the time interval (n_rec(b) can be obtained by computational simulation). The detailed simulation process is listed below:

(1) Set all the parameters obtained from the questionnaires for the target people and create 5 initial believers, which are regarded as the information sources forwarding the information through the microblog.
(2) Search for the target people who are qualified to forward microblogs in this step. These target people must satisfy four conditions: a) the person is using the microblog at this step; b) the person is an information believer; c) the person wants to forward the microblog; d) the person has not forwarded the microblog yet.

(3) Update all the microblog online users.

(4) The microblog user i checks the blog. With an increase in the number of pieces of information received and in the degree of trust, the believing probability also rises. P_fw(b) is the average forwarding probability from believers to spreaders.

Based on Fig. 1, the effective information dissemination probability P_blog is expressed by Equation 1.

a) Microblog (information received in period 1): here T_1,blog is the total delay time, P_1,blog denotes the proportion of period 1 within 24 hours, f_1,blog is the function of delay time, and P_1,blog(dt) represents the time weight of dt.

b) Microblog (information received in period 2): here T_2,blog is the total delay time, P_2,blog is the proportion of period 2 within 24 hours, f_2,blog is the function of delay time, and P_2,blog(dt) expresses the time weight of dt.

If the usage frequency is less than 1 (n < 1), i.e., the microblog user does not check the microblog every day, the average delay time is easy to calculate. Equation 7 gives the comprehensive calculation of the average delay time of the microblog, T_blog, where n is the usage number of the microblog per day. The effective information dissemination probability can be obtained through the same analysis for short messaging and cell phones, since their information dissemination models are very similar to that of a microblog. The delay time for short messaging was obtained directly through questionnaires. The delay time of cell phones was acquired by conducting 150 calling experiments covering different situations, including answering the phone, a busy line, powering off, and hanging up. The models for cell phones and short messages are given in Appendix A and B of Appendix S1, respectively (their information dissemination characteristic curves are shown in Fig. S1 and Fig. S2).

2.2 Oral communication. Oral communication is a very universal and flexible form of information dissemination. However, the distance over which information can be transmitted is very limited; because of this limitation, population density is the most important influencing factor. Beijing's population density decreases from the center to the periphery in concentric circles. Fig. 3 illustrates this population distribution pattern by dividing Beijing into 16 annular regions. In the urban areas depicted by the central annular region, the population density is 23,000 persons/km², while the outermost annular region has a population density of just 200 persons/km² (all data were taken from the 2010 Beijing census). In addition, the speed of information dissemination decreases as the distance from the central area increases. An over 100-fold difference in population density among areas leads to an obvious difference in information dissemination speed between urban and rural areas. Considering that the positioning of information sources strongly influences information dissemination, the Monte Carlo method was used to simulate the dissemination process. It was calculated that information believers would notify on average 3.87 target people within a range of 90 m.
Seventy-nine percent of target people would be notified within a distance of 30 m, 14.6% in the range between 30 and 60 m, and 6.4% at a greater distance [20]. Therefore, the simulation grid was set to 30 m × 30 m, and the Beijing area was divided into more than 20 million grid cells. The dissemination distance and the number of notified people were then obtained. The information dissemination time in one grid cell was set to one minute, i.e., all people in a cell can obtain, within one minute, any information disseminated by sources located in the same cell. The effective information dissemination probability, which is the product of the probability of information reception, the degree of trust, and the average delay time, can be obtained through computational calculation.

2.3 Television. Television is a mass medium with strong influence and a high degree of trust that transmits information from medium to person using images and sound. In this study the television-watching day was divided into 8 phases with 3-hour intervals. Fig. 4 shows the television watching time in the different periods, based on the questionnaires. The average TV watching time peaked at 73 minutes after 6 p.m., which indicates that the majority of people choose to watch TV in the evening; television may therefore be a very good choice for disaster information dissemination during the evening. Fig. 5 shows the information dissemination process of TV. The information is assumed to be broadcast once every hour. Among the processes, P_1,TV is the ratio of watching TV in each time period. P_2,TV expresses the probability of a TV watcher getting the information from a TV station; since a person should get the information at least once during a 60-minute period, P_2,TV is a piecewise function related to the duration of watching time t_i. Finally, the probability that television viewers believe the information, P_3,TV, is calculated from the degree of trust of TV (C_TV) and the number of times the information is received, n (T_use(i) ≤ 60). In summary, the effective information dissemination probability is calculated by Equation 8:

P_TV = P_1,TV · P_2,TV · P_3,TV, with P_1,TV = T_use(i)/180

In contrast to a microblog, watching television is a continuous activity. In Fig. 6, the pink color marks the continuous watching time periods; for example, t1 shows that the watcher watched TV for t1 minutes between midnight and 3 a.m. When information is broadcast during period 1, the average delay time is calculated from Equation 9, where T_1,TV expresses the delay time, P_1,TV denotes the proportion of the 24-hour period, f_1,TV(x) is a function of the delay time, and P_1,TV(dt) represents the time weight of dt. The comprehensive function is calculated by Equation 10, considering the different delay times from period 1 to period 8. The television model can also be used to simulate the dissemination of information by a news portal, because the information dissemination mechanism of a news portal is similar to that of television. The model of the news portal is given in Appendix C of Appendix S1 (its information dissemination characteristic curve is shown in Fig. S3).

Capability analysis of information dissemination

Simulations were performed for all the models mentioned above, and different curves were drawn to judge their capability to disseminate information, taking into account typical influencing factors such as age, gender, and residential area.
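For orientation, the following is a minimal, heavily simplified sketch (our own illustration) of the three-state simulation loop shared by the person-to-person models above; the population size, probabilities, and fan-out are placeholders, not the questionnaire values:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                  # population size (placeholder)
P_USE, P_TRUST, N_FWD = 0.05, 0.5, 10        # per-minute usage, trust, fan-out

state = np.zeros(N, dtype=np.int8)           # 0 ignorance, 1 receiver, 2 believer
state[rng.choice(N, 5, replace=False)] = 2   # 5 initial believers
believers = []

for minute in range(600):
    spreaders = np.flatnonzero(state == 2)
    # spreaders who are online this minute forward to N_FWD random people
    online = spreaders[rng.random(spreaders.size) < P_USE]
    if online.size:
        targets = rng.integers(0, N, size=online.size * N_FWD)
        hit = targets[state[targets] == 0]
        state[hit] = 1                       # ignorance -> receiver
    receivers = np.flatnonzero(state == 1)
    converts = receivers[rng.random(receivers.size) < P_TRUST * P_USE]
    state[converts] = 2                      # receiver -> believer
    believers.append(int((state == 2).sum()))

print(believers[::60])                       # believer count every hour
```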
In this study the information dissemination capability is reflected by the number of information believers within a fixed time period. Since the majority of children and elders acquire information from their families, the sample size of this population is small; thus, we set the age range from 16 to 55. According to census data [21], there are about 25 million people in Beijing. Using all the data from the simulations, the detailed results of information dissemination are analyzed below.

Information medium 1: short messages. Fig. 7 shows the information dissemination of short messaging with different influencing factors. The curves indicate that short messages can be spread very rapidly. Statistical data analysis reveals that the information dissemination of SMS accords with Equation 11, where t is the time of information dissemination and N_SMS(t) is the total number of information believers; this curve follows a logistic distribution with R² = 0.9963. Fig. 7 (A) discloses that the information dissemination capability of the younger group (16–35) is much greater than that of the middle-aged group (36–55); the delay time and usage time are the two key factors that explain the difference between the age groups. According to Fig. 7 (B), about 18 million people will receive and believe the information within five hours of information dissemination in the urban area. Fig. 7 (B) also illustrates that the efficiency of information dissemination via short message depends more on the distribution of residential areas, while the impact of gender can be ignored. It can be concluded that increasing the short message usage frequency and usage time of people living in rural areas would greatly improve the total information dissemination efficiency.

Information medium 2: microblogs. Fig. 8 demonstrates the information dissemination of a microblog, which obeys logarithmic growth. Information can be transmitted in a short time because there are many microblog fans. However, the number of people who believe the information reaches only about 11 million because of the low degree of trust and low usage. In Fig. 8 (A) we see that the information dissemination effectiveness is low in the group aged 36–55; in contrast, the younger group shows a strong capability for information dissemination. Fig. 8 (B) illustrates that the capability of information dissemination in the urban and female groups is stronger than in the rural and male groups.

Information medium 3: cell phones. Cell phones are the most common information dissemination medium used in daily life. Fig. 9 reveals that the information dissemination curves of cell phones also fit a logistic distribution, with a slow initial dissemination speed. According to our statistical data analysis, the information dissemination of cell phones accords with Equation 12, where t is the time of information dissemination and N_phone(t) is the total number of information believers; this curve follows a logistic distribution with R² = 0.9987. Fig. 9 shows that more than 10 million people can be informed via cell phone communication within 6 hours, after which the curves abruptly flatten due to busy lines and powered-off cell phones. Fig. 9 (A) demonstrates that cell phone usage by young people (16–35) is obviously higher than that of middle-aged people (36–55).
Fig. 9 (B) shows that after about 16 hours the final value reached 16.8 million, while the remaining 8.2 million people had not become information believers because of the low degree of trust in cell phones and the usage ratio in rural areas. The unusually sustained increase should be attributed to the long communication time via cell phones (about three hours). It can also be seen that females have a higher capability of information dissemination by cell phone than males. In addition, inhabitants of urban areas use cell phones frequently, which means this medium has a better information dissemination capability there than in rural areas immediately before a disaster situation.

Information medium 4: oral communication. Fig. 10 shows that an increase in the degree of trust and in the forwarding number results in an increase of both the growth rate and the final number of information believers. The curves were drawn under different conditions, including different residential areas, numbers of forwarding people, and degrees of trust. However, because of the geographical limitation of information dissemination via oral communication, a continued increase of the forwarding number did not result in a significant improvement. A comparison of the black and brown lines shows that, consistent with the law of population density distribution, the speed of information dissemination in urban areas is much higher than in rural areas. Considering that the position of an information source is the main factor determining the speed of information dissemination via oral communication, the Monte Carlo method was employed to improve the accuracy of the results and to avoid the uncertainty caused by different information source positions. In this case the number of simulation runs was set to 100.

Information medium 5: television. Fig. 11 shows the information dissemination by television with different influencing factors. We can conclude that the speed of information dissemination of TV is strongly related to particular time periods: a large number of people are accustomed to watching TV between 6 p.m. and midnight, while at other times the speed of information dissemination is more limited. According to the curves of information dissemination, about 17 million people received the information on the first day (1440 min), and over the following few days the slope of the curve declined. Finally, after ten days, about 22 million people would have been informed via TV because of its high degree of trust. As shown in Fig. 11 (A), the time spent watching TV increases with age, and so the speed of information acquisition via TV is faster for older groups. Fig. 11 (B) shows that the capability of information dissemination of television in rural areas is, in contrast to all the other five media, stronger than in urban areas. In addition, the effect of gender is very small. Furthermore, the very high information coverage leads to the dominance of TV with regard to information dissemination.

Information medium 6: news portals. The information dissemination of a news portal with three influencing factors is shown in Fig. 12. Statistical data analysis shows that the information dissemination of a news portal accords with Equation 13, where t is the time of information dissemination and N_portal(t) is the total number of information believers; this curve follows a logistic distribution with R² = 0.9587. Fig. 12 (A) demonstrates that age is the most important influencing factor.
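Equations 11–13 are all logistic fits of the simulated believer counts. The following minimal sketch (illustrative synthetic data, not the paper's series) shows how such a fit and its R² can be reproduced with scipy:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, k, t0):
    """K: final believer number; k: growth rate; t0: midpoint time."""
    return K / (1.0 + np.exp(-k * (t - t0)))

t = np.linspace(0, 600, 61)
rng = np.random.default_rng(3)
n_sim = logistic(t, 16e6, 0.05, 120) + rng.normal(0, 1e5, t.size)

popt, _ = curve_fit(logistic, t, n_sim, p0=[1.5e7, 0.01, 100],
                    bounds=([0, 0, 0], [1e8, 1, 600]))
ss_res = np.sum((n_sim - logistic(t, *popt)) ** 2)
ss_tot = np.sum((n_sim - n_sim.mean()) ** 2)
r2 = 1 - ss_res / ss_tot          # cf. R^2 = 0.9963 reported for SMS
print(popt, r2)
```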
The fact that the information acquisition of the younger group (16–35) is much greater than that of the middle-aged group (36–55) shows that information on news portals can be quickly disseminated among young people. Fig. 12 (B) shows that, considering the influencing factors of gender and residential area, a news portal disseminates information very fast, potentially reaching 15 million people in just 1700 minutes. However, due to the lower degree of trust, the final number of information believers reached only 17.8 million.

The analysis of the six media under study increases our understanding of their different information characteristics. When a serious disaster is approaching, a single information medium cannot manage the dissemination of a large amount of information. A combination of several information dissemination media tailored to the specific situation can increase the efficiency of information dissemination and give people more time and more accurate information to make better decisions. Furthermore, governments can make scientific and correct decisions about transmitting information based on different criteria, such as the information source and the characteristics of the disaster carriers.

2 Comprehensive assessment of each information dissemination mechanism

Fig. 13 shows the change in the number of information believers over time when using short messages, microblogs, cell phones, television, news portals, and oral communication. In the initial 30 minutes, the news portal is the fastest at information dissemination because a large number of users can receive the information at the same time. Between 30 and 500 minutes after the start of information dissemination, short messages rank first (ignoring the maximum load-carrying ability of the base stations); their dissemination speed can reach exponential growth in the initial period, since messages are transmitted from person to person in a short time, and after about 100 minutes the number of information believers reaches a constant value of 16 million. Cell phones have a lower speed of information dissemination than short message services because information cannot be forwarded to many people in a short time; Fig. 13 shows a logistic curve and illustrates that cell phones increased the number of information believers to 16 million within 720 minutes. Television plays an important role in the evening, when the majority of people are watching TV at home; with the highest degree of trust and coverage ratio, TV can inform as many people as possible. The information dissemination ability of a microblog is not high due to its lower coverage ratio and degree of trust; it can be used as an auxiliary tool in information dissemination. Oral communication, albeit slow, is a very important information dissemination medium in disaster situations, particularly in the case of network paralysis. A combination of different information media will improve the effectiveness and speed of information dissemination.

In this paper six information media were studied, and six indices were established to evaluate the comprehensive capability of each medium. TCIR is defined as the ratio of the number of people who received the information to the total number of people over a long enough period. THB is the time at which half of the people received and believed the information, and it represents the speed of information dissemination during the initial time.
TCIR and THB can be calculated using the model process mentioned above. FMU indicates the popularity of the media, and information from official information media has a high degree of trust; the data for these two indices can be obtained directly from questionnaires. The total cost of each information medium was calculated through a price investigation on Taobao, which has more than 70% of the electronic commerce market share in China [22]. The delay times of the six information media are calculated by the equations mentioned above. The final scores of the six media are computed using Min-Max normalization; they are listed in Table 2. The six indices are classified into two categories: one is positive (+) (the capability of information dissemination increases when the value of the index increases), and the other is negative (−) (the value is subtracted from 1). According to these six indices, the comprehensive profile of each information medium is reflected in the radar graph shown in Fig. 14.

The radar graphs directly express the information dissemination capability of the six media and their comprehensive characteristics. Data analysis reveals that short messages, cell phones, and TV have a higher comprehensive information dissemination capability. Short messages and cell phones have a shorter delay time and a higher information coverage ratio as well as information dissemination speed, but the degree of trust in them is lower. Television has the highest degree of trust but a longer delay time. Microblogs, which have a very long delay time and moderate degrees of trust and information coverage, have a fast initial information dissemination speed. News portals, as very popular network media, are a very fast method of information dissemination, particularly during the early periods. Oral communication is also a very important information dissemination medium (no cost, ease of use), especially in high-population-density areas. Different lengths of pre-warning time allow different choices of information dissemination media; for instance, if the pre-warning time is limited, short messages and news portals should be used. Ultimately, the combination of different media can improve the efficiency of information dissemination. The results above can be useful in making an emergency plan that ensures the safety of lives and properties during a disaster.

Information media in disaster pre-warning

Developing information dissemination technology and understanding the mechanisms of each information medium are crucial for disaster pre-warning and management. Tailored to the specific disaster situation, governments and victims can use different information dissemination strategies. Cell phones, SMS, microblogs, news portals, and TV usually play the most important roles in conventional disaster pre-warning, such as for rainstorms and frost, because they feature a higher information dissemination effectiveness. However, when a serious earthquake strikes, the majority of networks and base stations will be paralyzed and most electronic network media cannot be used. In that case, oral communication can disseminate emergency information with a few words or sentences, and a large number of victims will try to notify all the people around them. Our analysis of oral communication above suggests that the positioning of information sources strongly determines the effectiveness of information dissemination.
Governments can set the optimal source position to improve the information dissemination speed; they can also assess the spreading time through the analysis introduced above to help improve disaster management and save more lives. Data analysis also revealed that TV has a higher degree of trust and a higher information dissemination effectiveness in the evening; governments should therefore turn their attention to TV rather than microblogs or news portals to spread warning information if a serious disaster occurs in the evening. Generally speaking, by combining the characteristics of different information media, governments can improve disaster pre-warning and reduce casualties and property damage in an effective way, and victims can acquire more information to make informed decisions. Summing up, there is a need to analyze the information dissemination characteristics of different media to ensure that warning information can be spread to every person reliably and with short delay times.

Conclusions

In this study, models of six information dissemination media, including short messages, microblogs, cell phones, television, news portals, and oral communication, were established. The capability of each medium to disseminate information was assessed using data obtained from the dissemination models, statistical data, and questionnaires collected in Beijing. Based on the information dissemination capability analysis, and taking into consideration factors such as age, gender, and residential area, the different characteristics of the six media were summarized. Our analysis shows that SMS has the highest speed, while cell phones can disseminate more detailed information because verbal communication allows better explanation of complex situations. People's habits suggest that the employment of television be emphasized in the evening. In the case of serious disasters such as earthquakes, electronic networks are prone to paralysis and oral communication will play an important role in disseminating information reliably. To directly compare and analyze the different aspects of the information dissemination capabilities of the six media, radar graphs considering six indices were drawn. Short message services and cell phones have more comprehensive information dissemination capabilities than the other information media, but they have lower degrees of trust. Television is also a good information dissemination medium; it has higher information coverage and the highest degree of trust. Compared to the other information media, oral communication is not outstanding in information dissemination speed; however, it is convenient. News portals and microblogs can be used as auxiliary tools, but their information coverage is not very large. Each of the six media has different strengths and limitations; therefore, to help improve the dissemination of information, reduce losses, and ensure the safety of disaster carriers, their combination should be tailored to the specific disaster situation. The models and simulation methods can be applied to many other regions. In future work, more modes of information dissemination and more influencing factors will be considered, including the maximum information carrying capacity and the vulnerability of each medium in a disaster. An integrated system covering all recommendable combinations of media will be established to disseminate emergency information timely and accurately under various circumstances.
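To make the comprehensive assessment described above concrete, the following sketch (our own illustration; the index table is a random placeholder, not the values in Table 2) applies the Min-Max normalization with inversion of the negative indices and draws the radar graph:

```python
import numpy as np
import matplotlib.pyplot as plt

media = ["SMS", "Microblog", "Portal", "Cell phone", "TV", "Oral"]
labels = ["TCIR", "THB", "Usage", "Trust", "Cost", "Delay"]
negative = [False, True, False, False, True, True]   # smaller is better

raw = np.random.default_rng(7).random((6, 6))        # placeholder index table
norm = (raw - raw.min(0)) / (raw.max(0) - raw.min(0))
norm[:, negative] = 1 - norm[:, negative]            # invert negative indices

angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False)
ax = plt.subplot(polar=True)
for name, row in zip(media, norm):
    ax.plot(np.append(angles, angles[0]),
            np.append(row, row[0]), label=name)      # close each polygon
ax.set_xticks(angles)
ax.set_xticklabels(labels)
ax.legend(loc="upper right", bbox_to_anchor=(1.3, 1.1))
plt.show()
```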
Convolutional Neural Network-Based Digital Image Watermarking Adaptive to the Resolution of Image and Watermark

Digital watermarking has been widely studied as a method of protecting the intellectual property rights of digital images, which are high value-added contents. Recently, studies implementing these techniques with neural networks have been conducted. This paper likewise proposes a neural network to perform robust, invisible, blind watermarking for digital images. It is a convolutional neural network (CNN)-based scheme that consists of pre-processing networks for both the host image and the watermark, a watermark embedding network, an attack simulation for training, and a watermark extraction network to extract the watermark whenever necessary. It has three peculiarities on the application side. The first is adaptability to the host image resolution: the proposed method can be applied to a host image of any resolution, which is achieved by composing the network without any resolution-dependent layer or component. The second is adaptability to the watermark information, i.e., usability of any user-defined watermark data; this is achieved by using random binary data as the watermark, changed at each iteration during training. The last is controllability of the trade-off between watermark invisibility and robustness against attacks, which provides applicability to different applications requiring different invisibility and robustness; for this, a strength scaling factor for the watermark information is applied. Besides, it has the following structural and training peculiarities. First, the proposed network is simple: the deepest path consists of only 13 CNN layers, running through the pre-processing network, the embedding network, and the extraction network. Second, it maintains the host's resolution by increasing the resolution of the watermark in the watermark pre-processing network, which increases the invisibility of the watermark. Also, average pooling is used in the watermark pre-processing network to combine the binary watermark values properly with the host image, which further increases the invisibility of the watermark. Finally, as the loss function, the extractor uses the mean absolute error (MAE), while the embedding network uses the mean squared error (MSE); because the extracted watermark information consists of binary values, the MAE between the extracted watermark and the original one is more suitable for balanced training between the embedder and the extractor. Training and evaluation confirm that the proposed method has high invisibility for the watermark (WM) and high robustness against various pixel-value change attacks and geometric attacks. Each of the three peculiarities of this scheme is shown to work well in the experimental results, and the proposed scheme shows good performance compared to previous methods.

Introduction

With the general use of digital data and the widespread use of the Internet, there are frequent acts of infringement of intellectual property rights, such as illegal use, copying, and theft of digital content. Digital images are high value-added contents whose intellectual property rights must be protected. A recent technique for this is digital watermarking [1].
Watermarking embeds the owner's information (watermark, WM) into the content, and the result is stored or distributed. The technology makes it possible to claim ownership by extracting the embedded WM information when necessary. Various technologies have been researched and developed according to the underlying techniques, application field, etc. Until recently, methods have been proposed to perform WM embedding algorithmically and to extract the WM algorithmically according to the embedding process or a modification of it [2][3][4][5][6][7][8]. For invisibility, a typical method embeds the WM in a discrete cosine transform (DCT) domain [2], a discrete wavelet transform (DWT) domain [3,4], a discrete Fourier transform (DFT) domain [5], or a quantization index modulation (QIM) domain [6][7][8].

In general, watermarking may suffer from a malicious attack intended to damage or remove the embedded WM information, or from a non-malicious attack by the inevitable processes used to store or distribute the content. WM embedding can therefore be performed algorithmically or deterministically, but WM extraction faces a different situation: because of the malicious or non-malicious attacks, the watermarked host data may be damaged, and the embedded WM data may be damaged with it. It may therefore not be appropriate to extract the WM algorithmically or deterministically, and a more statistical scheme may show better performance. For this and other reasons, recent studies have tended to perform watermarking with a neural network (NN) [9][10][11][12][13][14][15]; they are reviewed separately in Section 2. Their purpose is the same: protecting intellectual property rights or ownership. In this technique, the embedder must embed the WM such that the extractor can easily extract it with high invisibility, and the extractor must extract the WM by accurately analyzing the host image's characteristics and the embedded WM. With deep learning, this relationship can be organized into a loss function used in back-propagation; usually, the embedding and extraction processes are separated into different networks.

In this paper, we investigate an NN that performs invisible watermarking, which hides the insertion of the WM information into the digital image content as much as possible; robust watermarking, which loses as little WM information as possible despite malicious and non-malicious attacks; and blind watermarking, which does not use the original content when extracting the WM information. The structure uses a convolutional neural network (CNN) and is implemented as simply as possible by incorporating a minimum number of CNN layers. It consists of pre-processing networks for both the host image and the WM, a WM embedding network, an attack simulation for robustness training, and a WM extraction network. In the WM pre-processing network, the resolution of the WM data is increased to that of the host image so that the embedding network can maintain the host image's resolution throughout the process; this retains the amount of information of the host image and increases the watermarked image quality. This network also uses average pooling in each layer, which smoothens the discrete characteristics of the binary WM values so that they combine smoothly with the host image, increasing the WM invisibility. In training, the mean absolute error (MAE) between the extracted WM and the original WM is used for the extractor, while the embedder uses the mean squared error (MSE) as its loss function; this is because the MAE is more suitable for discrete values than the MSE.
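A minimal PyTorch sketch of this loss design (our own illustration, with the weighting factors as assumed hyper-parameters) is:

```python
import torch
import torch.nn.functional as F

def watermarking_loss(host, marked, wm, wm_extracted, alpha=1.0, beta=1.0):
    """host, marked: (B, C, H, W) images; wm, wm_extracted: (B, L) in [0, 1].
    MSE penalizes visible distortion; MAE suits the binary WM values."""
    embed_loss = F.mse_loss(marked, host)        # invisibility term (embedder)
    extract_loss = F.l1_loss(wm_extracted, wm)   # MAE term (extractor)
    return alpha * embed_loss + beta * extract_loss

# a random binary watermark, regenerated each iteration as in the paper
wm = torch.randint(0, 2, (8, 64)).float()
```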
The MAE also helps train the extractor network and balance the losses of the two networks more efficiently. The proposed NN is adaptive to the watermark information, so that a user can use his own watermark data without any further training; this is achieved by training the NN with random patterns as the watermark. It is also adaptive to the host image's resolution and can be applied to any resolution, since no resolution-dependent layer or component is included in the NN. The network can also control the WM's invisibility and its robustness against attacks, which have a trade-off relationship, by incorporating a strength scaling factor for the WM information as a hyper-parameter inside the NN. This paper is composed as follows: Section 2 introduces the relevant previous studies; the proposed network structure is explained in Section 3; Section 4 discusses the training technique and the experimental results; and this paper is concluded in Section 5.

Analysis of Previous Methods

NN-based watermarking schemes have been proposed [9][10][11][12][13][14][15], and their characteristics are described in this section. They are summarized in Table 1, except for [9], which differs from the other schemes in that it is a non-blind scheme and only a part of its embedder is implemented by an NN. The characteristics in Table 1 include the domain of the data used by the NNs (Domain), whether the method has a restriction on the resolution of the host image (Host image resolution adaptability), whether it is limited to specific WM data (WM adaptability), the characteristics of the embedding and extractor networks (Embedding network and extractor network), the attack simulation, the attacks included in a mini-batch in the attack simulation process, the training characteristics, and the invisibility-robustness controllability.

First, we briefly explain the method by H. Kandi [9], the first digital watermarking method using deep learning. This method uses a codebook scheme: a codebook is generated in the WM embedding process and used in the WM extraction process, which makes it a non-blind scheme. It uses the normalized original host image (Posimg) and its inverted image (Negimg), whose pixels are obtained by subtracting the normalized pixel values from '1'. Posimg and Negimg are processed to reduce their resolution, and the results are up-sampled to the original resolution to form the positive and negative codebooks, respectively. The method uses binary WM data, and the watermarked image is formed by taking the corresponding pixel group from the positive codebook when the WM bit is '1', and from the negative codebook, with each pixel value subtracted from '1', when the WM bit is '0'. The NN is used only in the process of reducing the image resolution for Posimg and Negimg; it is the encoder part of an autoencoder (AE), because the resolution must be reduced. The extraction process determines the embedded WM value by taking the corresponding pixel group and calculating which image its values are closer to: the normalized watermarked (and attacked) host image or its inverted version. The method was tested on only two images, Lena and Mandrill, under various pixel-value change attacks and geometric attacks, and various metrics for invisibility and robustness were reported. Since the study of [9], most subsequent works have been blind watermarking methods.
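A rough sketch of the codebook idea in [9], as we read the description above, follows; block averaging stands in for the AE encoder, and the block size is an arbitrary assumption:

```python
import numpy as np

def kandi_style_embed(host, wm_bits, block=8):
    """host: (H, W) normalized to [0, 1]; wm_bits: (H//block, W//block) of {0, 1}.
    Each WM bit selects whether its pixel block is filled from the positive
    or the (inverted) negative codebook."""
    H, W = host.shape
    small = host.reshape(H // block, block, W // block, block).mean((1, 3))
    pos = np.repeat(np.repeat(small, block, 0), block, 1)   # positive codebook
    neg = 1.0 - pos                                         # negative codebook
    mask = np.repeat(np.repeat(wm_bits, block, 0), block, 1).astype(bool)
    return np.where(mask, pos, neg)

marked = kandi_style_embed(np.random.rand(64, 64),
                           np.random.randint(0, 2, (8, 8)))
```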
Zhu proposed a method named 'HiDDeN', which consists of a WM embedding network, noise layer (similar to the attack simulation in our scheme), WM extracting network, and an adversary network. The adversary network is for the steganographic process, which is an additional function. However, it is used for watermarking function, too. The adversary network's loss function is an adversarial loss, while the embedding and extraction networks use L2 norm loss. The watermarking process's loss function is the scaled combination of the three, but the adversary network uses only the adversarial loss. All the layers except the final layer of the extractor network consist of CONV (convolution)-BN (batch normalization)-ReLU (rectified linear activation) combination with mostly 64 channels. The WM data is re-structured to 1-dimensional data, and the result is replicated as many as the resolution of the host image, which is to affect the WM data to the whole host image. The replicated WM data is concatenated to the host image to enter to the WM embedding network. The WM embedding network's result enters both the WM extraction network through the noise layer and the adversary network. The extraction network reduces the resolution and finally processes with a global pooling and a FC (fully connected) layer, which means it is dependent on the host image resolution. It uses random WM data, but specific attacks and their strengths are used in training. One more thing to note is that it proposed a scheme to make the JPEG compression differentiable with two schemes, JPEG mask, and JPEG drop. M. Ahmadi proposed a scheme to use the DCT frequency domain by implementing and training a network to perform a DCT separately [11]. The network consists of a WM embedding network, attack simulation, and WM extraction network. Before entering the network, the host image is reduced to the resolution of WM and DCTed by a DCT network that has been constructed and trained already. The WM data is multiplied by a scaling factor and concatenated to the DCTed host image data, the result of which is processed in the WM embedding network that consists of convolution layers with ELU (exponential linear unit) activation. The middle three layers of the embedding network perform the circular convolution for a global convolution. It increases the resolution to that of the original host image, and finally, an inverse DCT layer is processed to form the watermarked image. The WM extraction network also includes the DCT layer and the inverse DCT layer used in the embedding network and several convolution layers, including the circular convolution layers. It reduces the resolution to that of the WM data. It also used specific kinds and strengths of attacks in training with only one kind of attack in a mini-batch, by which it cannot guarantee the performance for other WM information. The SSIM value is used as the loss of the WM embedding network and the cross-entropy for the extraction network. For the cost function of the training, a trade-off combination of the two values by multiplying A and 1-A, respectively, is used, where A is determined empirically. It also proposed a scheme to use the DCT within the network. S. M. Mun proposed a watermarking method with an AE-structured NN, consisting of residual blocks [12]. All residual blocks are composed of a unit that performs ReLU after adding CONV(1 × 1)-ReLU-CONV(3 × 3)-ReLU-CONV(1 × 1) and CONV(1 × 1). For the embedding, the host image is reduced to the WM's resolution by the AE encoding process. 
To each of the resulting layers, the WM information is added to form the embedding AE's encoded data, which is entered into the decoder of the embedding AE. This decoder increases the resolution to that of the host image and reduces the resulting number of images to the original host image's channels. The watermarked image is output by accumulating the WM information multiplied by different strength factors to the host image several times. The extractor has the AE encoding structure only. It does not use any pooling layer in the network. The attack simulation maintains a constant distribution for all the specified attacks in each mini-batch. It also uses only specific WM information, even though it includes the inverted WM information, to avoid overfitting WM. X. Zhong proposed a scheme to replace the attack simulation with a Frobenius norm [13]. Each of its embedder and extractor networks consists of four connected function networks, each other and one layer (invariance layer) connect the two networks. Thus, each pair network forms a loss function, and the final cost function for training is constructed with the linear combination of the four by determining the four-loss functions by determining the scaling factors empirically. The host image is fixed to 128 × 128 color image, but the binary random data is used as the WM data. Each network consists of convolutional layers, but the invariance layer connects the embedder network, and the extractor network consists of a sparse FC layer with tanh activation. The WM information is up-sampled (network µ) to the host image's resolution, and the result is concatenated to the host image to enter into the embedding network. The embedder reduces the resolution to that of the WM data (network γ), increases it to the host image (network σ), and additionally processes it without changing resolution (network φ). Both embedder and extractor consist of conventional CNN layers. Bingyang used the same network structure as J. Zhu [10] but incorporated an adaptive attack simulation such that it selects more the attacks for which the network shows weak robustness [14]. The method to consider the weak robustness is to include the extractor loss for the worst-case attack result. However, in a mini-batch, only one kind of attack with different strengths is included. It processes in the spatial domain and uses the FC layer to extract the WM. For WM embedding, it uses an adversarial loss, not only for the steganography layer, for training with using random patterns as the WM data. Y. Liu proposed a two-stage training scheme TSDL, two-stage separable deep learning), in which the entire NN with adversary network is trained without any attack (FEAT, free end-to-end adversary training), at first, then in the next train only the extractor without the adversary network is re-trained (ADOT, noise-aware decoder-only training) by adding the attack simulation [15]. In this scheme, the duplicated binary (here, 1, and −1) to the resolution of the host image is concatenated in each convolution layer in the embedder network except the last two layers performing 3 × 3 convolution and 1 × 1 convolution, in order. WM extraction network and the adversary network also consist of 1 × 1 and 3 × 3 convolution layers. The final layer of the extractor network consists of a FC layer after average pooing and ReLU activation. The loss function for the embedder network and the extractor network are MSE (mean square error) loos, but the adversary network uses an adversarial loss. 
For each of the two training processes, the appropriate combinations of the loss functions are used by multiplying scaling factors determined empirically. In training, it included only one kind of attack that is included in a mini-batch. It also used specific kinds and strengths of attacks in training and showed the test results for only the kinds and strengths of attacks used in training. • Using FC layer(s) restricts the applicable resolution of the host images [10,14,15]. These methods cannot guarantee their usefulness in general applications with other host image resolution. • Some methods did not realize the controllability of the tradeoff relationship between invisibility and robustness [9,10,13,14]. Because they cannot provide options for the user's preference, their practicality may be restricted. Therefore, for the three problems we have raised, we propose a new neural network structure with three goals, which are resolution adaptability of host images, content adaptability of watermark, and controllability of invisibility and robustness, for deep learning as follows. • Resolution adaptability of host image: Deep learning network capable of embedding watermarks in host images of all resolutions. • Content adaptability of watermark: Deep learning network that can change the content of the watermark to be inserted without re-training. • Controllability of invisibility and robustness: Deep learning network with controllable visual visibility and watermarking intensity. Accordingly, we propose a blind, invisible, and robust watermarking NN that adapts to the resolution of the host image and WM information. This method uses spatial domain data and consists of CNN layers with average pooling layers. In its attack simulation, all the attacks are included in each mini-batch, with random strength, but maintain a balanced distribution. Moreover, randomly generated data is used as the WM information in the training process that any data can be used as the WM. This method also includes a strength factor which adjusts the tradeoff relationship between invisibility and robustness. The next section describes the proposed deep learning network architecture for watermarking to achieve our three goals. Proposed Watermarking Framework In the previous section, we derived three items that deep learning-based watermarking should overcome through analysis of previous studies and selected functional goals of the deep learning network for the three items. We intend to solve the problems presented in this section and propose a new network structure for watermarking to achieve the goals. Since the network we are proposing contains various functions, the entire network is composed of four sub-networks as follows. • Host image pre-processing network for resolution adaptability of host images. • WM pre-processing network for content adaptability of watermarks. • Embedding network, Attack Simulation, Extraction network for controllability of invisibility and robustness. Next, the detailed structure and operation of these sub-networks and a method of improving the watermarking performance of the proposed network through their combination will be described. Figure 1 shows the overall structure of the proposed digital watermarking scheme: (a) for the embedder, (b) for the extractor, both relatively conventional. Our scheme is designed to process only one channel that Y component after converting RGB image to YCbCr components is used. Before entering it, it is normalized to the range of [−1, 1]. 
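As a concrete illustration of this color handling, the following is a minimal sketch of the host-side pre- and post-processing. The paper only states that the RGB image is converted to YCbCr, that the Y plane is used, and that it is normalized to [−1, 1]; the BT.601 full-range conversion coefficients and the exact de-normalization used here are assumptions.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """BT.601 full-range RGB -> YCbCr (an assumption; the paper only says
    the RGB image is converted to YCbCr and the Y component is used)."""
    rgb = rgb.astype(np.float64)
    y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 128.0 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
    cr = 128.0 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
    return y, cb, cr

def normalize_y(y):
    """Map the 8-bit Y plane from [0, 255] to the [-1, 1] range fed to the embedder."""
    return y / 127.5 - 1.0

def denormalize_y(y_norm):
    """Map the embedder output back to [0, 255] before re-assembling the RGB image
    with the untouched Cb and Cr planes (the de-normalization step described next)."""
    return np.clip((y_norm + 1.0) * 127.5, 0.0, 255.0)
```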
Meanwhile, the WM data is binary data, and it is scrambled with a key. Both normalized host image data and scrambled WM data are preprocessed, and the results are concatenated. Here, the WM data is multiplied by a strength scaling factor (s), to adjust the trade-off between invisibility and robustness. The concatenated result is processed in the embedding network to output the watermarked data, which is de-normalized and converted to RGB format with the converted Cb and Cr components to form the watermarked host image. Overall Watermarking Scheme The extraction process receives a watermarked and attacked RGB image, which is converted to YCbCr format. Only the Y component is taken and normalized to the [−1, 1] ranged data processed in the extractor network. It extracts the WM information as the output, and the result is de-scrambled with the key used in the scrambling process. The de-scrambled data is the final extracted WM. Structure of Watermarking Network to Be Trained As mentioned before, our intentions with the proposed NN-based watermarking scheme are simplicity in the network structure and depth, host resolution adaptability, WM adaptability, and controllability between invisibility and robustness of WM. Also, to increase the quality of the watermarked image, WM invisibility, and balanced training, we use several techniques such as maintaining the host image's resolution, average pooling for processing the WM data, MAE loss for the extractor network. Those are more focused in the following explanations. Among the functional blocks in Figure 1, only the solid-lined ones are implemented in the network; those in the dotted blocks are processed in off-the-network. This network structure is shown in Figure 2, which includes pre-processing networks for host and WM, WM embedding network, and WM extraction network. Here, the attack simulation process is included for training. This network's first structural feature is that it consists of only the simple CNNs with a relatively shallow depth that the highest depth has only 13 CNNs. Each of the consisting networks and their components was determined empirically based on the tremendous experiments' data. The second feature is that the WM pre-processing network increases the resolution to that of the host image, while most previous works reduce the host image's resolution to that of the WM. This is to maintain the host image's information to increase the invisibility of the WM, which is based on our experimental results that it is more challenging to obtain invisibility performance than robustness, and the scheme maintaining the host resolution was superior in invisibility with the same robustness. Other features of our network are dealt with in detail in the following sub-sections to explain each network. Pre-Processing Network for Host Image First, the host image pre-processing network maintains the original image's resolution and is composed of one convolutional layer (CL) with 64 filters whose strides are the same as 1. Since the embedding network's output should be similar to the host image, the input image, this network should not damage the host significantly. Thus, it consists of only one CL with 3 × 3 filters. Nevertheless, it uses 64 filters (it means 64 channels are produced) to extract as many characteristics of the host image as possible. Pre-Processing Network for WM The WM pre-processing network is configured to gradually increase the resolution to match the host image pre-processing network's resolution. 
This is to increase the WM invisibility, as explained before. It has been confirmed by our experiments that the case maintaining the resolution of the host image in WM embedding has high WM invisibility than the case reducing the resolution to that of the WM and increasing the resolution to output the watermarked image. This network includes four network blocks: the first three consist of the CL, batch normalization (BN), activation function (AF), an average pooling (AP), but the last block consists of CL and AP. All CLs have a 0.5 stride for up-sampling. The corresponding number of filters is 512, 256, 128, and 1, respectively. AF is the rectified linear unit (ReLU), and AP is a 2 × 2 filter with a stride of 1. AP is used because WM is a binary data that the values are discrete, but the host image data is real and continuous; it is necessary to smoothen the WM data with the APs to combine with the host image data retaining the continuous characteristics. It also has been confirmed with our experiments. The WM pre-processing network output is multiplied by the strength scaling factor to control the invisibility and robustness of the WM. WM Embedding Network The WM embedding network concatenates the results of the 64 channels of the pre-processed host information and one channel of the pre-processed WM information and uses them as the input to output the watermarked image information. The network consists of CL-BN-AF (ReLU) for the front four blocks, and the last block consists of CL-AF (tanh). The tanh activation maintains the positive and negative values to meet the data range of [−1, 1] to the input host information. All CL strides are set to 1 to maintain the resolution of the host image for invisibility. All blocks, except the last one, have 64 CL filters; the last block has the same number of filters as the number of channels in the host image, which is one here. Because we are aiming for invisible watermarking, we use the mean square error (MSE) between the watermarked image (I W Med ) and the host image (I host ) as a loss function (L 1 ) of the pre-processing network and the embedded network. This is shown in Equation (1). Here, M × N is the resolution of the host image. Attack Simulation For high robustness, the watermarked image is intentionally suffered from preset attacks in the attack simulation. It comprises seven pixel-value change attacks and 3 geometric attacks, which are considered the malicious and non-malicious attacks are occurring in the distribution process. Table 2 shows the types, strengths, and the ratio of each attack used in one mini-batch [10,11] in training. We maintain the ratios for each mini-batch, including the ones not attacked ('identity' in the table). WM Extractor Network The extraction network is a reversely symmetrical structure to the WM pre-processing network except for the number of filters used. It reduces the resolution of the watermarked and attacked image and extracts the WM information. It consists of three CL-BN-AF (ReLU) blocks and one CL-AF (tanh) block, that is the last block. We set the stride of all CLs to 2 for down-sampling. The number of filters used in the CLs is 128, 256, 512, and 1, respectively. This network uses mean absolute error (MAE) between the extracted WM (W M ext ) and the original WM (W M o ) as a loss function (L 2 ). The reason why MAE is used for the extraction network is that the WM information consists of binary values (compare to Equation (1) that uses MSE for the host image information). 
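Before turning to the loss definitions, the four sub-networks described above can be summarised in a minimal PyTorch sketch. The filter counts, strides, activations, and the use of 2 × 2 stride-1 average pooling follow the text; the kernel sizes of the embedding and extraction convolutions, the transposed convolution standing in for the 0.5-stride up-sampling convolutions, the padding used to keep the average pooling size-preserving, and the {0, 1} watermark encoding are assumptions.

```python
import torch
import torch.nn as nn

class HostPre(nn.Module):
    """Host pre-processing: one 3x3 conv, 64 filters, stride 1 (resolution kept)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(1, 64, kernel_size=3, padding=1)
    def forward(self, x):
        return self.net(x)

def up_block(cin, cout, last=False):
    """One WM pre-processing block: a 0.5-stride conv (realised here as a transposed
    conv, an assumption), BN + ReLU (omitted in the last block), then 2x2, stride-1
    average pooling padded so that the spatial size is preserved."""
    layers = [nn.ConvTranspose2d(cin, cout, kernel_size=4, stride=2, padding=1)]
    if not last:
        layers += [nn.BatchNorm2d(cout), nn.ReLU()]
    layers += [nn.ZeroPad2d((0, 1, 0, 1)), nn.AvgPool2d(kernel_size=2, stride=1)]
    return layers

class WMPre(nn.Module):
    """WM pre-processing: 8x8 -> 128x128 via four up-sampling blocks (512/256/128/1 filters)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(*up_block(1, 512), *up_block(512, 256),
                                 *up_block(256, 128), *up_block(128, 1, last=True))
    def forward(self, w):
        return self.net(w)

class Embedder(nn.Module):
    """Embedding network: 64 host channels + 1 scaled WM channel in, four
    Conv-BN-ReLU blocks (64 filters) and a final Conv-tanh, all with stride 1."""
    def __init__(self):
        super().__init__()
        blocks, cin = [], 65
        for _ in range(4):
            blocks += [nn.Conv2d(cin, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU()]
            cin = 64
        blocks += [nn.Conv2d(64, 1, 3, padding=1), nn.Tanh()]
        self.net = nn.Sequential(*blocks)
    def forward(self, host_feat, wm_feat, s=1.0):
        return self.net(torch.cat([host_feat, s * wm_feat], dim=1))

class Extractor(nn.Module):
    """Extraction network: three Conv-BN-ReLU blocks (128/256/512 filters) and a
    final Conv-tanh, all with stride 2, reducing 128x128 back to the 8x8 WM."""
    def __init__(self):
        super().__init__()
        chs, blocks = [1, 128, 256, 512], []
        for cin, cout in zip(chs, chs[1:]):
            blocks += [nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                       nn.BatchNorm2d(cout), nn.ReLU()]
        blocks += [nn.Conv2d(512, 1, 3, stride=2, padding=1), nn.Tanh()]
        self.net = nn.Sequential(*blocks)
    def forward(self, x):
        return self.net(x)

# Shape check with the resolutions used in the paper (128x128 host, 8x8 WM).
host = torch.randn(1, 1, 128, 128)
wm = torch.randint(0, 2, (1, 1, 8, 8)).float()   # binary WM; exact value mapping not stated
marked = Embedder()(HostPre()(host), WMPre()(wm), s=1.0)   # -> (1, 1, 128, 128)
extracted = Extractor()(marked)                             # -> (1, 1, 8, 8)
```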
The choice of MAE was determined empirically from extensive experiments, and the corresponding loss is given in Equation (2), where X × Y is the resolution of the WM information. Loss Function of the Network for Training With the two loss terms of Equations (1) and (2) for the host information and the WM information, respectively, the loss function of the whole network for training is constructed as Equations (3) and (4) for WM embedding and WM extraction, respectively. In these two equations, λ1, λ2, and λ3 are hyper-parameters that control invisibility and robustness: λ1 is the strength of the L1 loss applied to the embedding network, λ2 the strength of the L2 loss applied to the embedding network, and λ3 the strength of the L2 loss applied to the extraction network. Because the L1 loss and the L2 loss have different properties, it is not easy to determine the three hyper-parameters analytically, so they are obtained empirically. Experimental Results and Discussion Qualitative and quantitative evaluations were performed to assess the invisibility and robustness of the proposed digital watermarking scheme. First, the dataset used and the implementation environment are described, together with the measurement methods for quantitative evaluation. The invisibility, robustness, WM adaptability, host image adaptability, and controllability of invisibility and robustness are then verified. Finally, the results are compared with state-of-the-art methods. Host Image We used the BOSS dataset [16], which consists of 10,000 grayscale images with 512 × 512 resolution, as the training dataset. We chose the BOSS dataset first because it is broadly used in deep learning for various applications and techniques, and also because it contains only grayscale images, which are convenient for our network since it uses only the Y component, although Figure 1 and its explanation assumed the more general case of RGB color images. In addition, a standard dataset [17] of 49 grayscale images with 512 × 512 resolution, widely used for evaluation, served as the evaluation dataset. Images in both datasets were down-sampled to 128 × 128 resolution for use as host images. Watermark Binary images with a resolution of 8 × 8 were used as the WM. A random WM was generated for each iteration of the training process and scrambled with a corresponding key. These randomly generated WMs adapt the network to any WM information and also reduce overfitting during training. Training The proposed watermarking network was trained on a PC with an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz, 64 GB RAM, and an RTX 2080ti GPU. The hyper-parameters and their values used in training, set empirically, are listed in Table 3. A mini-batch includes 100 host images, and newly generated random-pattern WM data was used for each mini-batch. Training continued until the loss value stabilized, which took 4000 epochs. The Adam optimizer [18] was used with learning rates of 0.0001 and 0.00001 for the embedding network and the extraction network, respectively. During training, the strength factor was set to 1 and the weight decay rate to 0.01. The λ values were set by separate experiments after the other parameters had been determined; the final values were 45, 0.2, and 20 for λ1, λ2, and λ3, respectively.
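Putting these pieces together, the following is a compact sketch of one training step consistent with the description above. It assumes the networks from the previous sketch are in scope, that the whole-network losses of Equations (3) and (4) are simple weighted sums of L1 (MSE) and L2 (MAE), and that the reported weight decay is applied through both Adam optimizers; the exact routing of gradients between the two optimizers is not stated in the text.

```python
import torch
import torch.nn as nn

# HostPre, WMPre, Embedder, Extractor from the architecture sketch are assumed here.
mse, mae = nn.MSELoss(), nn.L1Loss()
lambda1, lambda2, lambda3 = 45.0, 0.2, 20.0          # values reported in the text
host_pre, wm_pre, embedder, extractor = HostPre(), WMPre(), Embedder(), Extractor()

emb_params = (list(host_pre.parameters()) + list(wm_pre.parameters())
              + list(embedder.parameters()))
opt_emb = torch.optim.Adam(emb_params, lr=1e-4, weight_decay=0.01)        # embedder lr
opt_ext = torch.optim.Adam(extractor.parameters(), lr=1e-5, weight_decay=0.01)  # extractor lr

def attack_simulation(img):
    """Placeholder for the attack simulation of Table 2 (identity shown here)."""
    return img

def train_step(host, wm, s=1.0):
    marked = embedder(host_pre(host), wm_pre(wm), s)   # I_WMed
    extracted = extractor(attack_simulation(marked))   # WM_ext
    l1 = mse(marked, host)                             # Eq. (1): host fidelity
    l2 = mae(extracted, wm)                            # Eq. (2): WM fidelity
    loss_emb = lambda1 * l1 + lambda2 * l2             # Eq. (3), assumed linear form
    loss_ext = lambda3 * l2                            # Eq. (4), assumed linear form
    opt_emb.zero_grad(); opt_ext.zero_grad()
    (loss_emb + loss_ext).backward()
    opt_emb.step(); opt_ext.step()
    return loss_emb.item(), loss_ext.item()
```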
The values of λ 1 and λ 2 are to balance the invisibility and robustness for embedder, while the value of λ 3 is to balance the training speed of the embedder and the extractor with the weight decay rate. All three values are correlated that we have experimented for the large ranges of values for them. With the hyper-parameters in Table 3, the training took about four days with the BOSS dataset. Performance Assessment Metrics For the quantitative evaluation of invisibility, the peak-signal-to-noise-ratio (PSNR) of Equation (5) has been used primarily in the previous works, and it is thus used in this study, too. As previously mentioned, the pixel value of the WM embedder's output is ranged to [−1. 1] because of normalization. It is converted to an integer in the range of [0, 255] as a final watermarked image used for the invisibility evaluation. Robustness is evaluated by the bit error ratio (BER) of the extracted WM information. Because the WM information comprises binary images, the original and extracted, WM information's pixel value is measured as one if they are the same, and 0 if they are different. The resulting values are averaged for the number of pixels. It is shown in Equation (6). Besides, the WM capacity is shown in Equation (7), which is the ratio of the WM resolution to the host image resolution. In this equation, the resolution of the WM and the host image is (X, Y) and (M, N), respectively. In this study, the WM capacity was fixed at 0.0039, without the loss of generality and practicality. Invisibility of Watermarked Image When s = 1, the watermarked image's average PSNR to the original host image from the training result showed 43.23 dB. We also applied the trained weight set to the evaluation dataset, the result of which showed the PSNR range of [37.46 dB, 42.13 dB]. The average was 40.58 dB. Figure 3 shows three example pairs of the host image (a), watermarked image (b), and the 100 times magnified difference image (c) from the test dataset. They are the ones showing the lowest (1st row), middle (2nd row), and the highest (3rd row) invisibility, respectively. As you can see, it is not easy to distinguish the original image and the watermarked image with the naked eyes, even for one with the lowest invisibility. Therefore, it can be said that the invisibility of our scheme is very high. Robustness for Various Attacks With the trained weight set, robustness experiments were conducted on various types and strengths of attacks on the evaluation dataset. Figure 4 shows some examples of the attacked images. The purpose of the attack is to use the image without ownership by malicious or non-malicious weakening or removing WM. However, as you can see from the figures, some attacks damage the image too much to reuse that those attacks are entirely meaningless. However, we included them to compare with previous works that included them. The experimental robustness results are shown in Table 4 (right now, the first column of the three BER columns), in which the BER values are the average values for all the images in the evaluation dataset. Note that the values in Table 4 are when s = 1. As you can see in Table 4, the BER values tend to increase as the attack strength increases for each kind of attack. Note that the rotation attack disturbs the image information most at the 45 degrees. So the BER increases as the rotation angle increases, but after 45 degrees, it decreases as the angle increases more. 
It means that the proposed network was trained well without overfitting to a specific kind of strength. As values in the table, most pixel-value change attacks showed high robustness as less BERs than 10% except Gaussian filtering attacks with 7 × 7 and higher filters, Gaussian noise attacks with σ larger than 0.08, and JPEG attack with higher compression than quality 40. Especially, it showed strong robustness for the salt-and-pepper noise addition attacks. For the geometric attacks, it is quite vital for the rotation attacks but shows high BERs against more than 50% of crop and cropout attacks and higher dropout attacks of 30%. However, those attacks for which the proposed scheme shows higher than 10% of BER are potent attacks that damage the host image a lot. Therefore, we believe the proposed scheme would be robust enough for meaningful attacks. For reference, Figure 5 shows some examples of the extracted watermarks according to their BERs. From the figures, it is quite clear that the extracted WM with a higher BER value than 10% cannot guarantee to protect the host image's intellectual property rights. WM Adaptability As mentioned before, a watermarking scheme's capability to accommodate any WM data is essential because any user can use the scheme with his WM data, even though some of the previous works do not have this WM adaptability [10,12]. Our scheme makes it possible by using newly generated random data as the WM information for every mini-batch in training. We have verified this adaptability of our scheme by experimenting with various WM information. Table 4 shows two examples of the results. The one marked as 'Random (average)' means the average values for all the WM data used, while WM2 and WM3 are the two examples of the case using two specific WM data described in the table. From those columns' values, it is confirmed that our scheme applies to any WM data without losing similar robustness. Host Image Adaptability Because the proposed method does not use any layers that are dependent on the host image's resolution, such as the FC layer, it is adaptable to the resolution of the host image. The WM invisibility and the robustness against attack were evaluated by changing the host image's resolution from 64 × 64 to 512 × 512, as shown in Table 5. Here, we fixed the WM capacity to about 0.0039. Note that the network has been trained with 128 × 128 host images and 8 × 8 WM data. As shown in Table 5, the WM invisibility increased as the host image resolution increased. Figure 6 shows the results from robustness experiments as graphs, in which each graph includes the results for one kind of attack with the different attack strengths and the different resolution of the host images. Note that the legends in other graphs for the resolution are the same as (a). According to Figure 6, the robustness decreases as the resolution increases for the most pixel-value change attacks, except the Gaussian noise addition and the high-compression JPEG attacks. Those two attacks showed not much difference in robustness for different resolutions and did not follow the tendency. For most of the geometric attacks, the robustness tends to decrease as the resolution increases, but the dropout attack showed increasing robustness as the resolution increases. Even in the cases that the robustness decreases as the resolution increases, the proportion was not large, or the increased BER values are not so high. That is, the reduced robustness by resolution change can still be regarded as high robustness. 
Therefore, we can conclude that the proposed method applies to various resolutions of the host image. In particular, for most pixel-value change attacks the proposed scheme suits the trend toward higher resolutions, since its robustness mostly increases as the resolution increases. Invisibility-Robustness Controllability Invisibility-robustness controllability, i.e., the ability to control the complementary relationship between these two characteristics, is required to match the user's needs when operating a watermarking system: a more robust scheme at the cost of invisibility, or a more invisible scheme at the cost of robustness. In our scheme, the strength scaling factor s controls this complementary relationship. When invisibility is more important than robustness, s is set to a lower value; when higher robustness is needed, s is set to a higher value. Controllability, invisibility, and robustness for the various attacks were measured while increasing s from 0.5 to 2, and the results are shown in Figure 7, which includes the invisibility change in (a) and the robustness changes for all considered kinds and strengths of attacks in (b) to (m). As shown in Figure 7a, the WM invisibility decreases as s increases, as expected. For each attack, the robustness increases consistently as s increases while maintaining the performance tendency with respect to the attack strength. This shows that the proposed scheme has a firm capability to control the trade-off between invisibility and robustness. Comparison with State-of-the-Art Methods The performance of the proposed scheme is compared with state-of-the-art methods (HiDDeN [10], ReDMark [11], and [15]). For a fair comparison, we adjusted s so that the PSNR was similar to that of the other methods. Because ReDMark [11] reported precise numerical results, we first compare it with ours separately for various attacks, adjusting the PSNR to 40.58 dB by setting s = 1. The results are shown in Table 6; the kinds and strengths of attacks are taken from [11]. From the values in this table, it is clear that the proposed method performs better for all attacks except the Gaussian noise addition attacks. The other state-of-the-art methods did not report clear numerical data, so we use the data presented in [15], which compares [15] with [10] and [11] for a specific set of attacks. The comparison results, together with the training and test datasets used, are shown in Table 7. For this comparison, s was set to 2.75 for our scheme to adjust the PSNR to 33.5 dB. The proposed method showed excellent results compared with [10] and [11], except for the crop (0.035) attack. Compared with [15], however, our method showed better results only for the JPEG compression attack. The crop (0.035) attack uses only 3.5% of the watermarked image to extract the WM, which is unrealistic because 3.5% of an image is not useful. Also, the method of [15] used the same kinds and strengths of attacks in training as in evaluation; that is, only the trained attacks were evaluated, so it cannot guarantee results for other kinds or strengths of attacks and cannot be said to have good robustness in a real application. These comparisons show that our scheme has high utility, since it demonstrates exemplary performance for all attacks except the Gaussian noise addition attack and the high-strength crop attack, both of which also result in valueless images.
Conclusions In this paper, we proposed a digital image watermarking method using a CNN that does not limit the resolution of the host image or the WM information. The method adjusts the complementary relationship between invisibility and robustness using the strength factor. The pre-processing network for the watermark increases the WM's resolution to that of the host image for WM invisibility. The embedding network consists of CNNs that maintain this resolution to output the watermarked image, and the extraction network also consists of CNNs that output the WM information by reducing the resolution. For robustness, attack simulation maintaining the same attack distribution in each mini-batch was used during training. The network is composed of simple CNNs and does not use any resolution-dependent layer, such as an FC layer; it is therefore adaptive to the resolution of the input image. It is also independent of the WM information because newly and randomly generated WM information is used for each mini-batch in training. Invisibility and robustness were measured for various pixel-value change attacks and geometric attacks, for various WM information, and for various host image resolutions. The results showed excellent performance overall and better performance for meaningful attacks in comparison with state-of-the-art works, so the scheme has proven to be practical and universal. In addition, by adjusting the strength factor, we confirmed that our scheme can effectively control the complementary relationship between invisibility and robustness. We therefore believe the proposed method is a beneficial watermarking scheme for digital images, because it enables the embedding and extraction of WMs without restrictions on the host image or the WM information, and without any additional training. Its usefulness can be further improved by adequately controlling invisibility and robustness to obtain the performance required by the user.
Automated image segmentation method to analyse skeletal muscle cross section in exercise-induced regenerating myofibers Skeletal muscle is an adaptive tissue with the ability to regenerate in response to exercise training. Cross-sectional area (CSA) quantification, as a main parameter to assess muscle regeneration capability, is highly tedious and time-consuming, necessitating an accurate and automated approach to analysis. Although several excellent programs are available to automate analysis of muscle histology, they fail to efficiently and accurately measure CSA in regenerating myofibers in response to exercise training. Here, we have developed a novel fully-automated image segmentation method based on neutrosophic set algorithms to analyse whole skeletal muscle cross sections in exercise-induced regenerating myofibers, referred as MyoView, designed to obtain accurate fiber size and distribution measurements. MyoView provides relatively efficient, accurate, and reliable measurements for CSA quantification and detecting different myofibers, myonuclei and satellite cells in response to the post-exercise regenerating process. We showed that MyoView is comparable with manual quantification. We also showed that MyoView is more accurate and efficient to measure CSA in post-exercise regenerating myofibers as compared with Open-CSAM, MuscleJ, SMASH and MyoVision. Furthermore, we demonstrated that to obtain an accurate CSA quantification of exercise-induced regenerating myofibers, whole muscle cross-section analysis is an essential part, especially for the measurement of different fiber-types. We present MyoView as a new tool to quantify CSA, myonuclei and satellite cells in skeletal muscle from any experimental condition including exercise-induced regenerating myofibers. www.nature.com/scientificreports/ Most of current available softwares were developed to measure myofiber CSA in normal muscle or under conditions targeting muscle regeneration including synergist ablation or cardiotoxin injection 4,8 . While, these strategies induce prominent regenerating capability, there are questions about their physiological relevance due to invasive nature and the potential to damage the skeletal muscle 9 . In addition, the shape of the cells in normal muscle is characterized by polygonal and angular myofibers with keeping their contact with each other, while in regenerating myofibers they are round-shaped, highly variable in size, and smallest ones do not regularly contact surrounding fibers. Moreover, image acquisition and reconstitution of different multiple subsets of the whole muscle may expose the overall results to bias. Additionally, while fluorescent microscopy systems are vital tool for muscle biology research, they require significant manual optimization and continuous human supervision. Further, quantifying myonuclear number by microscopy methods is difficult because skeletal muscle is heterogeneous and the brightness/contrast for each image should be adjusted and raises the possibility of performing image post-processing prior to image analysis. Moreover, the variation of myonuclei intensity within the same image also complicates the automatic microscopy methods, causing over-segmentation during the myonuclei detection 10 . We therefore sought to develop a fully-automated software to quantify CSA, myonuclei and satellite cells in exercise-induced regenerating myofibers. 
The proposed method handles noise, moving artifact, heterogeneous and brightness/contrast variations in neutrosophic indeterminacy set. We utilized a high intensity interval training (HIIT) protocol which led to progressive hypertrophy thereby inducing muscle regeneration machinery. Here, we present a fully-automated CSA quantification method for skeletal muscle images applicable to any type of muscle and under exercise-induced regenerating muscle condition. The proposed method; named as MyoView; is based on neutrosophic set algorithms designed to automatically quantify CSA, myonuclei and satellite cells on immunofluorescent picture of the whole skeletal muscle section. In addition, it allows the analysis of the CSAs of different myofibers on the whole muscle cross-section, which we show here to be essential to obtain an accurate CSA quantification. Methods Mice and muscle tissue preparation. All experiments involving animals were performed in accordance with approved guidelines and ethical approval from Lorestan University's Institutional Animal Care and Use Committee (as registered under the code: LU.ECRA.2017. 12). Further, the present study was carried out in compliance with the ARRIVE guidelines. C57BL/6 J (n = 18) and mdx (n = 3) mice were purchased from Lorestan University of Medical Sciences Laboratories. At the end of the treatment periods, all mice were anesthetized with inhalation of isoflurane. Gastrocnemius muscles from 16-to 18-week-old C57BL/6 mature mice and mdx mice were dissected in optimal cutting temperature (OCT) medium, mounted on pieces of cork, secured with tragacanth gum, frozen in liquid nitrogen-cooled isopentane and stored at − 80 °C. Moreover, samples from regenerating muscles were provided at several time points after exercise training program (day 28 and day 56). Muscle samples were frozen in isopentane cooled by liquid nitrogen and further stored at − 80 °C. 10 µm-thick cryosections were prepared and processed for immunostaining and used to test the program's ability to recognize variability in myofiber morphology. High-intensity interval training (HIIT) protocol. First, mice were acclimated on the treadmill (5 day/ week, 10 m/min for 10 min with no incline) and then subjected to HIIT program for 8 weeks (3 sessions/week) 11 . Each training session consisted of a warm-up stage (5 min at 10 m/min), eight exercise intervals at the prescribed speed and angle of inclination for 3-5 min, and a 1 min rest interval at 10 m/min was considered between each interval. The angle of inclination was gradually increased from 10° in the first week to 15° in the second week, 20° in the third week, 25° in the fourth week, and it was maintained at 25° from weeks 4 to 8. The treadmill speed was maintained consistent (15 m/min) for the first 4 weeks and from weeks 5-8 was gradually increased by 1-2 m/ min weekly (Model T510E, Diagnostic and Research, Taoyuan, Taiwan) 9 . Immunofluorescent staining. Immunohistochemical procedures were carried out according to our previous studies 12,13 . In summary, for fiber typing, Sects. (10 µm-thick) were incubated with antibodies specific to myosin heavy chain (MyHC) types I, IIa, and IIb (BA-D5, SC-71, and BF-F3, respectively, University of Iowa Developmental Studies Hybridoma Bank, Iowa City, IA), supplemented with rabbit polyclonal anti-laminin antibody (L9393; Sigma-Aldrich, St. Louis, MO). MyHC IIx expression was judged from unstained myofibers. 
Secondary antibodies coupled to Alexa Fluor 405, 488 and 546 were used to detect MyHC types I, IIa, and IIb, respectively (Molecular Probes, Thermo Fisher Scientific, Waltham, MA, USA). Moreover, laminin (L9393 Sigma-Aldrich, St. Louis, MO, USA) and Pax7 (Developmental Studies Hybridoma Bank, Iowa, IA, USA) were used to detect cell border and satellite cells, respectively. Anti-rabbit IgG Cy3 and Cy5-labeled secondary antibodies (Jackson Immunoresearch Labs, West Grove, PA, USA) were used to detect laminin and Pax7. Image acquisition and quantification. All images were captured at × 10 magnification using a Carl Zeiss AxioImager fluorescent microscope (Carl Zeiss, Jena, Germany). Consecutive fields from whole muscle sections were automatically acquired in multiple channels using the mosaic function in Image M1 Software (version 4.9.1.0, RRID:SCR_002677). Development of MyoView. MyoView has been implemented in MATLAB 2017b on a machine with 2.26 GHz Corei7 CPU and 8 GB of RAM. It is very fast, simple and efficient with low time complexity to analyze skeletal muscle cross sections. The primary version of source codes is undergoing verification to be publicly available with MIT license at Code Ocean platform in https:// codeo cean. com/ capsu le/ 49100 24/ tree which gen-Statistical analyses. Reported data represent mean ± S.E.M. Statistical analysis was performed using the Graph-Pad Prism statistics software (Graph-Pad Software Inc., San Diego, La Jolla CA, USA free demo version 5.04, www. graph pad. com). One-way ANOVA followed by Tukey's post hoc test was performed for inter-user reliability comparisons. Paired, two-tailed Student's t-tests were performed for comparing MyoView with manual quantification data. Repeated-measures two-way ANOVA followed by Bonferroni multiple-comparisons tests were performed for CSA changes with HIIT program and fiber counting accuracy and efficiency measurements. Spearman correlation coefficient was computed to assess the correlation analyses. Results Proposed cell segmentation models. Model cell images in neutrosophic sets and neutrosophic images. Interactions between neutralities as well as their scope and nature are modeled in neutrosophy as a branch of philosophy. Neutrosophic logic and neutrosophic set (NS) stem from neutrosophy. Suppose that N is a universal set in the neutrosophic domain and a set X is included in N. Each member x in X is described with three real standard or nonstandard subsets of [0, 1] named as True(T), Indeterminacy(I), and False(F) which have these properties: Sup_T = t_sup, inf_T = t_inf, Sup_I = i_sup, inf_I = i_inf, Sup_F = f_sup, inf_F = f_inf, n-sup = t_ sup + i_sup + f_sup and n-inf = t_inf + i_inf + f_inf. Therefore, element x in set X is expressed as x(t,i,f), where t, i and f varies in T, I and F respectively. x(t,i,f) could be interpreted as it is t% true, i% indeterminacy, and f% false that x belongs to A, T, I and F could be considered as membership sets 7 . NS can be used in image processing domain. The main contribution of the proposed NS segmentation method is to separate, count and compute sum area of blue, green, black, and red cells in skeletal muscle cross sections. For this task, an image is transformed into the neutrosophic domain. The method of transformation is completely depending on the image processing application. In cell segmentation, image C with the dimension of m × n and L gray levels and k channels are considered. 
Here images with 3 channels Red, Green and Blue (RGB), each channel with the dimension of 5751 × 7600 for each channel and 256 Gy levels are used for automated segmentation. Since all neutrosophic sets are in the range of [0 1], in the first step, C is normalized to interval [0 1] as follows: where C min (k) and C max (k) represent minimum and maximum values of pixels in cell image C in channel k, respectively. C is mapped into three sets T (true subset), I (indeterminate subset) and F (false subset). Therefore, the pixel p(i,j) in C is transformed into PNS(i, j) = {T(i, j), I(i, j), F(i, j)}) or PNS (t, i, f) in neutrosophic domain. T, I and F are dedicatedly defined for each type of cells. www.nature.com/scientificreports/ where k is channel number which can be 1, 2 or 3. min and max indexes are minimum and maximum values in the whole matrix, respectively. Matrix τ k computes eligibility of pixels to be assigned to cell k, is based on normalized value of pixels in channel k associated with this cell and inverse values in other channels with respect to maximum value 1. True component T is achieved by τ normalization. If a pixel has a high value in channel k and low values in other channels simultaneously, a high percent is assigned to this pixel to be a member of cell index k. Indeterminacy matrix is calculated by difference of pixels in channel k from mean of local neighbor pixels in this channel. Therefore, pixels close to local mean of a channel receive low indeterminacy, means a high confidence of assignment is considered for those pixels. Therefore, in binarized version of True and False sets, boundary and cell pixels are illustrated in light and dark regions, respectively (Fig. 1C). In the next step, true and false sets are converted to each other to place cell pixels in true set as shown in Fig. 1D. In error correction steps, small regions and holes are corrected in Fig. 1E. Finally, www.nature.com/scientificreports/ true set T is placed in input image and boundaries are illustrated with blue color for better visualization as shown in Fig. 1F. Binarized version of detected all cell types in the whole image is depicted in Fig. 1G,H. In this step, all connected components are found by iteratively 8-neighbor correlated pixels. Components are counted, area of each component is calculated, then number of components, sum and average of areas are reported as outputs. Cell counting and area computation for color cells. For red, green and blue cells with k indexes of 1,2 and 3, respectively, PNS (t, i, f) means that this pixel is %t percent true to be a member of cell with index k, confidence of this decision is %i and %f percent true that this pixel does not belong to cell k. T, I and F for cell index k are computed as follows: For black cells, Eq. (9) for τ k computation is rewritten as: It can be interpreted by this fact that: the lower values of a pixel in all channels, the higher membership degree to black cells is assigned. Consider input image shown in Fig. 2A to apply the proposed segmentation method. For better visualization of details, a region of interest (ROI) is selected and illustrated in Fig. 2B. For each pixel in neutrosophic domain, two conditions are considered to assign high membership degree for that pixel to cell index k. The first one is high value of True matrix and the second one is low indeterminacy, means there is a high confidence to decide that pixel has a high membership degree to True set T. 
These conditions are combined with "AND" relation by pixelwise product of True and Indeterminacy sets as follows: The result of matrix M; still in neuromorphic domain; for blue cells (k = 2) is shown in Fig. 2C. It is clear that pixels in blue cells have higher membership degrees (in lighter gray levels) in comparison with pixels in other cells (darker pixels). Matrix M in neuromorphic domain is binarized with a strict threshold as shown in Fig. 2D. Error correction. In binarization process of image M in neutrosophic domain, some extra regions are appeared which are incorrectly assigned to blue regions (Fig. 2D). Therefore, these errors should be corrected. Error correction is done automatically. Connected components for true image T in neutrosophic domain are found iteratively by connecting all 8-neighbor pixels in Supplementary Fig. S1 and labeling connected pixels upon there is no unlabeled pixel. Average area of all components is computed and small components under 20% of average area are ignored as shown in Fig. 2E. Pixels inside blue cells are located inside a distribution of blue color in channel 3 with a mean and standard deviation. Blue pixels close to the mean of this distribution are strongly assigned to blue cells since high values of T and I matrixes leads to a high value of M for such pixels. Pixels which are far from the mean of distribution are weakly assigned to blue class since their indeterminacy I is high and their true membership is low. Therefore, their values in matrix M are low. It is worth mentioning that such pixels although have low membership degrees blue cells, they are located inside blue regions and should be assigned to blue cells. These errors are corrected by filling holes inside connected components as illustrated in Fig. 2F. www.nature.com/scientificreports/ MyoView is a reliable software for measuring CSA in response to the post-exercise regenerating situation. In order to test the reliability of MyoView, its performance was compared with some other common software including: Open-CSAM, MuscleJ, SMASH, and MyoVision (Fig. 3). We analyzed gastrocnemius muscle from various conditions, including normal muscle, regenerating muscles at several time points after exercise training (D28 and D56) in mature mice, and a model of fibrotic dystrophy (mdx) using anti-laminin antibody. Open-CSAM produced significantly lower mean CSA values as compared with manual quantification on D0, D28 and D56 (Fig. 3A, -6.6, -7.3 and -10.9%, respectively). Mean CSA values obtained with MuscleJ were similar to the manual quantification for normal muscles in D0. However, it gave higher mean CSA values in D28 and D56 post-exercise regenerating muscles (Fig. 3A, between 4.5% to 5.9% of increment). Mean CSA values obtained with SMASH were very close to the manual quantification in normal muscles in D0. However, SMASH produced higher mean CSA values in D28 and D56 post-exercise regenerating muscles (+ 5.8% and + 9.9%, respectively). MyoVision produced similar CSA values to the manual quantification for normal muscles in D0. However, it gave higher mean CSA values in D28 and D56 post-exercise regenerating muscles as compared with manual quantification (Fig. 3A, + 9.1% and + 8%, respectively). On the other hand, MyoView gave mean CSA values close to those obtained manually in D0, D28 and D56, with a very slight underestimation (Fig. 3A, -1.8%, -1.4% and -1.3%, respectively). 
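Before turning to the validation results, the segmentation steps described above can be summarised in a short NumPy/SciPy sketch. This is only a schematic reading of the method, not the authors' MATLAB implementation: the channel-wise normalization, the local-mean indeterminacy, the 20% small-component cut-off, 8-connectivity, and hole filling follow the text, but the local window size, the binarization threshold, the exact form of τ, and the use of T·(1 − I) for matrix M (the text is ambiguous about whether the product uses I or its complement) are assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_channel(img, k, thresh=0.6, win=15):
    """Schematic neutrosophic-style segmentation of the cells dominated by channel k.
    img: float RGB image of shape (H, W, 3); returns a cleaned binary mask and stats."""
    # Normalize each channel to [0, 1].
    cmin, cmax = img.min(axis=(0, 1)), img.max(axis=(0, 1))
    c = (img - cmin) / (cmax - cmin + 1e-12)

    # True set T: high value in channel k combined with low values in the other channels.
    others = [i for i in range(3) if i != k]
    tau = c[..., k] * np.prod(1.0 - c[..., others], axis=-1)
    T = (tau - tau.min()) / (tau.max() - tau.min() + 1e-12)

    # Indeterminacy I: distance of each pixel from the local mean of channel k
    # (pixels close to the local mean get low indeterminacy, i.e. high confidence).
    local_mean = ndimage.uniform_filter(c[..., k], size=win)
    I = np.abs(c[..., k] - local_mean)
    I = (I - I.min()) / (I.max() - I.min() + 1e-12)

    # "High truth AND high confidence" via a pixelwise product, then a strict threshold.
    M = T * (1.0 - I)
    mask = M > thresh

    # Error correction: drop connected components smaller than 20% of the mean
    # component area (8-connectivity), then fill holes inside the remaining cells.
    labels, n = ndimage.label(mask, structure=np.ones((3, 3)))
    if n:
        areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
        keep = np.where(areas >= 0.2 * areas.mean())[0] + 1
        mask = np.isin(labels, keep)
    mask = ndimage.binary_fill_holes(mask)

    # Outputs of interest: component count, total area, and mean area.
    labels, n = ndimage.label(mask, structure=np.ones((3, 3)))
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1)) if n else np.array([])
    return mask, n, float(areas.sum()), float(areas.mean()) if n else 0.0
```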
In the case of mdx muscle, all of the softwares produced similar CSA values to the manual quantification except MuscleJ and MyoVision which they produced higher values (+ 4.8% and + 9.1%, respectively). Despite an increased CSA values obtained using MuscleJ, SMASH, and MyoVision and decreased values for Open-CSAM, the correlation between them and manual quantifications were strong in normal muscles in D0, as well as in days 28 and 56 post-exercise regenerating muscles (Fig. 3B, R 2 > 0.85), suggesting that in these conditions, CSA overestimation by MuscleJ, SMASH, and MyoVision as well as CSA underestimation by Open-CSAM were similar to all the pictures and did not introduce a specific bias. Overall correlation between Open-CSAM and manual quantification was very strong in normal muscles in D0 (Fig. 3B, R 2 = 0.9854). Although this correlation was lower in days 28 and 56 post-exercise regenerating muscles as well as on fibrotic muscles (R 2 = 0.9047, R 2 = 0.9180, and R 2 = 0.9429, respectively). Similarly, overall correlation between MuscleJ, SMASH, and MyoVision and manual quantification were very strong in normal muscles in D0 (Fig. 3B, R 2 = 0.9707, R 2 = 0.9978, and R 2 = 0.9755, respectively). Although these correlations were lower in D28 (Fig. 3B, R 2 = 0.9536, R 2 = 0.8959, and R 2 = 0.9390, respectively) and D56 post-exercise regenerating muscles (Fig. 3B, R 2 = 0.8605, R 2 = 0.8570, and R 2 = 0.9435, respectively) as well as on fibrotic muscles (R 2 = 0.8872, R 2 = 0.9626, and R 2 = 0.9368, respectively). Although, the correlation between MyoView and manual quantification was very strong in normal muscles in D0 (Fig. 3B, R 2 = 0.9799), there was no difference between this correlation and the corresponding values for other software. However, the correlation between MyoView and manual quantification was better than Open-CSAM, MuscleJ, SMASH, and MyoVision in 28 and 56 days' post-exercise regenerating muscles. This suggests that MyoView performance was better in response to the post-exercise regenerating process. We next assessed myofiber hypertrophy at 28 and 56 days after HIIT in gastrocnemius skeletal muscle using MyoView. Quantification of myofiber cross-sectional area (CSA) showed a significant increase at 56 days after HIIT but not at 28 days' point (Fig. 3A). These results indicate that MyoView is a reliable software for measuring CSA in response to the post-exercise regenerating situation. MyoView is an efficient and accurate software for measuring CSA. In order to examine MyoView efficiency in detecting myofibers and the time spent on CSA analysis as well as its accuracy, we next compared MyoView performance with manual quantification, Open-CSAM, MuscleJ, SMASH, and MyoVision softwares (Fig. 4). Figure 4B shows that there was no difference between the number of fibers counted by MyoView and the number counted by manual quantification, which corresponds to an accuracy of 98.1% ± 0.9 (Fig. 4D). In contrast, Open-CSAM, MuscleJ, SMASH, and MyoVision identified lower myofibers and spent much more time to analyse CSA from various experimental conditions (Fig. 4C, P > 0.001). Furthermore, our results indicate that HIIT program has not changed the number of fibers in gastrocnemius skeletal muscle (Fig. 4B). Moreover, as compared with manual quantification, the accuracy of MyoView in analysing CSA was 98.2% ± 1.4, (Fig. 4E), while Open-CSAM, MuscleJ, SMASH, and MyoVision have been less accurate in analysing CSA (P > 0.001). 
Taken together, these results suggest that MyoView is an efficient and accurate software for detecting myofibers and measuring CSA in response to the post-exercise regeneration process. MyoView performance in different fiber-types is comparable to manual quantification. We next wanted to determine how does effective MyoView work as a tool for analysing different myofiber size and type in entire cross-section of gastrocnemius muscle. Three experienced researchers used the free hand tool in Fiji to encircle individual myofibers from six images from 16-to 18-week-old C57BL/6 mice in 56 days (D56) post-HIIT program to obtain CSA values. We then ran the same images through the MyoView program and obtained a distribution of CSA across the images. The mean CSAs and distributions did not differ significantly between manual and MyoView analysis (Fig. 5A,B). Next, we tested the accuracy of fiber typing using MyoView. Fiber type analysis was manually performed by 3 experienced researchers on six images (2 images per person). We then used MyoView to obtain mean data for blue, green, black, and red channels across these six images for fiber typing. The relative proportion of each fiber type was strongly correlated between MyoView and manual analysis (R 2 > 0.97) (Fig. 5C,D). Additionally, MyoView fiber type classification results in CSA were linearly and positively correlated to manual counts (R 2 > 0.98), and there was no statistically significant difference between the CSAs of each fiber type measured by hand and by MyoView. The accuracy of MyoView fiber type analysis is estimated to be 98.5 ± 0.7% compared with manual quantification. Moreover, fiber-type analysis using Myo-View showed that HIIT program in D28 and D56, changes the characteristics of myofiber toward faster type IIb myofibers along with increasing their size (Fig. 5E,F www.nature.com/scientificreports/ whole muscle cross-section, MyHC I, IIa, IIx, and IIb fibers were similar among the five users (Fig. 6). This further demonstrates the reliability of the image outputs of the analyses taken by MyoView to analyses the CSAs of whole muscle cross-section and different fiber-types in regenerating myofibers in response to HIIT program. Whole muscle cross-section analysis for fiber type determination is essential for best accuracy. CSA determination of the of various fiber types is usually performed on a subset of images randomly taken throughout the muscle section. Depending on the researchers' decision, a variety of different number of images and thus myofibers can be qualified for fiber type analysis. This may expose the evaluation process to the possibility of selection bias as myofiber size is quite heterogeneous through the whole muscle cross-section. Figure 7A shows an example of an entire reconstituted muscle picture. We measured CSAs of different myofibers on individual images from 5 mice on 56 days post-HIIT program, calculated the mean CSAs on 12 subsets of images, and compared the results with the CSAs obtained on the whole muscle section by MyoView. When the measurement was made only using a 12 subset of pictures, there was no significant different in fiber type distribution as compared with whole muscle cross-section analysis (Fig. 7B, P > 0.8). Given that about eighty percent of gastrocnemius muscle fibers are type IIb, measuring fewer number of these fibers on 12 subsets of pictures www.nature.com/scientificreports/ ( Fig. 
Given that about eighty percent of gastrocnemius muscle fibers are type IIb, measuring a smaller number of these fibers on 12 subsets of pictures (Fig. 7C, P = 0.015) led to an underestimation of the CSA of IIb fibers as compared with whole muscle cross-section analysis (Fig. 7D, P > 0.8). Moreover, compared with whole muscle cross-section analysis, a significantly reduced CSA of type IIx fibers was observed in the analysis of 12 subsets of pictures (Fig. 7D, P = 0.04). Additionally, we observed that, compared with whole muscle cross-section analysis, the mean CSA was underestimated when 12 subsets of pictures were measured (Fig. 7C, P = 0.015). Taken together, these results indicate that the whole muscle cross-section should be analyzed when measuring CSA of exercise-induced regenerating muscle in order to obtain unbiased data. MyoView performance in myonuclei and satellite cell detection is comparable to manual quantification. To extend MyoView to other features of skeletal muscle regeneration, we next added the capability to detect myonuclei and satellite cells. Results of MyoView in myonuclei and satellite cell detection are shown in Fig. 8. Here, it is shown how MyoView works as a tool to detect myonuclear number in laminin immunofluorescence demarcating the sarcolemma together with DAPI-stained nuclear DNA. Figure 8Q shows the results from MyoView in myonuclei detection, where the myonuclei are indicated by white star symbols. We only counted myonuclei whose centroid and more than 50% of area were located inside the sarcolemma 8. Three experienced researchers used the 3D manager plugin of ImageJ 14 to encircle individual myofibers from six images from 16- to 18-week-old C57BL/6 mice at D56 after HIIT to obtain myonuclear numbers. We then ran the same images through the MyoView software. Next, we tested the accuracy of myonuclei detection by comparing the results of MyoView with manual quantification. The myonuclear number was strongly correlated between MyoView and manual analysis (R2 = 0.9941) (Fig. 9A). The mean myonuclear number did not differ significantly between manual quantification and MyoView analysis at D56 after HIIT (Fig. 9B). We next assessed the magnitude and timing of myonuclear accretion at various stages of HIIT in order to reveal the mechanisms leading to skeletal muscle regeneration. We assessed myonuclear number on cryosections from the gastrocnemius by evaluating the number of nuclei within laminin-stained myofibers using MyoView. This analysis also revealed an increase in myonuclei from sedentary mice to mice subjected to 28 and 56 days of HIIT (Fig. 9C). These data demonstrate that there is a continuous increase in myonuclei throughout the exercise regimen in gastrocnemius regenerating myofibers. Taken together, the results from this part indicate that MyoView performance in myonuclei detection is comparable with manual quantification in regenerating myofibers in response to the HIIT program. Finally, we assessed the number of satellite cells to determine the cause of the elevated myonuclear number in regenerating gastrocnemius muscle. Satellite cell content was strongly correlated between MyoView and manual analysis (R2 = 0.9622) (Fig. 9D). Moreover, we compared the results of MyoView with manual quantification for satellite cell detection at D56 after HIIT. The mean satellite cell content did not differ significantly between manual quantification and MyoView analysis at D56 after HIIT (Fig. 9E). Further, HIIT was accompanied by an elevated number of Pax7-positive cells at both 28 and 56 days post-training (Fig. 9F).
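The myonucleus rule stated above (centroid plus more than 50% of the nuclear area inside the sarcolemma) is easy to express as a mask operation. The sketch below is a generic implementation of that counting rule on binary masks, not MyoView's internal code; the parameter names and the synthetic example are assumptions.

```python
import numpy as np
from scipy import ndimage

def count_myonuclei(fiber_mask, nuclei_mask, min_inside_fraction=0.5):
    """Count nuclei whose centroid lies inside a fiber mask and whose overlap
    with the fiber exceeds `min_inside_fraction`, following the >50%-area rule
    described in the text. Both masks are boolean 2-D arrays of equal shape."""
    labels, n = ndimage.label(nuclei_mask)
    count = 0
    for lab in range(1, n + 1):
        nucleus = labels == lab
        cy, cx = ndimage.center_of_mass(nucleus)
        centroid_inside = fiber_mask[int(round(cy)), int(round(cx))]
        inside_fraction = np.logical_and(nucleus, fiber_mask).sum() / nucleus.sum()
        if centroid_inside and inside_fraction > min_inside_fraction:
            count += 1
    return count

# Synthetic illustration: one nucleus fully inside the fiber, one mostly outside.
fiber = np.zeros((20, 20), dtype=bool); fiber[2:12, 2:12] = True
nuclei = np.zeros_like(fiber); nuclei[4:7, 4:7] = True; nuclei[10:16, 10:16] = True
print(count_myonuclei(fiber, nuclei))  # -> 1
```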
Taken together, these results suggest that skeletal muscle responds to HIIT by increasing cell size, satellite cell content and myonuclear accretion, and that MyoView is a powerful software to detect these changes in regenerating myofibers. Discussion Skeletal muscle fibers are extremely sensitive to exercise training stimuli, with individual myofibers capable of increasing in size 9,12. Due to this adaptive characteristic, exercise physiologists have long acknowledged the importance of accurately quantifying muscle CSA in their experiments 8. However, there are various strategies for image analysis and CSA quantification, which can give highly heterogeneous results among different laboratories and teams. On the other hand, no automated program has been developed that provides the possibility of CSA quantification in exercise-induced regenerating myofibers, especially in the whole muscle cross-section. In the present study, we presented the MyoView software to automatically process immunofluorescence images of whole muscle cross-sections stained with laminin α2 and antibodies specific to MyHC types I, IIa, and IIb (BA-D5, SC-71, and BF-F3, respectively) in order to facilitate the determination of different individual myofibers at D0, D28 and D56 post-exercise training. The parallel comparison between MyoView and manual quantification showed that MyoView can provide relatively efficient, accurate, and reliable measurements for detecting different myofibers and measuring CSA in response to the post-exercise regeneration process. MyoView is based on neutrosophic set algorithms. It is a fully-automated method for color cell segmentation based on neutrosophic sets. To the best of our knowledge, this is the first method proposed for neutrosophic cell segmentation. The main benefit of using neutrosophic sets is that they have been applied in many applications, including segmentation of fluid/cyst regions in diabetic macular edema and exudative age-related macular degeneration patients [15][16][17], unsupervised color-texture image segmentation 18, automatic segmentation of the choroid layer in retinal images 19, and content-based image retrieval 20,21, and promising results were achieved. In this research, color cells have first been modeled as neutrosophic sets with three components, and then each component is used to increase the confidence of each pixel to its corresponding cell type. Therefore, a high-confidence assignment of pixels to cell regions is achieved. Several other semi- and fully-automated software packages have been developed [4][5][6][7][8][22][23][24][25][26][27][28][29]. Among them, we have attempted to test and compare Open-CSAM, MuscleJ, SMASH, and MyoVision with MyoView and found that MyoView is relatively easy to implement and more accurate for CSA quantification, especially in post-exercise regenerating muscles. We did not test all available software packages, as they are either not available online or require purchase. Open-CSAM, MuscleJ, SMASH, and MyoVision are well-designed software packages that act as free versions of commercially available image analysis tools, but they require varying amounts of manual correction to ensure accuracy, especially when it comes to analysing different fiber types across the muscle section. The primary goal of MyoView was to develop an accurate, fully-automated software for whole muscle cross-sections that is user friendly and requires minimal post-analysis corrections.
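The text above says each pixel is described by three neutrosophic components but does not give the formulas MyoView uses. The sketch below follows a commonly used image-domain construction (truth from a normalized local mean, indeterminacy from local variation, falsity as the complement) purely to illustrate the idea; every formula, window size, and threshold here is an assumption, not the published method.

```python
import numpy as np
from scipy import ndimage

def neutrosophic_components(channel, window=5):
    """Map one image channel into assumed neutrosophic components (T, I, F).

    T = normalized local mean, I = normalized local standard deviation,
    F = 1 - T. This is a generic construction for illustration; the actual
    components used inside MyoView are not specified in the text.
    """
    channel = channel.astype(float)
    local_mean = ndimage.uniform_filter(channel, size=window)
    local_sq_mean = ndimage.uniform_filter(channel**2, size=window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean**2, 0.0))

    T = (local_mean - local_mean.min()) / (np.ptp(local_mean) + 1e-12)
    I = (local_std - local_std.min()) / (np.ptp(local_std) + 1e-12)
    F = 1.0 - T
    return T, I, F

# Example: pixels with high truth and low indeterminacy in a stain channel can
# be treated as high-confidence "cell" pixels.
rng = np.random.default_rng(0)
img = rng.normal(50, 5, (64, 64)); img[20:40, 20:40] += 120
T, I, F = neutrosophic_components(img)
confident_cell = (T > 0.7) & (I < 0.3)
print(int(confident_cell.sum()))
```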
The accuracy of CSA quantification and the identification of different fiber types are enhanced by MyoView, especially in regenerating muscles responding to exercise training stimuli. There are several limitations to the current version of our MyoView software. First, the accuracy of CSA quantification depends on the quality of the immunostaining procedure. In this case, we recommend performing a new immunostaining rather than trying to analyze poor quality images. Second, MyoView does not allow the user to manually correct wrongly identified muscle fibers. However, our initial study with the 100 images showed that the MyoView error for myofiber identification was less than 3%, which does not appear to affect the conclusions of the CSA quantification. Finally, the current version of MyoView does not provide the possibility to count and analyze vessels and macrophages. Future improvements can be made to develop these functions in MyoView. We present a new fully-automated image analysis program, MyoView, for analysis of whole muscle cross-sectional area and fiber-type distribution in exercise-induced regenerating myofibers. MyoView allows rapid and accurate analysis of whole muscle cross-sectional immunofluorescence images. Additionally, MyoView rapidly identifies different myofibers based on the expression of myosin heavy chain isoforms in skeletal muscle from any experimental condition, including exercise-induced regenerating myofibers. Further, MyoView identifies myonuclei and satellite cells based on DAPI and PAX7 staining in immunofluorescence images.
7,037.8
2021-08-24T00:00:00.000
[ "Biology" ]
PIMR: Parallel and Integrated Matching for Raw Data With the trend of high-resolution imaging, computational costs of image matching have substantially increased. In order to find the compromise between accuracy and computation in real-time applications, we bring forward a fast and robust matching algorithm, named parallel and integrated matching for raw data (PIMR). This algorithm not only effectively utilizes the color information of raw data, but also designs a parallel and integrated framework to shorten the time-cost in the demosaicing stage. Experiments show that compared to existing state-of-the-art methods, the proposed algorithm yields a comparable recognition rate, while the total time-cost of imaging and matching is significantly reduced. Introduction Image matching is a crucial technique with many practical applications in computer vision, including panorama stitching [1], remote sensing [2], intelligent video surveillance [3] and pathological disease detection [4]. Hu's moment invariants as a shape feature has been widely used for image description due to its scaling and rotation invariance [5]. Wang et al. further proposed a two-step approach used for pathological brain detection by employing this feature combined with wavelet entropy [6]. Lowe's scale-invariant feature transform (SIFT) is a de facto standard for matching, on account of its excellent performance, which is invariant to a variety of common image transformations [7]. Bay's speeded up robust features (SURF) is another outstanding method performing approximately as well as SIFT with lower computational cost [8]. We proposed a lightweight approach with the name of region-restricted rapid keypoint registration (R 3 KR), which makes use of a 12-dimensional orientation descriptor and a two-stage strategy to further reduce the computational cost [9]. However, it is still computationally expensive for real-time applications. Recently, many efforts have been made to enhance the efficiency of matching by employing binary descriptors instead of floating-point ones. Binary robust independent elementary features (BRIEF) is a representative example which directly computes the descriptor bit-stream quite fast, based on simple intensity difference tests in a smoothed patch [10]. When combined with a fast keypoint detector, such as features from accelerated segment test (FAST) [11] or center surround extrema (CenSurE) [12], the method provides a better alternative for real-time applications. Despite the efficiency and robustness to image blur and illumination change, the approach is very sensitive to rotation and scale changes. Rublee et al. further proposed oriented FAST and rotated BRIEF (ORB) on the basis of BRIEF [13]. The approach acquires multi-scale FAST keypoints using a pyramid scheme, and computes the orientation of the keypoints utilizing intensity centroid [14]; thus, the descriptor is rotation-and scale-invariant. In addition, it also uses a learning method to obtain binary tests with lower correlation, so that the descriptor becomes more discriminative accordingly. Some researchers also try to increase the robustness of matching by improving the sampling pattern for descriptors. The binary robust invariant scalable keypoints (BRISK) method proposed by Leutenegger adopts a circular pattern with 60 sampling points, of which the long-distance pairs are used for computing the orientation and the short-distance ones for building descriptors [15]. 
Alahi's fast retina keypoint (FREAK) is another typical one leveraging a novel retina sampling pattern inspired by the human visual system [16]. Leutenegger uses a scale-space FAST-based detector in BRISK to cope with the scale invariance, which is employed by FREAK as well. However, several inherent problems remain in existing methods. The digital image which consists of 24 bit blue/green/red (BGR) data is color-interpolated, which is known as demosaicing from raw data, including adjustment for saturation, sharpness and contrast, and sometimes compression for transmission [17,18]. This operation leads to irreversible information loss and quality degeneration. Furthermore, up until now, conventional image matching has been implemented after demosaicing, and such sequential operation restricts its application to general tasks. To address these limitations, in this paper we introduce an ultra-fast and robust algorithm, coined parallel and integrated matching for raw data (PIMR). The approach takes raw data instead of a digital image as the object for analysis, which is efficient for preventing information from being tampered with artificially. It is crucial to obtain high-quality features with the result that the approach can achieve comparable precision. Meanwhile, a parallel and integrated framework is employed to accelerate the entire image matching, in which two cores are used to respectively process the matching and demosaicing stages in parallel. Our experiments demonstrate that the proposed method can acquire more robust matches in most cases, even though it is much less time-consuming than traditional sequential image matching algorithms, such as BRIEF, ORB, BRISK and FREAK. The rest of the paper is organized as follows. Section 2 gives the implementation details of the proposed method, which mainly includes the raw data reconstruction, and the parallel and integrated framework. In Section 3, we evaluate the performance of PIMR. Lastly, in Section 4, conclusions are presented. Parallel and Integrated Matching for Raw Data In Section 2.1, we give a brief account of raw data to make the reader understand our work more clearly. The key steps in PIMR are explained in Sections 2.2 and 2.3 namely the reconstruction step for raw data and the details of the parallel and integrated framework. Raw Data Raw data, which is the unprocessed digital output of an image sensor, represents an amount of electrical charges accumulated in each photographic unit of the sensor. The notable features of raw data are nondestructive white balance, lossless compression, and high bit depth (e.g., 16 bits), providing considerably wider dynamic range than the JPEG file [19][20][21]. Hence, raw data maximally retains the real information of the scene compared to the digital image. By directly analyzing raw data, not only does it prevent the original information from being tampered with artificially, which contributes to an increase in precision, but it also shortens the time-cost in the demosaicing part. The most popular format of raw data is Bayer Tile Pattern [22,23], typically in GBRG mode. It is widely used in industrial digital cameras. As illustrated in Figure 1, it is basically a series of band pass filters that only allow certain colors of light through, which are red (R), green (G) and blue (B). Green detail, to which the human visual system is more responsive than blue and red, is sampled more frequently, and the number of G is twice that of B and R. 
Reconstruction for Raw Data At each pixel location, the sensor measures either the red, green, or blue value, and the values of other two are highly related to the neighbors around the pixel. On this basis, in order to make raw data more appropriate for further processing, we reconstruct it as follows. Initially, we define the 2 × 2 set of pixels as a cell. It starts with the target pixel C(i, j), and the other three pixels belonging to the cell are C(i + 1, j), C(i, j + 1) and C(i + 1, j + 1), respectively, where C represents an arbitrary color in G, B and R. Next, we utilize the color information in each cell to reconstruct the intensity I for the target pixel: where G, G', B and R are the luminance values of each pixel in the defined cell, IMAX and IMIN are the maximum and minimum of the diagonal values in the cell, respectively, and w1 and w2 are the weight coefficients, which represent the contribution of diagonal color components to the cell. The greater the contribution, the larger the weight that will be assigned to the components. According to the results of multiple tests, w1 = 0.6, w2 = 0.4 in our method. All the cells in raw data are processed in this way. In this procedure, the diagonal elements in each cell, one of which is two values of G and the others are B and R values, are combined to generate the target intensity which contains the essential information of the cell. Thus, the reconstruction operation making full use of abundant color information in raw data is conducive to enhancing the matching accuracy. Moreover, it reduces time complexity effectively as well, without the procedure of demosaicing for acquiring a full color image with traditional methods.
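The exact combination formula is not reproduced in the text above; only its ingredients (the diagonal values of each 2 × 2 cell, their maximum IMAX and minimum IMIN, and the weights w1 = 0.6, w2 = 0.4) are described. The sketch below therefore assumes a weighted sum I = w1·IMAX + w2·IMIN over non-overlapping cells; both that expression and the tiling (as opposed to a sliding per-pixel cell) are assumptions for illustration.

```python
import numpy as np

def reconstruct_intensity(raw, w1=0.6, w2=0.4):
    """Collapse a GBRG Bayer mosaic into one intensity value per 2x2 cell.

    Assumed formula: I = w1 * I_max + w2 * I_min, where I_max/I_min are taken
    over the diagonal values of each cell (the exact combination used by PIMR
    is not given in the extracted text).
    """
    raw = raw.astype(float)
    h, w = raw.shape
    h -= h % 2
    w -= w % 2
    # Diagonal of the cell starting at (i, j): (i, j) and (i+1, j+1);
    # anti-diagonal: (i+1, j) and (i, j+1).
    d1a, d1b = raw[0:h:2, 0:w:2], raw[1:h:2, 1:w:2]
    d2a, d2b = raw[1:h:2, 0:w:2], raw[0:h:2, 1:w:2]
    diagonals = np.stack([d1a, d1b, d2a, d2b])
    i_max = diagonals.max(axis=0)
    i_min = diagonals.min(axis=0)
    return w1 * i_max + w2 * i_min

# Tiny usage example on a synthetic 4x4 GBRG mosaic.
mosaic = np.array([[100, 40, 110, 42],
                   [ 60, 98,  64, 99],
                   [102, 45, 112, 47],
                   [ 62, 97,  66, 96]], dtype=np.uint8)
print(reconstruct_intensity(mosaic).shape)   # (2, 2) intensity image
```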
Parallel and Integrated Framework With the focus on efficiency of computation, in our methodology, two cores in a multi-core processor are used to parallelize the matching and imaging procedures. As shown in Figure 2, thread 1 is responsible for handling the raw data matching, and thread 2 performs demosaicing to get a full color image with high quality. Eventually, the final results are obtained by merging the outcomes from the two different threads. When handling the raw data matching, first of all, we preprocess it using the raw data reconstruction step whose detailed implementation is in Section 2.2. Since the intensity information of the pixels after reconstruction has a similar data form to gray images, we adopt the multi-scale FAST detector and the rotated BRIEF descriptor in ORB to complete feature detection and description for raw data, which achieves high robustness to general image transformations, including image blur, rotation, scale and illuminance change, and is of fast speed as well [13]. For the part of feature matching, we first find the k-nearest neighbors in the sensed image to the keypoints in the reference image via brute-force matching, and then employ the ratio test explained by Lowe [7] to select the best one from the k matches. The three parts mentioned above, i.e., the feature detection, description and matching, are collectively referred to as the overall matching stage in this paper. Moreover, note that the similarity between descriptors is measured by Hamming distance [10], which can be computed rapidly via a bitwise exclusive or (XOR) operation followed by a bit count. However, as the development of the multimedia instruction set of the central processing unit (CPU) is fast, it is more effective to calculate the number of bits set to 1 using the newer instructions on modern CPUs compared with the previous bit count operation. Hence, the population count (POPCNT) instruction, which is part of SSE4.2, is applied to our method for speeding up feature matching [24].
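The XOR-plus-popcount distance and the ratio test described above can be sketched in a few lines. The example below uses numpy's bit unpacking in place of the hardware POPCNT instruction and synthetic descriptors; in practice OpenCV's BFMatcher with NORM_HAMMING would normally be used, so treat this purely as an illustration of the logic.

```python
import numpy as np

def hamming_matrix(desc_a, desc_b):
    """Pairwise Hamming distances between two sets of binary descriptors.

    desc_a: (Na, 32) uint8, desc_b: (Nb, 32) uint8 (256-bit ORB/BRIEF-style).
    XOR followed by a bit count mirrors the XOR + popcount scheme in the text.
    """
    x = np.bitwise_xor(desc_a[:, None, :], desc_b[None, :, :])
    return np.unpackbits(x, axis=2).sum(axis=2)

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """k=2 nearest-neighbour brute-force matching with Lowe's ratio test."""
    d = hamming_matrix(desc_a, desc_b)
    nearest = np.argsort(d, axis=1)[:, :2]
    matches = []
    for i, (j1, j2) in enumerate(nearest):
        if d[i, j1] < ratio * d[i, j2]:
            matches.append((i, j1, int(d[i, j1])))
    return matches

# Synthetic usage: 5 and 7 random 256-bit descriptors.
rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)
b = rng.integers(0, 256, size=(7, 32), dtype=np.uint8)
print(ratio_test_matches(a, b))
```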
We use Malvar's algorithm for raw data demosaicing [25]. The method computes an interpolation using a bilinear technique, computes a gradient correction term, and linearly combines the interpolation and the correction term to produce a corrected, high-quality interpolation of a missing color value at a pixel. A gradient-correction gain is also used to control how much correction is applied, and the gain parameters are computed by a Wiener approach; they contribute to the coefficients of the linear filter in the method. The approach outperforms most nonlinear and linear demosaicing algorithms with a reduced computational complexity. Due to the high image quality and low computational cost, the method is considered the major demosaicing approach in industrial digital cameras [26]. Experimental Details We evaluate PIMR using a well-known dataset, i.e., the Affine Covariant Features dataset introduced by Mikolajczyk and Schmid [27]. It should be noted that our approach directly processes raw data captured by the digital camera without demosaicing. For this reason, we need to extract the corresponding color information at each pixel location from the original images in the chosen dataset, in accordance with the GBRG mode of the Bayer pattern. The new dataset, being made up of a series of raw data, mainly contains the following six sequences: wall (view point change), bikes and trees (image blur), leuven (illumination change), University of British Columbia (UBC) (JPEG compression), and graffiti (rotation). Each sequence consists of six sets of raw data just as six images, sorted in order of a gradually increasing degree of distortions with respect to the first image with the exception of the graffiti sequence.
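Building the simulated raw dataset from the original RGB images amounts to sampling, at each pixel, the colour channel dictated by the GBRG layout. A minimal sketch of that sampling is shown below; the indexing convention (which 2 × 2 block position carries which colour) and the channel ordering are assumptions, since the text only states that GBRG mode was followed.

```python
import numpy as np

def rgb_to_gbrg_mosaic(rgb):
    """Sample an RGB image into a single-channel GBRG Bayer mosaic.

    Assumed layout for each 2x2 block (row-major, starting at the origin):
        G B
        R G
    rgb is an (H, W, 3) array ordered as (R, G, B).
    """
    h, w, _ = rgb.shape
    mosaic = np.empty((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 1]   # G
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 2]   # B
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 0]   # R
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 1]   # G
    return mosaic

# Usage: turn one dataset image (loaded as RGB) into simulated raw data.
rgb = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
raw = rgb_to_gbrg_mosaic(rgb)
print(raw.shape, raw.dtype)
```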
We take the raw data of image 1 in each sequence as the reference data, and match the reference one against the remaining ones, yielding five matching pairs per sequence (1|2, 1|3, 1|4, 1|5 and 1|6). Figure 3 shows the first image in the original six sequences from the standard dataset. Since our work aims at realizing robust and fast feature detection, description and matching for raw data, we assess not only the accuracy but also the time-cost, comparing our PIMR with the state-of-the-art algorithms, including BRIEF, ORB, BRISK, and FREAK. We compute 1000 keypoints on five scales per raw data with a scaling factor of 1.3 using PIMR. In order to ensure a valid and fair assessment, the full color image processed by the BRIEF, ORB, BRISK and FREAK algorithms is demosaiced from raw data using the same interpolation approach as PIMR. It must be emphasized that we combine the BRIEF descriptor with the CenSurE detector, and the multi-scale adaptive and generic corner detection based on the accelerated segment test (AGAST) detector proposed in BRISK with the FREAK descriptor, based on the settings in reference [10,16]. The implementation of these methods is built with OpenCV 2.4.8 which provides a common two-dimensional (2D) feature interface. Accuracy We use the recognition rate, namely the number of correct matches versus the number of total good matches, as the evaluation criterion. The results are shown in Figure 4. Each group of five bars with different colors represents the recognition rates of PIMR, ORB, BRIEF, BRISK and FREAK, respectively. On the basis of these plots, we make the following observations. In general, our algorithm performs well for all test sequences, yielding comparable recognition accuracy with the state-of-the-art algorithms, and even outperforms them in most cases.
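A recognition rate of the kind used above can be evaluated by reprojecting each matched reference keypoint with the ground-truth homography supplied with the Affine Covariant Features dataset and checking the reprojection error. The sketch below illustrates that check; the pixel tolerance and the function names are assumptions, not the paper's exact protocol.

```python
import numpy as np

def recognition_rate(matches, kp_ref, kp_sensed, H, tol_px=2.5):
    """Fraction of putative matches consistent with the ground-truth homography H.

    matches   : list of (i_ref, j_sensed) index pairs surviving the ratio test
    kp_ref    : (N, 2) array of reference keypoint coordinates (x, y)
    kp_sensed : (M, 2) array of sensed-image keypoint coordinates
    H         : 3x3 homography mapping reference -> sensed coordinates
    tol_px    : reprojection tolerance in pixels (assumed value)
    """
    if not matches:
        return 0.0
    correct = 0
    for i, j in matches:
        x, y = kp_ref[i]
        p = H @ np.array([x, y, 1.0])
        proj = p[:2] / p[2]
        if np.linalg.norm(proj - kp_sensed[j]) <= tol_px:
            correct += 1
    return correct / len(matches)

# Toy usage with an identity homography.
kp_a = np.array([[10.0, 20.0], [30.0, 40.0]])
kp_b = np.array([[10.5, 20.2], [100.0, 5.0]])
print(recognition_rate([(0, 0), (1, 1)], kp_a, kp_b, np.eye(3)))  # 0.5
```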
There are only a few exceptions. For example, in the wall sequence, BRIEF achieves slightly higher precision than our methods, but in the other five sequences, PIMR outperforms the other algorithms. Table 1 shows the time-cost of the bikes 1|3 employing different algorithms, measured on an Intel Core i7 processor at 3.4 GHz, in seconds. Since there is no raw data construction step in traditional methods, and the demosaicing is processed in parallel with the other stages within the proposed approach, the procedures marked with dashes in the table involve no extra time. Note that the time is averaged over 50 runs. Time-Cost Considering the total time listed in the last column of Table 1, PIMR is about 1.4× faster than ORB and 7.1× faster than FREAK. In PIMR, the time-cost of demosaicing is excluded from the total time, as its demosaicing stage is faster than the sum of the other two stages. One potential reason for the high computational cost in BRISK and FREAK is the application of the scale-space FAST-based detector [15] which promises coping with the scale invariance. Matching Samples with PIMR Extensive evaluations have been made for PIMR above, and we also provide some matching samples with the chosen dataset in this part. It is considered that the invariance of rotation, scale and illuminance is the overriding concern in image matching algorithms. Thereby, we present the matching results of the graffiti (rotation and scale change) and leuven (illumination change) sequences to further demonstrate the robustness of the proposed approach. Figure 5a shows the matching result of the graffiti sequence (1|2), and Figure 5b shows the matching result of the leuven sequence (1|2). The keypoints connected by green lines indicate keypoint correspondences, namely valid matches.
It can be seen that the method PIMR can acquire sufficient robust matches with few outliers. Conclusions We have presented a parallel and integrated matching algorithm for raw data named PIMR, mainly to speed up whole image matching and to enhance the robustness as well. In most cases, it achieves better performance for most image transformations than current approaches, with a fairly low computational cost. Furthermore, our work is not only basic research in the field of rapid image matching, but also a preliminary study with respect to a method for processing raw data. In the future, an ultra-light-weight local binary descriptor will be studied further. Author Contributions: The work presented in this paper is a collaborative development by five authors. Li defined the research theme, designed the methods and experiments. Li and Yang performed the experiments. Yang developed the data collection modules and guided the data analysis. Zhao, Han and Chai performed the data collection and data analysis. Yang wrote this paper, Li, Zhao, Han and Chai reviewed and edited the manuscript.
5,789
2016-01-01T00:00:00.000
[ "Computer Science" ]
Picometer registration of zinc impurity states in Bi2Sr2CaCu2O8+δ for phase determination in intra-unit-cell Fourier transform STM Direct visualization of electronic-structure symmetry within each crystalline unit cell is a new technique for complex electronic matter research (Lawler et al 2010 Nature 466 347–51, Schmidt et al 2011 New J. Phys. 13 065014, Fujita K et al 2012 J. Phys. Soc. Japan 81 011005). By studying the Bragg peaks in Fourier transforms of electronic structure images and particularly by resolving both the real and imaginary components of the Bragg amplitudes, distinct types of intra-unit-cell symmetry breaking can be studied. However, establishing the precise symmetry point of each unit cell in real space is crucial in defining the phase for such a Bragg-peak Fourier analysis. Exemplary of this challenge is the high-temperature superconductor Bi2Sr2CaCu2O8+δ for which the surface Bi atom locations are observable, while it is the invisible Cu atoms that define the relevant CuO2 unit-cell symmetry point. Here we demonstrate, by imaging with picometer precision the electronic impurity states at individual Zn atoms substituted at Cu sites, that the phase established using the Bi lattice produces a ∼2%(2π) error relative to the actual Cu lattice. Such a phase assignment error would not diminish reliability in the determination of intra-unit-cell rotational symmetry breaking at the CuO2 plane (Lawler et al 2010 Nature 466 347–51, Schmidt et al 2011 New J. Phys. 13 065014, Fujita K et al 2012 J. Phys. Soc. Japan 81 011005). Moreover, this type of impurity atom substitution at the relevant symmetry site can be of general utility in phase determination for the Bragg-peak Fourier analysis of intra-unit-cell symmetry. In spectroscopic imaging scanning tunneling microscopy (SI-STM), the differential tunneling conductance dI /dV (r, V ) ≡ g(r, V ) between the tip and sample is measured as a function of both the location r and the electron energy E = eV . For a simple metal, g(r, V ) = LDOS(r, E = eV where LDOS(r, E) is the spatially and energy-resolved local-density-ofelectronic states [4]. This direct assignment cannot be made for materials whose electronic structure is strongly heterogeneous at the nanoscale [5,6]. However, even in those circumstances distances (wavelengths) and spatial symmetries in the g(r, V ) images should retain their physical significance. SI-STM has proven to be of wide utility and growing significance in electronic structure studies, especially in situations where the translationally invariant noninteracting single-particle picture does not hold [6][7][8][9][10][11][12][13][14]. A key practical challenge for all such SI-STM studies is that, over the week(s) of continuous data acquisition required to measure a g(r, V ) dataset having both 50 pixels within each crystal unit cell and the large field of view (FOV) required for precision Fourier analysis, thermal and mechanical drifts distort the g(r, V ) images subtly. Such g(r, V ) distortions, while pervasive, are usually imperceptible in conventional analyses. However, they strongly impact on the capability to determine intra-unit-cell symmetry breaking, because the perfect latticeperiodicity throughout g(r, V ) that is necessary for Bragg-peak Fourier analysis is disrupted [1]. To address this issue, we recently introduced a post-measurement distortion correction technique that is closely related to an approach we developed earlier to address incommensurate crystal modulation effects [15]. 
The new procedure [1][2][3]16] identifies a slowly varying field u(r) [17] that measures the displacement vector u of each location r in a topographic image of the crystal surface T (r), from the location r − u(r) where it should be if T (r) were perfectly periodic with the symmetry and dimensions established by x-ray or other scattering studies of the same material. To understand the procedure, consider an atomically resolved topograph T (r) with tetragonal symmetry. In SI-STM, the T (r) and its simultaneously measured g(r, V ) are specified by measurements on a square array of pixels with coordinates labeled r = (x, y). The power-spectral-density (PSD) Fourier transform of T (r), |T (q)| 2 , whereT (q) = ReT (q) + iImT (q), then exhibits two distinct peaks representing the atomic corrugations. These are centered at the first reciprocal unit cell Bragg wavevectors Q a = (Q ax , Q ay ) and Q b = (Q bx , Q by ) with a and b being the unit cell vectors. Next, we apply a computational 'lock-in' technique in which T (r) is multiplied by reference cosine and sine functions with periodicity set by the wavevectors Q a and Q b and whose origin is chosen at an apparent atomic location in T (r). The resulting four images are filtered to retain only the q-space regions within a radius δq = 1 λ of the four Bragg peaks; the magnitude of λ is chosen to capture only the relevant image distortions (in particular, δq is chosen here to be smaller than the Bi 2 Sr 2 CaCu 2 O 8+δ supermodulation wavevector). This procedure results in retaining the local phase information θ a (r), θ b (r) that quantifies the local displacements from perfect periodicity: Dividing the appropriate pairs of images then allows one to extract Of course, in a perfect lattice θ a (r), θ b (r) would be independent of r. However, in the real image T (r), u(r) represents the distortion of the local maxima away from their expected perfectly periodic locations, with an identical distortion occurring in the simultaneous spectroscopic data g(r, V ). Considering only the components periodic with the lattice, the measured topograph can therefore be represented by T (r) = T 0 cos Q a · (r + u(r)) + cos Q b · (r + u(r)) . 4 Correcting this for the spatially dependent phases θ a (r), θ b (r) generated by u(r) requires an affine transformation at each point in (x,y) space. From equation (3) we see that the actual local phase of each cosine component at a given spatial point r, ϕ a (r), ϕ b (r), can be written as where θ i (r) = Q i · u(r), i = a, b, is the additional phase generated by the distortion field u(r). This simplifies equation (2) to which is defined in terms of its local phase fields only, and every peak associated with an atomic local maximum in the topographic image has the same ϕ a and ϕ b . The goal is then to find a transformation, using the given phase information ϕ a,b (r), to map the distorted lattice onto a perfectly periodic one. This is equivalent to finding a set of local transformations that makes θ a,b take on constant values,θ a andθ b , over all space. Thus, let r be a point on the unprocessed (distorted) T (r), and letr = r − u(r) be the point of equal phase on the 'perfectly' latticeperiodic image that needs to be determined. This produces a set of equivalence relations Solving for the components ofr and then re-assigning the T (r) values measured at r to the new locationr in the (x, y) coordinates produces a topograph with 'perfect' lattice periodicity. 
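A minimal numerical sketch of the lock-in idea described above is given below: the topograph is multiplied by cosine and sine references at the Bragg wavevectors, the products are low-pass filtered, the local phases θ_a(r), θ_b(r) are read off, and u(r) = Q^{-1}θ(r). The filter shape and width, the sign convention, and the synthetic test lattice are simplifying assumptions relative to the published procedure; the final remapping of T(r) onto the undistorted coordinates is omitted.

```python
import numpy as np

def displacement_field(T, Qa, Qb, sigma_filter=0.05):
    """Estimate the slowly varying distortion field u(r) of a topograph T.

    Qa, Qb are Bragg wavevectors in radians per pixel; sigma_filter (in cycles
    per pixel) is an assumed Gaussian low-pass width used to keep only the
    slowly varying part of the demodulated signal.
    """
    ny, nx = T.shape
    y, x = np.mgrid[0:ny, 0:nx]

    def lowpass(img):
        f = np.fft.fft2(img)
        qy = np.fft.fftfreq(ny)[:, None]
        qx = np.fft.fftfreq(nx)[None, :]
        f *= np.exp(-(qx**2 + qy**2) / (2 * sigma_filter**2))
        return np.fft.ifft2(f)

    thetas = []
    for Q in (Qa, Qb):
        phase = Q[0] * x + Q[1] * y
        # Sign chosen so that the extracted angle equals theta_i = Q_i . u(r).
        z = lowpass(T * np.cos(phase)) - 1j * lowpass(T * np.sin(phase))
        thetas.append(np.angle(z))
    theta = np.stack(thetas)                 # shape (2, ny, nx)

    Qmat = np.array([Qa, Qb], dtype=float)   # rows are Q_a, Q_b
    u = np.einsum("ij,jyx->iyx", np.linalg.inv(Qmat), theta)
    return u                                 # u[0] = u_x(r), u[1] = u_y(r)

# Synthetic test: a cosine lattice with a known smooth distortion along x.
ny = nx = 128
y, x = np.mgrid[0:ny, 0:nx]
Qa, Qb = (2 * np.pi / 8, 0.0), (0.0, 2 * np.pi / 8)
ux_true = 0.5 * np.sin(2 * np.pi * y / ny)
T = np.cos(Qa[0] * (x + ux_true)) + np.cos(Qb[1] * y)
u = displacement_field(T, Qa, Qb)
print(float(np.abs(u[0] - ux_true).mean()))  # small residual: distortion recovered
```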
To solve forr we rewrite equation (6) in matrix form: where Q = Q ax Q ay Q bx Q by . Because Q a and Q b are orthogonal, Q is invertible allowing one to solve for the displacement field u(r) which maps r tor as In practice, we use the conventionθ i = 0, which generates a 'perfect' lattice with an atomic peak at the origin; this is equivalent to ensuring that there are no imaginary (sine) components to the Bragg peaks in the Fourier transform. Using this technique, one can estimate u(r) and thereby undo distortions in the raw T (r) data with the result that it is transformed into a distortion-corrected topograph T (r) exhibiting the known periodicity and symmetry of the termination layer of the crystal. The key step for electronic-structure symmetry determination is then that the identical geometrical transformations to undo u(r) in T (r) yielding T (r) are also carried out on every g(r, V ) acquired simultaneously with the T (r) to yield a distortion-corrected g (r, V ). The T (r) and g (r, V ) are then registered to each other and to the lattice with excellent precision. This procedure can be used quite generally with SI-STM data that exhibit appropriately high resolution in both r-space and q-space. 'Real' and 'imaginary' contributions to the Bragg peaks ing(q,V ) Bragg-peak Fourier analysis of an electronic structure image g(r, V ) focuses upong(q, V ) = Reg(q, V ) + i Img(q, V ), its complex valued two-dimensional (2D) Fourier transform. Here Reg(q, V ) is the cosine and Img(q, V ) the sine Fourier component, respectively. By focusing on the Bragg peaks q = Q a , Q b only those electronic phenomena with the same spatial periodicity as the crystal are considered. Obviously, successful application of this approach when using g(r, V ) images requires: (i) highly accurate registry of the unit-cell origin to satisfy the extreme sensitivity ing(q, V ) to the phase, (ii) that the g(r, V ) data set has adequate sub-unit-cell resolution without which the distinction between the four inequivalent Bragg amplitudes at Q a , Q b would be zero and (iii) that this same g(r, V ) be measured in a large FOV so as to achieve high resolution in q-space. Only recently has this combination of characteristics in g(r, V ) measurement been achieved [1][2][3]. With the availability of such data, several measures of intra-unit-cell breaking of crystal symmetry by the electronic structure become possible from the study of the real and imaginary components of the Bragg amplitudes ing( Q, V ). For example, if the crystal unit cell is tetragonal with 90 • -rotational (C 4v ) symmetry, one can search for intra-unit-cell 'nematicity' (breaking of C 4v down to 180 • -rotational (C 2v ) symmetry) in the electronic structure by Similarly, if the crystal unit-cell is centrosymmetric, one can search for intra-unit-cell breaking of inversion symmetry in electronic structure using Obviously, however, in both of these cases and in general, the correct determination of Reg(q, V ) and Img(q, V ) is critical. The assignment of the zero of coordinates at the symmetry point of the unit-cell (and thus the correct choice of phase) is therefore the fundamental practical challenge of Bragg-peak Fourier transform SI-STM. Intra-unit-cell electronic symmetry breaking in Bi 2 Sr 2 CaCu 2 O 8+δ An excellent example of this challenge can be found in the copper-oxide hightemperature superconductor Bi 2 Sr 2 CaCu 2 O 8+δ . 
In general, copper-oxide superconductors are 'charge-transfer' Mott insulators and are strongly antiferromagnetic due to inter-copper superexchange [18]. Doping these materials with a hole density p to create superconductivity is achieved by removing electrons from the O atoms in the CuO 2 plane [19,20]. Antiferromagnetism exists for p < 2-5%, superconductivity occurs in the range 5-10% < p < 25-30%, and a Fermi liquid state appears for p > 25-30%. For p < 20% an unusual electronic excitation with energy scale |E| = Δ1, which is anisotropic in k-space [21][22][23][24][25], appears at temperatures far above the superconducting critical temperature. This region of the phase diagram has been labeled the 'pseudogap' (PG) phase because the energy scale Δ1 could be the energy gap of a distinct electronic phase [21,22]. Intra-unit-cell spatial symmetries of the E ∼ Δ1 (PG) states can be imaged directly using SI-STM in underdoped cuprates [1-3, 6, 16]. Typically, the function Z (r, V ) = g(r, +V )/g(r, −V ) is used because it eliminates the severe systematic errors in g(r, V ) generated by the intense electronic heterogeneity effects specific to these materials [1-3, 5, 6, 11, 16]. These Z (r, E) images reveal compelling evidence for intra-unit-cell C 4v symmetry breaking specific to the states at the E ∼ Δ1 pseudogap energy [6]. However, for Bragg-peak Fourier transform studies of this effect the choice of origin of the CuO 2 unit cell (and thus the phase of the Fourier transforms) was determined by using the imaged locations of the Bi atoms in the BiO layer [1][2][3], while it is knowledge of the actual Cu atom locations that is required to most confidently examine intra-unit-cell symmetry breaking of the CuO 2 unit cell. Cu-lattice phase-resolution challenge in Bi 2 Sr 2 CaCu 2 O 8+δ To identify these sites and thus the correct phase, we studied lightly Zn-doped Bi 2 Sr 2 CaCu 2 O 8+δ crystals with p ∼ 20%. Each g(r, E = eV) map required ∼5 days and a typical resulting topograph T (r) with 64 pixels covering the area of each CuO 2 unit cell is shown in figure 1(a). This is an unprocessed topographic image T (r) of the BiO layer with the bright dots occurring at the location of Bi atoms. The inset shows a tightly focused measurement at the location of one of the Bragg peaks in |T (q)| 2 ; this clearly has spectral weight distributed over numerous pixels indicating the imperfect nature of the periodicity in this T (r). Figure 1(b) is the simultaneously measured image of electronic structure g(r, E) determined near E ∼ Δ1. Figure 1(c) shows the PSD Fourier transform of figure 1(b), while its inset focuses upon a single Bragg peak. Figure 1(d) shows the processed topographic image T (r) after distortion correction using equation (9). The subtlety of these corrections is such that figure 1(d) appears virtually identical to figure 1(a) at first sight. However, the inset shows that the Bragg peak of the PSD Fourier transform |T (q)| 2 of figure 1(d) now becomes isotropic and consists of just a single pixel; this indicates that Bi atom periodicity is now as perfect as possible given the limitations of q-space resolution from the finite FOV. Figure 1(e) is the g(r, V ) data of figure 1(b), but now processed in the same distortion correction fashion as figure 1(d) to yield a function g (r, V ).
Its PSD Fourier transform |g (q, V )| 2 as shown in figure 1(f) reveals how the Bragg peaks are now also sharp, indicating that the same spatial periodicity now exists in the electronic structure images (inset of figure 1(e)). Nevertheless, the location of the Cu sites in the CuO 2 plane cannot be determined from the BiO T (r) and therefore the phase for Bragg-peak Fourier analysis of g (q, V ) from the CuO 2 plane retains some ambiguity. Imaging the electronic impurity state at Zn atoms substituted for Cu To directly identify the symmetry point of the CuO 2 unit cell in a BiO topograph, we image g(r, V = −1.5 mV) measured on Zn-doped Bi 2 Sr 2 CaCu 2 O 8+δ crystals. A conductance map in a 60 nm square region (simultaneous topograph figure 2(a)) is shown in figure 2(b); the overall light background is indicative of a very low conductance near E F as expected in the superconducting state. However, we also detect a significant number of randomly distributed dark sites corresponding to areas of high conductance each with a distinct fourfold symmetric shape and the same relative orientation. The spectrum at the center of a dark site has a very strong intra-gap conductance peak at energy E = −1.5 ± 0.5 meV, while the superconducting coherence peaks are suppressed [26]. This is a unitary strength quasiparticle scattering resonance at a single, potential-scattering, impurity atom in a d-wave superconductor [26,27]. These signatures can be used to identify the location of Zn atoms substituted on the Cu sites of Bi 2 Sr 2 CaCu 2 O 8+δ . Figure 2(a) actually shows the topographic image T (r) of the BiO layer after its distortion correction has been carried out, whereas figure 2(b) shows the identically distortion-corrected image of differential conductance. Fourteen Zn impurity state sites at Cu sites in the CuO 2 plane are observed. Imaging the locations of these individual Zn resonance sites with approximately picometer precision then allows the Cu-lattice phase error to be measured directly. [Figure 2 caption: Fourteen Zn impurity resonances are distinguishable in this image. These data have been processed using the distortion correction algorithm. The data in (a) and (b) were acquired at 1 GΩ junction resistance, 60 mV tip-sample bias.] Determination of Cu-lattice phase error from Bi-lattice calibration In figure 3, each pair of panels (a-b), (c-d), . . . , (w-x) contains the simultaneously measured and identically distortion-corrected images T (r) and g (r, −1.5 mV), each with 76 pixels inside the area of every CuO 2 unit cell. The coordinates of every Bi atom in the perfectly square lattice are known with approximately picometer precision in these T (r) images. The location of the Zn impurity state in each of the g (r, −1.5 mV) images is determined by fitting a 2D Gaussian to the central peak of the Zn resonance; a typical resulting error bar for the location of the maximum is between 1 and 2 pm (see supplementary information, section I, available from stacks.iop.org/NJP/14/053017/mmedia). [Figure 3 caption: The data in (a) and (b) were acquired at 1 GΩ junction resistance, 60 mV, and were obtained from a double-layer g(r, E) map. A total time of 340 ms has elapsed between the measurement of (a) and (b). All subsequent image pairs represent equivalent data at a different location. All data in this figure were obtained from five maps with identical acquisition parameters, and have been processed using the distortion correction algorithm.] The smallness of this error with respect to the
pixel size is well understood in terms of the large signal-to-noise ratio at Zn resonances [26] (supplementary information, section I). These procedures yield the displacement vector d of every Zn-resonance maximum from the site of the nearest Bi atom as identified in T (r). Figure 4(b) shows a combined analysis of the measured values of d for all the CuO 2 unit cells containing a Zn atom (with the origin of each centered at the relevant Bi atom identified from the nearest maximum in T (r) images in figure 3). The measured d of every Zn resonance is shown as a red dot. The resulting average (Zn, Bi) displacement vector shown in black has a magnitude of ∼2% of the CuO 2 unit cell dimension (1 standard deviation of the distribution is indicated by the grey ellipse). It is quite obvious from figure 4(b) that the Zn resonances are extremely close to the Bi sites, meaning that the CuO 2 layer is not shifted significantly from its expected location below the BiO layer ( figure 4(a)). Beyond the fact that the average displacement d represents only a ∼2%(2π) error in the phase determination for the CuO 2 layer when using the BiO layer, other information on systematic errors within the SI-STM approach can be examined using these data. For example, the observed distribution of d rules out the existence of any discrete pixel displacement between T (r) and its simultaneously measured g(r, V ), as might occur if there were a software or processing error. Another point is that any spatial drift of the tip location during the hundreds of elapsed milliseconds between the measurement of the topographic signal and the differential conductance signal is also below 2% of the unit cell dimension. In fact the data in Figures 2-4 show that, for our instruments, the measured T (r) and g(r, V = −1.5 mV) are registered to each other within a few pm. Additionally, studies of this same set of Zn resonances using the 180 • -rotated scan direction (supplementary information, section II) yield an equivalently narrow (but distinct) distribution of values of d. Moreover, the center of this distribution is not shifted along the scan relative to that in figure 4(b), indicating that random picometer scale image distortions dominate the d distribution and not the trajectory of the tip. Thus, we do not currently regard the apparent displacement (CuO 2 , BiO) in figure 4(b) as a property of the crystal lattice, but rather due to measurement limitations at these picometer length scales (see supplementary information, section II). Conclusions and future Three key practical conclusions emerge from these studies. Firstly, the lateral shift between the surface BiO layer and the CuO 2 layer is measured at less than 2% of the unit cell dimension. Secondly, at this approximately picometer precision there is no resolvable spatial drift of the tip location during the fractions of a second between the measurement of the topographic signal and the differential conductance signal. Perhaps most importantly, the ∼2%(2π ) phase error in the choice of the Cu-lattice origin observed here would not, based on results from our simulations (supplementary information, section III, available from stacks.iop.org/NJP/14/053017/mmedia), impact on Fourier transform analysis using Reg( Q a , V ) and Reg( Q b , V ) to determine symmetry breaking in g(r, V ). 
However, such a ∼2%(2π ) phase error would have a significant effect on the measure of intra-unit-cell inversion-symmetry breaking like O I (V ), yielding an incorrect non-zero value for Img( Q a , V ) or Img( Q b , V ) of ∼15% of Reg( Q a , V ) or Reg( Q b , V ) (supplementary information, section III). Specifically, a ∼2%(2π ) phase assignment error for the Cu sites does not diminish the reliability in the determination of intra-unit-cell rotational symmetry breaking at the CuO 2 plane [1][2][3]. Of more long-term significance is the fact that impurity atom substitution at the relevant symmetry site as demonstrated here can be of general utility in accurate phase determination for Bragg-peak Fourier analysis of intra-unit-cell symmetry.
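A small numerical check of the phase-error sensitivity discussed above: for a lattice-periodic signal that is purely real (cosine-like) about the true symmetry point, shifting the assumed origin by a fraction f of the unit cell rotates the Bragg amplitude by 2πf, producing a spurious Im/Re ratio of tan(2πf) ≈ 13% for f = 2%. This single-harmonic estimate is of the same order as the ∼15% quoted in the text, whose precise value comes from simulations not reproduced here.

```python
import numpy as np

def spurious_imaginary_fraction(phase_error_fraction):
    """Im/Re ratio induced at a Bragg peak by a unit-cell origin error.

    A purely cosine Bragg component acquires a factor exp(i*2*pi*f) when the
    assumed origin is shifted by a fraction f of the unit cell, so the
    spurious Im/Re ratio is tan(2*pi*f). Single-harmonic estimate only.
    """
    return np.tan(2 * np.pi * phase_error_fraction)

print(f"{100 * spurious_imaginary_fraction(0.02):.1f}%")   # ~12.6% for a 2% shift
```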
5,062.4
2012-02-20T00:00:00.000
[ "Physics", "Materials Science" ]
On Mode Regularization of the Configuration Space Path Integral in Curved Space The path integral representation of the transition amplitude for a particle moving in curved space has presented unexpected challenges since the introduction of path integrals by Feynman fifty years ago. In this paper we discuss and review mode regularization of the configuration space path integral, and present a three loop computation of the transition amplitude to test with success the consistency of such a regularization. Key features of the method are the use of the Lee-Yang ghost fields, which guarantee a consistent treatment of the non-trivial path integral measure at higher loops, and an effective potential specific to mode regularization which arises at two loops. We also perform the computation of the transition amplitude using the regularization of the path integral by time discretization, which also makes use of Lee-Yang ghost fields and needs its own specific effective potential. This computation is shown to reproduce the same final result as the one performed in mode regularization. Introduction The Schrödinger equation for a particle moving in a curved space with metric g µν (x) has many applications ranging from non-relativistic diffusion problems (described by a Wick rotated version of the Schrödinger equation) to the relativistic description of particles moving in a curved space-time. However it cannot be solved exactly for an arbitrary background metric g µν (x), and one has to resort to some kind of perturbation theory. A very useful perturbative solution can be obtained by employing a well-known ansatz introduced by De Witt [1], also known as the heat kernel ansatz. This ansatz makes use of a power series expansion in the time of propagation of the particle. The coefficients of the power series are then determined iteratively by requiring that the Schrödinger equation be satisfied perturbatively. Equivalently, the solution of the Schrödinger equation can be represented by a path integral, as shown by Feynman fifty years ago [2]. One can formally write down the path integral for the particle moving in curved space and check that the standard loop expansion reproduces the structure of the heat kernel ansatz of De Witt. However the proper definition of the path integral in curved space is not straightforward. In fact it has presented many challenges due to complications arising from: i) the non-trivial path integral measure [3], ii) the proper discretization of the action necessary to regulate the path integral. A quite extensive literature has been produced over the years addressing especially the latter point [4]. In this paper we short cut most of the literature and discuss a method of defining the path integral by employing mode regularization as it is by now standard in many calculations done in quantum field theory. The methods extends the one employed by Feynman and Hibbs in discussing mode regularization of the path integral in flat space [5]. It has been introduced and successively refined in [6], [7] and [8] where quantum mechanics was used to compute one loop trace anomalies of certain quantum field theories. The key feature is to employ ghost fields to treat the non-trivial path integral measure as part of the action, in the spirit of Lee and Yang [3]. These ghost fields have been named "Lee-Yang" ghosts and allow to take care of the non-trivial path integral measure at higher loops in a consistent manner. 
The path integral is then defined by expanding all fields, including the ghosts, in a sine expansion about the classical trajectories and integrating over the corresponding Fourier coefficients. The necessary regularization is obtained by integrating all Fourier coefficients up to a fixed mode M, which is eventually taken to infinity. A drawback of mode regularization is that it does not respect general coordinate invariance in target space: a particular non-covariant counterterm has to be used in order to restore that symmetry [8]. General arguments based on power counting (quantum mechanics can be thought of as a super-renormalizable quantum field theory), plus the fact that the correct trace anomalies are obtained by the use of this path integral, suggest that the mode regularization described above is consistent to any loop order without any additional input. As usual when dealing with formal constructions, it is good practice to check the proposed scheme with explicit calculations. It is the purpose of this paper to present a full three-loop computation of the transition amplitude. The result is found to be correct since it solves the correct Schrödinger equation at the required loop order. This gives a powerful check on the method of mode regularization for quantum mechanical path integrals on curved space. In addition, we present our computation in such a way that it can be easily extended and compared to the time discretization method developed in refs. [9], which is also based on the use of the Lee-Yang ghosts. This method requires its own specific counterterm (also called effective potential) to restore general coordinate invariance. As expected, both schemes give the same answer. The paper is structured as follows. In section 2 we review the method of mode regularization and discuss the effective potential specific to this regularization. In section 3 we present a three-loop computation of the transition amplitude. Here we make use of general coordinate invariance to select Riemann normal coordinates and simplify an otherwise gigantic computation. We check that the result satisfies the Schrödinger equation at the correct loop order. In section 4 we extend our computation to the time discretization scheme. This is found to compare successfully with the results previously obtained in section 3. Finally, in section 5 we present our conclusions and perspectives. In appendix A we present a technical section with a list of the loop integrals employed in the text. In particular, we discuss how to compute them in mode regularization as well as in time discretization regularization. Mode regularization The Schrödinger equation for a particle of mass m moving in a D-dimensional curved space with metric g_µν(x) and coupled to a scalar potential V(x) is given by iħ ∂_t ψ(x, t) = Ĥ ψ(x, t), with Ĥ = −(ħ²/2m) ∇² + V(x), where ∇² is the covariant Laplacian acting on scalars. It can be obtained by canonical quantization of the model described by the classical action S[x] = ∫ dt [ (m/2) g_µν(x) ẋ^µ ẋ^ν − V(x) ], when ordering ambiguities are fixed by requiring general coordinate invariance in target space and requiring in addition that no scalar curvature term be generated by the orderings in the quantum potential. For convenience we will Wick rotate the time variable t → −it and set m = ħ = 1 to obtain the heat equation ∂_t ψ(x, t) = ( ½ ∇² − V(x) ) ψ(x, t) (4) and the corresponding euclidean action S[x] = ∫ dt [ ½ g_µν(x) ẋ^µ ẋ^ν + V(x) ] (5). As mentioned in the introduction, the heat equation can be solved by the heat kernel ansatz of De Witt [1]: ψ(x, y; t) = (2πt)^{−D/2} e^{−σ(x,y)/t} Σ_{n≥0} a_n(x, y) t^n (6), which depends parametrically on the point y^µ that specifies the boundary condition ψ(x, y; 0) = δ(x, y), with δ(x, y) the covariant delta function.
Here σ(x, y) is the so-called Synge world function and corresponds to half the squared geodesic distance. The coefficients a_n(x, y) are sometimes called Seeley-De Witt coefficients (footnote 2) and are determined by plugging the ansatz (6) into (4) and matching powers of t. Now we want to describe in detail how to obtain the solution of eq. (4) by the use of a path integral which employs the classical action in (5). Following refs. [6,7,8] we write the transition amplitude for the particle to propagate from the initial point x^µ_i at time t_i to the final point x^µ_f at time t_f as a path integral over trajectories weighted by e^{−S}, eqs. (7)-(8). For convenience we have shifted and rescaled the time parameter in the action. Note that the total time of propagation β plays the role of the Planck constant ħ (which we have already set to one) and counts the number of loops. In the loop expansion generated by β the potentials V and V_MR start contributing only at two loops (footnote 3). The full action S includes terms proportional to the Lee-Yang ghosts, namely the commuting ghosts a^µ and the anticommuting ghosts b^µ and c^µ. Their effect is to reproduce a formally covariant measure: integrating them out produces Dx = (det g_µν(x(τ)))^{1/2} d^D x(τ). As we will discuss, mode regularization destroys this formal covariance. Nevertheless reparametrization covariance is recovered thanks to the effects of the potential V_MR directly included in the action (8). With precisely this counterterm the mode regulated path integral in (7) solves the equation in (4) in both sets of variables (x^µ_f, t_f) and (x^µ_i, t_i), with the appropriate delta-function boundary condition. For an arbitrary metric g_µν(x) one is able to calculate the path integral only in a perturbative expansion in β and in the coordinate displacements ξ^µ about the final point. [Footnote 2: It is also customary to redefine the a_n(x, y) by extracting a common factor ∆^{1/2}(x, y), where ∆(x, y) is a scalar version of the so-called Van Vleck-Morette determinant. Footnote 3: Reintroducing ħ one can see that the classical potential V must be of order ħ⁰ while the counterterm V_MR is a truly two-loop effect of order ħ².] The actual computation starts by parametrizing the paths as x^µ(τ) = x^µ_bg(τ) + q^µ(τ), where x^µ_bg(τ) is a background trajectory and q^µ(τ) the quantum fluctuations. The background trajectory is taken to satisfy the free equations of motion and is a function linear in τ connecting x^µ_i to x^µ_f in the chosen coordinate system, thus enforcing the proper boundary conditions. Note that by free equations of motion we mean the ones arising from (8). The quantum fields q^µ(τ) in (11) should vanish at the time boundaries, since the boundary conditions are already included in x^µ_bg(τ). Therefore they can be expanded in a sine series. For the Lee-Yang ghosts we use the same Fourier expansion, since the classical solutions of their field equations vanish; here φ stands for all the quantum fields q^µ, a^µ, b^µ, c^µ. The measure Dx in (10) is now properly defined in terms of an integration over the Fourier coefficients φ^µ_m, where A is a constant. Note that this fixes the normalization of the path integral for a free particle. It is well known that A = (2πβ)^{−D/2}; however, this value can also be deduced later on from a consistency requirement. The way to implement mode regularization is now quite clear: limiting the integration over the number of modes for each field to a finite mode number M gives the natural regularization of the path integral. This regularization resolves the ambiguities that show up in the continuum limit.
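The explicit form of the action and of the mode expansion is not reproduced in this text. The following is a hedged sketch consistent with the description above; the interval of τ, the normalization of the modes, and the placement of factors of β are conventions assumed here (following the general setup of refs. [6]-[8]) and need not coincide with the paper's eqs. (7)-(21):

$$ S[x,a,b,c] \;=\; \frac{1}{\beta}\int_{-1}^{0} d\tau\,\Big[\tfrac12\,g_{\mu\nu}(x)\big(\dot x^\mu \dot x^\nu + a^\mu a^\nu + b^\mu c^\nu\big) \;+\; \beta^2\big(V(x)+V_{MR}(x)\big)\Big], $$

$$ \phi^\mu(\tau) \;=\; \sum_{m=1}^{M}\phi^\mu_m\,\sin(\pi m \tau), \qquad \phi \in \{q,\, a,\, b,\, c\}, $$

$$ \Delta(\tau,\sigma) \;=\; -\sum_{m=1}^{M}\frac{2}{\pi^2 m^2}\,\sin(\pi m\tau)\,\sin(\pi m\sigma) \;\xrightarrow{\;M\to\infty\;}\; \tau(\sigma+1)\,\theta(\tau-\sigma)+\sigma(\tau+1)\,\theta(\sigma-\tau). $$

In this form, integrating out the Gaussian ghosts a^µ, b^µ, c^µ reproduces the factor (det g_µν(x(τ)))^{1/2} in the measure, while the quadratic part of the action yields the two-point function ⟨q^µ(τ) q^ν(σ)⟩ = −β g^{µν}(x_f) ∆(τ, σ), with ∆ the mode-regulated Green's function of ∂²_τ subject to Dirichlet boundary conditions.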
The perturbative expansion is generated by splitting the action into a quadratic part S 2 , which defines the propagators, and an interacting part S int , which gives the vertices. We do this splitting by expanding the action about the final point x µ f and obtain where In this expansion all geometrical quantities, like g µν and ∂ α g µν , as well as V and V M R , are evaluated at the final point x µ f , but for notational simplicity we do not exhibit this dependence. S 2 is taken as the free part and defines the propagators which are easily obtained from the path integral where ∆ is regulated by the mode cut-off and has the following limiting value for M → ∞ Note that we indicate • ∆(τ, σ) = ∂ ∂τ ∆(τ, σ), ∆ • (τ, σ) = ∂ ∂σ ∆(τ, σ) and so on. Details on the properties of these functions are given in appendix A. Now, the quantum perturbative expansion reads: where the brackets · · · denote the averaging with the free action S 2 , and amount to use the propagators given in (21) in the perturbative expansion. Note that in the last line of the above equation we have kept only those terms contributing up to two loops, i.e. up to O(β), by taking into account that ξ µ ∼ O(β 1 2 ), as follows from the exponential appearing in the last line of (24) after one averages over ξ µ . Note also that having extracted the coefficient A together with the exponential of the quadratic action S 2 evaluated on the background trajectory implies that the normalization of the left over path integral is such that 1 = 1. To test its consistency one can use it to evolve in time an arbitrary wave function Ψ(x, t) and the terms of order β give This last equation means that the wave function Ψ satisfies the correct Schrödinger equation (4) at the final point (x µ f , t f ). It is interesting to note that the counterterm V M R appears only in the last line of eq. (26). Actually the value of the counterterm reported in eq. (9) has been deduced in [8] by imposing that the transition amplitude would solve eq. (30). General arguments can then be used to show that this counterterm should be left unmodified at higher loops. In fact one can consider quantum mechanics on curved spaces as a super-renormalizable one-dimensional quantum field theory, and check by power counting that all possible divergences can only appear at loop order 2 or less in β. In the next section we are going to check that it is so indeed, expelling doubts which have sometimes been raised that mode regularization would be inconsistent at higher loops. Thus one can consider mode regularization as a viable way of correctly defining the path integral in curved spaces. The transition amplitude at three loops In this section we want to check eq. (28) at the next order in β, which is equivalent to showing that the transition amplitude computed by the path integral satisfies the Schrödinger equation not only at the point (x µ f , t f ) but in a small neighbourhood of it. This computation can be quite lengthy if done in arbitrary coordinates. To make it feasible we select a useful set of coordinates: the Riemann normal coordinates centred at the point x µ f . In such a frame of reference the coordinates of an arbitrary point x µ contained in a neighbourhood of the origin are given by a vector z µ (x) belonging to the tangent space at the origin. This vector specifies the unique geodesic connecting the origin to the given point x µ in a unit time. In such a frame of reference the coordinates of the origin are obviously given by z µ (x f ) = 0. 
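As an aside, the expansion of the metric in Riemann normal coordinates that enters the computation below has the standard structure shown here; only the first non-trivial order is given, the overall sign of the quadratic term depends on the curvature conventions adopted in [7], and the higher-order coefficients are not reproduced:

$$ g_{\mu\nu}(x) \;=\; \delta_{\mu\nu} \;+\; \tfrac{1}{3}\, R_{\mu\alpha\nu\beta}(0)\, x^{\alpha} x^{\beta} \;+\; O(x^3), $$

with the Riemann tensor evaluated at the origin of the normal-coordinate frame, so that each Taylor coefficient is a tensor of the tangent space at that point.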
In what follows we will use Riemann normal coordinates which we keep denoting by x µ since no confusion can arise. The expansion of the metric around the origin is given by (see e.g. [7] for a derivation) Note that the coefficients in this expansion are tensors belonging to the tangent space at the origin. This is a property of Riemann normal coordinates. In general, the terms contributing to the transition amplitude up to three loops are given by Clearly the computation would be quite complex in arbitrary coordinates. Fortunately, in Riemann normal coordinates many terms are absent since we obtain Note that all structures like R µναβ , V , V M R and derivatives thereof are evaluated at the origin of the Riemann coordinate system, but for notational simplicity we do not indicate so explicitly. The computation is still quite lengthy and we get where the integrals I n are listed and evaluated using mode regularization in appendix A. Inserting the specific values of the terms arising from the effective potential V M R when evaluated at the origin leads us to the following expression for the transition amplitude at the third loop order R αµ R β µ + 1 12 This is the complete expression which should be used to test eq. (28) at order β 2 . A straightforward calculation shows that one indeed obtains an identity after making use of eq. (30). The mode regulated path integral described in the previous section passes this consistency check. Therefore it can be considered as a well defined way of computing path integrals in curved spaces. Before closing this section it may be useful to cast the transition amplitude in a more compact form which can be made manifestly symmetric under the exchange of the initial and final point. Keeping on using the Riemann normal coordinates (in which we recall x µ f = 0 and ξ µ ≡ x µ i − x µ f = x µ i ) and defining symmetrized quantities as we can write From this expression one can extract (by re-expanding part of the exponential) the leading terms of the Seeley-De Witt coefficients a 0 , a 1 , a 2 for non-coinciding points and obtain, in particular, the one loop trace anomalies for the operator H = − 1 2 ∇ 2 + V (x) in two and four dimensions. Time discretization The computation performed in the previous section was cast in such a way that can be easily extended to a different regularization scheme: the time discretization method developed in refs. [9]. Such a regularization was obtained by deriving directly from operatorial methods a discretized version of the path integral. Taking the continuum limit one recognizes the action with the proper counterterm, and the rules how to compute Feynman graphs. These rules differ in general from the one required by mode regularization. The counterterm V W arising in time discretization differs from V M R , too. The time discretization method leads to the following path integral expression of the transition amplitude [9] x µ where The propagators to be used in the perturbative expansion implied by the brackets on the right hand side of eq. (49) are the same as in (21). The only difference is in the prescription how to resolve the ambiguities arising when distributions are multiplied together. The prescription imposed by time discretization consists in integrating the Dirac delta functions coming form the velocities and the ghosts propagators (thanks to the Lee-Yang ghosts they never appear multiplied together) and using consistently the value θ(0) = 1 2 for the step function. 
Note also the presence of the factor [g(x_f) g(x_i)]^{1/4} appearing in this scheme. The result of the calculation has the same structure as the one reported in eqs. (37), (38), (39), (40), (41), with the difference that V_MR should be substituted by V_W and that the following integrals take different values when computed in time discretization regularization: I_1 = 0, I_3 = 0, I_5 = 0, I_10 = 0, I_13 = 1/12. The other integrals are as in mode regularization. Inserting all these values back in (48) and expanding the coefficient [g(x_i)]^{1/4} at the required loop order gives the same transition amplitude as in (45) or, equivalently, in (47). Thus this result constitutes a successful test of the method developed in [9]. Conclusions In this paper we have discussed a proper definition of the configuration space path integral for a particle moving in curved spaces. By performing a three-loop computation we have tested its consistency and checked that one can equally well obtain the perturbative solution of the Schrödinger equation by path integrals. This fills a conceptual gap, showing that the perturbative description of a quantum particle moving in a curved space obtained by De Witt by solving the Schrödinger equation (i.e. using the canonical formulation of quantum mechanics [1]) can equally well be obtained in the path integral approach introduced by Feynman fifty years ago. This approach may also have practical applications in quantum field theoretical computations carried out in curved backgrounds using the worldline formalism [10]. We have mainly described the mode regulated path integral. Its definition was obtained in [6], [7] and [8] by using a pragmatic approach to identify its key elements, and it needed a strong check to test its foundations. This we have provided in this paper. We find the method of mode regularization also quite appealing for aesthetic reasons, since it is close to the spirit of path integrals, which are meant to give a global picture of quantum phenomena. On the other hand, we have also extended our computation to the time discretization method of defining the path integral [9]. This method is in some sense closer to the local picture given by the differential Schrödinger equation, since one imagines the particle propagating by small time steps. It is nevertheless a consistent way of defining the path integral, perhaps superior at this stage, since one obtains its properties directly from canonical methods. As we have seen, this scheme also gives the correct result for the transition amplitude. An annoying property of the two regularization schemes we have been discussing is that neither respects general coordinate invariance in target space, and both require specific non-covariant counterterms to restore that symmetry. It would be interesting to find a reliable covariant regularization scheme or, at least, a scheme which, while breaking covariance (e.g. in the decomposition of the action into free and interacting parts), does not necessitate non-covariant counterterms.
5,014.8
1998-10-15T00:00:00.000
[ "Physics" ]
A Comparative Analysis of AI Models in Complex Medical Decision-Making Scenarios: Evaluating ChatGPT, Claude AI, Bard, and Perplexity This study rigorously evaluates the performance of four artificial intelligence (AI) language models - ChatGPT, Claude AI, Google Bard, and Perplexity AI - across four key metrics: accuracy, relevance, clarity, and completeness. We used a mixed-methods approach, collecting expert ratings on 14 clinical scenarios, to ensure that our findings were accurate and dependable. The study showed that Claude AI outperformed the other tools by giving the most complete responses, with average scores of 3.64 for relevance and 3.43 for completeness. ChatGPT performed consistently well, whereas Google Bard produced responses of highly variable clarity, which made them difficult to interpret and left its performance inconsistent. These results provide important information about what AI language models do well, and where they fall short, when offering medical suggestions. They can guide better use of these tools and indicate how future AI-based technologies might be improved. The study also shows the extent to which current AI capabilities match the demands of complex medical scenarios. Introduction In the modern era of digital healthcare, artificial intelligence (AI) has emerged as a pivotal force in transforming medical decision-making [1]. The ability of AI to analyze vast datasets, recognize patterns, and generate predictive models has led to more informed and efficient healthcare delivery [2]. While the adoption of AI in medicine is promising, it introduces a complex landscape of diverse AI models, each with unique capabilities and limitations. Models such as ChatGPT, Claude AI, Bard, and Perplexity have shown the potential to provide medical guidance [3]. However, the healthcare sector necessitates critical evaluation of these models to ensure the accuracy, reliability, and appropriateness of their advice. This study compares and evaluates the performance of AI models with respect to medical guidance. The primary objectives include assessing the accuracy of medical information, adherence to current medical guidelines, and the models' ability to handle complex medical scenarios. Technical Report The scope of this research encompasses a systematic examination of the AI tools, including the ChatGPT-4, Claude AI (Pro), Google Bard (Pro), and Perplexity (Pro) models, across a spectrum of medical scenarios. In this research, results from these four leading AI systems were assessed. The scenarios were carefully chosen to represent a range of medical conditions and decision-making settings, from emergency procedures to chronic disease management.
Every AI-generated answer was checked against gold standards, representing the highest level of agreement in medicine and evidence-based guidelines. The scoring was conducted on the following dimensions: accuracy, dependability, relevance, and completeness. Each AI's diagnostic or treatment advice was checked for agreement with the standard protocol. The answers from the AI tools were compared not only for correctness but also for completeness. Reliability was assessed by examining how consistent, trustworthy, and clear the AI's reasoning was. A group of doctors with more than 10 years of experience, drawn from diverse departments such as Anaesthesia, Emergency Care, Critical Care, and Cardiology and knowledgeable about medicine and gold-standard medical procedures, assigned the scores from 1 (poor) to 5 (good) on a Likert scale [4]. This two-part scoring system was designed to examine how each AI tool performs. Statistical analyses, namely descriptive statistics (means and standard deviations), analysis of variance (ANOVA), and correlation analysis, were applied using Jeffreys's Amazing Statistics Program (JASP) (University of Amsterdam, Amsterdam, The Netherlands) to compare how well the AI models performed in terms of accuracy and dependability [5]. Medical scenarios and scores Table 1 presents the 14 complex medical scenarios submitted to the AI tools for suggestions. Each was carefully rated for the four AI models on accuracy, relevance, clarity, and completeness. The scores showed large differences between the models: Claude AI consistently received higher relevance and completeness marks, while Google Bard's clarity scores were markedly low. Each model showed varying quality ratings across the test results. All of the scenarios were carefully selected, and they are demanding situations that require medical expertise. In identifying the most complex situations, characteristics such as rarity of the condition, surgical complications, potential risks, and the overall complexity of the medical issues were taken into consideration. From the list provided, the following scenarios stand out due to their extensive complexity: Atrial Switch Surgery Prognosis The atrial switch is a complicated cardiac operation performed mostly on patients with congenital heart defects. Different outcomes are possible in such situations, depending on multiple variables - the patient's overall health condition, other heart malformations, and the age at surgery. Malignancy Periampullary Pancreas Post-operative Success Rate Periampullary malignancies, which involve the anatomical site where the bile and pancreatic ducts open into the small intestine, are difficult to cure. The rate of surgical success can also be affected by the stage of the cancer, overall patient health, and the presence of metastasis. Ninety Years With Multiple Chronic Conditions for Emergency Laparotomy The management of a patient with diabetes, hypertension, chronic obstructive pulmonary disease (COPD), coronary artery disease, ischemic heart disease, post-coronary artery bypass graft (CABG) status, dilated cardiomyopathy with low ejection fraction, and diabetic ketoacidosis poses a highly complicated case. The consequences of complications developing during an emergency laparotomy are much more severe in a patient of this age with numerous comorbidities.
75% Burns With Hyperkalemia and Hemoglobin 2 Shock Resuscitation Protocol Skin burns covering 75% of the body, hyperkalemia (high potassium levels), and severe anemia with a hemoglobin of only 2 make for a very complicated situation. It calls for close monitoring of fluid resuscitation, electrolyte balance, and associated complications such as infection and organ failure. All of these cases involve a multidisciplinary approach and the consideration of various factors to achieve favorable patient outcomes. Each case is not only surgically complex but also requires intricate preoperative and postoperative care, comorbidity management, and attention to potential complications. Statistical comparison Table 3 shows that there is no significant difference in accuracy, clarity, or completeness among the AI models (p > 0.05), but there is a significant difference in relevance (p = 0.038). Correlation analysis indicates a moderate positive relationship between Google Bard's accuracy and relevance (r = 0.550, p = 0.037), suggesting that as Google Bard's accuracy increases, its relevance tends to increase as well. However, no other correlations between accuracy and the other metrics were found to be significant, indicating that, in most cases, accuracy does not predict relevance, clarity, or completeness within the models tested. The comparison shows that Claude AI performs better and provides more information, while also pointing to the variable clarity of Google Bard's responses. This underscores the need to choose a model carefully, based on the performance metrics that matter most for medical decisions. Discussion The comparative analysis of AI models reveals Claude AI's dominance in relevance and completeness, suggesting its superior ability to generate contextually pertinent and thorough responses. The consistency of ChatGPT's clarity and Claude AI's completeness, reflected in lower standard deviations, indicates their reliability in maintaining a performance standard. Conversely, the significant variability in Google Bard's clarity highlights the potential for unpredictable user experiences, emphasizing the need for enhanced model fine-tuning. The ANOVA results, particularly the significant difference in relevance, further corroborate the distinct performance profiles of these models. Moreover, the moderate positive correlation between Google Bard's accuracy and relevance suggests a link between the correctness of information and its applicability. However, such correlations are not uniformly observed across all models. This nuanced understanding of model-specific strengths and weaknesses is critical for informed AI selection, tailored to specific user needs and contexts, thus enhancing the practicality and effectiveness of AI in complex decision-making scenarios. The discussion section is further categorized into the following subsections: Limitations An important limitation is the risk of bias in scenario selection and in the AI training data. Further, the static nature of the AI answers does not accurately reflect dynamic decision-making in real-life clinical scenarios. Ethical considerations From an ethical standpoint, the use of AI in healthcare decision-making raises questions about patient privacy, about transparency regarding how and why such decisions are made by AI, and about widening healthcare gaps due to biased machine learning datasets.
Contextual analysis The findings highlight the need for context-based knowledge in AI applications within healthcare. Although AI yields helpful information, it should reinforce rather than supersede human judgment, especially in complex clinical situations. Future directions Future research should focus on longitudinal studies to measure the performance of AI over time and across different medical contexts. Moreover, developing approaches that combine AI with human governance in clinical practice is critical. The research underscores the changing role of AI in healthcare, emphasizing thoughtful analysis, ethical considerations, and the balanced integration of AI into the clinical decision-making process. Conclusions This paper presents a detailed comparison of AI language models in complex medical scenarios, offering evidence-based findings through a quantitative analysis that illustrates major differences between the models' accuracy and the gold standards provided by medical doctors. The comprehensive results showcase Claude AI's ability to provide more concise answers, whereas Google Bard's lower clarity shows that there are challenges in human-AI interaction. The current study is a crucial reference for understanding AI performance and its application in medicine. It underscores the importance of leveraging these findings to enhance AI technologies and to adapt their use in medical settings. This approach aims to optimize the experience of medical professionals, ensuring they derive superior benefits from AI tools. The insights gained from this study are vital for guiding the development and effective utilization of AI in healthcare decision-making. Table 2 shows descriptive statistics for the performance of the different AI models. Claude AI is rated the most relevant, and Google Bard receives the lowest completeness score. The ratings differ across the tests: ChatGPT's clarity and Claude AI's completeness scores are more stable, whereas Google Bard's clarity varies significantly. The table provides a quantitative comparison of the four AI models - ChatGPT, Claude AI, Google Bard, and Perplexity AI - regarding accuracy, relevance, clarity, and completeness on the complex scenarios, based on mean scores across the parameter variations. TABLE 3: Statistical Comparison of AI Tools. ANOVA: analysis of variance
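The scoring-and-testing pipeline described above (Likert ratings per model and metric, a one-way ANOVA across models for each metric, and per-model correlations between metrics) can be sketched in a few lines of code. The study itself used JASP; the sketch below is only an illustration in Python, and the scores it generates are hypothetical placeholders rather than the study's data.

```python
import numpy as np
import pandas as pd
from scipy import stats

models = ["ChatGPT", "Claude AI", "Google Bard", "Perplexity"]
metrics = ["accuracy", "relevance", "clarity", "completeness"]
rng = np.random.default_rng(0)

# Hypothetical Likert scores (1-5) for 14 scenarios per model and metric.
scores = pd.DataFrame(
    {(m, k): rng.integers(1, 6, size=14) for m in models for k in metrics}
)
scores.columns = pd.MultiIndex.from_tuples(scores.columns, names=["model", "metric"])

# Descriptive statistics: mean and standard deviation per model and metric.
summary = scores.agg(["mean", "std"]).T
print(summary)

# One-way ANOVA across the four models, separately for each metric.
for k in metrics:
    groups = [scores[(m, k)] for m in models]
    f_stat, p_val = stats.f_oneway(*groups)
    print(f"ANOVA {k}: F={f_stat:.2f}, p={p_val:.3f}")

# Correlation between accuracy and relevance within a single model.
r, p = stats.pearsonr(scores[("Google Bard", "accuracy")],
                      scores[("Google Bard", "relevance")])
print(f"Google Bard accuracy vs relevance: r={r:.3f}, p={p:.3f}")
```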
2,275
2024-01-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Phase diagram for passive electromagnetic scatterers With the conservation of power, a phase diagram defined by amplitude square and phase of scattering coefficients for each spherical harmonic channel is introduced as a universal map for any passive electromagnetic scatterers. Physically allowable solutions for scattering coefficients in this diagram clearly show power competitions among scattering and absorption. It also illustrates a variety of exotic scattering or absorption phenomena, from resonant scattering, invisible cloaking, to coherent perfect absorber. With electrically small core-shell scatterers as an example, we demonstrate a systematic method to design field-controllable structures based on the allowed trajectories in this diagram. The proposed phase diagram and inverse design can provide tools to design functional electromagnetic devices. © 2016 Optical Society of America OCIS codes: (290.4020) Mie theory; (160.4670) Optical materials. References and links 1. J. Shi, F. Monticone, S. Elias, Y. Wu, D. Ratchford, X. Li, and A. Alú, “Modular assembly of optical nanocircuits,” Nat. Commun. 5, 3896 (2014). 2. N. Engheta, “Circuits with light at nanoscales: optical nanocircuits inspired by metamaterials,” Science 317, 1698–1702 (2007). 3. M. I. Tribelsky, and B. S. Lukyanchuk, “Anomalous light scattering by small particles,” Phys. Rev. Lett. 97, 263902 (2006). 4. F. Monticone, C. Argyropoulos, and A. Alú, “Multi-layered plasmonic covers for comblike scattering response and optical tagging,” Phys. Rev. Lett. 110, 113901 (2013). 5. A. Alú and N. Engheta, “Polarizabilities and effective parameters for collections of spherical nanoparticles formed by pairs of concentric double-negative, single-negative, and/or double-positive metamaterial layers,” J. Appl. Phys. 97, 094310 (2005). 6. R. E. Hamam, A. Karalis, J. D. Joannopoulos, and M. Soljacic, “Coupled-mode theory for general free-space resonant scattering of waves,” Phys. Rev. A 75, 053801 (2007). 7. H. Noh, Y. Chong, A.D. Stone, and H. Cao, “Perfect coupling of light to surface plasmons by coherent absorption,” Phys. Rev. Lett. 108, 186805 (2012). 8. H. Noh, S.M. Popoff, and H. Cao, “Broadband subwavelength focusing of light using a passive sink,” Opt. Express 21, 17435–17446 (2013). 9. M. I. Tribelsky, “Anomalous light absorption by small particles,” Europhys. Lett. 94, 14004 (2011). 10. V. Grigoriev, N. Bonod, J. Wenger, and B. Stout, “Optimizing nanoparticle designs for ideal absorption of light,” ACS Photonics 2, 263–270 (2015). 11. A. Alú and N. Engheta, “Achieving transparency with plasmonic and metamaterial coatings,” Phys. Rev. E 72, 016623 (2005). 12. A. Alú and N. Engheta, “Plasmonic materials in transparency and cloaking problems: mechanism, robustness, and physical insights,” Opt. Express 15, 3318–3332 (2007). 13. S. Muhlig, M. Farhat, C. Rockstuhl, and F. Lederer, “Cloaking dielectric spherical objects by a shell of metallic nanoparticles,” Phys. Rev. B 83, 195116 (2011). 14. S. Muhlig, A. Cunningham, J. Dintinger, M. Farhat, S. B. Hasan, T. Scharf, T. Burgi, F. Lederer, and C. Rockstuhl, “A self-assembled three-dimensional cloak in the visible,” Sci. Rep. 3, 2328 (2013). 15. M. Farhat, S. Mhlig, C. Rockstuhl, and F. Lederer, “Scattering cancellation of the magnetic dipole field from macroscopic spheres,” Opt.
Express 20, 13896–13906 (2012). 16. A. Alú and N. Engheta, “Cloaking a sensor,” Phys. Rev. Lett. 102, 233901 (2009). 17. Z. Ruan and S. Fan, “Superscattering of light from subwavelength nanostructures,” Phys. Rev. Lett. 105, 013901 (2010) 18. Z. Ruan and S. Fan, “Design of subwavelength superscattering nanospheres,” Appl. Phys. Lett. 98, 043101 (2011). 19. A. Mirzaei, A. E. Miroshnichenko, I. V. Shadrivov, and Y. S. Kivshar, “Superscattering of light optimized by a genetic algorithm,” Appl. Phys. Lett. 105, 011109 (2014). 20. N. M. Estakhri and A. Alú, “Minimum-scattering superabsorbers,” Phys. Rev. B. 89, 121416 (2014). 21. Z. Ruan and S. Fan, “Temporal coupled-mode theory for Fano resonance in light scattering by a single obstacle,” J. Phys. Chem. C 114, 7324 (2010). 22. K. R. Catchpole and A. Polman, “Plasmonic solar cells,” Opt. Express 16, 21793–21800 (2008). 23. X. Zhang, Y. L. Chen, R. S. Liu, and D. P. Tsai, “Plasmonic photocatalysis,” Rep. Prog. Phys. 76, 046401 (2013). 24. J.-Y. Lee, M.-C. Tsai, P.-C. Chen, T.-T. Chen, K.-L. Chan, C.-Y. Lee, and R.-K. Lee, “Thickness effects on light absorption and scattering for nano-particles in shape of hollow-spheres,” J. Phys. Chem. C 119, 25754–25760 (2015). 25. L. R. Hirsch, R. J. Stafford, J. A. Bankson, S. R. Sershen, B. Rivera, R. E. Price, J. D. Hazle, N. J. Halas, and J. L. West, “Nanoshell-mediated near-infrared thermal therapy of tumors under magnetic resonance guidance,” Proc. Natl. Acad. Sci. U. S. A. 100, 13549–13554 (2003). 26. M. I. Tribelsky, A. E. Miroshnichenko, Y. S. Kivshar, B. S. Lukyanchuk, and A. R. Khokhlov, “Laser pulse heating of spherical metal particles,” Phys. Rev. X 1, 021024 (2011). 27. Y. Pu, R. Grange, C. L. Hsieh, and D. Psaltis, “Nonlinear optical properties of core-shell nanocavities for enhanced second-harmonic generation,” Phys. Rev. Lett. 104, 207402 (2010). 28. S.-W. Chu, T.-Y. Su, R. Oketani, Y.-T. Huang, H.-Y. Wu, Y. Yonemaru, M. Yamanaka, H. Lee, G.-Y. Zhuo, M.-Y. Lee, S. Kawata, and K. Fujita, “Measurement of a saturated emission of optical radiation from gold nanoparticles: Application to an ultrahigh resolution microscope,” Phys. Rev. Lett. 112, 017402 (2014). 29. A. Mirzaei, I. V. Shadrivov, A. E. Miroshnichenko, and Y. S. Kivshar, “Cloaking and enhanced scattering of core-shell plasmonic nanowires,” Opt. Express 21, 10454–10459 (2013). 30. R. Fleury, J. Soric, and A. Alú, “Physical bounds on absorption and scattering for cloaked sensors,” Phys. Rev. B. 89, 045122 (2014). 31. M. Born and E. Wolf, Principle of Optics, 7th ed. (Cambridge University, 1999). 32. C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (Wiley, 1983). 33. S. A. Maier, Plasmonics: Fundamentals and Applications (Springer, 2007). 34. H.C. van de Hulst, Light Scattering by Small Particles (Dover, 1981). 
To obtain exotic electromagnetic properties at the subwavelength scale, a variety of specific conditions must be satisfied. Undoubtedly, a better understanding of the scattering coefficients could provide a route to designing nanostructures. In general, we need to consider information about the scattering limitation, the power assignment, the scattered radiation pattern, and the robustness of the corresponding extrinsic field response of real scatterers. At working frequencies of interest, for example, many metals exhibit strong dispersion in the visible spectrum, which introduces real loss effects and suppresses the desired functions [29,30]. As mismatches in physical parameters may occur, it is natural to seek optimized invisible cloaks, or the performance boundary of a cloaked sensor, with intrinsic loss taken into consideration [30]. In this paper, we study the general relation between amplitude and phase in the scattering coefficients for any passive electromagnetic scatterer. A phase diagram is introduced by imposing power conservation on the absorption cross section of each partial-wave channel, which acts as a universal map for designing passive scatterers. Not only can all physically allowed regions be defined so as to satisfy intrinsic power conservation, but all exotic electromagnetic properties reported in the literature can also be illustrated in this phase diagram. Moreover, we take electrically small core-shell scatterers as an example to illustrate a systematic way of designing the composition of subwavelength structures with required scattering and absorption properties. Phase diagram for a passive scatterer We consider a linearly polarized plane wave with time evolution e^{−iωt} at the angular frequency ω, illuminating a single spherical object. The object may be made of multiple layers of uniform and isotropic media with complex permittivity and permeability, denoted as ε = ε′ + iε″ and μ = μ′ + iμ″, respectively. Here, ε″ and μ″ are both assumed to be positive real numbers for a passive medium. Without loss of generality, the surrounding environment is taken to be non-absorptive, non-magnetic, and free of external sources or currents, i.e., ε₀ = μ₀ = 1. In the following, we express the electric field E and the magnetic field H in the environment by two auxiliary vector potentials, i.e., the transverse magnetic (TM) and transverse electric (TE) modes, which are respectively generated by two scalar spherical wave equations. Each scalar function can be built as an infinite series with unknown coefficients determined through the boundary conditions. Following conventional notation, let the scattering coefficients be C^TM_n and C^TE_n for the transverse magnetic (TM) and transverse electric (TE) modes in each spherical harmonic channel labeled by the index n, respectively [31][32][33][34]. The corresponding absorption and scattering cross sections, σ_abs and σ_scat, defined as the total power absorbed and scattered by a single scatterer with respect to the unit intensity of the incident plane wave, can be expressed as partial-wave series over these coefficients, where λ is the wavelength of the incident wave in vacuum. For a given particle radius, denoted a, the value of the size parameter 2πa/λ determines how many terms in these two convergent series are dominant [32]. Here, we define the partial absorption cross section for each spherical harmonic channel, labeled by n, as σ^abs(TE,TM)_n, and express each scattering coefficient through its amplitude |C^(TE,TM)_n| and the corresponding phase θ^(TE,TM)_n. Due to the conservation of power, these partial absorption cross sections must be greater than or equal to zero in each spherical harmonic channel, σ^abs(TE,TM)_n ≥ 0.
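The partial-wave formulas referred to above (the paper's Eqs. (1)-(3)) are not reproduced in this text. As a hedged reconstruction, a normalization consistent with the facts quoted later (a per-channel bound 0 ≤ |C_n|² ≤ 1, an allowed phase window [π/2, 3π/2], and a maximum normalized absorption of 1/4 at θ_n = π, |C_n| = 1/2) would read as follows, with the sign convention for C_n being an assumption:

$$ \sigma^{\mathrm{scat}} \;=\; \frac{\lambda^2}{2\pi}\sum_{n=1}^{\infty}(2n+1)\Big(\big|C^{TM}_n\big|^2+\big|C^{TE}_n\big|^2\Big), $$

$$ \frac{2\pi\,\sigma^{\mathrm{abs}(TE,TM)}_n}{(2n+1)\lambda^2} \;=\; -\big|C^{(TE,TM)}_n\big|^2 \;-\; \big|C^{(TE,TM)}_n\big|\cos\theta^{(TE,TM)}_n \;\ge\; 0, $$

so that the lossless boundary is the curve |C_n| = −cos θ_n and the total absorption cross section is the sum of the partial contributions over both polarizations and all channels n.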
By decomposing into partial waves, in terms of each spherical channel, we can construct a universal phase diagram for any passive electromagnetic scatterer, as shown in Fig. 1. Interestingly, even though we do not write down any exact formula for the scattering coefficient, the range of physically allowed values of the amplitude square only exists within 0 ≤ |C^(TE,TM)_n|² ≤ 1. The range of allowable solutions for the scattering coefficients results from the intrinsic power conservation, regardless of any specific scattering event. We depict the allowed solutions in colors for passive cases, which correspond to σ^abs(TE,TM)_n ≥ 0, and depict the forbidden ones in white for gain cases, which correspond to σ^abs(TE,TM)_n < 0. Along the lossless contour, there exists a family of solutions with the same value σ^abs(TE,TM)_n = 0, but with different amplitudes and phases of the scattering coefficients. It is known that an ideally localized surface plasmon in a subwavelength structure relies on a lossless resonance condition [3,5], which corresponds to the point |C^(TE,TM)_n|² = 1, θ^(TE,TM)_n = π in our phase diagram. As for invisible cloaks [11][12][13][14][15][16], one can look for solutions near the bottom of the phase diagram, i.e., |C^(TE,TM)_n|² ≈ 0. Once the constituent material of a scatterer has intrinsic loss, the scattering coefficients move to reside inside the colored region. For each channel, the maximum value of the normalized absorption cross section is 2πσ^abs_n/[(2n + 1)λ²] = 1/4, i.e., the green cross-marker shown in Fig. 1, corresponding to coherent perfect absorbers [7][8][9][10]; this point, however, is also associated with the same amount of electromagnetic scattering power. The phase and amplitude of the scattering coefficient that achieve the maximum absorption power are π and 1/2, respectively. Moreover, along a contour of constant absorption power, there exist a maximum and a minimum value of the scattering power. From the trajectory in the phase diagram to design passive scatterers Through the above examples, the phase diagram provides a universal map displaying all possible solutions for any passive scatterer. In principle, the same scattering coefficients can be realized without knowing the composition of the scatterer. In this way, one may design specific scatterers with the required scattering and absorption properties by choosing allowed trajectories in the phase diagram. In the following, we introduce a systematic way to perform the inverse design of a scatterer by specifying the required scattering or absorption properties. For a scatterer with N layers made of isotropic and homogeneous media, the corresponding scattering coefficient can be expressed in a compact form. Fig. 3. Absorption and scattering cross sections corresponding to the contour shown in Fig. 2, depicted in terms of the parametric variable t defined in Eqs. (4)-(5). Here, a constant absorption power is requested by setting q^TM_1 = 0.2, while there is a degree of freedom in the scattering power. The inset illustrates the core-shell scatterer used as an example to design a passive electromagnetic device with constant absorption power. As an example, we consider a passive scatterer in the configuration of a core-shell sphere, as illustrated in the inset of Fig. 3.
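Before continuing with the design example, a quick numerical sketch of the diagram itself may help. It uses the hedged per-channel normalization written earlier, which is an assumption rather than the paper's Eq. (2), and simply scans the (θ_n, |C_n|²) plane, keeping the passive points:

```python
import numpy as np

# Scan the phase plane of a single spherical-harmonic channel:
# horizontal axis = phase theta_n of the scattering coefficient,
# vertical axis   = amplitude square |C_n|^2.
theta = np.linspace(0.0, 2.0 * np.pi, 721)
amp2 = np.linspace(0.0, 1.0, 401)
TH, A2 = np.meshgrid(theta, amp2)
AMP = np.sqrt(A2)

# Normalized partial cross sections, 2*pi*sigma_n / ((2n+1)*lambda^2).
# The sign convention below is an assumption, chosen so that the maximum
# absorption is 1/4 at theta_n = pi, |C_n| = 1/2, as stated in the text.
q_scat = A2
q_abs = -A2 - AMP * np.cos(TH)

allowed = q_abs >= 0.0          # passivity: non-negative partial absorption
print("max normalized absorption:", q_abs[allowed].max())        # 0.25
print("passive fraction of the (theta, |C|^2) plane:", round(allowed.mean(), 3))
```

The allowed points reproduce the qualitative features described above: phases confined to [π/2, 3π/2], a lossless boundary |C_n| = −cos θ_n, and a single maximum-absorption point at θ_n = π, |C_n|² = 1/4.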
The sphere is composed of two concentric layers of isotropic and homogeneous materials. The geometrical parameters and material properties for this core-shell scatterer are the core radius a_c, the radius of the whole particle a, and the permittivities (permeabilities) ε_s (μ_s) and ε_c (μ_c) of the shell and core regions, respectively. We limit our system to the non-magnetic case, so μ_s = μ_c = 1. If the electrically small approximation is satisfied for such a core-shell scatterer, it is known that the dominant contribution comes from electric dipole-wave scattering, i.e., n = 1 and the TM mode. We choose the constant absorption power with q^TM_1 = 0.2 in Eqs. (4)-(5). For such a two-layered scatterer, the corresponding scattering coefficients are obtained from a 4 × 4 matrix by tracking the TE and TM modes. By applying the continuity of the electric and magnetic fields at the two boundaries, shell-environment and core-shell, one can approximately express the term V^TM_1/U^TM_1, where γ is defined as the ratio of the core radius to the whole particle radius, γ ≡ a_c/a. If one replaces ε by μ, one obtains the other term V^TE_1/U^TE_1; however, for non-magnetic media it is automatically zero, since μ₀ = μ_s = μ_c = 1. By taking γ = 1 or ε_s = ε_c, the above result reduces to the electric dipole equation for a solid sphere. Now, for our core-shell system with the geometric size fixed, we provide a systematic way to find the corresponding material properties for a constant absorption power, as specified by the contour in the phase diagram shown in Fig. 2. To give a clear illustration, one may first fix the material property in the shell or in the core region. If we assume that the composition of the shell region is given, i.e., ε_s is fixed, then, based on Eqs. (4)-(6), the corresponding solution for the permittivity in the core region is found to satisfy Eq. (7). Solutions obtained from the analytical formula in Eq. (7) are shown in Figs. 4(a) and 4(b) for the real and imaginary parts of the permittivity in the core region, respectively. In terms of the parametric variable t, we have a wide range of choices for the core material, all of which give the same absorption power. Moreover, with these parameters, the corresponding absorption and scattering cross sections satisfy our request for a constant absorption power, as shown in Fig. 3. From the comparison between Fig. 3 and Figs. 4(a)-4(b), we find that when t = π/2 the scattering power reaches a maximum value, while the ε_c required for the material has a minimum value, because the dissipative loss is proportional to the local electric field. In this scenario, with the help of a strong electric field, it becomes possible to maintain the same absorption power. On the other hand, if the material property in the core region is specified, then, based on Eqs. (4)-(6), the corresponding solutions for the permittivity in the shell region ε_s that support a constant absorption power are governed by Eq. (8), with the shorthand notations defined in Eqs. (9)-(11). Fig. 4. The permittivities that support a constant absorption power, shown as a function of the parametric variable t. For a given material in the shell region, ε_s = 3.12, the solutions found for the real and imaginary parts of the permittivity in the core region are shown in (a) and (b), respectively. For a given material in the core region, ε_c = 5, the two families of solutions given by Eq. (8) are denoted ε⁺_s and ε⁻_s for the shell region, with the corresponding real and imaginary parts of the permittivity shown in (c, e) and (d, f), respectively. Results obtained from the analytical formulas are depicted as solid curves, while exact solutions from scattering theory are depicted as dashed curves. In all cases, the core-shell geometry is fixed with a = λ/24 and γ = 0.9. Derivations of the solutions in Eqs. (7) and (8) are given in the Appendix. From Eq. (8), there exist two families of materials for the shell region, denoted ε±_s. We show the real and imaginary parts of the shell permittivity for these two families in Figs. 4(c)-4(d) and 4(e)-4(f), together with the exact solutions as dashed curves. We find good agreement between our analytical solutions and the numerical ones obtained from the exact scattering theory. Again, based on these results, the corresponding absorption and scattering cross sections satisfy our request for a constant absorption power, as shown in Fig. 3. It is surprising to find that a variety of choices for the material properties exist even when a specific scattering or absorption property is prescribed at the outset. Without the introduction of this phase diagram and the inverse design, it is not only difficult to find the required material properties, but also complicated to recognize the power competition and limitations among these cross sections for each channel. Before concluding, we remark that when the electrically small approximation is not valid, multiple channels of the TE and TM modes may be excited, as expected. In this scenario, one can still apply our phase diagram, but not to a single channel only. By embedding multi-layered coatings to excite multiple channels, the intrinsic single-channel limitation can be broken to generate superscattering or superabsorption phenomena [17][18][19][20][21][29]. Treating scattering coefficients from several dominant channels is a natural extension, obtained by considering all of them on the phase diagrams simultaneously. In addition, although the proposed phase diagram is based on the well-known scattering formulas for spherically symmetric scatterers, the concept of our phase diagram can be applied to non-spherical scatterers as well. Although in our example of inverse design we use well-known electric dipole formulas for core-shell systems, our approach of finding the corresponding materials for a given request on the scattering and absorption properties remains non-trivial. Through this universal phase diagram, one has a systematic way to design functional passive electromagnetic scatterers.
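The paper's Eqs. (6)-(8) are not reproduced in this text. As a rough stand-in for the electrically small inverse-design step discussed above, one can scan candidate core permittivities at fixed shell permittivity using the textbook quasi-static polarizability of a coated sphere (Bohren and Huffman [32]) and keep those whose dipole absorption is close to a chosen target. The target value, the permittivity grid, and the use of the simple dipole formulas σ_abs ≈ k Im α − k⁴|α|²/(6π) and σ_scat = k⁴|α|²/(6π) are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

lam = 1.0                       # wavelength (arbitrary units)
k = 2 * np.pi / lam
a = lam / 24                    # outer radius, as in the paper's example
gamma = 0.9                     # core-to-shell radius ratio
f_core = gamma**3               # volume fraction occupied by the core
eps_s = 3.12 + 0.0j             # fixed shell permittivity (illustrative value)

def polarizability(eps_c, eps_s, f_core, a):
    """Quasi-static polarizability of a coated sphere in vacuum
    (Bohren & Huffman-type formula; embedding medium eps_m = 1)."""
    eps_m = 1.0
    num = (eps_s - eps_m) * (eps_c + 2 * eps_s) + f_core * (eps_c - eps_s) * (eps_m + 2 * eps_s)
    den = (eps_s + 2 * eps_m) * (eps_c + 2 * eps_s) + f_core * (eps_c - eps_s) * (2 * eps_s - 2 * eps_m)
    return 4 * np.pi * a**3 * num / den

# Scan candidate core permittivities and keep those near a chosen absorption level.
target, tol = 0.02, 0.002       # hypothetical target for sigma_abs / lambda^2
re = np.linspace(-10, 10, 401)
im = np.linspace(0.01, 10, 200)
RE, IM = np.meshgrid(re, im)
alpha = polarizability(RE + 1j * IM, eps_s, f_core, a)
sigma_scat = k**4 * np.abs(alpha)**2 / (6 * np.pi)        # dipole scattering
sigma_abs = k * alpha.imag - sigma_scat                   # dipole absorption
hits = np.abs(sigma_abs / lam**2 - target) < tol
print(f"{hits.sum()} candidate core permittivities give sigma_abs/lambda^2 ~ {target}")
```

As in the paper's Fig. 4, such a scan typically returns a whole family of core permittivities compatible with the same absorption level, while the scattering power varies along the family.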
Conclusion In summary, we have introduced a phase diagram as a compact tool to link the scattering and absorption powers for each spherical harmonic channel. Intrinsically, power conservation for any passive scatterer gives the physically allowable solutions for the scattering coefficient. Not only can the known exotic scattering and absorption phenomena be illustrated in this diagram, but supported trajectories are also demonstrated for designing extrinsic-field-controllable scatterers. With core-shell scatterers at the subwavelength scale as an example, we reveal a systematic way to find a variety of solutions for the composite materials that possess the same absorption power. In general, one can easily extend this methodology beyond the small-particle-size limitation by considering interference from several channels in the map. Given the analogy among wave phenomena, the concept of this phase diagram and our inverse design method can be readily applied to acoustic systems as well as quantum scattering systems. With the help of Eq. (12), one obtains the result in Eq. (7). A similar procedure can be applied to derive Eq. (8) by expanding every term and collecting the coefficients of ε_s. Finally, with the shorthand notations introduced in Eqs. (9)-(11), one can easily solve for ε_s in Eq. (15), which gives the solutions shown in Eq. (8). Fig. 1. A phase diagram for each spherical harmonic channel, labeled by n, generated by imposing power conservation on the partial absorption cross section, for the TE or TM mode separately. The numbers marked on the contour lines correspond to the values of the normalized absorption cross section in the individual channel, 2πσ^abs(TE,TM)_n/[(2n+1)λ²]. Colored regions are physically allowed solutions, while uncolored regions represent forbidden solutions. Note that the amplitude square is bounded within the range [0, 1], while the allowed phase lies within [π/2, 3π/2]. The green cross-marker, located at (θ^(TE,TM)_n = π, |C^(TE,TM)_n|² = 0.25), indicates the maximum value, 0.25, of the normalized absorption cross section. Fig. 2. Supported trajectories in the phase diagram, shown for different sets of the parameters α^(TE,TM)_n and β^(TE,TM)_n defined in Eq. (3). Trajectories with constant β^(TE,TM)_n are shown as blue dot-dashed curves, while trajectories with constant α^(TE,TM)_n are shown as red dot-dashed curves. Two contours of constant absorption power are also depicted in black.
4,891.6
2016-03-21T00:00:00.000
[ "Physics" ]