Computer-Assisted Tracking of Chlamydomonas Species
The green alga Chlamydomonas reinhardtii is a model system for motility in unicellular organisms. Photo-, gravi-, and chemotaxis have previously been associated with C. reinhardtii, and observing the extent of these responses within a population of cells is crucial for refining our understanding of how this organism responds to changing environmental conditions. However, manually tracking and modeling a statistically viable number of samples of these microorganisms is an unreasonable task. We hypothesized that automated particle tracking systems are now sufficiently advanced to effectively characterize such populations. Here, we present an automated method to observe C. reinhardtii motility that allows us to identify individual cells as well as gather global information on direction, speed, and size. Nutrient availability effects on wild-type C. reinhardtii swimming speeds, as well as changes in speed and directionality in response to light, were characterized using this method. We also provide for the first time the swimming speeds of several motility-deficient mutant lines. While our present effort is focused on the unicellular green alga C. reinhardtii, we confirm the general utility of this approach using Chlamydomonas moewusii, another member of this genus, which contains over 300 species. Our work provides new tools for evaluating and modeling motility in this model organism and establishes the methodology for conducting similar experiments on other unicellular microorganisms.
INTRODUCTION
The unicellular alga Chlamydomonas reinhardtii is a model organism for the study of flagellar motility, photosynthesis, and a variety of biotechnology applications among unicellular eukaryotes. This photoautotroph has minimal culture requirements, is genetically tractable, and has an extensive strain repository including numerous motility mutants (Luck et al., 1977; Huang et al., 1981; Huang et al., 1982a; Huang et al., 1982b; Kuchka and Jarvik, 1982; Segal et al., 1984; Kamiya, 1988; Barsel et al., 1988; Bloodgood and Salomonsky, 1989; Kuchka and Jarvik, 1987; Kamiya et al., 1991). Photo-, gravi-, and chemotaxis have all been associated with this organism, making it an ideal system for understanding how multiple inputs can be integrated to regulate motility in unicellular organisms. Prior efforts to quantify Chlamydomonas motility have largely focused on high-speed photographic evidence, which was then analyzed by manual tracking (Racey et al., 1981). However, such methods are time-consuming, prone to error, and unlikely to resolve multiple responses within a population. Moreover, algal cultures frequently reach densities ranging from 10^5 to 10^7 cells/ml. The results of a handful of tracks (<30) are unlikely to be an accurate reflection of the behavior of such a population. The ability to observe a larger sample size would also provide refined insight into the dynamic response range of this model organism rather than just an average. Automated particle tracking, in which individual constituents of a population, biotic or abiotic, are followed over time, can yield data on particle speed, directionality, size, and other features of interest for understanding the behavior of the observed population. In the case of C. reinhardtii, we hypothesized that such approaches would be able to accommodate significantly more tracks, providing better models of behavior within a population of cells. Here, we report a new method to characterize motility in C. 
reinhardtii, which allows us to identify individual particles (cells) as well as gather population-wide information on speed as well as directionality. The proposed strategy requires only a microscope with a camera to collect images and utilizes a publicly available software package, the TrackMate plugin for ImageJ, for analysis (Schneider et al., 2012;Tinevez et al., 2017).
In the present study, we evaluated the impacts of light and nutrient availability on motility in wild-type C. reinhardtii, as well as characterizing a series of motility-deficient mutant lines. The ability to quantify the effects of such mutations provides a refined perspective on the impacts such mutations may have on these organisms. Our work provides a new tool for evaluating and modeling motility in this model organism. Furthermore, we confirm that the established methodology is able to characterize motility in another member of this genus, Chlamydomonas moewusii, supporting the broader utility of this approach for observing motility in other unicellular microorganisms which may have important roles in host-microbial associations and/or biotechnology.
Materials
Unless stated otherwise, all reagents were purchased from either Fisher Scientific or Sigma Aldrich.
Video Acquisition
All motility data were acquired under minimal lighting in a darkroom to minimize background phototactic effects. Cell density was determined by fixing 200 μl aliquots of cells in 1.7% formaldehyde and counting them manually using a hemocytometer at ×400 magnification. The remainder of the culture was incubated at 23°C in the dark for 30 min to reduce the impact of light on cell movement. Algal cultures were vortexed for ≈30 s to resuspend any settled algae. Aliquots of 30 µl were placed on a slide and viewed under an Olympus CH30 binocular microscope at ×100 magnification. Cells were allowed to settle to avoid artificial movement of the cells caused by convection currents in the liquid on the slide. ToupView software (www.touptek.com) was used to collect videos with an AmScope FMA050 fixed microscope camera. Videos were collected at a frame rate of 7.5 frames/s for approximately 30 s.
Video Analysis
Videos were imported into Fiji for splitting and tracking analysis (Schindelin et al., 2012). Calibration was done using a micrometer ruler slide to determine pixel length. To analyze tracks, the preinstalled Fiji plugin TrackMate was utilized (Tinevez et al., 2017). Cell size was also determined through Fiji. The resulting files were saved in comma-separated values (CSV) format as "Spots in tracks statistics," "Track statistics," and "Links in tracks statistics." To determine the directionality of each track, these three files were combined in the statistical computing software R using the RStudio interface (www.rstudio.com) (R Development Core Team, 2016). This allowed for the creation of a new file in tab-delimited text format containing the direction vectors of all cell tracks. From the combined R file, rose plots were generated by dividing the slide viewing area into eight wedges and plotting the total fraction of cells in each section. Preliminary findings were confirmed using the Chemotaxis and Migration Tool (www.ibidi.com). Histograms of the varying speeds within the population were generated with the data analysis package in Excel. Average speeds of tracks were determined using the "Track statistics" file. Units were converted to micrometers per second using the time per frame, obtained by dividing the duration of the video by the number of image slices.
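The merge-and-direction step described above can be sketched as follows. The original analysis was done in R, so this Python (pandas) version is only an illustrative translation, and the column names (TRACK_ID, POSITION_X, POSITION_Y, FRAME) are assumed to follow TrackMate's default CSV export; adjust them if your export differs.

```python
import numpy as np
import pandas as pd

def track_directions(spots: pd.DataFrame) -> pd.DataFrame:
    """Compute a net direction vector (and compass angle) for each track
    from a TrackMate 'Spots in tracks statistics' table.

    The direction is taken from the first to the last spot of the track,
    ordered by frame number.
    """
    out = []
    for tid, g in spots.sort_values("FRAME").groupby("TRACK_ID"):
        dx = g["POSITION_X"].iloc[-1] - g["POSITION_X"].iloc[0]
        dy = g["POSITION_Y"].iloc[-1] - g["POSITION_Y"].iloc[0]
        out.append({"TRACK_ID": tid, "dx": dx, "dy": dy,
                    "angle_deg": np.degrees(np.arctan2(dy, dx)) % 360})
    return pd.DataFrame(out)

# Toy example: one track moving right, one moving "up" in image coordinates
spots = pd.DataFrame({
    "TRACK_ID":   [0, 0, 1, 1],
    "FRAME":      [0, 1, 0, 1],
    "POSITION_X": [0.0, 5.0, 2.0, 2.0],
    "POSITION_Y": [0.0, 0.0, 0.0, 3.0],
})
dirs = track_directions(spots)
```

The resulting table of per-track angles is what the eight-wedge rose plots are built from.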
Phototaxis Studies
Light intensity studies were completed using a Leica 13410311 illuminator dissection scope leveled with the microscope platform. This allowed the light to pass perpendicular to the slide across the sample. Light intensity settings were measured using a digital luxometer. Cultures of 30 µl were loaded onto slides and exposed to the indicated intensity of light immediately before data collection.
Particle Tracking
The digital equivalent of manual tracking across multiple video frames with hundreds of particles (cells) rapidly becomes computationally restrictive if the spatial coordinates of every possible combination have to be evaluated from frame to frame over the duration of these videos (30 s). This is an instance of what is known as the 'linear assignment problem' and is well established in image analysis (Kong et al., 2013; Diem et al., 2015). However, this limit can be overcome by a variety of approaches. We utilized a publicly available software package, the TrackMate plugin for ImageJ, for analysis. In this package, videos are separated into individual frames, and the total number of particles, in this case cells of C. reinhardtii, in each frame is counted by identifying their edges using a Laplacian of Gaussian (LoG) detector, a common approach in image analysis (Leal Taixé et al., 2009; Kalaidzidis, 2009). TrackMate then matches particles across multiple frames using the Munkres-Kuhn (aka Hungarian) algorithm, which identifies the most likely matches for the particles between frames and is also a well-established method for tracking multiple particles (Kong et al., 2013; Diem et al., 2015).
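The frame-to-frame linking idea can be illustrated with SciPy's implementation of the Hungarian algorithm. This is a conceptual sketch of the assignment step only, not TrackMate's actual implementation (which adds features such as gap closing and track splitting); the distance threshold `max_dist` is a hypothetical parameter introduced here for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(prev_xy, next_xy, max_dist=20.0):
    """Match particle centroids between consecutive frames by solving the
    linear assignment problem with the Munkres-Kuhn (Hungarian) algorithm.

    The cost of a match is the Euclidean distance between centroids;
    matches farther apart than max_dist (same units as the coordinates)
    are rejected after the optimal assignment is found.
    """
    # Pairwise distance matrix: rows = previous frame, cols = next frame
    cost = np.linalg.norm(prev_xy[:, None, :] - next_xy[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

# Two particles that swap apparent order between frames are still linked correctly
prev_xy = np.array([[0.0, 0.0], [10.0, 10.0]])
next_xy = np.array([[9.0, 11.0], [1.0, 0.0]])
links = link_frames(prev_xy, next_xy)
```

The greedy alternative (always taking the nearest neighbor first) can mislink crossing cells; the global assignment minimizes the total displacement over all pairs.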
Visualizing Cells and Quantifying Motility
We selected C. reinhardtii cc124, a common lab strain, for our initial experiments as it is the parent line for a number of motility mutants. Cultures were grown for 72 h in standard TAP media at room temperature to a density of 10^7 cells/ml (see Methods). Thirty-second videos of 30 μl aliquots of cultures were acquired and subsequently processed and analyzed in ImageJ with the help of the TrackMate plugin. A representative image slice extracted from one of the videos is shown in Figure 1A. Each frame of the video is automatically separated into different slices and the particles (algal cells) counted (Figure 1B). C. reinhardtii is known to form motile aggregates such as dyads and tetrads, which could skew the counting process. However, the software reliably distinguished individual members of this collection of cells (Figure 1C). From these videos, we were able to acquire 3,000-6,000 individual tracks for analysis over a 30-s period (see Supplementary Video 1).
Cells were then tracked frame by frame to determine average speeds (Figure 1D). The average speed of our cc124 populations at 72 h was 40 ± 4.5 µm/s (Figure 1E). These speeds are slower than some of those derived from previous studies, which range from 80 to 200 µm/s (Racey et al., 1981; Marshall, 2009; Engel et al., 2011). However, close inspection of our track speeds confirmed an upper rate of ≈93 µm/s, in the range of these previously reported values. A histogram of speed distributions in this population confirmed that >50% of the total tracks for each sample (3,000-6,000 tracks/video) were within the 31- to 60-µm/s range (Figure 1F), consistent with the average speed.
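The population statistics above (mean ± s.d. plus binned speed fractions) can be computed from per-track speeds in a few lines of NumPy. The 30-µm/s bin width is an assumption matching the speed ranges quoted in the text, not a parameter stated by the authors.

```python
import numpy as np

def speed_summary(speeds_um_s, bin_width=30.0):
    """Summarize per-track mean speeds (µm/s): population mean, standard
    deviation, and the fraction of tracks falling in each speed bin
    (0-30, 30-60, ... µm/s)."""
    speeds = np.asarray(speeds_um_s, dtype=float)
    edges = np.arange(0.0, speeds.max() + bin_width, bin_width)
    counts, _ = np.histogram(speeds, bins=edges)
    return speeds.mean(), speeds.std(), counts / counts.sum()

# Toy population of six track speeds
mean, sd, fractions = speed_summary([10, 35, 40, 45, 55, 70])
```

Reporting the full binned distribution rather than only the mean is what allows subpopulations (e.g. a fast-swimming minority) to be seen at all.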
Using Chlamydomonas Mutants to Observe Changes in Motility
A large assortment of motility mutants is available for C. reinhardtii, which has helped elucidate the mechanisms of flagellar motility. Unfortunately, the characterization of such mutants is generally limited to relative statements, i.e., "slower" or "extremely slow," rather than quantified values. In order to confirm the robustness of our tracking approach, we evaluated the swimming speeds of several mutant cell lines of C. reinhardtii. We initially investigated three specific mutant strains: cc1036, a non-motile line, as well as cc2228 and cc3663, both motility-deficient lines. Video analysis confirmed that the total number of tracks for each sample increased in the following order: cc1036, cc2228, cc3663, and finally cc124 (Figures 2A-D). The average speed of the mutant lines ranged from ≈1 µm/s (cc1036) to 20 µm/s (cc3663) based on a minimum of 1,000 tracks/video for each mutant (Figure 2E and Table 1).
Our results establish lines cc1036, cc2228, and cc3663 as examples of 0%, 25%, and 50% motility lines, respectively, when compared to the wild-type cc124 strain. A sample video for cc3663 is provided as Supplementary Video 2 (see Supplementary Material). Closer inspection of the 25% mutant (cc2228) showed that a small fraction of these mutants (<0.01%) were actually able to obtain speeds in the 21- to 40-µm/s category, with a maximal speed of 30 µm/s. The ability to obtain this information provides valuable insight into the stochastic nature of cell populations, but may also be used for identifying unique behaviors and compensatory mutations which may arise. Using this method, we obtained the average cell speeds for 12 different C. reinhardtii mutant lines (Table 1). We note that the cc125 line, which is deficient in phototaxis (Kuchka and Jarvik, 1987), actually moves slightly faster than the cc124 strain (115%, p = 0.05, as determined by Student's t test). In addition to the successful characterization of swimming speeds in mutant lines, this approach appears to have even broader utility, as we were also able to characterize the swimming speed of a closely related species, C. moewusii (48 ± 6 µm/s).
Characterizing Mixed Populations
Our method was able to measure swimming speeds in populations whose speed distributions converged around a single maximum. However, it should also be robust enough to handle mixed populations of varying speeds. In order to evaluate this, we combined equal concentrations of 72-h cultures of cc124 and cc2228, a mutant with 25% of wild-type motility. As seen in Figure 3, the distribution of track speeds for the mixed cultures was distinct from that of either line cultured individually, with an average speed of 36 ± 2.5 μm/s. These findings underscore the ability of this approach to observe variations within the population.
The Effects of Culture Conditions on Speed
While specific mutations impact swimming speed, it is changing environmental conditions, such as nutrient availability and light, which influence the speed and directionality of motility in the wild type. As previously stated, this particular strain of C. reinhardtii is known to be sensitive to nitrogen availability, and we tested the effect of this on swimming speed. We prepared reduced-nitrogen TAP by limiting the amount of NH4Cl added to the media to either 50% or no NH4Cl. As shown in Figure 4, a fraction of the algae cultured in 50% NH4Cl TAP (≈40%) were observed in the 51- to 75-μm/s range, while in regular TAP only 20% of the cultures reached this range. We propose that this unexpected increase in swimming speed supports the search for new nitrogen sources in a nutrient-depleted environment. This was in stark contrast to cells cultured in NH4Cl-free TAP, which showed significantly reduced motility, unsurprising given the importance of nitrogen availability in flagellar motility and development (Engel et al., 2012). We note that a small fraction of these cultures (<5%) seemed unaffected by the nitrogen availability differences in these media, suggesting some compensatory mechanisms may be at work. Ongoing studies in our lab are further exploring media effects on swimming speeds in this model organism.
Visualizing Phototaxis in cc124
The studies above confirm the utility of this approach to obtain population-level resolution of variations in swimming speed in response to mutation or changing environmental conditions. However, the same method used to determine speed also provides spatial coordinates for each track, allowing us to measure overall changes in the directionality of our samples. In this approach, the coordinates of each track are used to determine where the samples reside within the image. The image space is then divided into eight distinct sectors and populated accordingly. Analysis of our 30-s videos of cc124 in regular TAP media confirmed that the cells were uniformly distributed across the slide, establishing that there is no directionality bias (artifact) in our technique which might impact our study (Figure 5A). Phototaxis is a commonly observed phenomenon in the cc124 line. Cells will move into the path of light, but also away from the source, presumably to minimize photosynthetic stress. We next sought to exploit this response to see if we could observe population-wide directionality changes utilizing our approach (Harris et al., 2013). Phototaxis was induced by placing a narrow beam of white light (≈75,000 W/m^2) perpendicular to the focal plane of the slide. As shown in Figure 5B, within 5 min of light exposure, the bulk of the cells in the sample oriented into the path of the light but also into the three sectors furthest from the light source. Surprisingly, there was no increase in swimming speeds within the population in response to this stimulus. These findings confirm the ability of our approach to detect changes in directionality across the window provided by the microscope camera (8.19 mm diagonally across the camera window). While preliminary, this experiment confirms the ability of this approach to observe responses to stimuli within the experimental environment (i.e., the microscope slide).
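The eight-sector binning behind these directionality plots can be sketched as follows. The sector origin and orientation are assumptions here, since the paper does not specify where sector boundaries fall.

```python
import numpy as np

def sector_fractions(angles_deg, n_sectors=8):
    """Fraction of tracks whose net direction falls into each of eight
    45-degree sectors, as used to populate a rose plot. Sector 0 is
    centered on 0 degrees (an assumed convention)."""
    a = np.asarray(angles_deg, dtype=float) % 360
    # Shift by half a sector width so each sector is centered on a
    # multiple of 45 degrees, then bin.
    idx = np.floor(((a + 22.5) % 360) / 45.0).astype(int)
    counts = np.bincount(idx, minlength=n_sectors)
    return counts / counts.sum()

# Five toy track directions: three near 0 deg, one near 90, one near 180
fracs = sector_fractions([0, 10, 90, 185, 350])
```

A uniform distribution across the eight sectors (as in Figure 5A) corresponds to every entry of `fracs` being close to 1/8, while phototaxis shows up as mass concentrated in the sectors along the light path.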
Future experiments will explore the potential for chemotaxis across the surface area of the slide.
DISCUSSION
Automated image analysis provides us with an opportunity to observe and characterize population-level responses, overcoming the limits to statistical resolution and bias associated with manual measurements. In the present study, we have developed a method requiring no custom-designed equipment or software, only a microscope, camera, and the free software package Fiji, to analyze the motility of C. reinhardtii. Employing this method, we were able to observe differences in swimming speeds between the wild-type strain and several motility mutants, providing quantitative values for these mutants for the first time. In addition to motility mutants, we showed that our approach was sensitive to changes in media composition such as nitrogen availability. As expected, the removal of NH4Cl from TAP seriously compromised motility, but unexpectedly, a 50% reduction in NH4Cl resulted in an increase in the swimming speed of a portion (20%) of the cells. We propose that this rate increase facilitates the search for new nitrogen sources under nitrogen-limited conditions.
When coupled to the microplate-based methods we have already developed for investigating the growth and viability of this microorganism, we are now prepared to thoroughly explore and characterize the impacts of changing environmental conditions and/or mutation on multiple aspects of Chlamydomonas biology (Haire et al., 2018). Indeed, our ability to measure swimming speeds in C. moewusii strongly supports the utility of this approach for understanding motility in this important genus of over 300 species. While focused on this model unicellular eukaryote, the approaches outlined here should be easily exportable to other genera, potentially providing new insights into microbial ecology, modes of pathogenesis, and other aspects of microbial behavior.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
AUTHOR CONTRIBUTIONS
AF performed all phototaxis assays and mutant characterizations including experiments and analysis. TH contributed to phototaxis assays, identified mutant strains to be utilized, and assisted in methods development. KC performed all variable growth media studies and analysis. PS obtained the swimming speeds of C. moewusii. MR, CS, SN, and NN-K all participated in the initial development of the method. PR and BR helped simplify the method and validate the results presented herein. AP provided all resources and research oversight for the project.
ACKNOWLEDGMENTS
TH was supported by a departmental graduate student fellowship. AF was supported by the Astronaut Scholarship Foundation. KC, MR, CS, and SN were supported by the NSF Biomath REU at the Florida Institute of Technology (NSF ID# 1359341). We thank Professors David G. Lynn (Emory University) and Arijit Mukherjee (University of Central Arkansas) for their comments and insights. Publication of this article was funded in part by the Open Access Subvention Fund and the John H. Evans Library.
Exploring the exceptional performance of a deep learning stream temperature model and the value of streamflow data
Stream water temperature (Ts) is a variable of critical importance for aquatic ecosystem health. Ts is strongly affected by groundwater-surface water interactions, which can be learned from streamflow records, but previously such information was challenging to effectively absorb with process-based models due to parameter equifinality. Based on the long short-term memory (LSTM) deep learning architecture, we developed a basin-centric lumped daily mean Ts model, which was trained over 118 data-rich basins with no major dams in the conterminous United States, and showed strong results. At a national scale, we obtained a median root-mean-square error of 0.69°C, Nash-Sutcliffe model efficiency coefficient of 0.985, and correlation of 0.994, which are marked improvements over previous values reported in the literature. The addition of streamflow observations as a model input strongly elevated the performance of this model. In the absence of measured streamflow, we showed that a two-stage model could be used, where simulated streamflow from a pre-trained LSTM model (Qsim) still benefited the Ts model even though no new information was brought directly into the inputs of the Ts model. The model indirectly used information learned from streamflow observations provided during the training of Qsim, potentially to improve internal representation of physically meaningful variables. Our results indicate that strong relationships exist between basin-averaged forcing variables, catchment attributes, and Ts that can be simulated by a single model trained by data on the continental scale.
Introduction
Stream water temperature (Ts) is a critical, decision-relevant variable that controls numerous physical, chemical, and biological processes and properties, including dissolved oxygen concentrations and nutrient transformation rates, as well as industrial processes such as cooling power plants and treating drinking water (Delpla et al 2009, Kaushal et al 2010, Madden et al 2013). Thermal regimes of streams directly affect aquatic species (Justice et al 2017), and in some cases fish mortality rate increases as Ts passes a certain threshold (Marcogliese 2001, Martins et al 2012). These thermal regimes are complicated by water uses in industry, such as utilizing stream water for cooling systems, which causes thermal pollution downstream (Raptis et al 2016). Fulfilling the temperature requirements of the environment, agriculture, industries, and municipalities, and coordinating these uses, requires a delicate balance. Accurate Ts models can inform the decision-making process and help lower the risks of exceeding thermal thresholds.
Myriad basin and in-stream/near-stream processes govern Ts (Poole and Berman 2000). The heat balance of a basin, as modulated by land use types (Borman and Larson 2003, Moore et al 2005, Nelson and Palmer 2007), is a primary control on Ts. At the basin scale, snowmelt and groundwater baseflow contributions (Kelleher et al 2012) are also important factors due to their sharp contrast in temperature with air. In streams, Ts is influenced by solar radiation, latent heat flux, air-water heat exchange, riparian vegetation (Theurer et al 1985, Garner et al 2017), channel geomorphology (Hawkins et al 1997), hyporheic exchange (Evans and Petts 1997), and reservoirs and industrial discharges (Poff et al 2010). At any point in the channel network, Ts is the spatiotemporal integration of all of the above processes. Process-based models, while offering physical explanations of causes and effects, need to embrace substantial model complexity to represent all or even parts of these complex processes with their heterogeneity and scaling effects (Johnson et al 2020). The requirements for input data also make scaling up such simulations challenging. Some large-scale process-based models have had root-mean-square error (RMSE) values reported of greater than 2.5°C (van Vliet et al 2012, Wanders et al 2019). A large body of literature has employed statistical models to simulate Ts, with some good summaries given by Benyahya et al (2007) and Gallice et al (2015). Typically, Ts was regressed to air temperature (Ta), but more recent studies regressed the parameters in Ts-Ta relationships using catchment attributes. Among these studies and most relevant to our work, Segura et al (2015) predicted the slope and intercept of the 7-day average Ts-Ta relationship based on catchment characteristics such as watershed area and baseflow index.
A Nash-Sutcliffe coefficient of 0.78 was obtained for reference sites for the 7-day average Ts, and strong hysteresis was noted in the stream-air temperature relationship (Segura et al 2015). Stewart et al (2015) integrated an artificial neural network with a soil water balance model and obtained an RMSE of around 1.5°C and R^2 of 0.76 for 371 sites across Wisconsin. Very recently, Johnson et al (2020) used sine-wave linear regression and reported RMSE values of 1.41°C for an extensive regional dataset and 1.85°C for the national-scale U.S. Geological Survey (USGS) dataset. They highlighted the importance of spatial scales and heterogeneity. Graf et al (2019) used wavelet transformations of daily average air temperature as inputs to an artificial neural network to predict water temperature at eight different sites in Poland and obtained RMSE values ranging from 0.98°C to 1.43°C in the test period. Additionally, there are several studies that used recent water temperature data as an input to predict stream temperature, which can significantly increase model performance as it can be interpreted as a form of data assimilation (Feng et al 2020). Sohrabi et al (2017) obtained an RMSE of ~1.25°C when they used the previous day's temperature and streamflow as drivers. On one temperature gauge, Stajkowski et al (2020) obtained an RMSE of 0.76°C using a variant of long short-term memory (LSTM) with the previous hour's stream temperature included in the inputs. This differs from the present work in that here, our purpose was to provide long-term projections, and so we did not use recent measurements as inputs.
Streamflow conditions are often not utilized by statistical models of Ts, partially because the relationships are not clear; none of the abovementioned statistical models used streamflow. However, we do know that streamflow exerts substantial control on Ts. Rivers dominated by baseflow are typically fairly stable and cool in summer, but have relatively low thermal capacity and are rapidly heated by strong solar radiation. Peak flows are typically dominated by surface runoff, the temperature of which is strongly influenced by the fast-changing air temperature (Edinger et al 1968). A sensitivity analysis on large river basins found that on average, a decrease in river flow of 50% as compared to a reference condition lowered the minimum annual river discharge temperature (in winter) by −0.4°C or raised the maximum temperature (in summer) by +1.2°C (van Vliet et al 2012). In another study on one specific river, a linear regression model based on monthly air temperature and streamflow data revealed that air temperature rise and flow reduction were responsible for 60% and 40% of June to August temperature increases, respectively (Moatar and Gailhard 2006).
Recently, deep learning (DL) models, including those based on the LSTM algorithm, have shown promise in predicting hydrologic variables such as soil moisture and streamflow by achieving superior results with low computational and human effort (Fang and Shen 2020, Feng et al 2020). LSTM can learn long-term dependencies and gave high performance in snow-dominated regions for streamflow prediction (Feng et al 2020). The memory mechanisms of LSTM may be able to mimic heat units, similar to heat accumulation and release processes. Thus, it is natural to think that LSTM may also be suitable for Ts modeling. However, given the complicated and scale-dependent processes influencing Ts, it is highly uncertain if there even is a stable relationship between basin-average forcing inputs and Ts across different spatial scales, and if so, whether such a relationship can be captured by LSTM given limited observational data.
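To make the "memory cell" intuition concrete, a single LSTM step can be written out in NumPy. This is the standard cell of Hochreiter and Schmidhuber (1997), not the exact network trained in this study; the weight values below are random placeholders.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell. The forget (f), input (i), and
    output (o) gates control what the memory cell c retains and releases,
    the mechanism the text suggests could mimic seasonal heat storage.
    W (4n x nx), U (4n x n), and b (4n) stack the four gates row-wise
    as [i; f; g; o], where g is the candidate cell update."""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:n])
    f = sigmoid(z[n:2 * n])
    g = np.tanh(z[2 * n:3 * n])
    o = sigmoid(z[3 * n:])
    c_new = f * c + i * g          # gated memory update (accumulate/release)
    h_new = o * np.tanh(c_new)     # exposed hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
nx, nh = 3, 4                      # e.g. 3 forcings, 4 hidden units
W = rng.normal(size=(4 * nh, nx))
U = rng.normal(size=(4 * nh, nh))
b = np.zeros(4 * nh)
h, c = np.zeros(nh), np.zeros(nh)
h, c = lstm_step(rng.normal(size=nx), h, c, W, U, b)
```

Because the forget gate can stay near 1 for many steps, the cell state c can carry information (here, analogous to accumulated heat) across long stretches of the input sequence.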
One advantage of a DL model as compared to process-based ones is that it can incorporate auxiliary information without requiring explicit understanding of relationships. In our case, not only does streamflow directly influence Ts fluctuations, it also reveals multi-faceted hydrologic dynamics in a basin, regarding factors such as baseflow contributions and residence times of surface runoff, which could aid Ts modeling. Therefore, we expect adding streamflow information to improve model performance. In a process-based modeling framework, streamflow data can be used to calibrate the hydrologic components. Unfortunately, due to the issue of model equifinality (Beven and Freer 2001, Beven 2006), calibration may or may not improve model internal dynamics, depending on model parameterization, structure, and data information content (Huang and Liang 2006). In utilizing the DL framework, we hypothesize that models may be able to automatically extract information from hydrographs to inform Ts, which, to our knowledge, no study has examined in the context of DL models.
Even if streamflow data are indeed useful, real-world use can be hampered by a lack of available streamflow data. Beyond existing stations, collecting new streamflow data is more expensive than collecting new Ts data. However, given that highly accurate LSTM-based streamflow models have been reported (Feng et al 2020), we wondered if a well-trained LSTM streamflow model could serve as a surrogate for actual measurements. Deep networks are known to maximally use available information, so it was not clear whether using such a streamflow model as an input to another DL model would present any benefit; the streamflow model used identical forcing data to those already used by our temperature model and would not explicitly bring in any new information.
In this work, we attempted to answer two main research questions and improve our understanding of the stream heat balance: (a) Are there reliable relationships between Ts and basin-average meteorological forcing information or attributes that could be learned by deep networks to predict Ts with high accuracy? (b) Can observed or simulated streamflow be used to improve temperature predictions, especially when the simulated streamflow is predicted using the same information as the Ts model?
Methods
We simulated Ts from a basin perspective, that is, as a function of basin-average climate forcings and attributes. This setup greatly simplified the model representation compared to spatially explicit models and was supported by widely available data, but ignored some local channel characteristics. We examined the effect of including daily streamflow in the inputs to assess its information content for Ts.
Datasets
Basin characteristics came from the Geospatial Attributes of Gages for Evaluating Streamflow dataset version II (GAGES-II), and represent geological aspects, land cover, reservoir information, and air temperature data for basins across the conterminous United States (CONUS) (Falcone 2011). Historical data for daily mean Ts were downloaded from the USGS's National Water Information System (USGS NWIS) website for all 9322 basins in GAGES-II (USGS 2016). We obtained daily meteorological forcing data (e.g. precipitation, maximum and minimum air temperature, vapor pressure, solar radiation) by interpolating a gridded meteorological dataset (Daymet). Many of the GAGES-II basins did not have Ts observations recorded for all days of the year, with unobserved days being more common during the winter (there were nonetheless many sites with winter data, and our resulting model predicts temperature for all days in a year). For this work, we selected temperature gauges with more than 60% of daily observations available between 2010/10/01 and 2014/09/30, in basins where there were no major dams (over 50 ft in height, or having more than 5000 acre-feet in storage, as defined in GAGES-II), resulting in a dataset of 118 basins ranging in size from 2 to 14,000 km^2. We simulated Ts at the pour point of basins where the USGS streamgages were located. Limiting this analysis to sites with >60% data coverage allowed us to focus on the capabilities of LSTM for Ts modeling under relatively ideal conditions. Future research on the effect of reservoir presence and data availability could perhaps further improve stream temperature predictions, but such a focus was outside the scope of our work here.
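The site-selection rule described above (keep a gauge only if more than 60% of days in the training window have a daily mean Ts observation) can be expressed directly; this is an illustrative sketch, not the authors' actual selection script.

```python
import pandas as pd

def coverage_ok(dates_with_obs, start="2010-10-01", end="2014-09-30", frac=0.60):
    """Return True if the gauge has observations on more than `frac`
    of the days in [start, end], the selection rule used for the
    118-basin dataset."""
    window = pd.date_range(start, end, freq="D")
    obs = pd.DatetimeIndex(dates_with_obs)
    return obs.intersection(window).size / window.size > frac

# A gauge observed only every other day covers ~50% of the window -> rejected
every_other_day = pd.date_range("2010-10-01", "2014-09-30", freq="2D")
ok = coverage_ok(every_other_day)
```

Applying this predicate per gauge, and separately excluding basins with major dams, yields the final training set.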
LSTM-based models for predicting T s
We used the long short-term memory (LSTM) algorithm, which has received increasing attention in the hydrologic literature. This method is designed to learn and retain information over long periods using units called memory cells and gates: cells store the information, and gates decide which information enters and leaves the cells. Because the basic LSTM architecture has been described extensively elsewhere, we refer readers to those papers for a more detailed discussion of the equations and structure of LSTM (Hochreiter and Schmidhuber 1997, Fang et al 2017, 2019), although a sketch and equations for this model are provided in figure S1. We standardized all input and target values. As a preprocessing step, streamflow was first divided by basin area and mean annual precipitation to obtain a dimensionless streamflow, which was then transformed to a new, more Gaussian distribution (Feng et al 2020):

v* = log10(√v + 0.1)

where v* and v are the variables after and before transformation, respectively. Next, the transformed streamflow data along with all other meteorological forcing data, basin characteristics, and T s observations were standardized by the following formula (Feng et al 2020):

x i,new = (x i − x̄) / σ

in which x i,new is the standardized value, x i is the raw value, x̄ is the mean for the variable, and σ is the standard deviation for the variable. This standardization better conditions the model for gradient descent and also forces the model to pay roughly equal attention to both large wet basins and small dry basins (Feng et al 2020). All results in this study are shown after destandardization, or reversal of all standardization procedures, was applied to the model outputs.
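The preprocessing described above can be sketched as follows; the log-sqrt form of the Gaussianizing transform follows Feng et al (2020), and the helper names are ours:

```python
import numpy as np

def transform_flow(q, precip):
    """Dimensionless streamflow (flow already normalized by basin area),
    further divided by mean annual precipitation, then Gaussianized with
    the log-sqrt transform of Feng et al (2020)."""
    v = q / precip
    return np.log10(np.sqrt(v) + 0.1)

def standardize(x):
    """z-score standardization; returns the statistics so the same scaling
    can be reused on the test period and reversed on model outputs."""
    mean, std = np.nanmean(x), np.nanstd(x)
    return (x - mean) / std, mean, std

def destandardize(x_std, mean, std):
    """Reverse of `standardize`, applied before reporting results."""
    return x_std * std + mean
```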
Hyperparameters were chosen by running multiple tests and are listed in table S2. The target metric the model aimed to minimize (loss function) was root-mean-square error (RMSE), and we also report an unbiased RMSE, the RMSE computed after removing the mean bias. We also report bias (mean error) and the Nash-Sutcliffe efficiency coefficient (NSE, equation in supporting information) (Nash and Sutcliffe 1970) for the test periods for comparison with other studies. Further, because a model simply copying air temperature may give relatively acceptable metrics, we also report the Nash-Sutcliffe coefficient (NSE res) calculated for the residual temperature, the difference between daily mean water temperature and daily mean air temperature: T res = T s − T a. To provide a baseline for comparison, we also compared to a locally-fitted autoregressive model with exogenous variables (ARX 2). The ARX 2 inputs contained current and delayed atmospheric forcings (X) and ARX 2-simulated stream temperature in the last 2 days:

T s^(t,*) = Σ_{i=1..p} Σ_{k=0,1} a_{i,k} X_i^(t−k) + b_1 T s^(t−1,*) + b_2 T s^(t−2,*) + c

where a, b and c were fitted coefficients, T s^(t,*) is the stream temperature simulated by this model at time step t, and p is the number of forcings. All temperature models were trained on data from 2010/10/01 to 2014/09/30 and tested from 2014/10/01 to 2016/09/30.
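A minimal realization of such an ARX(2) baseline is sketched below; for simplicity only current forcings enter this sketch (the paper also includes delayed forcings), and the fitting is plain least squares:

```python
import numpy as np

def fit_arx2(X, ts):
    """Ordinary least squares fit of
    ts[t] = sum_i a_i * X[t, i] + b1 * ts[t-1] + b2 * ts[t-2] + c.
    X: (T, p) daily forcings; ts: (T,) observed stream temperature."""
    T = len(ts)
    rows = np.column_stack([X[2:], ts[1:T - 1], ts[0:T - 2], np.ones(T - 2)])
    coef, *_ = np.linalg.lstsq(rows, ts[2:], rcond=None)
    return coef

def simulate_arx2(coef, X, ts_init):
    """Free-running simulation: past *simulated* temperatures are fed back,
    rather than observations."""
    p = X.shape[1]
    sim = list(ts_init[:2])
    for t in range(2, len(X)):
        sim.append(X[t] @ coef[:p]
                   + coef[p] * sim[t - 1] + coef[p + 1] * sim[t - 2]
                   + coef[p + 2])
    return np.array(sim)
```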
Streamflow observations or simulations as model inputs
Across USGS gages, streamflow is a more widely-available measurement than temperature, suggesting that inclusion of streamflow data could bring additional information. To test this, the following models were trained:

T s = LSTM obsQ (F, A T, Q obs)
T s = LSTM noQ (F, A T)
T s = LSTM simQ (F, A T, Q sim)

where F is the forcing data time series, A T represents static and single-valued attributes of the basin for temperature modeling, Q obs is the observed time series of daily mean streamflow, and Q sim is simulated streamflow (described below). LSTM obsQ, LSTM noQ, and LSTM simQ are LSTM-based models incorporating observed streamflow, no streamflow information, and simulated streamflow, respectively. For Q sim, streamflow was simulated using an LSTM-based streamflow model shown to have very good performance (Feng et al 2020):

Q sim = LSTM Q (F, A Q)

where A Q represents static attributes of the basins used for streamflow modeling (table S1 in supporting information). Meteorological forcing data used for the simulations were the same as for the temperature prediction models. Q sim was trained using observations from 2397 basins, and a longer training period (from 2004/10/01 to 2014/09/30) was used than for the temperature models in this study.
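The three model configurations differ only in whether a streamflow series (observed, simulated, or none) is appended to the feature stack. One illustrative way to assemble the per-basin input array (an assumption for illustration, not the authors' code) is:

```python
import numpy as np

def build_inputs(forcings, attributes, streamflow=None):
    """Stack inputs for one basin into the (time, features) array an LSTM expects.
    forcings: (T, p) daily meteorology; attributes: (k,) static basin descriptors,
    repeated along the time axis; streamflow: optional (T,) daily series."""
    T = forcings.shape[0]
    parts = [forcings, np.tile(attributes, (T, 1))]
    if streamflow is not None:
        parts.append(streamflow.reshape(T, 1))
    return np.concatenate(parts, axis=1)
```

Calling `build_inputs(F, A_T)` gives the LSTM noQ inputs, while passing `streamflow=Q_obs` or `streamflow=Q_sim` gives the obsQ and simQ variants.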
Overall results
All LSTM-based models delivered exceptionally strong performance in the test period (figure 1). Across the conterminous United States (CONUS), the median test-period RMSE for the model incorporating streamflow observations (LSTM obsQ) was 0.69 °C. The RMSE for the model incorporating simulated streamflow (LSTM simQ) was 0.81 °C, which was still lower than that for the model lacking any streamflow information (LSTM noQ), for which the RMSE was 0.86 °C. The corresponding median NSE values were 0.986, 0.983, and 0.979, respectively, and all of the correlation values were above 0.992, indicating that temporal fluctuations were extremely well captured. These metrics are markedly better than those reported in the literature at this scale, which demonstrates that LSTM is particularly well-suited for T s modeling at basin outlets. In general, the LSTM-based models performed much better than ARX 2, which had a median RMSE of 1.41 °C. Moreover, when we evaluated T res, the locally-fitted ARX 2 model's median Nash-Sutcliffe efficiency (NSE res) worsened substantially to 0.772, indicating that a substantial portion (although not all) of ARX 2's predictive power came from air temperature and some memory (linear regression performed worse than ARX 2; not shown here). In comparison, the LSTM-based models were much less affected: the median NSE res values were above 0.950 and 0.924 for LSTM obsQ and LSTM noQ, respectively. LSTM models captured most fluctuations unaccounted for by seasonality and were able to capture more complicated memory effects than the simple linear autocorrelation used in ARX 2. These temporal fluctuations could have been induced by heat storages in the basin (vegetation, snow, soil, groundwater, riparian zone, urban areas) causing delayed responses to atmospheric forcings.
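The evaluation metrics used above are standard; for concreteness, they can be computed as in this sketch (`sim`, `obs`, and `air_t` are aligned daily series):

```python
import numpy as np

def rmse(sim, obs):
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def bias(sim, obs):
    """Mean error of the simulation."""
    return float(np.mean(sim - obs))

def ub_rmse(sim, obs):
    """Unbiased RMSE: RMSE after removing the mean bias."""
    return rmse(sim - bias(sim, obs), obs)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency; 1 is perfect, 0 matches the mean of obs."""
    return float(1 - np.sum((sim - obs) ** 2)
                 / np.sum((obs - np.mean(obs)) ** 2))

def nse_res(sim_ts, obs_ts, air_t):
    """NSE on residual temperature T_res = T_s - T_a, which discounts the
    skill a model gets just from copying air temperature."""
    return nse(sim_ts - air_t, obs_ts - air_t)
```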
Figure 1. CONUS-scale aggregated metrics of stream temperature models for the test period. LSTM obsQ incorporated observed streamflow, LSTM noQ had no input streamflow information, while LSTM simQ incorporated simulated streamflow (Q sim). ARX 2 is a locally-fitted auto-regressive model with extra inputs. The lower whisker, lower box edge, center bar, upper box edge and upper whisker represent 5%, 25%, 50%, 75% and 95% of data, respectively.

LSTM obsQ generally performed better in the eastern CONUS than in the western half, and better in the northern half than in the southern half (figures 2(a) and 2(b)). Most of the eastern basins had NSE values above 0.975 and RMSE values below 0.9 °C. Northern basins had slightly higher NSE values, presumably because in colder basins the minimum winter liquid T s is confined to above 0 °C and is therefore easier to predict. Existing statistical models often have difficulty with northern basins, where air temperature and water temperature are decoupled. LSTM has a long memory to keep track of seasonal snow states and can learn threshold-like functions; hence LSTM is quite useful where existing models often have deficiencies. Sites with large RMSE values were scattered across the geographic extent of the CONUS, with no clear, explainable patterns. Regardless, the NSE values for most of these 'difficult' basins were still quite high, with only two stations out of the 118 in our dataset having NSE values under 0.9.
Impacts of observed and simulated streamflow as inputs
Providing streamflow as an input to the T s model generally improved model accuracy, and the effects were most pronounced for the poorly-simulated sites. The models incorporating either observed (LSTM obsQ) or simulated (LSTM simQ) streamflow improved median bias (reducing the absolute median bias by 0.120 °C and 0.062 °C) and RMSE (by 0.170 °C and 0.049 °C) as compared to the model lacking streamflow (LSTM noQ) (figure 1). Including streamflow information helped to both reduce bias and greatly improve representation of temporal fluctuations, especially for the worse-performing sites. Without the streamflow data, ten sites had NSE values below 0.9. Additionally, LSTM noQ had a median bias of around −0.25 °C, while the median bias of LSTM obsQ was much closer to 0 °C. The inclusion of observed streamflow also greatly reduced the overall error range, providing the largest improvements in model performance at the most troublesome sites.
The model incorporating simulated streamflow (LSTM simQ) generally performed between LSTM obsQ and LSTM noQ. Similar to LSTM obsQ, LSTM simQ noticeably improved accuracy and reduced the spread of bias (decreasing the error range, as shown by compressed whiskers and outliers compared to LSTM noQ), but did not help as much to improve the median bias. Understandably, simulated streamflow had more errors compared to actual observations, as the input attributes (A Q) do not fully characterize a basin. While Q sim offered state-of-the-art performance, it still had larger errors when estimating peaks (mainly due to rainfall inputs) and baseflows, especially in the western CONUS (possibly due to inadequate geological information) (Feng et al 2020).
Negative biases with LSTM noQ were attributable to underestimating T s peaks in both winter and summer at some sites (e.g. figure 3(a)) and to a more consistent bias at other sites (e.g. figure 3(b)). T s peaks are often associated with streamflow peaks (possibly caused by warm rain) in the winter but with after-storm recession limbs in the summer. For the Black River in Ohio (figure 3(a)), T s peaks were coincident with recession periods between storms in summer 2015 (annotated points A and B). The T s simulated by LSTM noQ did not rise as high as the observed T s, possibly because LSTM noQ had an internal representation of baseflow that was overestimated here, while LSTM obsQ captured the peaks well. For the South Fork Sultan River in Washington (figure 3(b)), there was a more prominent year-round bias in the temperature predictions for 2015, concurrent with an overestimation of baseflow in Q sim. This underestimation of T s could potentially be due to multi-year accumulation and melt of snowpacks: this basin typically has a long snow season, sometimes lasting the whole year, but in the 2015 summer all snow had melted by June (verified via Google Earth).
Several reasons could explain why observed streamflow helped the model, but to think them through, we first need to assume that the LSTM model has internal representations of physically-relevant quantities such as water depth, snowmelt, water temperature, net heat flux, and baseflow temperature. Other studies have shown that LSTM can learn to use cell memory states to represent intermediate hydrologic variables that were not matched to observations, e.g. snow cover (Jiang et al 2020). Given this assumption, it is then possible that observed or simulated streamflow corrects the internal 'water depth' variable used to estimate the effect of net heat flux. From the energy balance equation, stream temperature changes are estimated by dividing the net heat flux by the flow depth. If streamflow is overestimated during summer baseflow periods, the positive heat flux is vertically diluted too much (and thus the temperature rise is underestimated). Secondly, as suggested by figure 3(b), the model may not be able to accurately keep track of long-term snow accumulation and melt, resulting in LSTM noQ misjudging the amount of cool snowmelt water; LSTM obsQ, however, was informed by observed flow and therefore corrected the error. In fact, the basins with the lowest NSE values were concentrated in the Rocky Mountains region, which has long snow seasons (figure 2(c)). Thirdly, LSTM-based T s models may have learned other holistic hydrologic information from the streamflow time series. For example, they may have learned to perform baseflow separation internally, if such a feature was helpful for T s prediction. Streamflow data may also provide clues that reduce uncertainty: for example, cool summer temperatures could be due to either high baseflow or abundant riparian shade, and streamflow data may make it easier to distinguish between these causes.
Further discussion
LSTM, whose hidden states (100 hidden units here) can store system states evolving at different rates, is extremely well-suited to modeling systems with memory and hysteresis. The warming and cooling of water storage compartments (soil water, groundwater, riparian zones, etc) are caused by different mechanisms with different rates, durations, and lags relative to their drivers (accumulation, flushing by storms, etc). Such multirate exchanges, along with diffusive exchanges with soil, easily lead to hysteresis in the system (Briggs et al 2014). We suspect that the internal states and gates of LSTM-based models mimic the effects of the buffers and delays introduced by these heat (and water) storage compartments and can be sufficiently trained with 4 years of data, as was done in this study.
Streamflow data may have carried multifaceted, temperature-relevant information about stream depth, basin hydrologic properties, and the relative influence of flow versus other heat-moderating processes. Even simulated streamflow provided valuable new information to the temperature model, despite the fact that Q sim ingested the same meteorological forcing data as the T s models. We posit that the pre-trained Q sim model may have derived some of this new information from the additional catchment attributes in A Q relative to A T (table S1 in supporting information), but that the majority of the new information came from the 10 years of streamflow observations across the 2397 stations on which Q sim was trained. Q sim was thus able to learn and transfer a wealth of nuanced information about each basin's hydrologic properties and responses to meteorological drivers, which in turn likely improved the implicit representations of those attributes in the T s models.
As the first LSTM application for stream temperature, this study focused on temporal prediction for basins with a good record of historical data and, as such, may not generalize well to ungauged basins. It is well-known that spatial extrapolation of stream temperature models can be quite risky (Gallice et al 2015), a problem that merits further work. Also warranting further investigation is the representation of spatial heterogeneity at smaller scales, e.g. using a multiscale graph network or calibrating the parameters of a spatially-distributed process-based model.
Conclusions
This is the first time a basin-centric lumped T s model has been shown to be so effective. The results clearly indicate that robust (but complex) mapping relationships exist between basin-averaged attributes, climate forcings, and T s, which can be reliably learned by a uniform, continental-scale model using a few years' worth of daily T s observations. All models presented exceptional evaluation metrics that outperformed state-of-the-art models reported in the literature by a substantial margin. Additionally, this performance was achieved without the need for detailed representations of the subsurface or the channel network, a convenience that promises high-quality forecasts of future T s given available climate forcings.
Our use of a basin-centric lumped model to predict T s allowed for great simplification that potentially enabled LSTM to learn the connection between different factors influencing T s , alongside the more obvious benefits of simplifying model assembly and training. The disadvantage of this basin-centric formulation is that it assumes each basin is homogeneous in forcings and attributes. The homogeneity assumption fundamentally limits the size of the basin that can be simulated: when predictions are needed for larger, mainstem rivers, we will need reach-centric models. Therefore, while the current model is highly capable and useful, we do not perceive the present form of the model as being complete in functionality.
Our results show that observed streamflow information helped to improve modeling of T s , perhaps because the observations (a) allowed the model to better resolve groundwater and snowmelt contributions, and (b) provided a more accurate water volume used to estimate the effect of net heat fluxes, especially during recession periods when T s rapidly changed. The benefits were most substantial in basins with multiyear snow accumulations. If streamflow observations do not exist for a basin in which temperature prediction is desired, our results show that a well-trained continental-scale streamflow model can indirectly bring in data from a larger training dataset, which in this study alleviated more than half of the degradation in median NSE that would have otherwise resulted from the lack of streamflow observations.
Data availability statement
Data supporting the findings of this study are openly available at the following URL/DOI: 10.5066/P97CGHZH.
Acknowledgments
FR was supported by the Pennsylvania Water Resources Research Center graduate internship G19AC00425, with funding for that fellowship and AA and SO provided by the Integrated Water Prediction Program at the US Geological Survey. CS was supported by National Science Foundation Award OAC #1940190. Data sources have been cited in the paper, and all model inputs, outputs, and code are archived in a data release (Rahmani et al 2020). The LSTM code for modeling streamflow is available at https://github.com/mhpi/hydroDL. CS and KL have financial interests in HydroSapient, Inc., a company which could potentially benefit from the results of this research. This interest has been reviewed by the University in accordance with its Individual Conflict of Interest policy, for the purpose of maintaining the objectivity and the integrity of research at The Pennsylvania State University. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the US Government.
Banking Services Improvement through the Development of Service Technologies
The authors analyze the qualitative changes in the nature and orientation of modern society that led to the emergence of the phenomenon of “customization”, and they examine the features of the modern “service civilization”. The authors use the technique of ABC- and XYZ-analysis to assess assortment policy in banks. The article defines the external factors and the theoretical and methodological background of innovative changes in the banking market. It also discusses the most promising innovative service technologies in the banking market and examples of their implementation by Russian and foreign banks. To identify the main characteristics of banking products that help meet customer demand, the authors are guided by the model of N. Kano, who formulated the “theory of attractive quality” and highlighted the major kinds of needs. Based on an analysis of trends in the global economy, the authors conclude that the Internet plays a growing role in the development of service technology in the banking market and that its opportunities should be used further to improve banking services. The paper investigates the key challenges and prospects for the development of service technologies and their impact on the development of banking services.
Introduction
Qualitative changes in the nature and orientation of modern society that took place in the XX century led to a succession of specific economic development models and of the corresponding management paradigms for economic entities.
The phenomenon that the world's business literature has come to call “customization” means a special, individualized approach to meeting the needs of the individual customer and is considered the ideal form of “service provider-client” interaction. It is attractive not only for ethical reasons but also economically, because it provides a competitive advantage through the creation of higher value for the client. This concept has been developed by a number of researchers under the name “service factory” and applies to any service-oriented sphere of business activity; the application of the corresponding principles and methods of work is considered a competitive necessity, often called the service imperative (Al-Hawari et al., 2005).
In this regard, the production of services, designed essentially to serve the customer and meet their basic personal or corporate needs, becomes dominant for the management of modern organizations: its concepts, methods, and techniques determine competitive strategies based on the skill and ability to provide quality service. A society whose institutions, primarily economic ones, are ready to implement such approaches is a service society in the true sense, and its industrial economy is converted into a service economy.
The emergence and development of service markets and increased competition for customers give enterprise activity a new, qualitatively different meaning: enterprises must orient themselves not only to public demand but, above all, to the personal needs of the individual. The “service civilization” radically changes the worldview of leaders and managers of enterprises. A new mentality, different abilities, and new organizational forms are needed. The main differences between the service and industrial economies are as follows.
In the industrial economy, manufacturers aim at maximizing production output; usefulness is identified with the tangible product; quality is synonymous with “well made”; the core technologies focus on the transformation of raw materials into finished products; and management has a “mechanistic” character due to excessive ordering and hierarchical organization. In the service economy, businesses strive to improve the usefulness of the effect by better meeting the specific needs of the client; utility depends on the nature of use and the level of perfection of the so-called service product (a standalone service or a system involving a physical product and related services); quality means establishing an interactive relationship with the consumer that maximizes the degree of his satisfaction; the core technologies relate to the supply of services and the operation of material and service systems; and management decision-making is fast and flexible, with a network organization.
Servization of social production and consumption manifests itself most clearly in the development of the service sector. The processes of servization of the economy are universal: they apply to all subjects of the economic life of society, including the end user (the individual). Quality of life in the service economy is largely determined by the quantitative and qualitative characteristics of the services produced and consumed, including financial and particularly banking services.
Method
This study uses the technique of ABC- and XYZ-analysis to assess assortment policy in banks.
ABC analysis is a method that allows one to explore the range of banking services and rank them by their economic significance for the organization. The analysis is based on the Pareto principle, formulated by the Italian economist Vilfredo Pareto, namely: − 20% of the assortment (group A; in the best case this may be 15, 10, or even 5%) brings 50% of turnover and 80% of profits; − 50% of the assortment (group B) brings 40% of turnover and 30% of profits; − 30% of the assortment (group C; its composition may reach up to 65% of the range) brings 10% of turnover and generates losses that reduce profits.
We illustrate ABC analysis on the example of a bank's range of services. The objective of the analysis is to optimize the range of banking services. The analysis examined six major product groups of banking services offered by the bank.
For a more in-depth and accurate analysis, revenue data were considered for each type of banking service for the four quarters of 2013 (Birch & Young, 1997).
When analyzing the range of banking services, groups A, B, and C are allocated, corresponding to assortment positions that together account for, respectively, 80, 95, and 100% of the income from the sale of banking services.
ABC analysis is based on dividing the target population into groups according to the specific weight of a given attribute. In our example, the division into groups is made according to each service's share of the total proceeds from the sale of banking services, with the shares calculated on a cumulative total.
Group A includes the banking services whose cumulative shares make up the first 80% of total proceeds from the sale of banking services; group B spans from 80% to 95%; and group C covers the remainder, from 95% to 100%. Accordingly, the whole range of banking services can be divided into groups by degree of importance: − Group A: very important banking services, which must always be present in the range. If the parameter used in the analysis is proceeds from sales, this group includes the sales leaders; − Group B: banking services of average importance; − Group C: the least important banking services, candidates for exclusion from the range.
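The cumulative-share grouping described above is mechanical and can be sketched as follows (hypothetical revenue figures; cut-offs at 80% and 95% as in the text):

```python
import numpy as np
import pandas as pd

def abc_groups(revenue: pd.Series) -> pd.Series:
    """Label each service A/B/C by its cumulative share of total revenue:
    the first 80% of revenue is group A, 80-95% group B, the rest group C."""
    share = revenue.sort_values(ascending=False) / revenue.sum()
    cum = share.cumsum()
    labels = np.where(cum <= 0.80, "A", np.where(cum <= 0.95, "B", "C"))
    return pd.Series(labels, index=cum.index).reindex(revenue.index)
```

With revenue concentrated in a few leading services, those leaders land in group A while the long tail falls into group C, mirroring the split described in the text.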
To carry out the ABC analysis, the authors construct Table 1.
By counting the number of items of banking services, we calculate each group's share of the assortment: A − 16.7%, B − 33.3%, C − 50%. Thus the 16.7% of the range of banking services in group A brings the organization 63.34% of sales of banking services, the most revenue compared to the other services.
Group B includes 33.3% of the range of banking services, accounting for 27.4% of sales of banking services. The services in this group make an average contribution to the organization's income; revenue from them is relatively constant, i.e., these services sell consistently (Berger, 2003). Group C includes all other banking services; their contribution to revenue does not exceed 4.64%. The organization's leadership must examine the feasibility of continuing to offer this group of services and may decide to reduce the range in order to avoid extra costs and free capacity for more sought-after banking services.
For all its numerous advantages, ABC analysis has one significant disadvantage: the method does not allow one to evaluate seasonal fluctuations in demand for banking services. Therefore, a logical continuation of this analysis is XYZ analysis, whose main purpose is to find out how stable the demand for each banking service is. XYZ analysis allows us to group banking services by the degree of stability of sales and the level of volatility of consumption.
The method consists of calculating, for each position, the coefficient of variation (fluctuation rate). This ratio indicates the deviation from the average value of the series and is expressed as a percentage. The parameter can be sales volume or revenue from the sale of banking services. The result of XYZ analysis is the grouping of banking services into three categories based on the stability of their sales.
If we consider the demand for banking services over a long period, the analysis establishes that among them there are services with permanent, stable demand (group X); services whose demand fluctuates (e.g. seasonally) but follows predictable patterns of change (group Y); and, finally, services whose demand is unstable and random, with unpredictable fluctuations (group Z).
With this type of analysis, the six banking service groups are again singled out on the basis of their coefficients of variation, i.e. deviation from the average value of sales. For group X, the coefficient of variation should not exceed 10%, which means that these banking services are characterized by stable sales and, as a result, high sales predictability. Category Y is determined by a coefficient of variation of 10-25%, i.e. the services in this category show average fluctuations in demand. Banking services belonging to group Z have irregular consumption, with a coefficient of variation greater than 25%. As an example, consider the level of stability of demand for banking services (Table 2): group Y comprised services with fluctuating demand and, as a consequence, average sales predictability (hairdressing, gift decoration, garment repair); group Z was absent, which shows that there were no services with irregular consumption.
The combination of ABC and XYZ analysis reveals the undoubted leaders (group AX) and outsiders (group CZ) among banking services. The two methods complement each other well: if ABC analysis allows us to estimate the contribution of each type of banking service to net sales, XYZ analysis allows us to evaluate the instability of its sales. The results of the combined analysis can be used to optimize the range of banking services and to assess the profitability of each type of banking service (Table 3). Based on the results reflected in Table 3, we can conclude that, in general, the range of banking services for retail organizations enjoys uniform, stable demand and sales revenue. The presence of type AX means that the range includes banking services that bring a large, stable income.
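The XYZ grouping and its combination with ABC labels can be sketched as follows (hypothetical quarterly revenue table; thresholds of 10% and 25% as in the text):

```python
import numpy as np
import pandas as pd

def xyz_groups(quarterly: pd.DataFrame) -> pd.Series:
    """Coefficient of variation across periods (rows = services, columns =
    quarters): X for CV <= 10%, Y for 10-25%, Z for > 25%."""
    cv = quarterly.std(axis=1, ddof=0) / quarterly.mean(axis=1) * 100
    return pd.cut(cv, bins=[-1, 10, 25, np.inf], labels=list("XYZ"))

def combine_abc_xyz(abc: pd.Series, xyz: pd.Series) -> pd.Series:
    """Joint label, e.g. 'AX' for stable high-revenue leaders and 'CZ' for
    low-revenue services with erratic demand."""
    return abc.astype(str) + xyz.astype(str)
```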
Results
Innovations today are not just one of the phenomena that determine economic growth, development, and structural change. Innovation has become the essence of modern development in all spheres of the economy, including banking. Under the influence of external factors, the following innovative changes are occurring in comprehensive banking practice (Bhat, 2005).
− New banking products (services) based on new information technologies. − Virtual banking and financial technologies: bank account management, cash transactions, electronic signatures, contracts, financial institutions (stock exchanges, banks). − Integrated use of new information and communication technologies for electronic and mixed (traditional and new) marketing. − The collection, storage, and analytical processing of internal information. − New features of internal control and audit. − Changes in the training of employees: product managers, consultants, specialists in consultations and transactions. − New self-service machines (mono- and multifunctional, informational). The result is a change in the structure and appearance of the bank as a whole: “multi-channel activity” combining new and traditional technologies and tools; self-service; remote service; use of the Internet and call centers. In connection with global Internet penetration into all spheres of our life, one innovation has been banks' use of social networks and on-line games to expand the market for their services. Among the most popular in the world, and now ready for launch in Russia, are RFID and NFC technologies.
Recent decades have been a period of introduction of new computer technologies, credit cards, and important innovations in the monetary and financial market. These include instruments for hedging banking risks, credit derivatives, the Internet, and smart cards. Innovation in the banking sector can thus be characterized as the result of the innovative activity of the bank: a brand new set of banking products and services. Banking innovation is a synthetic concept covering the purpose and results of operations in the field of new technologies, aimed at obtaining additional income by creating favorable conditions for the formation and placement of resource potential through the introduction of innovations that support customers in making a profit.
A new banking product is a combined or alternative form of banking service, created on the basis of market research into customer needs. A new product can be a banking, credit, or financial instrument. Thus, checks payable on demand at the cash desk appeared in 1752 as a reaction to the British government's ban on the issue of banknotes by credit institutions. In 1958 the first mass bank card, BankAmericard (the modern Visa), was released, providing the ability to extend credit (a credit card); prior to this, card schemes had been local. In Russia, international card systems appeared in 1969, but at that time the cards were issued by foreign companies and banks. In February 1961, a key innovation of modern banking appeared in the U.S.: the first certificate of deposit. Currently, the concept of deposit and savings certificates is reflected in the Civil Code. To identify the main characteristics of banking products that help meet customer needs, one should be guided by the model of N. Kano, who formulated the “theory of attractive quality” and singled out the following types of needs: − expected needs, whose satisfaction is mandatory and obvious to the consumer; − desired needs: the better these are met, the more satisfied the consumer; − exciting needs: their satisfaction delights the consumer, because he did not expect it. Extending to banks the identified logical relationships between customer needs and product quality, we can make the following assumptions. Desired and exciting client needs are met through additional services included in the mix of banking products. For example, corporate clients' need for personal service is met by private banking, and other clients' need for cash management services outside the office by remote banking.
Remote management of bank accounts can be carried out in different ways: by phone (telebanking); by personal computer (e-banking); via the Internet (Internet banking); and via portable devices (mobile banking, m-banking). As the center of gravity shifts to remote service, the functions of the existing retail branch network gradually narrow, and branches increasingly resemble specialized service centers. The evolution of the banking system toward a remote banking model is driven by a number of objective features of the economic and social environment in which banks operate: above all, changes in people's lifestyles, the introduction of new information technologies, the automation of banking operations, and growing competition.
During the financial crisis, Sberbank became the first Russian bank to begin, in June 2008, the introduction of a lean program (from the English "lean management"). Lean is a methodology whose implementation and daily use make it possible to eliminate unproductive processes and actions and the inefficient use of space and staff time.
For employees of the Sberbank branch in Biryulevo, Moscow, the experiment began in May 2008. A working group timed the main processes on which tellers spend their time: replacement of savings passbooks, express transfers, utility payments, cash transactions, issuance of plastic cards, payment of compensation, opening and closing of the trading day, and processing of consumer loans. As a result, several directions were chosen for optimization: replacement of passbooks, compensation payments, and currency exchange.
The experiment achieved qualitative improvements: waiting time was halved, peak "sagging" of customer flow was smoothed by eliminating the lunch break and universalizing teller windows, and transaction times were reduced. Among the quantitative results, it is worth noting a doubling of the growth in utility payments received, a 7.5-fold increase in non-interest income from express transfers compared with the control group, and significant savings of paper. For example, the process of replacing a passbook, which had required 20 steps and 3.5 minutes, was reduced to five actions taking 40 seconds.
In Moscow today, a single punch of the hole punch is considered sufficient to process the replacement; the other four operations are considered superfluous. When paying compensation (on balances of deposits held in the Savings Bank of the USSR in 1991), a procedure that used to send the client on an obscure "run-around" (to the register of depositors, to the senior teller) and require completing a form in which half the fields were unclear has been reduced to answering the questions of a teller, who fills out the form without the client leaving the window. Counting and validating banknotes, formerly three different operations on three different machines, were reduced to one operation on a universal machine. In the Nizhny Novgorod office, since the start of the program, the duration of the trading-day closing procedures has been halved, from 4.5 to 2 hours, with a small staff, and there is hope of soon bringing it to an acceptable standard of 1.5 hours. Each regional bank opened two or three experimental offices, where employee training was organized. As a result, the bank expects a 30% increase in productivity. The time freed up for bank employees is directed to promoting sophisticated banking products, workplace learning, and improving the quality of service. The application of lean technologies in Western banks and companies is becoming increasingly popular: some introduce lean technologies themselves, others invite consultants. As a rule, corporations and banks apply these technologies to solve specific problems. For example, the Nordic financial group Nordea has developed an agenda for change to free up the time of client managers. The application of lean technology today gives tangible results.
Thus, innovation in the banking sector in the context of globalization is an urgent need for Russian banks. To survive in global competition, Russian banks need to orient themselves toward the most advanced technologies and products. A crucial role is currently played by innovative technology. One innovative technology already being tested is the wireless technology NFC (Near Field Communication), which is based on radio data exchange using the principle of mutual induction over short distances in the 13.56 MHz frequency band. NFC and RFID act by analogy with Wi-Fi and Bluetooth but have fundamental differences (Shoebridge, 2005).
In Russia, the first example of the use of NFC technology was a demonstration of NFC-enabled phones paying for travel on the Moscow Metro at the opening of the Trubnaya station. Widespread use of this technology could replace paper money. The activities of credit institutions in the banking market provide for expanding the supply of and demand for banking products, within both traditional and innovative technologies for conducting them.
In the modern sense, a new banking service is the result of the collective activity of the bank in providing aid or assistance to the client in making a profit, capable of yielding primary and additional net income over a sufficiently long period of time. In this definition, we consider the banking service from the position of an integrated approach, as bringing the bank primary and secondary net income (i.e., income net of significant implementation costs) while also taking the time factor into account. A new vision of the scope of banking services is based on the concept of the "Bank of the Future." In creating the bank of the future, innovative technologies in banking services are those technologies that have a "strategic effect": increasing the customer base, attracting significant clients, and reducing the costs of servicing banking operations at an optimal level of operational risk and operating costs. NFC technology was developed by Philips and Sony in 2002 as an evolutionary combination of contactless identification and communication technologies. NFC provides convenient, reliable, and secure data transmission over the air for short distances between electronic devices that combine the functions of a contactless reader and a contactless card and can also communicate with each other as peers (Centeno, 2004).
The first real step toward contactless payment was made in August 2004 by the fast-food chain McDonald's: the company entered into an agreement to accept MasterCard PayPass cards in a number of restaurants in the United States. This event started the process of cash being displaced by contactless payment cards in transactions for small amounts. NFC opens up a huge range of options for users, allowing digital cameras, PDAs, game consoles, computers, and mobile phones to be interconnected without additional effort. The most common NFC-equipped device in the world today is the mobile phone. NFC has been widely used in business areas and projects such as the sale of various kinds of electronic tickets, payment in public transport, and the booking and payment of entertainment tickets. NFC-enabled mobile phones can be used like a bank plastic card to work with ATMs: the user places the phone next to the ATM, which identifies the number and identity of the owner by reading protected information from the mobile phone. This information includes the account number, a predetermined maximum daily limit on cash withdrawals, and other relevant information that may differ between banks.
As soon as the user enters the PIN code, he gets access to the money in his bank account in the normal mode (he can withdraw cash, pay for services, etc.). One of the main advantages of NFC-enabled phones is that they can store information about several bank cards, so there is no need to carry a stack of cards. The displacement of bank cards thus becomes an inevitable consequence of implementing the authentication procedure with a mobile NFC phone.
To date, Japan has begun to use NFC technology actively. Some gas stations are already equipped with contactless terminals for paying for fuel with an NFC mobile phone, which is very convenient and saves the time of drivers and passengers. In the foreseeable future, an NFC-based payment system may be implemented in Japanese taxis, which means that customers will not have to carry cash. In the UK, the restaurant chain EAT has implemented a contactless order-payment solution in its 88 establishments across the country (Bloemer et al., 1998).
Since the summer of 2006, passengers on the New York subway have been able to use their MasterCard PayPass cards to pass through turnstiles at 30 stations of the Lexington line running through Manhattan to Grand Central Station. With the help of automated payment devices and contactless interfaces from VeriFone, the fare can be charged to a MasterCard account in the same way as other purchases made with a PayPass card.
The use of plastic cards to pay for travel in New York, London, and Warsaw, as well as for parking in Poland, confirms that contactless payment technology is very effective in public transport. Besides increasing the speed of passenger flow at bus stops and train stations, this form of automatic fare collection relieves transport companies of all the complexities associated with handling cash, as well as the risk of fraud.
Near field communication (NFC) is a chip technology standard that allows devices to connect at very close range, enabling the user to initiate and perform contactless transactions and to access digital information, such as ringtones or files downloaded to mobile phones.
RFID is an identification method using a silicon chip located on a tag, which allows a radio-frequency device to make read/write requests and receive responses; many believe that RFID will replace bar codes and magnetic stripes. Innovations in communication, data processing, and technical capabilities, even in unrelated areas, should not be left unattended by the management of commercial banks (Popkova et al., 2013a). Thus, Citibank is testing a new payment system using the latest NFC wireless technology, embedded in the chips of cell phones. The first test batch of special payment cards and cell phones began to be used in the Indian city of Bangalore. According to Citibank, the new program, Citi Tap and Pay, covers 5 thousand mobile phones and several major Indian shopping centers equipped with special POS terminals that read purchase information and conduct the transfer.
New devices with a contactless function do not need to be swiped through the payment terminal: to pay for a purchase, the owner of any electronic device equipped with NFC (e.g., a mobile phone) only has to hold it near the terminal. The transaction then runs as a common operation with an ordinary credit card. After "breaking in" the business process, Citi plans to deploy large-scale use of NFC (DeYoung et al., 2007).
The National Bank of Kuwait (NBK), Visa, the telecom operator Zain, and the company ViVOtech tested NFC technology on April 9, 2009. According to Khalid Al Hajery, manager of Zain Kuwait: "Mobile phones are no longer just devices for making or receiving calls. Instead, continuous investment in innovation means that mobile phones are now becoming more and more a platform for new technologies, such as proximity payments." Visa Inc. announced the launch of contactless payment systems based on mobile phones; for now, this technology is available only in Malaysia. MasterCard Worldwide, in collaboration with the mobile solutions developer BlazeMobile, proposed new PayPass stickers, which can be attached to any mobile device and used to make contactless payments at 141 thousand retail and service locations around the world (Cooke, 1997).
The sticker transmits card information to the POS terminal via RFID technology. In addition to stickers, "smart posters" are being developed: using contactless transfer to a mobile phone, they can deliver promotional information, bonus and discount flyers, contacts, and even video and audio clips or ringtones.
The main challenges for the development of contactless payments are security and the creation of a business model. With regard to the security of NFC transactions, the banking community is inclined toward the use of biometrics for client identification. Most likely, these will be biometric technologies based on the voice "imprint" and the signature. Voice authentication can be carried out successfully even when the owner has a cold and a runny nose, and signature analysis takes into account the features of the handwriting and the pressure on the paper. Therefore, many banks are already actively conducting biometric experiments (Mukherjee et al., 2003).
At the conference of the European Banking Association (EBA) devoted to payments, which took place in Frankfurt in June 2006, certain directions for the development of innovative technologies in the banking sector were designated. Mark Garvin, chief analyst at JP Morgan Chase AG, suggested that by 2016 the payment services industry will have forgotten about paper checks, payments via mobile phones will have become commonplace, and biometrics will have eliminated the possibility of fraud. Apple announced on its official website the iChat program, which allows video broadcasting for interpersonal communication via the Internet at the level of digital television.
For banks, this means a return to "live" communication with the client, which was lost in the process of automating banking operations and the technological revolution. By sending the client to ATMs and teaching him to carry out simple operations on his accounts independently through Internet banking programs, bankers depersonalized the process of selling their services. CRM programs analyze customer needs and keep statistics on customers' use of particular services, but they do not take into account the short-term desires, moods, and life circumstances that encourage a person to use or refuse any banking service. Because of the lack of interpersonal communication, banks have lost sight of the need to establish trusting relationships with clients (Loonam & O'Loughlin, 2008).
Given the development of all these technologies, the Internet user can once again meet a bank employee face to face in a video call. This can be done from anywhere in the world from a computer, TV, or mobile phone with a high-definition image. This innovative service may be called video banking, the next evolution of mobile and Internet banking, and would be available around the clock. That is, the client would have his "own" manager, with whom he is personally acquainted and who knows all the individual characteristics of servicing that client. If a bank provides video service, the question arises of the number and work schedule of employees who can be available to customers simultaneously by video. This problem can be solved through the use of a realistic robot video avatar, with the dialogue controlled by touch-tone dialing on a mobile phone, an interactive HD TV touch screen, or a touchscreen computer (Popkova et al., 2013b).
In the next generation, the control system will be built on sensor technology and mechanisms for recognizing voice commands. That is, video answers recorded once by a personal manager for standard client requests, such as account balances, recent transactions, etc., will allow customers to "meet" their manager 24 hours a day.
Discussion
The analysis results showed that:
− banking service groups A and B generate the most revenue in comparison with other service activities, so their constant presence must be ensured;
− banking group AX has the highest sales revenue and sales stability; sales of this group are stable and well predicted;
− banking group BY has high sales revenues but insufficient stability of sales;
− the share of banking group C in revenue does not exceed 4.64%, so the management of the trade organization must examine the feasibility of providing these services and decide whether to reduce their range in order to avoid extra costs and free up trading space for more sought-after banking services;
− for banking services included in groups CX and CY, an ordering system can be used;
− the absence of group CZ suggests that spontaneous demand for banking services is absent in the trade organization.
The proposed method of analyzing the range of banking services makes it possible to:
− examine the profitability and efficiency of service activities in trade organizations;
− estimate the contribution of each type of banking service to the revenue from sales of goods, works, and services;
− evaluate the uniformity of sales;
− optimize the range of banking services.
Banks that can keep pace with technological progress (developing business processes in a timely manner, updating software, and adapting it to virtual services) gain a distinct advantage over other market participants. Hardware-independent high-speed Internet allows the user, with any wireless or mobile device, to access services online around the clock, 365 days a year (Kozak, 2005). Only banks that are ready for continuous online service will be able to survive the technological revolution. Therefore, the main aspect of competition will be the size of investment in the development and implementation of information technologies that meet market requirements. For successful participation in the competition, in our opinion, the following problems need to be solved:
− increase flexibility and adaptability to the market: not only introduce fundamentally new technologies but also develop a "kaizen approach" (a philosophy that focuses on continuous improvement of all aspects of life);
− move to new self-service computer technologies, remote servicing, virtual banking, and financial technologies;
− develop and introduce new loan products based on new technologies;
− meet customer needs in line with the Kano model developed abroad for innovation management: expected needs through necessary (must-be) characteristics, desired needs through one-dimensional characteristics, and exciting needs through attractive product qualities;
− make integrated use of new information and communication technologies for e-marketing;
− innovate in the forms and methods of management and in the training of employees;
− keep in mind that technological innovation can reduce the effectiveness of the Central Bank of the Russian Federation's control over banks.
Under these conditions, improving banking services through the development of service technologies becomes simply necessary for the competitiveness and development of a bank in the banking market.
The creation of new banking products and the introduction of innovative services rest on the following theoretical and methodological assumptions:
− a new paradigm of innovative economic development has formed;
− banking innovations are part of the total flow of innovations that have become typical of a modern economy;
− banking innovations are divided into product and technological innovations; the core of the product strategy is the current account/debit card;
− information technology has become a universal environment for the banking business;
− remote banking customer service based on Internet technologies should, without a doubt, be one of the main forms of retail banking services; in 2013 more than 2 million people in Russia used Internet banking, most of them customers of a few large banks;
− the informational conception of the banking network is beginning to fade into the background, with first place taken by its use as an integrating tool of human activity;
− one of the clearest examples of the introduction of banking strategy is the use of innovative changes.
Table 2. XYZ analysis of banking services.
Group X includes banking services characterized by stable sales (cutting fabrics): minor fluctuations in demand; demand is stable;
Table 3. Combination of ABC and XYZ analysis of banking services.
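The combined ABC/XYZ grouping used in the analysis above can be sketched in code. This is a minimal illustration only: the revenue figures below are invented (the article does not publish its raw data), and the 80%/95% cumulative-revenue cut-offs for ABC and the 10%/25% coefficient-of-variation cut-offs for XYZ are the conventional choices for this kind of analysis, not values taken from the article.

```python
import statistics

# Illustrative monthly revenue series per banking service (arbitrary units).
services = {
    "deposits":     [120, 115, 125, 118],
    "transfers":    [35, 45, 30, 40],
    "card_issuing": [20, 10, 18, 12],
}

def abc_class(revenues, a=0.80, b=0.95):
    """Classify services by cumulative share of total revenue (80/95 cut-offs)."""
    totals = {name: sum(series) for name, series in revenues.items()}
    grand = sum(totals.values())
    labels, running = {}, 0.0
    for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        running += total / grand
        labels[name] = "A" if running <= a else ("B" if running <= b else "C")
    return labels

def xyz_class(series, x=0.10, y=0.25):
    """Classify by coefficient of variation of sales (10%/25% cut-offs)."""
    cv = statistics.pstdev(series) / statistics.mean(series)
    return "X" if cv < x else ("Y" if cv < y else "Z")

abc = abc_class(services)
combined = {name: abc[name] + xyz_class(series)
            for name, series in services.items()}
print(combined)  # e.g. {'deposits': 'AX', 'transfers': 'BY', 'card_issuing': 'CZ'}
```

With these illustrative numbers, the high-revenue stable service falls into AX, the high-revenue volatile one into BY, and the low-revenue erratic one into CZ, mirroring the groups discussed in the text.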
In Vivo Tracking of Cell Therapies for Cardiac Diseases with Nuclear Medicine
Even though heart diseases are among the main causes of mortality and morbidity in the world, existing treatments are limited in their ability to repair cardiac lesions. Cell transplantation, originally developed for the treatment of hematologic ailments, is presently being explored in preclinical and clinical trials for cardiac diseases. Nonetheless, little is known about the possible efficacy and mechanisms of these therapies, and they remain the subject of continuous investigation. In this scenario, noninvasive imaging techniques lead to greater comprehension of cell therapies. Radiopharmaceutical cell labeling, first developed to track leukocytes, has been used successfully to evaluate the migration of cell therapies for myocardial diseases, and the number of reports employing this methodology has risen substantially in recent years. We review the diverse radiopharmaceuticals, imaging modalities, and results of experimental and clinical studies published to date, and we report on current limitations and potential advances of radiopharmaceutical labeling for cell therapies in cardiac diseases.
Introduction
Cardiovascular ailments are still the greatest causes of morbidity and mortality in the world, with significant financial and social consequences [1,2]. Despite medical and surgical advances in the past decades, there are currently no effective therapies to allow cardiac regeneration [3]. In this scenario, experimental studies have indicated that cell therapies may promote cardiac regeneration in acute and chronic myocardial diseases [3]. Although clinical studies have already been carried out, the efficacy and potential mechanisms of cell therapies for cardiac diseases are still under continuous investigation [4][5][6]. Possible mechanisms of action include the secretion of paracrine factors that reduce cardiomyocyte death, improve local microcirculation, and decrease the amount of fibrous tissue, which may improve heart function [3].
Noninvasive imaging modalities have the potential of providing better understanding of the biological process and the effectiveness of cell therapies for cardiac diseases [7].
One of the main applications of these techniques is to track the migration of cell therapies [7]. Among the different imaging techniques available, Nuclear Medicine has become one of the most employed techniques, due to its favorable characteristics, such as the availability of different radiopharmaceuticals and its high sensitivity [8]. In this paper, we will review preclinical and clinical studies that used Nuclear Medicine to evaluate cell migration and discuss important issues in this area.
Use of Radiopharmaceuticals for Cell Labeling
In the past decades, labeled leukocyte scintigraphy has become an important method to locate sites of infection and inflammation in the body [9,10]. The development of this method was a key landmark in the history of Nuclear Medicine. Conventional techniques include two-dimensional planar scintigraphy and three-dimensional single photon emission computed tomography (SPECT). Additionally, SPECT images may be acquired together with a computed tomography, resulting in hybrid SPECT/CT images [11]. This technique allows better localization of Nuclear Medicine findings, thus increasing the sensitivity and specificity of the method [11]. A variety of labeling methods with radionuclides has been created and used to study cell distribution in the body [12]. Currently, technetium-99m ( 99m Tc) is the most commonly utilized radionuclide in the world, due to favorable properties such as its decay by gamma emission with an energy of 140 keV and a 6-hour half-life, optimal physical characteristics for SPECT, allowing imaging for up to 24 hours after injection [9]. The radionuclide indium-111 ( 111 In) may also be used for cell labeling in SPECT, for example, through the compounds 111 In-oxine and 111 In-tropolone [9].
The radionuclide fluorine-18 ( 18 F) has a half-life of approximately 110 minutes and is the most frequently utilized in positron emission tomography (PET) and hybrid PET/CT, mainly as the radiopharmaceutical 18 F-fluorodeoxyglucose ( 18 F-FDG) [12]. PET has better spatial resolution than SPECT and allows quantification of the standardized uptake value (SUV) [12,13]. Zirconium-89 ( 89 Zr) is another promising radionuclide for cell labeling in PET; it has a 78.4-hour half-life and may allow cell tracking for two to three weeks [14].
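The practical consequence of these half-lives is how long imaging remains feasible. A minimal sketch of first-order radioactive decay, using the half-lives quoted in the text plus the standard 2.8-day (about 67.3-hour) value for 111 In, which the text does not state, shows the fraction of activity remaining after 24 hours:

```python
def remaining_fraction(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of the initial activity left after exponential decay."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# Physical half-lives in hours: 99mTc and 89Zr as quoted in the text,
# 18F converted from the quoted 110 minutes, 111In from its standard
# 2.8-day value (an assumption, not stated in the text).
HALF_LIFE_H = {"99mTc": 6.0, "18F": 110 / 60, "111In": 67.3, "89Zr": 78.4}

for nuclide, t_half in HALF_LIFE_H.items():
    frac = remaining_fraction(24, t_half)
    print(f"{nuclide}: {frac:.2%} of activity left after 24 h")
```

After 24 hours, 99m Tc has passed four half-lives (about 6% of activity left, the practical limit mentioned above), 18 F is essentially gone, while 89 Zr still retains around 80%; repeating the calculation at 14 days leaves 89 Zr with roughly 5%, consistent with the two-to-three-week tracking window.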
Tracking cells with SPECT and PET may be separated into two strategies: direct and indirect [15]. Direct tracking is achieved by labeling cells with a radiotracer in vitro with subsequent cell administration [7,15]. The most widely used radionuclides for direct labeling are 99m Tc and 111 In for SPECT and 18 F for PET [9,16]. Indirect cell tracking may be achieved employing reporter gene/probe systems that have been the topic of excellent reviews [8,17]. For instance, a lentivirus may be used to deliver a reporter gene for expression of herpes simplex virus truncated thymidine kinase (TK), which catalyzes a reaction leading to the accumulation of the probe 18 F-9-[4-fluoro-3-(hydroxymethyl)butyl]guanine derivatives ( 18 F-FHBG) for PET imaging [17]. Another example of a reporter gene is the Sodium Iodide Symporter (NIS), a cell surface protein usually expressed in thyroid cells, salivary glands, mammary glands, and choroid plexus, but not in organs such as the heart [18]. Cells overexpressing NIS will capture 99m Tc and iodine-123 ( 123 I) for SPECT, as well as iodine-124 ( 124 I) for PET, allowing the evaluation of viable cell homing in the heart after transplantation [18].
Preclinical Studies
3.1. Direct Cell Labeling. We identified 31 published articles that used direct cell labeling to track the migration and homing of cell therapies in preclinical models of heart diseases, all of them for myocardial infarction (Table 1).
Effect on Cell Viability, Metabolic Activity, and Migration.
Although the use of 111 In radiopharmaceuticals allows cell tracking for longer periods in comparison to 99m Tc, 111 In has higher-energy emissions (171 and 245 keV), which lead to images of lower resolution and a greater cell dose that may decrease cell viability [19][20][21]. 111 In can affect the viability, metabolic activity, and migration of stem cells due to the internalization of Auger electrons emitted at close distances; these electrons may cause considerable toxicity to target cells, reducing cell viability [10,[19][20][21].
Jin et al. carried out an interesting study in which they evaluated the viability of bone marrow-derived mesenchymal stem cells (BM-MSCs) labeled with 111 In [22]. Distinct samples of 5 × 10 6 cells were labeled with 0.1 to 18 MBq of 111 In-tropolone. The authors reported that cells had 100% viability when incubated with up to 0.9 MBq, which corresponded to 0.14 Bq per cell.
Brenner et al. [19] reported the impact of labeling human CD34 + hematopoietic progenitor cells (HPCs) with 111 In-oxine. HPCs (1 × 10 6 /mL) were incubated with 30 MBq of 111 In-oxine for 1 hour, and cell viability was assessed at 1, 24, 48, and 96 hours. Although no significant changes were observed at 24 hours after labeling, the number of dead cells increased after 48 and 96 hours. Furthermore, cell migration was rapidly reduced after 24 hours.
Suhett et al. [23] studied the binding sites for 99m Tc in rat bone marrow mononuclear cells (BM-MNCs). BM-MNCs were labeled with 45 MBq of 99m TcO4-. After labeling, cells were carefully disrupted and differentially centrifuged for organelle separation. Viability of the labeled cells was 93%, and most of the radiation remained in the supernatant comprising the cytosol and membrane-bound ribosomes.
18 F-FDG is regarded as the gold standard for the assessment of myocardial viability. 18 F-FDG is a glucose analogue that enters cardiomyocytes through glucose transporters (GLUTs) such as GLUT1 and GLUT4. Within the cell, 18 F-FDG undergoes phosphorylation by hexokinase and is converted to 18 F-FDG-6-phosphate, which is not further metabolized and is therefore retained within the cell. Preclinical studies by Chan and Abraham reported that 18 F-FDG caused no interference with the proliferation of cardiac-derived stem/progenitor cells (CDCs) [7]. Similarly, Wolfs et al. found no significant changes in the ultrastructure and differentiation of mouse MSCs and rat multipotent adult progenitor cells [24].
Hexadecyl-4-[ 18 F]fluorobenzoate ( 18 F-HFB) is a lipophilic radiopharmaceutical that is absorbed through the cell membrane, allowing cell tracking by PET. Zhang et al. [25] compared the labeling of human peripheral blood-derived circulating progenitor cells (CPCs) with 18 F-HFB and 18 F-FDG in mice after myocardial infarction. Cells were injected close to the site of cardiac injury, and images were acquired by micro-PET 10 minutes and 2 and 4 hours after injection. 13 N-NH3 was used to outline the liver and the heart. Labeling with 18 F-HFB showed no reduction in cell viability with 14.8-22.2 MBq of radioactivity in 2 × 10 6 cells; however, higher activities (185-259 MBq) resulted in significant cell death. After 24 hours, the reduction in viability of 18 F-HFB-CPCs was 13.3%, whereas in controls it was 6.9%.
After 5 days, cell viability had decreased for both groups: 18 F-HFB-CPCs (10.4%) and 18 F-FDG-CPCs [26]. Radionuclide leakage may occur from viable cells and cellular debris [26]. Many authors have applied different in vivo experiments to determine cell death, radiolabel leakage, and cell survival [26,27]. Another issue to be evaluated is the normal turnover of the cells: one may label cells and administer them in order to study the clearance characteristics of viable cells that did not die in vivo [26]. Blackwood et al. [26] quantified the survival of BM-MSCs labeled with 111 In transplanted into the canine myocardium. The authors also evaluated the clearance of lysed 111 In-labeled cells. Serial SPECT images were acquired after direct epicardial injection to determine the time-dependent radiolabel clearance. The average long biologic half-life was 74.3 hours for labeled cells and 19.4 hours for lysed cells.
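The biologic half-lives reported above describe washout alone; what an imaging study actually observes also includes physical decay of the label. A small sketch of the standard effective-half-life relation (decay rates add, so 1/T_eff = 1/T_phys + 1/T_bio) illustrates this, assuming the standard 67.3-hour physical half-life of 111 In, which is not stated in the text:

```python
def effective_half_life(physical_h: float, biologic_h: float) -> float:
    """Combined half-life: rates add, so 1/T_eff = 1/T_phys + 1/T_bio."""
    return (physical_h * biologic_h) / (physical_h + biologic_h)

T_PHYS_111IN = 67.3  # hours; standard physical half-life of 111In (assumed, not from the text)

# Biologic half-lives reported for canine myocardium in the study above.
print(effective_half_life(T_PHYS_111IN, 74.3))  # intact labeled cells, ~35 h observed
print(effective_half_life(T_PHYS_111IN, 19.4))  # lysed cells, ~15 h observed
```

The effective half-life is always shorter than either component, which is why serial SPECT signal fades faster than the biologic clearance figures alone would suggest.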
The labeling efficiency of direct labels differs between methods and needs to be taken into account [28]. For instance, it has been reported that labeling with 99m Tc-tropolone was more effective and stable in comparison to 99m Tc-hexamethylpropyleneamine oxime ( 99m Tc-HMPAO) [29]. In another example, Zhang et al. reported that 18 F-HFB labeling showed higher efficiency than 18 F-FDG [25].
Biodistribution after Intravenous Injection.
Kraitchman et al. [30] investigated the migration of BM-MSCs labeled with 111 In-oxine, injected intravenously 72 hours after the induction of myocardial infarction in dogs. SPECT imaging was carried out for up to 8 days after cell transplantation. On the day of cell therapy, uptake was mainly restricted to the lungs in both infarcted and control animals, with low uptake in the heart. At 24 hours, uptake remained constant in the heart, decreased in the lungs, and increased in the liver and spleen.
Lutz et al. [31] studied the migration of systemically injected bone marrow-derived cells in mice after myocardial infarction. After induction of the infarction, animals received intramyocardial injections of stem cell factor (SCF) in peri-infarcted areas. Cells were labeled with 111 In-oxine and injected in the tail vein 24 hours after the infarction. Animals were sacrificed and hearts removed for analysis in a gamma counter 24 or 72 hours later. The analysis indicated that intramyocardial injections of SCF significantly increased myocardial uptake in comparison with infarcted animals that received saline injections and with sham-operated animals at both time points.
Garikipati et al. [32] investigated the efficacy of therapy with fetal cardiac mesenchymal stem cells (FC-MSCs) in rats after myocardial infarction. FC-MSCs were isolated and cultured from fetal rat hearts. Seven days after the induction of the lesion, the animals were divided into FC-MSC and saline groups. Cells were labeled with 99m Tc-HMPAO and injected into the tail vein. Multipinhole gated SPECT/CT was carried out six hours after the intravenous infusion; 99m Tc labeled cells were mainly present in the lungs, with focal homing in the heart.
Biodistribution after Intraventricular Injection.
Brenner et al. [19] performed intraventricular injections of human HPCs into the left ventricular cavity of rats after myocardial infarction. SPECT was performed 1, 24, 48, and 96 hours after transplantation. Liver, kidneys, and spleen combined had 37% and lungs 17% of whole body uptake 1 h after cell transplantation. Twenty-four hours after the injection, lung uptake was no longer detected, while homing to the liver, kidneys, and spleen increased to 57%. Only 1% of the injected activity was found in the heart of transplanted animals.
Aicher et al. investigated the transplantation of 111 In-oxine labeled endothelial progenitor cells (EPCs) into rats after myocardial infarction [33]. Labeled cells were delivered into the tail vein or into the left ventricular cavity. Pinhole SPECT was performed after cell administration. Total uptake in the liver, kidneys, and spleen was 71% after 96 hours, while myocardial uptake was only 1-2% after intravenous injection and 3-5% after intraventricular cavity infusion.
Barbash et al. evaluated the effectiveness and feasibility of systemic administration of BM-MSCs in rats following myocardial infarction. Cells were labeled by incubation with 99m Tc-HMPAO [34]. Three injection methods were studied. The first approach was by infusion of BM-MSCs in the femoral vein. In the second strategy, BM-MSCs were infused directly into the left ventricle. In the third group, cells were injected into the right ventricle, but all animals died from pulmonary embolism. Images were acquired 4 hours after the infusion and indicated that rats with myocardial infarction had higher uptake of 99m Tc labeled cells in the heart than sham animals. Moreover, intravenous infusion resulted in lower myocardial homing due to pulmonary cell retention.
Biodistribution after Intramyocardial Injection.
Zhou et al. [35] investigated the distribution of rat embryonic cardiomyoblast (H9c2) cells labeled with 111 In-oxine in rats after myocardial infarction. Cells were intramyocardially transplanted around the infarcted region immediately after induction of the lesion, and SPECT images were acquired 2, 24, 48, 72, and 96 hours later. The authors reported that cell uptake was detected at the injection site up to 96 hours after administration.
Shen et al. [36] used magnetic resonance imaging (MRI) and SPECT imaging to monitor H9c2 cell transplantation in rats after myocardial infarction. Myocardial infarction was induced and 111 In labeled cells were injected in regions close to the injured site. MRI was performed 5-7 days after SPECT images. Through a coregistration algorithm, it was possible to carry out the fusion of SPECT-MRI images. The authors were able to monitor the uptake of 111 In-oxine labeled cells and the perfusion in 99m Tc-sestamibi images.
Tran et al. [37][38][39] evaluated in a series of studies the migration of 111 In-oxine labeled BM-MSCs injected one to four months after myocardial infarction in rats. Cells were injected in the infarcted areas. Cell distribution was compared with 99m Tc-sestamibi imaging of myocardial perfusion using a 17-segment division of the left ventricle. The authors concluded that BM-MSC homing was heterogeneous and did not always match the infarcted regions [37][38][39].
Wisenberg et al. [40] evaluated dogs using both imaging of 111 In-tropolone labeled cells and late gadolinium enhancement cardiac MRI for up to 12 weeks after a 3-hour coronary occlusion. The animals were injected with BM-MSCs and imaged at day 0 (surgery) and after 4, 7, 10, and 14 days. SPECT imaging indicated an effective biological clearance half-life of ∼5 days from the injection site, while cardiac MRI demonstrated a pattern of progressive infarct reduction over 12 weeks.
Terrovitis et al. [41] labeled rat CDCs with 18 F-FDG to monitor cell therapy in rats after myocardial infarction. CDCs were injected intramyocardially. In other groups of animals, the effects of fibrin glue, bradycardia (by adenosine injection), and induction of cardiac arrest on cell homing were investigated. One hour after cell transplantation without additional measures, PET indicated a mean myocardial homing of 17.8%. Adenosine injection decreased the heart rate and doubled mean cell homing to 35.4%. A comparable enhancement was seen when the authors applied fibrin glue epicardially, with mean cell homing increasing to 37.5%. However, the greatest increase was seen after induction of cardiac arrest, when mean homing reached 75.6%.
Lang et al. [42,43] studied the distribution of 18 F-FDG labeled murine embryonic stem cells (ESCs) or fibroblasts in C57BL6/N mice after myocardial infarction. Five minutes after the infarct, ESCs or fibroblasts were injected intramyocardially [42,43], and images were acquired on a preclinical PET scanner. The authors reported that the percentages of uptake in the heart were 5.2-5.3% after 25 minutes, 4.8-5.0% after 1 hour, and 5.6-5.7% after 2 hours.
Danoviz et al. assessed the transplantation of adipose tissue-derived stem cells (ADSCs) with two biopolymers, fibrin and collagen, in a murine model of acute myocardial infarction [27]. Cells were labeled with 99m Tc-HMPAO. Twenty-four hours after induction of the lesion, the animals were injected with cells suspended in 100 µL of carrier by intramyocardial route. Cells were infused in the border of the lesion with fibrin, collagen, or culture medium. Radioactivity counting of the organs revealed high levels of radioactivity in the liver, kidneys, and lungs. Both biopolymers increased cellular retention, but the collagen group showed higher uptake (26.8%) when compared to fibrin and culture medium (13.7% and 4.84%, resp.).
Mitchell et al. [44,45] and Sabondjian et al. [46] assessed the migration of EPCs in canine models of myocardial infarction up to 7 days after induction of the lesion. EPCs were labeled with 111 In-tropolone and injected by epicardial and endocardial routes. SPECT imaging was performed up to 15 days after cell transplantation. The authors reported that cell homing occurred in hypoperfused areas and that epicardial and endocardial injections led to similar uptake.
Maureira et al. [47] developed an in vivo technique with pinhole SPECT to monitor stem cell migration after myocardial infarction in rats. After coronary occlusion, autologous BM-MSCs were labeled with 111 In-oxine. An intramyocardial injection was administered in the infarcted region. Two days after the procedure, 99m Tc-sestamibi was injected to compare homing of 111 In labeled cells and myocardial perfusion. Left ventricle perfusion and function in all animals were monitored 2 days before cell therapy and 1-6 months after therapy using a pinhole gated SPECT. Significant improvements in cardiac perfusion were observed in injured areas and also in areas not transplanted.
Kim et al. [48] studied the homing of ADSCs after direct labeling with 124 I-hexadecyl-4-tributylstannylbenzoate ( 124 I-HIB) or 18 F-FDG in rats after myocardial infarction. An intramyocardial injection was performed at the infarct site. 124 I-HIB labeled cells were seen at the infarct area and monitored for up to 3 days in lesioned animals. The authors reported that labeling efficiency with 124 I-HIB was higher than with 18 F-FDG, indicating it could be a good method to monitor stem cell homing.
Elhami et al. [49] investigated the migration of 18 F-FDG labeled ADSCs after myocardial infarction in rats. Immediately after the infarct induction, cell transplantation was carried out by intramyocardial, intraventricular, or intravenous route. In another group, cells were injected intramyocardially 7 days after the infarct. The authors reported that the intravenous route led to the lowest cardiac homing (1.2% of infused ADSCs) 4 hours after cell transplantation. Intraventricular injection led to an uptake of 3.5% in the heart, while intramyocardial injection led to the highest myocardial cell homing (14%). Interestingly, in the group that received an intramyocardial cell injection 7 days after the myocardial infarction, cell homing was lower (4.5%) than in the group that received cells immediately after the infarct induction.
Biodistribution after Intracoronary Injection.
Qian et al. [50] determined the distribution of BM-MNCs after myocardial infarction in Chinese mini-pigs. Cells were labeled with 18 F-FDG and injected by intracoronary route 7 days after the infarct. One hour after cell transplantation, 6.8% of the whole body uptake was located at the infarct site, while the liver and spleen showed more than 90% of the uptake.
Doyle et al. [51] tracked CPCs in pigs after acute myocardial infarction. CPCs were labeled with 18 F-FDG. One group received CPCs divided into 3 cycles after a balloon catheter was positioned and inflated in the lesioned artery. A second group received a single bolus infusion of CPCs without balloon inflation. The authors reported that one hour after cell transplantation the group that received the infusion in 3 cycles with balloon occlusion had lower uptake in the heart than the group that received a single bolus injection (8.7% versus 17.8%, resp.). The majority of activity (>60%) was concentrated in the lungs after 1 hour in both groups, and there was moderate uptake in the liver and spleen.
Keith et al. [52] investigated the impact of intracoronary human CDC injection on cell homing in a pig model of myocardial infarction. Cells were injected with or without balloon inflation after labeling with 111 In-oxine. SPECT was carried out 24 hours after cell transplantation. The authors reported that injection with balloon occlusion led to myocardial homing similar to that without balloon occlusion (5.41% versus 4.87%, resp.) and concluded that the risk involved in the coronary occlusion approach would not be warranted.
Hou et al. evaluated the distribution of peripheral blood mononuclear cells (PB-MNCs), labeled with 111 In, in pigs after myocardial infarction. The lungs had 1%, 3%, and 3% of the uptake, while myocardial uptake was 2.6%, 3.2%, and 11% after intracoronary, interstitial retrograde coronary venous, or intramyocardial injections, respectively.
Tossios et al. [53] monitored the distribution of BM-MNCs following induction of myocardial infarction in pigs. After labeling with 111 In-tropolone, cells were injected by intramyocardial or by intracoronary route with or without balloon occlusion. One hour after injection, 20.7%, 4.1%, and 6.1% of the uptake were located in the heart after intramyocardial, intracoronary without balloon, and intracoronary with balloon infusions, respectively. Twenty-four hours later, myocardial uptake was 15.0%, 3.0%, and 3.3%, respectively. The lungs, liver, and spleen had 50%, 10%, and 5% of the uptake in the whole body, respectively.
Mäkelä et al. [54] evaluated the migration of BM-MNCs in a pig model of myocardial infarction. Cells were labeled with 111 In-oxine and transplanted by intramyocardial or intracoronary routes 30 minutes after induction of the lesion. SPECT was acquired 2 and 24 hours after cell transplantation and biopsies from different organs were also performed to allow gamma counting. The authors reported that the intracoronary injection led to <15% of the cardiac uptake observed after intramyocardial injection, while lung uptake after intramyocardial injection was <15% of the pulmonary uptake observed after intracoronary infusion.
Forest et al. studied a preclinical model of myocardial infarction in pigs [55]. Seven days after induction of the lesion, BM-MNCs were labeled with 99m Tc. Animals were divided into three groups: control group, intracoronary injection, and intravenous injection of 99m Tc labeled cells. Intravenous administration led to higher cell accumulation in the lungs, while intracoronary injection led to greater myocardial uptake.
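Taken together, the pig studies above point to a consistent ordering of cardiac homing by delivery route. A purely illustrative aggregation of the cardiac uptake values transcribed from Tossios et al. and Hou et al. above (the grouping deliberately ignores differences in cell type, imaging time point, and balloon use, so the means are indicative only):

```python
# Cardiac uptake (%) reported in the studies cited above; illustrative
# grouping only, since protocols differ between studies.
cardiac_uptake = {
    "intramyocardial": [20.7, 11.0],    # Tossios et al. (1 h), Hou et al.
    "intracoronary": [4.1, 6.1, 2.6],   # Tossios et al. (no/with balloon), Hou et al.
}

mean_uptake = {route: sum(v) / len(v) for route, v in cardiac_uptake.items()}
for route, m in sorted(mean_uptake.items(), key=lambda kv: -kv[1]):
    print(f"{route}: {m:.1f}%")
```

Even with this crude pooling, intramyocardial delivery shows severalfold higher cardiac retention than intracoronary infusion, anticipating the conclusions drawn later in this review.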
Indirect Radiolabeling: Reporter Gene/Probe Systems.
Reporter gene/probe imaging for SPECT and PET has been applied to evaluate the survival of transplanted cells in animal models of cardiac diseases [8]. Some of the disadvantages of using reporter genes include the possible immunogenicity of the viral reporter gene, which limits the application of the technique in humans [7]. Moreover, the stability of transfection and expression must be improved and the potential interference with stem cell function and differentiation from vector transfection or transduction must be minimized [56]. We identified 9 published articles that used indirect cell tracking to evaluate the migration and homing of cell therapies in preclinical models of heart diseases, all of them for myocardial infarction ( Table 2).
Biodistribution after Intramyocardial Injection.
Gyöngyösi et al. [57] used reporter gene imaging to monitor the migration of BM-MSCs in a pig model of myocardial infarction. Cell transfection was performed with a lentivirus for expression of TK. Sixteen days after the infarction, a group of animals received BM-MSCs by intramyocardial injection. 18 F-FHBG was then injected 30 hours and 7 days after cell transplantation for in vivo imaging. The authors reported a decrease in myocardial uptake of 18 F-FHBG after 7 days in comparison with the 30-hour images, as well as a mild increase in pericardial and pleural uptake.
Terrovitis et al. [58] transfected rat CDCs with a lentivirus to express the NIS gene. In vivo images were obtained after intramyocardial cell injection in mice after myocardial infarction. An injection of 99m Tc for SPECT imaging or 124 I for PET imaging was used to evaluate the expression of NIS gene in transplanted CDCs. The authors were able to detect the transplanted CDCs with a threshold of approximately 10 5 cells. Cell homing was seen up to 6 days after CDC transplantation but less than 5% of cells remained in the heart, due to migration to the lungs and systemic circulation.
Lee et al. [59] investigated the homing of canine iPSCs after cell transplantation in dogs. Cells were injected intramyocardially 30 minutes after the induction of myocardial infarction. The authors injected an activity of approximately 536 MBq of 18 F-FHBG and carried out PET/CT 8 hours after cell transplantation. Imaging revealed cell homing to the anterior myocardial wall.
Liu et al. [60] and Lan et al. [61] analyzed the migration of human CDCs after myocardial infarction in severe combined immunodeficiency (SCID) Beige mice. A total of 1 × 10 6 cells transfected with a TK reporter gene were injected by intramyocardial route immediately after the induction of myocardial infarction. On days 1, 7, 14, 21, and 28 after cell therapy, an activity of 7.4 MBq of 18 F-FHBG was injected to allow PET imaging. A gradual decrease in the amount of surviving cells was noticed during the follow-up. Interestingly, the authors reported that early cell homing predicted ensuing functional improvement [60].
Using reporter genes, Templin et al. [62] were able to monitor human induced pluripotent stem cells (iPSCs) in pigs after myocardial infarction. Cells were labeled 90 minutes before injection with 100 MBq of 123 I, and a volume of 250 µL was injected in three regions of the animals' hearts. The anterior wall of the left ventricle received 50 million human MSCs. The lateral and septal walls received 50 million NIS-positive [NIS(pos)] human iPSCs or 50 million NIS(pos) human iPSCs mixed with 50 million human MSCs. 99m Tc-tetrofosmin was intravenously injected to assess myocardial perfusion. Images were acquired for 5 minutes on SPECT/CT equipment up to 15 weeks after cell transplantation, with intracoronary injection of 123 I. No uptake was seen outside the heart, and NIS(pos) human iPSCs were detected at the injection sites, indicating successful cell homing.
Yan et al. [63] assessed the distribution of BM-MSCs in nude mice after myocardial infarction. Cells transfected with a TK gene were injected intramyocardially 10 minutes after induction of the lesion. On the same day and 3 and 7 days after cell transplantation, 18 F-FHBG was injected and PET was carried out. The authors described that the highest myocardial uptake occurred 3 days after cell therapy and that infarcted animals had higher homing than control animals.
Pei et al. [64] evaluated the homing of BM-MSCs in rats after myocardial infarction. Immediately after the lesion, cells were intramyocardially injected. Two, 3, and 7 days after cell transplantation, 18 F-FHBG was injected to allow cell tracking. The authors reported that myocardial uptake could be seen up to 7 days following cell therapy, and homing was mostly distributed to the liver, lungs, intestines, stomach, and spleen.
Lee et al. [65] studied the distribution of ADSCs transfected with the NIS gene in dogs following myocardial infarction. NIS expressing ADSCs were intramyocardially injected 7 days after the infarct induction. 99m TcO4- was injected at 2 hours and 1, 2, 5, 7, 9, and 12 days after cell transplantation. The authors reported that cell homing was identified in the apex and lateral wall of the left ventricle, reached its peak at 2 days, and was seen until 9 days after cell transplantation.
Direct Cell Labeling.
We have found 18 published articles in English regarding 17 different trials that employed radionuclides to track cell therapies for cardiac diseases, with a total of 293 treated patients (Table 3). All studies used direct labeling methods.
Biodistribution after Intracoronary Injection.
Caveliers et al. [66] conducted a cell therapy trial with eight chronic ischemic heart disease patients. They reported that infusion of CD133 + selected PB-MNCs labeled with 111 In-oxine is a safe and feasible procedure. They also performed 99m Tc-MIBI SPECT for evaluation of myocardial perfusion and compared it to cell migration. Uptake in the heart was 6.9% to 8% and 2.3% to 3.2% after 2 and 12 hours, respectively.
Kurpisz et al. [67] studied the migration of BM-MNCs in 3 patients with acute myocardial infarction. Cells were labeled with 111 In-oxine and injected by intracoronary route. Nuclear Medicine imaging was carried out 24 hours after cell transplantation. The authors reported that 2.6-11.0% of the uptake was seen in the heart, 12.3-56.7% in the liver, and 5.2-12.6% in the spleen.
Schots et al. [68] evaluated 13 patients with nonacute myocardial infarction who received CD133 + cells labeled with 111 In-oxine by intracoronary transplantation. Subjects had uptake of 6.9 to 8.0% in the myocardium in 2-hour images and 2.3 to 3.2% in 12-hour images.
Schächinger et al. [69] included 20 patients with ischemic myocardial disease whose myocardial viability was confirmed by PET and intracoronary Doppler. The time from coronary injury to BM-MNC therapy ranged from 5 days to 17 years. After administration of 18 F-FDG labeled cells, the average myocardial uptake in the first 24 hours was higher in subjects with acute myocardial infarction and gradually decreased thereafter. The authors concluded that the low viability of the lesioned myocardium and the reduction of coronary flow reserve were important predictors of the proangiogenic potential of progenitor cells.
Dedobbeleer et al. [70] published a study of 12 patients with nonacute myocardial infarction. Five patients were in the control group and 7 patients received CD34 + cells labeled with 18 F-FDG. One hour after injection, 3.2% of the radioactivity was observed in the myocardial infarction zone.
Blocklet et al. [71] evaluated the injection of PB-MNCs labeled with both 111 In-oxine and 18 F-FDG in 6 patients with acute myocardial infarction. The double labeling allowed cell monitoring with the high sensitivity and resolution of PET while permitting late imaging with 111 In. Mean myocardial uptake 1 hour after infusion of PB-MNCs was 5.5% by PET, whereas in the 111 In-oxine images at 19 and 43 hours only 1 patient had myocardial uptake.
Comparison of Biodistribution of Intracoronary and Intravenous Injection.
Hofmann et al. [72] carried out a cell therapy trial 5 to 10 days after a myocardial infarction in 9 patients using CD34 + BM-MNCs. Of the total amount of injected cells, 5% were labeled with 18 F-FDG. The patients were divided into 3 protocols. In the first protocol, 3 patients received unselected BM-MNCs by intracoronary route and underwent PET imaging 55 to 75 minutes after infusion. In a second protocol, 3 patients initially received 5% of the unselected BM-MNCs by intravenous route, followed by a first PET 50 to 60 minutes after cell transplantation, and then received the remaining 95% of unselected BM-MNCs by intracoronary route, followed by a second PET 60 to 70 minutes later. In a third protocol, 3 patients received immunomagnetically enriched CD34 + cells by intracoronary route and underwent PET imaging 60 to 75 minutes after cell injection. In the first protocol, homing varied from 1.3% to 2.6%. In the second group, there was no detectable myocardial homing after the initial intravenous infusion, but homing increased to 1.8 to 5.3% after intracoronary injection. In the third group, in which CD34 + cells were injected by intracoronary route, cell homing was higher, varying from 14% to 39%.
Kang et al. [73] published a report in which 20 patients with recent or old myocardial infarctions received PB-MNCs labeled with 18 F-FDG. The PB-MNCs were collected by apheresis after mobilization with granulocyte colony stimulating factor (G-CSF). Seventeen of the patients received cells by intracoronary route and 3 patients by intravenous route. The mean efficiency of cell labeling with 18 F-FDG was 72%, and a total activity of 44.4 to 175 MBq was injected through a catheter after stent implantation in the infarcted artery. PET/CT images were obtained 2, 4, and 24 hours after injection. Two hours after intracoronary injection, 1.5% of the infused cells were present at the lesioned area. Delayed images up to 20 hours indicated prolonged accumulation of the cells in heart tissue. Intravenous infusion of the labeled PB-MNCs revealed high pulmonary trapping and showed no significant activity in the heart.
Goussetis et al. [74] studied 8 subjects with chronic ischemic heart disease undergoing CD133 + and CD133 − CD34 + selected BM-MNC transplantation by intracoronary infusion. Cells were labeled with 99m Tc and scintigraphies acquired 1 and 24 hours after injection indicated cardiac uptake of 9.2% and 6.8%, respectively. Reevaluation with coronary angiography and echocardiography in 6 patients after 3 months of cell therapy revealed no complications.
Penicka et al. [75] included 10 patients, 5 of them with acute myocardial infarction and the other 5 with nonacute myocardial infarction. All patients received BM-MNCs labeled with 99m Tc-HMPAO and myocardial uptake was analyzed 2 and 20 hours after injection. There was a lack of uptake 20 hours after transplantation in subjects with acute myocardial infarction.
A randomized study of 30 subjects with acute myocardial infarction, published by Silva et al. [76] and Moreira et al. [77], compared the distribution and retention pattern of 99m Tc-HMPAO labeled BM-MNCs after anterograde intra-arterial or retrograde intravenous coronary routes. Early and late retention of labeled cells, evaluated in SPECT images 4 and 24 hours after injection, was higher in the group that received cells by the anterograde intra-arterial route, regardless of the presence of microcirculation obstruction. Early and late retention were, respectively, 7.06% and 6.38% in the intra-arterial group and 1.4% and 0.99% in the intravenous group.
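Comparing retention values from 4-hour and 24-hour images, as in the study above, implicitly requires correcting the measured counts for physical decay of 99m Tc. A minimal sketch of that standard correction (the function name is ours, the 6.01-hour physical half-life of 99m Tc is assumed, and whether the cited studies applied exactly this correction is not stated here):

```python
import math

T_HALF_TC99M_H = 6.01  # assumed physical half-life of 99mTc, in hours

def decay_corrected(measured_activity: float, t_hours: float) -> float:
    """Scale a measured activity back to injection time so images
    acquired at different times can be compared on the same scale."""
    return measured_activity * math.exp(math.log(2) * t_hours / T_HALF_TC99M_H)

# The same measured count rate at 24 h corresponds to ~10x more
# injection-time activity than at 4 h, since 2**((24 - 4) / 6.01) ≈ 10.
ratio = decay_corrected(1.0, 24.0) / decay_corrected(1.0, 4.0)
print(f"24 h vs 4 h correction factor: {ratio:.1f}")
```

The steep correction factor illustrates why late 99m Tc images are noise-limited and why longer-lived labels such as 111 In are preferred for multi-day follow-up.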
Musialek et al. [78] compared two cell delivery techniques: the perfusion catheter technique (PC) and the over-the-wire coronary occlusion technique (OTW). Thirty-four patients who had suffered myocardial infarction were randomly assigned to PC or OTW infusion of autologous bone marrow CD34 + cells labeled with 99m Tc-HMPAO. One hour after infusion, SPECT images indicated activities of 4.86% and 5.05% in the myocardium after OTW and PC injections, respectively. The authors concluded that although the efficacy of cell delivery did not differ between infusion methods, PC infusion offered a more physiological alternative and avoided the ischemic episodes caused by OTW occlusion. The same group performed another study evaluating the migration of intracoronary injected 99m Tc-HMPAO labeled bone marrow CD34 + cells in subjects after myocardial infarction, describing a mean cardiac uptake of 5.2% one hour after cell transplantation [79].
Our group published a study with 6 Chagasic cardiomyopathy patients who received intracoronary injection of 99m Tc labeled BM-MNCs [80]. SPECT images performed 1, 3, and 24 hours after administration of the labeled cells revealed a myocardial uptake of 5.4%, 4.3%, and 2.3%, respectively. This decrease in relative myocardial uptake could be related to leakage of 99m Tc from labeled cells rather than to a reduction in the number of cells. We also observed that the cell distribution was heterogeneous and limited and was related to the pattern of myocardial perfusion.
Kollaros Haddad et al. [83] included thirty-seven patients with nonischemic dilated cardiomyopathy. On average, 75 × 10 6 CD34 + PB-MNCs were labeled with 99m Tc-HMPAO and infused via transendocardial route. SPECT images were acquired 2 and 18 hours after infusion to assess homing and cellular distribution as well as to detect cell migration potential. Twenty-eight patients consented to further myocardial homing imaging; in those patients, the stem cell homing rate had a median value of 11.4% (range 3.8%-22.3%).
Alternative Approaches to Cell Tracking
Besides radionuclide labeling, different techniques may be used to study cell distribution in vivo. Fluorescence imaging (FLI) and bioluminescence imaging (BLI) have been effectively employed to track cells in preclinical studies of cell transplantation for cardiac diseases [84,85]. Nevertheless, factors such as the limited tissue penetration of light hinder the clinical application of FLI and BLI [86]. Superparamagnetic iron oxide nanoparticles (SPIONs), originally created to detect liver tumors in patients after intravenous infusion, were adapted for preclinical exogenous cell labeling, which allowed the study of cell migration for weeks following transplantation with exceptional resolution and morphologic correspondence on MRI [87]. Early clinical studies have been conducted in cell therapies for noncardiac diseases [88][89][90][91][92]. Nonetheless, SPION labeling shares the restrictions of other exogenous contrast agents, for instance, the possibility of label dilution with cellular division and of phagocytosis of labeled stem cells by macrophages. Moreover, there are conflicting data on the impact of nanoparticle labeling on the biological properties of cells [93][94][95][96], and exogenous SPION cell labeling has only been approved for research applications.
Due to these factors, radiopharmaceutical labeling continues to be a relevant technique for the assessment of stem cell distribution in vivo [7]. It allows more accurate definition of cell location and the combination of Nuclear Medicine with CT or MRI enables the study of diverse characteristics, for example, (1) comparison of cell migration with structural and functional results and (2) the outcome of different cell doses and injection methods on cell homing.
Impact of the Route of Administration
Radiopharmaceutical cell tracking has already increased understanding of cell migration in preclinical and clinical studies of cell therapies for cardiac diseases. Among other conclusions, preclinical [55] and clinical [72,73] studies indicated that intravenous infusions of BM-MNCs and PB-MNCs lead to lower cardiac homing in comparison with intracoronary injections. On the other hand, intramyocardial injection of PB-MNCs [97] and BM-MNCs [53,54] led to greater cardiac homing of transplanted cells in comparison to intracoronary infusion in preclinical studies. Similarly, transendocardial injection of BM-MNCs led to greater homing in comparison to intracoronary infusion in subjects with nonischemic dilated cardiomyopathy [82].
Even though there have been preclinical and clinical studies investigating the potential of MSC transplantation for cardiac diseases, to our knowledge, no clinical studies have yet tracked MSC migration with noninvasive imaging. Moreover, clinical trials of radiopharmaceutical cell tracking remain restricted to PB-MNCs and BM-MNCs.
Nevertheless, it is still unclear if more intense myocardial homing is important to improve the outcome of cell therapies for cardiac diseases. Different groups have suggested that, instead of differentiation into cardiac cells, the mechanisms of stem cell therapies may be at least partially due to interactions between injected and host cells, such as the secretion of trophic factors [98]. For example, BM-MSCs may assume distinctive phenotypes after receiving stimuli from proinflammatory cytokines or when submitted to a hypoxic milieu in vitro [98].
As previously mentioned, intravenously injected stem cells may suffer pulmonary entrapment [99]. The lungs may represent an obstacle for cell migration [99] but might also be essential for triggering stem cell responses before their homing to the heart. Lee et al. [100] reported increased production of the tumor necrosis factor-inducible gene 6 protein (TSG-6) by BM-MSCs entrapped in the lungs after intravenous injection in mice following acute myocardial infarction. Their report suggested that BM-MSCs were stimulated in the lungs to produce TSG-6, which modulated the myocardial inflammatory response.
Conclusion
Methods for cell tracking with radioisotopes are feasible and efficient, and different studies have used them to monitor migration in cell therapies for cardiac diseases. These techniques provide validated quantification of cell retention in different organs and of the dynamics of cell distribution in the whole body. However, additional reports are needed to increase knowledge of the mechanisms responsible for cell migration and homing and of their relationship with possible structural and functional outcomes of cell transplantation for cardiac diseases.
The effect of electron backscatter and charge build up in media on beam current transformer signal for ultra-high dose rate (FLASH) electron beam monitoring
Objective. Beam current transformers (BCT) are promising detectors for real-time beam monitoring in ultra-high dose rate (UHDR) electron radiotherapy. However, previous studies have reported a significant sensitivity of the BCT signal to changes in source-to-surface distance (SSD), field size, and phantom material which have until now been attributed to the fluctuating levels of electrons backscattered within the BCT. The purpose of this study is to evaluate this hypothesis, with the goal of understanding and mitigating the variations in BCT signal due to changes in irradiation conditions. Approach. Monte Carlo simulations and experimental measurements were conducted with a UHDR-capable intra-operative electron linear accelerator to analyze the impact of backscattered electrons on BCT signal. The potential influence of charge accumulation in media as a mechanism affecting BCT signal perturbation was further investigated by examining the effects of phantom conductivity and electrical grounding. Finally, the effectiveness of Faraday shielding to mitigate BCT signal variations is evaluated. Main Results. Monte Carlo simulations indicated that the fraction of electrons backscattered in water and on the collimator plastic at 6 and 9 MeV is lower than 1%, suggesting that backscattered electrons alone cannot account for the observed BCT signal variations. However, our experimental measurements confirmed previous findings of BCT response variation up to 15% for different field diameters. A significant impact of phantom type on BCT response was also observed, with variations in BCT signal as high as 14.1% when comparing measurements in water and solid water. The introduction of a Faraday shield to our applicators effectively mitigated the dependencies of BCT signal on SSD, field size, and phantom material. Significance. 
Our results indicate that variations in BCT signal as a function of SSD, field size, and phantom material are likely driven by an electric field originating in dielectric materials exposed to the UHDR electron beam. Strategies such as Faraday shielding were shown to effectively prevent these electric fields from affecting BCT signal, enabling reliable BCT-based electron UHDR beam monitoring.
Introduction
Radiotherapy has long been a cornerstone in the treatment of various malignancies, offering a non-invasive approach to target and eradicate tumor cells. Over the years, advancements in technology and techniques have sought to maximize the therapeutic ratio by enhancing tumor control while minimizing damage to surrounding normal tissues. One of the most recent and potentially transformative advancements in this direction is the discovery of the FLASH effect. The FLASH effect refers to the observation that ultra-high dose rate (UHDR) irradiation, delivered at rates above 40 Gy/s, can achieve equivalent tumor control while substantially reducing normal tissue toxicity (Favaudon et al 2014, Bourhis et al 2019, Esplen et al 2020). This phenomenon has been observed across various pre-clinical models, including invertebrates, rodents, and larger mammals (Schüler et al 2017, Vozenin et al 2019). The underlying mechanisms for the FLASH effect are still under investigation, but hypotheses include differential oxygen consumption (Weiss et al 1974, Montay-Gruel et al 2019, Adrian et al 2020), enhanced DNA repair in normal tissues (Liew et al 2021, Friedl et al 2022), and alterations in the immune response (Jin et al 2020, Bertho et al 2023). While the FLASH effect has been observed for photons, electrons and protons (Hughes and Parsons 2020, Zhang et al 2020, Kim et al 2021, Montay-Gruel et al 2022), most research evidence stems from MeV electron beams (Schüler et al 2022). This is due to the wide range of machines capable of generating UHDR electron beams, including dedicated accelerators, converted conventional linear accelerators and intraoperative radiotherapy machines (Vozenin et al 2019, Moeckli et al 2021). Electron UHDR radiotherapy consequently represents the current reference for clinical transfer of FLASH radiotherapy in preclinical and clinical settings (Bourhis et al 2019, Schüler et al 2022, Vojnovic et al 2023).
A significant challenge in the translation of UHDR radiotherapy is our currently limited ability to accurately measure key irradiation parameters, such as dose and dose rate, in UHDR radiation beams using standard radiation detectors (Esplen et al 2020, Romano et al 2022, Zou et al 2023). Similarly, real-time monitoring and control of a UHDR beam output present a substantial challenge, as the monitor ionization chambers used in conventional linac heads fall short in UHDR beamlines due to significant saturation and ion recombination effects (Di Martino et al 2020, Ashraf et al 2022). Novel strategies and detectors have been investigated to enable real-time monitoring of UHDR beams (Konradsson et al 2020, Romano et al 2022), including Cherenkov imaging (Ashraf et al 2020), plastic and inorganic scintillators (Hart et al 2022, Poirier et al 2022), probe calorimeters (Bourgouin et al 2022) and beam current transformers (BCT) (Oesterle et al 2021). BCTs are especially interesting for electron beams as they provide a real-time monitoring solution without causing perturbations or experiencing saturation effects (Goncalves Jorge et al 2022). BCTs consist of a conducting winding wrapped around a toroidal ferromagnetic core, where a voltage proportional to the beam current in the central axis of the toroid is generated through electromagnetic induction. One notable advantage of BCTs over transmission chambers in UHDR beam monitoring is their capability to verify the beam's parameters, such as the number of pulses, pulse width, and pulse repetition frequency, while potentially being able to correlate the measured current or charge with the absorbed dose at a specific point downstream of the BCT (Oesterle et al 2021). For this reason, electron UHDR beam monitoring using BCTs is being adopted by several groups (Oesterle et al 2021, Bourgouin et al 2022, Goncalves Jorge et al 2022, Jain et al 2023, Liu et al 2023, No et al 2023).
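As a rough illustration of the induction principle described above, an ideal current transformer delivers a secondary voltage proportional to the beam current: V = I_beam × R_burden / N. The turn count and burden resistance below are illustrative assumptions for a minimal sketch, not specifications of the Bergoz device used in this work.

```python
# Ideal current-transformer model: the beam threading the toroid acts as a
# one-turn primary; the N-turn secondary winding drives a burden resistor.
def bct_output_voltage(i_beam, n_turns, r_burden):
    """Secondary voltage of an ideal BCT: V = I_beam * R_burden / N."""
    return i_beam * r_burden / n_turns

# Illustrative values (NOT Bergoz ACCT specifications):
# a 15 mA beam pulse, 50-turn winding, 50-ohm burden.
print(bct_output_voltage(15e-3, 50, 50.0))  # -> 0.015 (V)
```

The proportionality between beam current and output voltage is what allows the integrated BCT signal per pulse to serve as a surrogate for the delivered charge.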
Despite the potential of BCTs for beam current monitoring, previous work has pointed out their high sensitivity to variable irradiation conditions as a potential limitation. Specifically, factors such as source-to-surface distance (SSD), phantom material and collimator size have been observed to influence the BCT readings by up to 12% (Liu et al 2023), potentially compromising their usability for beam output monitoring. To date, backscattered radiation has been suggested as the main contributor to these discrepancies, as backscattered electrons traveling back into the toroid would reduce the net current measured by the BCT. However, previous work has also reported a variation of less than 3% in electron backscatter generated by square fields of 0.5 cm² and 40 cm² (Verhaegen et al 2000), indicating that backscatter alone probably cannot explain the effect of variable irradiation parameters on BCT signal. Similarly, Marinelli et al (2023) reported disparities in pulse shapes when comparing data obtained from a BCT with that acquired using a FLASH diamond detector on an ElectronFlash linac (SIT S.p.A., Italy), where BCTs are positioned within the linac head at distances of tens of centimeters from the irradiated surface. These findings lend further support to the idea that backscatter may not be the primary factor contributing to the sensitivity of BCT signal to variations in irradiation conditions.
The primary objective of this study is to ascertain whether backscattered radiation is genuinely responsible for the observed fluctuations in BCT signal associated with setup variations, using Monte Carlo simulations and experimental measurements. Subsequently, this work aims to explore an alternative hypothesis concerning transient charge build-up in media and its potential influence on the BCT readings. Through this investigation, we intend to provide a comprehensive understanding of the factors affecting BCT readings and offer insights into optimizing their application for UHDR electron beam monitoring.
Monte Carlo simulations
Monte Carlo simulations were performed using the EGSnrc framework with the user-code backscatter_clrp (Ali and Rogers 2008a, 2008b, 2008c). This user-code is specifically optimized for calculating the backscatter coefficient resulting from a monoenergetic pencil beam of charged particles incident at a defined angle upon a target material. The backscatter coefficient η, in this context, refers to the probability of an incident particle scattering back into the hemisphere above the designated target. The purpose of utilizing this user-code was to validate and investigate the behavior of electrons at various energy levels when interacting with different target materials. The coefficient η represents the worst-case scenario of the perturbation backscattered electrons could induce on the BCT signal for each configuration, as scatter in air as well as the limited aperture of the BCT would reduce the ratio of backscattered to primary electrons traveling back through the toroid.
Electron beams with energies ranging from 50 keV to 9 MeV were employed, oriented perpendicularly to the target surface. The number of histories for each simulation was chosen to achieve a statistical uncertainty of less than 0.5%, resulting in a range of 1 000 000 to 50 000 000 histories depending on the energy. The target thickness for all simulations was consistently set to 5 cm, and the cross-section data used were those provided with EGSnrc's version 4 installation. Default Monte Carlo transport parameters were utilized, incorporating all low-energy physics capabilities available within EGSnrc. Table 1 reports relevant simulation parameters as recommended by the American Association of Physicists in Medicine (AAPM) Task Group 268 on the reporting of Monte Carlo radiation transport studies (Sechopoulos et al 2018).
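The link between the target statistical uncertainty and the number of histories can be checked with simple binomial statistics. This sketch assumes the backscatter coefficient is estimated as a binomial proportion (an assumption on our part, not a statement of how backscatter_clrp scores its output); under that assumption it reproduces the 10⁶ to 5×10⁷ range of histories quoted above, with low-η (high-energy) beams requiring the most histories.

```python
import math

def histories_for_uncertainty(eta, rel_unc):
    """Number of histories N such that a binomial estimate of the backscatter
    coefficient eta has relative standard uncertainty below rel_unc:
    sigma/eta = sqrt((1 - eta) / (N * eta)) < rel_unc."""
    return math.ceil((1.0 - eta) / (eta * rel_unc**2))

# Low-energy beams (eta ~ 5.5%) need roughly 7e5 histories for 0.5% relative
# uncertainty; high-energy beams (eta ~ 0.1%) need tens of millions.
print(histories_for_uncertainty(0.055, 0.005))  # roughly 6.9e5
print(histories_for_uncertainty(0.001, 0.005))  # roughly 4.0e7
```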
Irradiation device, beam parameters and data acquisition
The Mobetron (IntraOp, CA), a mobile linear accelerator designed for intraoperative radiation therapy with FLASH capability through its research console, was used to produce 6 MeV and 9 MeV UHDR electron beams (Moeckli et al 2021). With this version of the console, the Mobetron can generate UHDR beams with pulse widths ranging from 1.0 to 3.8 μs at a pulse repetition frequency (PRF) between 5 and 90 Hz. As monitor chambers are not reliable for beam monitoring in UHDR conditions, the control system was modified by the vendor to enable the prospective determination of the number of pulses for the solid-state modulator and electron gun. During the irradiation, the control system oversees the synchronization of each pulse to guarantee consistency across various pulse widths, while also logging each pulse administered. Reproducibility of the beam output in this setting was shown to be within 1% for both energies (Moeckli et al 2021).
A beam current transformer (BCT, model ACCT-S-082-H from Bergoz, Fr) provided by IntraOp with its own differential amplifier was used in this work. This model is either the same or very similar to those used by other groups investigating the use of BCTs for UHDR electron beam monitoring (Oesterle et al 2021, Jain et al 2023, Liu et al 2023, Marinelli et al 2023). It has a rise time of 108 ns, a bandwidth of 3.075 MHz, an inner diameter of 4.1 cm and a signal droop of −0.66%/ms. Our BCT was positioned at the exit of the primary collimator and held in place using custom 3D-printed applicators. These applicators, described in more detail in the next section, were made of polylactic acid (PLA) in a tube-like geometry with a 1 cm wall thickness. At the distal end of the applicator, the electron beam can be shaped by a 4 cm thick collimator made of Delrin and provided by IntraOp, with apertures ranging from 2.5 to 6 cm in diameter. This setup, refined from the one used by Oesterle et al (2021), was selected to ensure a reproducible yet removable installation of the BCT on the linac. Permanent modification of the linac head to accommodate BCTs, as used in recent versions of the modified Mobetron (Jain et al 2023, Liu et al 2023), was not possible for our machine as it is regularly used clinically in the intraoperative setting. A digital oscilloscope (DT5751, Caen, It) was used to measure the voltage out of the differential amplifier provided with the BCT, from which pulses were automatically detected by the CAENscope software using a fixed threshold of 0.05 V to trigger pulse recording. The signal from 2.75 μs before the trigger to 10 μs after the trigger was recorded for each pulse, for a total recording length of 12.75 μs per pulse. The readings were then processed using a custom Matlab script to derive the total BCT signal per pulse, defined as the integral of the BCT signal minus the average signal during the first 2 μs (i.e. the baseline signal) throughout the whole pulse recording.
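The per-pulse processing described above (baseline subtraction over the first 2 μs of the recording, then integration over the full 12.75 μs record) can be sketched as follows. The original analysis used a custom Matlab script, so this Python version and its synthetic square-pulse trace are illustrative only.

```python
import numpy as np

def total_pulse_signal(trace, dt, baseline_window=2e-6):
    """Integrated BCT signal per pulse: subtract the mean of the pre-trigger
    baseline (first `baseline_window` seconds of the record) and integrate
    the baseline-corrected trace over the whole recording."""
    n_base = int(baseline_window / dt)
    baseline = trace[:n_base].mean()
    return np.sum(trace - baseline) * dt

# Synthetic example: 0.01 V offset baseline, then a 2.4 us, 1 V square pulse
# starting 3 us into a 12.75 us record sampled every 10 ns.
dt = 1e-8
t = np.arange(0, 12.75e-6, dt)
trace = np.full_like(t, 0.01)
trace[(t >= 3e-6) & (t < 5.4e-6)] += 1.0
print(total_pulse_signal(trace, dt))  # ~2.4e-6 V*s (pulse area only)
```

Baseline subtraction removes amplifier offset and slow drift, so the integral reflects only the charge induced by the pulse itself.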
Irradiation conditions
Two custom 3D-printed applicators were used in this work, as depicted in figure 1. Both applicators were designed to hold and center the BCT around the electron beam at a distance of 2.5 cm out of the linac head. The test applicator had a total length of 6.32 cm and no collimator holder in order to enable SSD measurements as short as 25 cm. Our clinical applicator, which we previously optimized to deliver a dose of 3 Gy per pulse across field sizes of 2-4 cm (Lalonde et al 2023), had a length of 15.0 cm (SSD = 33.3 cm) and was used to assess the impact of collimator size on BCT signal. UHDR electron beams of 6 and 9 MeV were used in this work, with a pulse length of 2.4 μs and a PRF of 60 Hz for all irradiations. Three irradiations of five pulses were delivered for each condition listed below.
Effect of SSD
The effect of SSD on BCT signal was assessed by performing measurements with the BCT held by the test applicator at SSDs between 25 and 70 cm (BCT to surface distance of 6.5 cm to 51.5 cm), using slabs of Solid Water® (Gammex®, Middleton, WI) to define the surface.Measurements were done at 6 and 9 MeV.
Effect of field size
The effect of field size on BCT signal was evaluated by performing irradiations in air using the clinical applicator at 6 and 9 MeV, using Delrin collimators with apertures of 2.5 cm, 4 cm and 6 cm to shape the beam.
Effect of medium
The influence of the phantom material on the BCT response was assessed by comparing the signal captured when irradiating solid water and liquid water in a 68.0 cm × 40.7 cm × 35.0 cm water tank. Measurements were done at SSDs ranging between 30 and 50 cm, using the test applicator to hold the BCT. Finally, BCT measurements were also acquired for the irradiation of a 5 cm solid water slab grounded through an aluminium foil and a conductive wire connected to a grounded Faraday cage protecting the electronics of the linac, as shown in figure 2. Similarly, a solid water phantom with a plane-parallel Roos chamber (PTW, Freiburg, Germany) inserted at 1.6 cm depth was irradiated with the chamber both connected and unconnected to a powered Cardinal Health electrometer using biases of −300 V, 0 V and 300 V. BCT signal was recorded for all conditions, using an SSD of 25 cm and a beam of 9 MeV for both the irradiation of the grounded solid water and that of the solid water with a Roos chamber.
Mitigation of signal perturbation with a Faraday shield
The use of a Faraday shield was tested as a mitigation strategy for BCT sensitivity to setup and irradiation parameters. This was achieved by placing an aluminium foil directly at the exit of the test applicator, which was grounded to a Faraday cage on the linac head using a conductive wire with a clip at one end, as shown in figure 3(a). For the clinical applicator, the design was slightly adjusted to hold an aluminium foil at a distance of 1.3 cm below the BCT and above the collimator holder, as shown in figure 3(b). All other characteristics of the applicator (material, length, diameter) were kept constant. Measurements for SSD, field size and material dependence were repeated to assess the impact of Faraday shielding on BCT signal variations. To achieve this, irradiations were duplicated with the aluminium foil installed on the applicators, either grounded or ungrounded. This was done to specifically assess the effect of Faraday shielding on the BCT signal, without interference from the presence or absence of the aluminium foil itself.
Monte Carlo simulations
Figure 4(a) presents the backscattered fraction for monoenergetic electron beams directed towards a water phantom, as calculated by the backscatter_clrp EGSnrc user-code. As expected, low-energy electrons yield a higher fraction of backscattered electrons, with a backscattered fraction around 5.5% for energies around 50 keV. For the nominal energies considered in this work (6 and 9 MeV), backscattered fractions on water are below 1%.
Effect of SSD
Figure 5 shows the total signal captured by the BCT as a function of SSD for 6 and 9 MeV UHDR beams directed towards a solid water phantom. As observed in previous work (Liu et al 2023), the BCT signal is reduced at shorter SSD, despite constant beam output. The effect is slightly higher for the 6 MeV beam, with a variation of 26.5% between 25 and 70 cm SSD, compared to 12.6% at 9 MeV.
Effect of field size
The dependency of the BCT response as a function of the field size is presented in figure 6. The BCT signal is shown to increase with the collimator diameter, with variations as high as 15% between 2.5 cm and 6 cm field sizes for both energies. This behavior, opposite to the effect of jaw setting and collimation size for monitor ion chambers in conventional linacs (i.e. lower signal for smaller field size), was also observed in previous work (Liu et al 2023).
Effect of medium
The effect of phantom material on BCT signal was explored by comparing the responses from solid water and water phantoms using a 9 MeV beam across four SSDs, ranging from 30 to 50 cm, as depicted in figure 7. Unexpectedly, we observed a substantial difference in the BCT readings between the two phantoms, with discrepancies as high as 14.1% for 30 cm SSD. More specifically, the readings observed when using the water phantom are shown to be relatively unaffected by changes in SSD, maintaining consistent pulse shape and amplitude for all distances. In contrast, the solid water phantom induced a BCT signal with noticeable droop within each pulse, an effect shown to be reduced as the phantom was placed further and further from the BCT.
Water and solid water are also shown to influence the BCT signal in different ways on a pulse-by-pulse basis, as reported in figure 8. First, one can see that all pulses yield a lower signal in solid water compared to water, but the difference between the two phantoms is shown to increase after the first pulse. Indeed, the signal quickly decreases after the first pulse when delivered to solid water, while the response is substantially more stable when delivered to water. Figure 9 provides more insights on how phantom properties might affect BCT signal. First, the plot in (a) shows that connecting our solid water phantom to the ground noticeably affected the pulse shape and the total signal measured, despite identical incident beams. Indeed, the grounded solid water results in a pulse shape that is flatter, akin to that seen in water, though not exactly to the same extent. This effect is also observed when a Roos chamber is either connected to or disconnected from the electrometer, with the connected chamber inducing a flatter pulse shape. These observations indicate that grounding a solid water phantom, whether directly or via an ion chamber, substantially impacts the BCT reading.
Mitigation with a Faraday shield
Figure 10 presents the signal measured by the BCT for a 9 MeV UHDR beam at various SSDs using solid water, with the Faraday shield installed at the exit of the test applicator, either grounded or ungrounded. Results indicate that the Faraday shield removes essentially all SSD dependency when grounded, while the ungrounded shield generally reproduces what was reported in figure 5, where no aluminium foil was used.
Similarly, figure 11 compares BCT signal for the same beam delivered to water and solid water phantoms at an SSD of 30 cm when a grounded Faraday shield is installed on the applicator. Here again, the phantom material-specific responses and inter-pulse variability observed respectively in figures 8 and 9 are cancelled, resulting in virtually identical BCT responses for water and solid water when installing a grounded Faraday shield downstream of the BCT.
Finally, figure 12 illustrates the impact of field size on the BCT signal when a grounded Faraday shield is incorporated into our clinical applicator.With the shield in place and grounded, the BCT signal variation across the three field sizes remains below 0.5% for a given energy.In contrast, when the aluminum foil is ungrounded, the field size influences the BCT signal in a manner consistent with the observations made in figure 6.
Discussion
In this work, we investigated the effect of irradiation conditions on beam current transformer (BCT) signal, with the objective of enabling reliable electron UHDR real-time beam monitoring. While several groups have validated the linearity of BCT signal as a function of pulse length and pulse repetition frequency for UHDR electron beams using a constant setup (Oesterle et al 2021, Goncalves Jorge et al 2022, Jain et al 2023), recent work has reported significant sensitivity of BCT signal (up to 12%) to changes in SSD, field size and phantom material despite constant beam parameters (Liu et al 2023). Until now, the main hypothesis to explain this behavior has been backscattered electrons modifying the net current detected by the device when travelling backwards in the toroid (Liu et al 2023). In this study, Monte Carlo simulations and experimental measurements were performed to test this hypothesis and provide a thorough understanding of the factors affecting BCT signal, while exploring solutions to ensure the robustness of BCT-based UHDR electron beam monitoring.
Monte Carlo simulations performed in this work indicate that the backscatter fraction from a monoenergetic electron beam in water is less than 5% for most relevant energies and less than 1% for energies above 4 MeV. This suggests that even if all backscattered electrons were detected by the BCT at short SSD and none were detected at large SSD, the difference in BCT signal between the two conditions should not be more than 2%-3% for 6 MeV and 9 MeV beams. While the Mobetron is known to have a low-energy component within its spectrum due to the absence of a beam steering magnet (Iaccarino et al 2011), no realistic spectrum could induce variations as high as the ones reported in previous work (Liu et al 2023) and reproduced in this study (more than 20% difference between 25 and 70 cm SSD for 6 MeV). Similarly, as shown in figure 4, the plastic used to make the Mobetron UHDR collimators, Acetal, induces less than 1% backscatter at 9 MeV, indicating that field size should only have a minor impact on BCT signal through backscattered electrons.
With the objective of identifying alternative causes for BCT signal variation as a function of SSD and field size, we compared the signal obtained in solid water to that measured in water with otherwise identical beam parameters. Results obtained in these conditions, presented in figure 7, show a dramatic impact of the phantom type on the BCT response, even though water and solid water are considered equivalent in terms of MeV electron interaction properties (Tello et al 1995), including backscatter (Chow and Owrangi 2009). These results therefore suggest that a second phenomenon, unrelated to electron backscatter, substantially affected the BCT signal obtained using solid water.
Since one of the main differences between water and solid water is their electrical conductivity, we made the hypothesis that the effect could be related to the way charge dissipation occurs within the medium as the beam is delivered. Previous studies have demonstrated that some plastics (e.g. PMMA) can experience long-term charging following megavoltage electron irradiation at conventional dose rates (Rawlinson et al 1984, Thwaites 1984), and modern solid water phantoms have been optimized to circumvent this limitation (McEwen and Niven 2006). However, it is plausible that at very high beam currents, solid water still experiences a transient charge loading phenomenon, where the electrons do not diffuse fast enough to reestablish charge equilibrium within the phantom. In that scenario, the build-up of negative charge in the irradiated region of the solid water phantom would induce a growing electric field, a phenomenon already observed and documented for strongly insulating plastics under megavoltage electron irradiation (Watson and Dow 1968).
To test if the difference between BCT signal captured for water and solid water phantoms was effectively due to the different levels of charge dissipation in each medium, we irradiated a 5 cm solid water slab placed above a thin aluminium foil grounded to earth through a conductive wire.Results, shown in figure 9, demonstrate that this indeed modified the BCT signal, even though all other irradiation parameters were kept constant.The shape of the pulse captured by the BCT for the grounded solid water was closer to the one measured in water, but not perfectly alike, suggesting partial but incomplete resolution of the charge build-up effect through the grounded foil.
From this point, to evaluate if charge dissipation in the medium did affect BCT signal through an electric field generated by transient charge build-up during irradiation, we added Faraday shields to prevent such an electric field from reaching the central region of the toroid. This was done on our two applicators by placing a thin aluminium foil just below the BCT, to which a grounded electric wire could be connected. Then, measurements were performed at various SSDs, field sizes and phantom materials with the foil grounded and ungrounded, to isolate the impact of the Faraday shielding from the simple addition of an aluminium foil downstream of the BCT. Results, reported in figures 10-12, unambiguously demonstrate that Faraday shielding was effective in removing virtually all SSD and field size dependence for the BCT measurements. Similarly, the shielded BCT yielded the same signal for solid water and water for otherwise constant irradiation parameters, as one should expect.
Considering our observations, it can be confidently asserted that the strong dependency of BCT signal as a function of SSD, field size and phantom material is most likely caused by an electric field originating in dielectric materials exposed to the UHDR electron beam. Indeed, for solid water irradiations, the intensity of the electric field through an unshielded BCT would decrease as the SSD increases, which reflects the effect observed in previous work (Liu et al 2023) and reproduced in this study. Similarly, transient charge loading of the collimator material would be more important for smaller field sizes (a larger surface of Acetal being irradiated), which is again in line with what has been reported in this work. Although both effects could also be explained by electron backscatter in terms of their relation with SSD and field size, our Monte Carlo simulations indicate that the amplitude of the observed effects on BCT signal is too large to be explained by backscatter alone. Instead, by removing virtually all SSD, field size and phantom material dependency through grounded Faraday shielding, our investigation provides strong evidence to support the hypothesis of an electric field originating in the irradiated material. To the best of our knowledge, this is the first time this effect has been identified and reported in the context of FLASH radiotherapy.
While an in-depth analysis of the processes through which the electric field impacted the BCT signal was beyond the scope of this work, we seek to demonstrate that this hypothesis is plausible and in line with the various observations made in this work. The BCT signal, proportional to the magnetic field $\mathbf{B}$ within the toroid, is governed by the Ampère-Maxwell equation:

$$\nabla \times \mathbf{B} = \mu \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right)$$

where $\mathbf{J}$ is the current density in the aperture of the BCT and $\mathbf{E}$ is the electric field within the aperture of the BCT.
Based on this equation, the effect of an electric field on the BCT signal can either come directly from its variation in time, ∂E/∂t, or through its influence on the total current density within the BCT, J. Our observations suggest that both effects can be present, as schematically represented in figure 13. For the ∂E/∂t term, calculations provided in appendix A show that the growth of the electric field during irradiation could reduce the BCT signal by up to 10% at short SSD. At a few tens of centimeters between the BCT and the charged surface, our calculations however indicate that the rising electric field should only have a negligible impact on the BCT signal. Since we observed signal perturbations at BCT-to-surface distances up to 50 cm, it is likely that the electric field also impacts the BCT signal through the current density J. For this contribution, we make the hypothesis that when the electric field within the beam's path reaches a sufficiently high amplitude, the immediate recombination of ion pairs generated in air becomes hindered. This, in turn, would lead to the initiation of a drifting current that opposes the direction of the beam, similar to an ion chamber operated in the recombination region. This would explain why the signal perturbation increases with pulse width, as additional charges would induce a larger field strength, which would in turn increase the magnitude of this opposing current. While the estimation of the current associated with the incomplete recombination of charges released in air in the presence of an electric field is complex, calculations provided in appendix A show that the total charge of a single 2.4 μs pulse from the Mobetron is enough to induce an electric field of up to 50 kV m⁻¹ at 5 cm from the phantom's surface and up to 5 kV m⁻¹ at 25 cm. Considering that a Farmer chamber operated at 150 V induces an internal electric field of approximately 50 kV m⁻¹, it is likely that a non-negligible amount of the charges generated in air by the primary beam within the toroid do not recombine instantly, drift in the electric field, and induce a magnetic field opposing the one from the primary beam. Additional experiments supporting the presence of at least two different mechanisms affecting the BCT signal are presented in appendix B.
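The order-of-magnitude field estimate quoted from appendix A can be reproduced with Coulomb's law for a point charge. The per-pulse charge of ~36 nC below is an assumed value, chosen as consistent with a 2.4 μs pulse at roughly 15 mA peak current and with the field strengths quoted in the text; it is not a measured Mobetron specification.

```python
import math

EPS0 = 8.8541878128e-12        # vacuum permittivity, F/m
K = 1 / (4 * math.pi * EPS0)   # Coulomb constant, ~8.99e9 N m^2/C^2

def field_from_point_charge(q, distance):
    """Magnitude of E at `distance` (m) from a point charge q (C)."""
    return K * abs(q) / distance**2

# Assumed per-pulse charge (~36 nC) concentrated around d = 3 cm depth;
# the relevant distance is the BCT-to-surface distance z plus d.
q_pulse, d = 36e-9, 0.03
for z in (0.05, 0.25):  # 5 cm and 25 cm from the phantom surface
    e = field_from_point_charge(q_pulse, z + d)
    print(f"{z * 100:.0f} cm: {e / 1e3:.1f} kV/m")
```

With these assumptions the sketch yields roughly 50 kV m⁻¹ at 5 cm and a few kV m⁻¹ at 25 cm, the same orders of magnitude as the appendix A estimate.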
Observations reported in this work have important implications for the use of BCTs to monitor UHDR electron beams. First, our study showed that robust BCT calibration is probably not achievable if the effect of transient charge loading in media is not accounted for. Indeed, the BCT response presented in figure 7 shows that a dramatically different signal can be obtained for the same irradiation delivered to water and solid water. Therefore, a BCT calibrated against a reference detector in solid water is likely to report an inaccurate beam output if the beam is then used to irradiate a subject of different electrical conductivity (e.g. irradiation of cell cultures, organoids, a small animal or a human subject) if this effect is overlooked, as the electric field generated during calibration and beam monitoring might be drastically different. Similarly, using a BCT to correct beam output fluctuations for the cross-calibration of an ion chamber and a passive detector (e.g. radiochromic film, alanine, etc.) in solid water should be discouraged if a Faraday shield is not used, as our results indicate that the presence of a connected ion chamber can reduce the BCT signal perturbation caused by solid water. Finally, for BCTs with a wide enough dynamic range to cover conventional and UHDR irradiations (Lahaye et al 2022, Vojnovic et al 2023), outcome comparisons between the two regimes based on the BCT signal would likely be biased if the effect of electric fields is not taken into account.
Based on our observations, strategies to prevent electric fields from affecting the BCT signal are warranted to enable the translation of BCT-based electron UHDR beam monitoring. Although different designs might yield BCT variations of a lower amplitude than what was reported in this work (e.g. with the BCT incorporated into the linac head, further from the collimator and phantom's surface), our general conclusions are expected to remain relevant in these setups, provided that a conductive and grounded window serving a different purpose is not already placed downstream of the BCT. According to the drawings shown in Liu et al (2023) and Di Martino et al (2023), at least two UHDR electron linac models are offered with integrated BCTs placed downstream of any grounded conductive window. While effective, the aluminium foil used in this work was primarily selected for its accessibility and simplicity of integration and might not be the optimal approach in a beam monitoring setting. Since the whole inner aperture of the toroid needs to be covered by the shield, interactions with the electron beam appear to be inevitable. For this reason, different designs might be more appropriate to limit the effect of the shield on the electron beam's dosimetric properties. Optimization of the position, thickness and composition of the Faraday shield was beyond the scope of this work but warrants future investigation.
Conclusion
In conclusion, this study has explored the relationship between beam current transformer (BCT) signal variations and irradiation parameters in ultra-high dose rate (UHDR) electron beam monitoring, challenging previous assumptions and introducing new perspectives. Our investigations, combining Monte Carlo simulations and experimental measurements, have demonstrated that the impact of backscattered electrons on BCT signal is less significant than previously thought. Instead, our results unveiled the substantial influence of the electric field generated by the transient charging of plastic materials under UHDR electron irradiation. The introduction of Faraday shielding was shown to be a promising solution to mitigate the discrepancies in BCT signal across varying conditions and phantom materials. Future research should delve deeper into optimizing Faraday shielding materials and configurations, ensuring the robustness and consistency of BCT signal in clinical scenarios.
Appendix A
This appendix estimates the potential influence on the BCT signal of an electric field emanating from a phantom undergoing charge deposition. As stated in the discussion, the BCT signal is proportional to the magnetic field B within the toroid, which is defined by the Maxwell-Ampère equation:

∇ × B = μ (J + ε₀ ∂E/∂t),   (A1)

where J is the current density in the aperture of the BCT, E is the electric field within the aperture of the BCT and μ is the permeability of the toroid. In its integral form, equation (A1) becomes:

∮_{∂Σ} B · dl = μ ∬_Σ (J + ε₀ ∂E/∂t) · dA,   (A2)

where Σ is any surface with a closed boundary ∂Σ. In our case, Σ is the internal surface of the BCT, while ∂Σ is the closed boundary of the BCT aperture. First, let us assume that only the primary beam contributes to J (no secondary charges creating an opposing current). In this case, equation (A2) becomes

∮_{∂Σ} B · dl = μ (I_beam + ε₀ d/dt ∬_Σ E · dA),   (A3)

where I_beam is the primary beam's current defined by the number of charges per unit of time. To estimate the influence of a variable electric field on the BCT signal, let us approximate that all charges from the primary beam are accumulated in a uniformly charged sphere, centered at depth d below the surface. While this approximation is chosen to simplify the calculations, similar results are expected for most realistic deposition patterns at larger distances. In that scenario, the electric field along the radial axis r̂ of the sphere as a function of time is given by:

E(z, t) = Q(t) / (4πε₀ (z + d)²) r̂,   (A4)

where z is the distance between the surface and the BCT and Q(t) is the charge of the sphere at time t. For the sake of simplicity, let us approximate that the electric field is constant and perpendicular to the aperture of our BCT of radius R. In these conditions, the integral of the electric field over Σ becomes simply:

∬_Σ E · dA ≈ πR² E(z, t).   (A5)

Combining equations (A4) and (A5) gives:

∬_Σ E · dA ≈ Q(t) R² / (4ε₀ (z + d)²).   (A6)

Neglecting charge dissipation during the irradiation, ∂Q/∂t = I_beam, which then gives:

ε₀ d/dt ∬_Σ E · dA ≈ I_beam R² / (4 (z + d)²).   (A7)

Finally, incorporating (A7) into (A3) yields:

∮_{∂Σ} B · dl ≈ μ I_beam (1 + R² / (4 (z + d)²)).   (A8)

Let α be the ratio of the contribution of the variable electric field and the primary beam on B.
Considering the simplifications made in this analysis (negligible instantaneous charge dissipation, uniform and perpendicular field lines within the BCT), α represents the upper limit (worst-case scenario) of the BCT perturbation that could be induced by the varying electric field alone. Based on this definition and equation (A8), α = R² / (4 (z + d)²). Using R = 4.1 cm (the inner radius of our BCT) and d = 3 cm (the average R50 for 6 and 9 MeV electrons), figure A1 shows the value of α as a function of the distance between the BCT and the phantom's surface. As pointed out in the discussion, this calculation suggests that the variable electric field can represent around 10% of the primary beam's impact on the BCT signal at short BCT-to-surface distances, while becoming negligible beyond 30 cm.
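As a numerical sanity check, the stated behaviour of α can be reproduced with the values given in the text, assuming the worst-case ratio derived above reduces to α = R²/(4(z + d)²):

```python
# Worst-case fraction of the BCT signal attributable to the variable electric
# field (displacement-current term) relative to the primary beam current.
# All lengths in metres; values taken from the text.
R = 0.041  # inner radius of the BCT (m)
d = 0.03   # depth of the charged sphere below the surface (m), ~R50 for 6-9 MeV

def alpha(z):
    """Upper-limit ratio alpha = R^2 / (4 (z + d)^2) for a BCT at height z."""
    return R**2 / (4.0 * (z + d)**2)

print(f"alpha at z = 3 cm:  {alpha(0.03):.3f}")   # on the order of 10 %
print(f"alpha at z = 30 cm: {alpha(0.30):.4f}")   # well below 1 %, negligible
```

Consistent with the text, the perturbation is around 10% at short BCT-to-surface distances and negligible beyond 30 cm.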
The second term that can influence the magnetic field B is the net current density J travelling within the toroid. Equation (A4) can also be used to estimate the electric field amplitude as a function of the charge stored in the sphere considered in this example. According to the vendor, the sensitivity of our BCT and readout system is 33.33 V/A. Based on this and the signal measured by the shielded BCT in figures 10 and 12 (≈1.23 μV·s/pulse), the total charge for a single pulse in the setting used in this work is around 37 nC for our 9 MeV beam. Assuming that all electrons remain temporarily stored within the charged sphere considered above, equation (A4) estimates the electric field strength as a function of surface distance as shown in figure A2.
As mentioned in the discussion, our results suggest that the electric field strength induced by a single pulse can reach up to 50 kV m⁻¹ at 5 cm from the surface while remaining around 5 kV m⁻¹ at a distance of 25 cm. In reality, the field must be lower at the beginning of the first pulse and might be larger for subsequent pulses, depending on the charge dissipation rate and pulse frequency. The fact that the BCT signal perturbation was shown to be larger after the first pulse supports this hypothesis, while suggesting that some level of inter-pulse charge accumulation can happen in the solid water phantom.
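These field strengths follow directly from the Coulomb field of equation (A4) with Q = 37 nC and d = 3 cm; a quick numerical check:

```python
import math

# Electric field of a point-like charge Q at distance r = z + d from the BCT,
# E = Q / (4 pi eps0 r^2), using the charge per pulse estimated in the text.
EPS0 = 8.854e-12   # vacuum permittivity (F/m)
Q = 37e-9          # charge per pulse (C)
d = 0.03           # sphere depth below the surface (m)

def field(z):
    """Field strength (V/m) at a BCT placed z metres above the surface."""
    r = z + d
    return Q / (4 * math.pi * EPS0 * r**2)

print(f"E at z = 5 cm:  {field(0.05)/1e3:.0f} kV/m")   # ~50 kV/m
print(f"E at z = 25 cm: {field(0.25)/1e3:.1f} kV/m")   # ~4-5 kV/m
```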
Appendix B
In this appendix, we provide additional measurements supporting the hypothesis that at least two different mechanisms drive the influence of the electric field on the BCT signal.
When all electrons are stopped in the middle plane of a solid water phantom, the electric field should be relatively symmetric above and below the phantom. Based on this assumption, if only the variable electric field ∂E/∂t affects the BCT signal, the same perturbation should be obtained at equal distances above and below the phantom. In this context, signal perturbation means the difference between the BCT signal measured in a given setup and the one expected from the primary beam in those conditions. When the BCT is placed above the solid water phantom, this simply means the difference in signal with and without a solid water phantom. For the setup with the BCT placed below the solid water phantom, this means the absolute signal measured, as no primary beam is present in these conditions and only the variable electric field could influence the BCT.
To test this hypothesis, we irradiated an 8 cm thick solid water phantom with a 9 MeV beam at an SSD of 25 cm, using the same irradiation parameters as used in this work (5 pulses of 2.4 μs). This thickness was chosen so that most of the charges were stopped close to the middle plane of the phantom, around R50 (Van Dyk and MacDonald 1972). We then placed our BCT below the solid water phantom at distances from the lower surface of the phantom ranging between 5.5 and 20.5 cm by progressively removing slabs of Styrofoam below the BCT, as shown in figure B1(a). The BCT orientation with respect to the beam was kept the same as for the setup above the water phantom, to make sure that the electric field would generate a positive pulse signal detectable by our pulse detection algorithm. Figure B1(b) shows that we indeed measured a signal in the BCT when it was placed below the water phantom, despite the fact that the phantom's thickness was twice the range of the incoming electrons. It is also interesting to observe that in these conditions the signal was stable across all pulses, suggesting that ∂E/∂t is relatively constant from one pulse to another. This contrasts with the effect observed with the BCT above the phantom, where the perturbation was shown to increase after the first pulse.
In figure B2, the signal measured below the water phantom is reported as a function of the distance between the BCT and the lower surface of the phantom. In the same plot, we report the missing signal for the setup with the BCT above, i.e. the total BCT signal obtained with a grounded Faraday shield minus the one obtained without a shield, for different BCT-to-surface distances. The results indicate that the amplitude of the perturbation is systematically higher when the BCT is placed above the phantom compared to what is obtained when the BCT is placed below, for the same BCT-to-surface distance.
Assuming that the growing electric field has a similar amplitude above and below the phantom for the same BCT-to-surface distance, our results support the hypothesis that at least one other mechanism drives the influence of the electric field on the BCT signal when it is placed above the phantom, around the primary beam. Even in the case where the charge deposition was biased towards one side of the phantom, our general conclusions would hold, as a distance shift of more than 20 cm would be required to observe the same signal perturbation above and below the phantom. Finally, despite the observations provided in this appendix that support our hypothesis, it is important to note that formal validation of the proposed mechanisms would only be achieved by a substantially more advanced set of experiments, which are beyond the scope of this study.
Figure 1. Custom 3D-printed applicators used in this work to accommodate the BCT: (a) the test applicator and (b) the clinical applicator holding a Delrin collimator. Dimensions are in millimeters.

Figure 2. Setup used for the irradiation of a 5 cm solid water slab grounded with an aluminium foil connected to a conductive wire reaching a grounded Faraday cage on the linac.

Figure 3. Integration of a Faraday shield in (a) the test applicator and (b) the clinical applicator.

Figure 4. Monte Carlo calculated fraction of incident electrons backscattered (a) on a water phantom as a function of energy and (b) on different materials for energies of 0.75 and 9 MeV.

Figure 5. Mean integral BCT signal per pulse as a function of SSD for beam energies of 6 MeV and 9 MeV, using the test applicator. The black bars represent one standard deviation for the repeated measurements.

Figure 6. Mean integral BCT signal relative to the 6 cm field size for beam energies of 6 and 9 MeV using the clinical applicator. The black bars represent one standard deviation for the repeated measurements.

Figure 7. Mean BCT signal per pulse for various source-to-surface distances (SSD) for water and solid water phantoms using a 9 MeV UHDR beam. Each plot also reports the ratio of the mean integrated signal in solid water to the one in water.

Figure 8. Mean BCT signal per pulse for the irradiation of (a) solid water and (b) water using a 9 MeV UHDR beam and a source-to-surface distance (SSD) of 30 cm. The integral BCT signal for scenarios (a) and (b) is presented in (c), where the black error bars show one standard deviation for the repeated measurements.

Figure 9. Mean BCT signal per pulse for the irradiation of (a) solid water, grounded and ungrounded, and (b) solid water with a Roos chamber either connected or unconnected to a powered electrometer, using a 9 MeV UHDR beam. The bias applied to the chamber did not impact the BCT signal (not shown).

Figure 10. Mean BCT signal per pulse for irradiations of a solid water phantom at various SSD using a Faraday shield (a) grounded and (b) ungrounded. The mean integral signal for scenarios (a) and (b) is presented in (c), where the black error bars show one standard deviation for the repeated measurements.

Figure 11. Mean BCT signal per pulse in water and solid water for three irradiations of five pulses each at (a) 6 MeV and (b) 9 MeV when using a grounded Faraday shield. The mean integral BCT signal per pulse for these irradiations is shown in (c), with the black error bars representing one standard deviation for the repeated measurements.

Figure 12. Mean integral BCT signal per pulse relative to the 6 cm field size with a grounded Faraday shield, using the clinical applicator with the Faraday shield either grounded or ungrounded. The black error bars show one standard deviation for the repeated measurements.

Figure 13. Suggested mechanisms to explain the effect of the electric field on the BCT signal.

Figure A1. Ratio between the magnetic field induced by the primary beam current and by the charge accumulating in a sphere centered 3 cm below the surface, as a function of the distance from the surface (cm).

Figure A2. Static electric field as a function of distance along the radial axis of a sphere uniformly charged with 37 nC and centered 3 cm below the surface.

Figure B1. (a) Setup used to measure the BCT signal as a function of the distance below the solid water phantom. (b) BCT signal per pulse measured at a BCT-to-surface distance of 5.5 cm below the 8 cm thick solid water phantom for an electron beam of 9 MeV.

Figure B2. Mean BCT signal perturbation per pulse as a function of BCT-to-surface distance for the BCT placed above and under an 8 cm thick solid water phantom using a 9 MeV UHDR electron beam. For the BCT position above the phantom, the signal perturbation represents the signal obtained using a Faraday shield minus the one obtained without a Faraday shield for a given distance. For the BCT placed below the water phantom, the signal perturbation is the integral signal measured per pulse.
Table 1. Summary of the Monte Carlo simulation parameters used in this work.
Word Representation Models for Morphologically Rich Languages in Neural Machine Translation
Out-of-vocabulary words present a great challenge for machine translation. Recently, various character-level compositional models were proposed to address this issue. In the current research, we incorporate the two most popular neural architectures, namely LSTM and CNN, into hard- and soft-attentional models of translation for character-level representation of the source. We propose semantic and morphological intrinsic evaluation of encoder-level representations. Our analysis of the learned representations reveals that the character-based LSTM seems to be better at capturing morphological aspects compared to the character-based CNN. We also show that the hard-attentional model provides better character-level representations compared to the vanilla one.
Introduction
Models of end-to-end machine translation based on neural networks have been shown to produce excellent translations, rivalling or surpassing traditional statistical machine translation systems (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015). A central challenge in neural MT is handling rare and uncommon words. Conventional neural MT models use a fixed, modest-size vocabulary, such that the identity of rare words is lost, which makes their translation exceedingly difficult. Accordingly, sentences containing rare words tend to be translated much more poorly than those containing only common words (Sutskever et al., 2014; Bahdanau et al., 2015). The rare word problem is particularly exacerbated when translating from morphologically rich languages, where the many morphological variants of words result in a huge vocabulary with a heavy-tailed distribution. For example, in Russian there are at least 70 words for dog, encoding case, gender, age, number, sentiment and other semantic connotations. Many of these words share a common lemma and contain regular morphological affixation; consequently, much of the information required for translation is present, but not in an accessible form for models of neural MT.
In this paper, we propose a solution to this problem by constructing word representations compositionally from smaller sub-word units, which occur more frequently than the words themselves. We show that these representations are effective in handling rare words, and increase the generalisation capabilities of neural MT beyond the vocabulary observed in the training set. We propose several neural architectures for compositional word representations, and systematically compare these methods integrated into a novel neural MT model.
More specifically, we make use of character sequences or morpheme sequences in building word representations. These sub-word units are combined using recurrent neural networks (RNNs), convolutional neural networks (CNNs), or a simple bag-of-units. This work was inspired by research into compositional word approaches proposed for language modelling (e.g., Botha and Blunsom (2014), Kim et al. (2016)); with a few notable exceptions (Ling et al., 2015b; Sennrich et al., 2015; Costa-jussà and Fonollosa, 2016), these approaches have not been applied to the more challenging problem of translation. We integrate these word representations into a novel neural MT model to build robust word representations for the source language.
Our novel neural MT model is based on the operation sequence model (OSM; Durrani et al. (2011), Feng and Cohn (2013)), which considers translation as a sequential decision process. The decisions involved in generating each target word are decomposed into separate translation and alignment factors, where each factor is modelled separately and conditioned on a rich history of recent translation decisions. Our OSM can be considered a form of attentional encoder-decoder (Bahdanau et al., 2015) with hard attention, in which each decision is contextualised by at most one source word, contrasting with the soft attention of Bahdanau et al. (2015).
Integrating the word models into our neural OSM, we provide, for the first time, a comprehensive and systematic evaluation of the resulting word representations when translating into English from several morphologically rich languages: Russian, Estonian, and Romanian. Our evaluation includes both intrinsic and extrinsic metrics, where we compare these approaches based on their translation performance as well as their ability to recover synonyms for rare words. We show that morpheme and character representations of words lead to much better held-out perplexity, although the improvement in translation BLEU scores is more modest. Intrinsic analysis shows that the recurrent encoder tends to capture more morphosyntactic information about words, whereas the convolutional network better encodes the lemma. Both these factors provide different strengths as part of a translation model, which might use lemmas to generalise over words sharing translations, and morphosyntax to guide reordering and contextualise subsequent translation decisions. These factors are also likely to be important in other language processing applications.
Related Work
Most neural models for NLP rely on words as their basic units, and consequently face the problem of how to handle tokens in the test set that are out-of-vocabulary (OOV), i.e., that did not appear in the training set (or are considered too rare in the training set to be worth including in the vocabulary). Often these words are assigned a special UNK token, which allows for application to any data; however, this comes at the expense of modelling accuracy, especially in structured problems like language modelling and translation, where the identity of the word is paramount in making the next decision.
One solution to the OOV problem is modelling sub-word units, building a model of a word from its composite morphemes. Luong et al. (2013) proposed a recursive combination of morphs using an affine transformation; however, this is unable to differentiate between the compositional and non-compositional cases. Botha and Blunsom (2014) aim to address this problem by forming word representations from the sum of each word's morpheme embeddings added to its word embedding. Morpheme-based methods rely on good morphological analysers; however, these are only available for a limited set of languages. Unsupervised analysers (Creutz and Lagus, 2007) are prone to segmentation errors, particularly on fusional or polysynthetic languages. In these settings, character-level word representations may be more appropriate. Several authors have proposed convolutional neural networks over character sequences, as part of models of part-of-speech tagging (Santos and Zadrozny, 2014), language models (Kim et al., 2015) and machine translation (Costa-jussà and Fonollosa, 2016). These models are able to capture not just orthographic similarity, but also some semantics. Another strand of research has looked at recurrent architectures, using long short-term memory units (Ling et al., 2015a; Ballesteros et al., 2015), which can capture long orthographic patterns in the character sequence, as well as non-compositionality.
All of the aforementioned models were shown to consistently outperform standard word-embedding approaches. However, there has been no systematic investigation of the various modelling architectures or comparison of characters versus morphemes as the atomic units of word composition. In our work we consider both morpheme and character levels and study 1) whether character-based approaches can outperform morpheme-based ones, and, importantly, 2) which linguistic lexical aspects are best encoded in each type of architecture, and their efficacy as part of a machine translation model when translating from morphologically rich languages.
Operation sequence model
The first contribution of this paper is a neural network variant of the operation sequence model (OSM) (Durrani et al., 2011; Feng and Cohn, 2013). In the OSM, translation is modelled as a sequential decision process. The words of the target sentence are generated one at a time in a left-to-right order, similar to the decoding strategy in traditional phrase-based SMT. The decisions involved in generating each target word are decomposed into a number of separate factors, where each factor is modelled separately and conditioned on a rich history of recent translation decisions.
In previous work (Durrani et al., 2011; Feng and Cohn, 2013), the sequence of operations is modelled as a Markov chain with a bounded history, where each translation decision is conditioned on a finite history of past decisions. Using deep neural architectures, we model the sequence of translation decisions as a non-Markovian chain, i.e. with unbounded history. Therefore, our approach is able to capture long-range dependencies which are commonplace in translation and missed by previous approaches.
More specifically, the operations are (i) generation of a target word, (ii) jumps over the source sentence to capture re-ordering (to allow different sentence ordering in the target vs. source language), (iii) aligning to NULL to capture gappy phrases, and (iv) finishing the translation process. The probability of a sequence of operations to generate a target translation t for a given source sentence s is

P(t, a|s) = ∏_{j=1}^{|t|+1} P(τ_j | τ_{<j}, t_{<j}, s) P(t_j | τ_{≤j}, t_{<j}, s),   (1)

where τ_j is a jump action moving over the source sentence (to align a target word to a source word or NULL) or finishing the translation process, τ_{|t|+1} = FINISH. It is worth noting that the sequence of operations for generating a target translation (in a left-to-right order) has a 1-to-1 correspondence to an alignment a, hence the use of P(t, a|s) on the left-hand side.
Our model generates the target sentence and the sequence of operations with a recurrent neural network (Figure 1). At each stage, the RNN state is a function of the previous state, the previously generated target word, and an aligned source word, computed with a single-layer perceptron which applies an affine transformation to the concatenated input vectors followed by a tanh activation function, where R^(t) ∈ ℝ^{V_T×E_T} and R^(s) ∈ ℝ^{V_S×E_S} are word embedding matrices, with V_S the size of the source vocabulary, V_T the size of the target vocabulary, and E_T and E_S the word embedding sizes for the target and source languages, respectively.
The model then generates the target word t_i and the index of the source word to be translated next, where affine performs an affine transformation of its input and F is the dimensionality of the feature vector Φ(·) representing the induced alignment structure (explained in the next paragraph). The matrix encoding of the source sentence, r^(s) ∈ ℝ^{(|s|+2)×E_S}, includes the embeddings of the source sentence words and of the NULL and FINISH actions.
The feature matrix Φ(|s|, i_{≤j}, t_{≤j}) ∈ ℝ^{(|s|+2)×F} captures the important aspects of the relation between a candidate position for the next alignment and the current alignment position; this is reminiscent of the features captured in the HMM alignment model. The feature vector in each row is composed of two parts: (i) the first part is a one-hot vector activating the proper feature depending on whether i_{j+1} − i_j is equal to {0, 1, ≥ 2, ≤ −1} or the action is NULL or FINISH, and (ii) the second part consists of two features, including i_{j+1} − i_j. Note that the neural OSM can be considered a hard attentional model, as opposed to the soft attentional neural translation model (Bahdanau et al., 2015). In their soft attentional model, a dynamic summary of the source sentence is used as context for each translation decision, formulated as a weighted average of the encodings of all source positions. In the hard attentional model this context comes from the encoding of a single fixed source position. This has the benefit of allowing external information to be included in the model, here the predicted alignments from high-quality word alignment tools, which have complementary strengths compared to neural network translation models.
Word Representation Models
Now we turn to the problem of learning word representations. As outlined above, when translating morphologically rich languages, treating word types as unique discrete atoms is highly naive and will compromise translation quality. For better accuracy, we need to characterise words by their sub-word units, in order to capture the lemma and morphological affixes, thereby allowing better generalisation between similar word forms.
In order to test this hypothesis, we consider both morpheme- and character-level encoding methods, which we compare to the baseline word embedding approach. For each type of sub-word encoder we learn two word representations: one estimated from the sub-word units and the word embedding itself. Then we run max pooling over both embeddings to obtain the word representation, r_w = max(m_w, e_w) (element-wise), where m_w is the embedding of word w and e_w is the sub-word encoding. The max pooling operation captures non-compositionality in the semantic meaning of a word relative to its sub-parts. We assume that the model would favour unit-based embeddings for rare words and word-based embeddings for more common ones.
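A minimal sketch of this combination, assuming the max pooling is taken element-wise over the two vectors (dimensions and values are illustrative):

```python
import numpy as np

# Element-wise max pooling of a word's lookup-table embedding m_w and its
# sub-word encoding e_w: each dimension keeps the stronger of the two signals,
# letting the model fall back on sub-word evidence for rare words.
rng = np.random.default_rng(0)
E = 8                        # embedding size (illustrative)
m_w = rng.normal(size=E)     # word embedding
e_w = rng.normal(size=E)     # encoding composed from characters/morphemes

r_w = np.maximum(m_w, e_w)   # r_w = max(m_w, e_w), applied per dimension

print(r_w.shape)             # (8,)
```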
Let U be the vocabulary of sub-word units, i.e., morphemes or characters, E_u be the dimensionality of unit embeddings, and M ∈ ℝ^{E_u×|U|} be the matrix of unit embeddings. Suppose that a word w from the source dictionary is made up of a sequence of units U_w := [u_1, ..., u_{|w|}], where |w| stands for the number of constituent units in the word. We combine the representations of sub-word units using an LSTM recurrent neural network (RNN), a convolutional neural network (CNN), or a simple bag-of-units (described below). The resulting word representations are then fed to our neural OSM in eqn (2) as the source word embeddings.
Bag of Sub-word Units
This method is inspired by Botha and Blunsom (2014), in which the embeddings of sub-word units are simply added together: e_w = Σ_{u∈U_w} m_u, where m_u is the embedding of sub-word unit u.
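A minimal sketch of the bag-of-units encoder, with a hypothetical morpheme segmentation and random embeddings:

```python
import numpy as np

# Bag-of-units encoding: the word vector is the sum of its unit embeddings,
# e_w = sum_{u in U_w} m_u. The segmentation and dimensions are illustrative.
rng = np.random.default_rng(1)
units = ["un", "translate", "able"]          # hypothetical segmentation of "untranslatable"
M = {u: rng.normal(size=6) for u in units}   # unit embedding table

e_w = np.sum([M[u] for u in units], axis=0)

print(e_w.shape)   # (6,)
```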
Bidirectional LSTM Encoder
The encoding of the word is formulated using a pair of LSTMs (denoted bi-LSTM), one operating left-to-right over the input sequence and another operating right-to-left, with hidden states h→_j and h←_j. The source word is then represented as a pair of hidden states, the left-most and right-most states of the two LSTMs. These are fed into a multilayer perceptron (MLP) with a single hidden layer and a tanh activation function to form the word representation, e_w = MLP(h→_{|U_w|}, h←_1).
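A minimal numpy sketch of this encoder, with illustrative sizes and randomly initialised (untrained) weights; the LSTM cell here is a standard formulation, not the authors' exact parameterisation:

```python
import numpy as np

# Bi-LSTM word encoder sketch: run one LSTM left-to-right and another
# right-to-left over the unit embeddings, then combine the two final hidden
# states with a tanh MLP to obtain the word representation e_w.
rng = np.random.default_rng(2)
E_u, H = 5, 7   # unit embedding size, LSTM hidden size (illustrative)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_lstm():
    return (rng.normal(scale=0.1, size=(4 * H, E_u)),   # input weights
            rng.normal(scale=0.1, size=(4 * H, H)),     # recurrent weights
            np.zeros(4 * H))                             # bias

def run_lstm(params, xs):
    """Return the final hidden state after consuming the sequence xs."""
    W, U, b = params
    h, c = np.zeros(H), np.zeros(H)
    for x in xs:
        z = W @ x + U @ h + b
        i, f, o, g = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

word = [rng.normal(size=E_u) for _ in range(4)]   # 4 unit embeddings
h_fwd = run_lstm(make_lstm(), word)               # left-to-right pass
h_bwd = run_lstm(make_lstm(), word[::-1])         # right-to-left pass

W1 = rng.normal(scale=0.1, size=(H, 2 * H))       # single-hidden-layer MLP
e_w = np.tanh(W1 @ np.concatenate([h_fwd, h_bwd]))

print(e_w.shape)   # (7,)
```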
Convolutional Encoder
The last word encoder we consider is a convolutional neural network, inspired by a similar approach in language modelling (Kim et al., 2016). Let U_w ∈ ℝ^{E_u×|U_w|} denote the unit-level representation of w, where the jth column corresponds to the unit embedding of u_j. The idea of the unit-level CNN is to apply a kernel Q_l ∈ ℝ^{E_u×k_l} of width k_l to U_w to obtain a feature map f_l ∈ ℝ^{|U_w|−k_l+1}. More formally, the jth element of the feature map is f_l(j) = tanh(⟨U_{w,j}, Q_l⟩ + b), where U_{w,j} ∈ ℝ^{E_u×k_l} is a slice of U_w which spans the representations of the jth unit and its preceding k_l − 1 units, and ⟨A, B⟩ = Σ_{i,j} A_{ij}B_{ij} = Tr(AB^T) denotes the Frobenius inner product. For example, suppose that the input has size [4 × 9], and a kernel has size [4 × 3] with a sliding step of 1. Then we obtain a [1 × 7] feature map. This process implements a character n-gram, where n is equal to the width of the filter. The word representation is then derived by max pooling over the feature maps of the kernels: ∀l : r_w(l) = max_j f_l(j). In order to capture interactions between the character n-grams obtained by the filters, a highway network (Srivastava et al., 2015) is applied after the max pooling layer: e_w = t ⊙ MLP(r_w) + (1 − t) ⊙ r_w, where t = MLP_σ(r_w) is a sigmoid gating function which modulates between a tanh MLP transformation of the input (left component) and preserving the input as is (right component).
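The worked example in the text (a [4 × 9] input convolved with a [4 × 3] kernel, stride 1, giving a [1 × 7] feature map) can be reproduced with a small numpy sketch (weights are random, for illustration only):

```python
import numpy as np

# Character-level CNN feature map: slide one [E_u x k] kernel over the
# [E_u x |U_w|] unit-embedding matrix, applying the Frobenius inner product
# plus bias and tanh at each position, then max-pool over positions.
rng = np.random.default_rng(3)
E_u, n_units, k = 4, 9, 3
U_w = rng.normal(size=(E_u, n_units))   # columns are unit embeddings
Q = rng.normal(size=(E_u, k))           # one convolutional kernel
b = 0.1

f = np.array([np.tanh(np.sum(U_w[:, j:j + k] * Q) + b)   # Frobenius product
              for j in range(n_units - k + 1)])
feature = f.max()   # max pooling over positions: one scalar per kernel

print(f.shape)      # (7,) -- the [1 x 7] feature map from the example
```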
Experiments
The Setup. We compare the different word representation models on three morphologically rich languages using both extrinsic and intrinsic evaluations. For extrinsic evaluation, we investigate their effects in translating to English from Estonian, Romanian, and Russian using our neural OSM. For intrinsic evaluation, we investigate how accurately the models recover semantically/syntactically related words for a set of given words.
Datasets. We use parallel bilingual data from Europarl for Estonian-English and Romanian-English (Koehn, 2005), and web-crawled parallel data for Russian-English (Antonova and Misyurev, 2011). For preprocessing, we tokenize, lower-case, and filter out sentences longer than 30 words. Furthermore, we apply a frequency threshold of 5, and replace all low-frequency words with a special UNK token. We split the corpora into three partitions: training (100K), development (10K), and test (10K); Table 1 provides the dataset statistics.
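The vocabulary thresholding described above can be sketched as follows (corpus and helper names are illustrative):

```python
from collections import Counter

# Preprocessing sketch: lower-case, build a vocabulary with a frequency
# threshold of 5, and replace low-frequency words with a special UNK token.
THRESHOLD = 5

def build_vocab(sentences):
    counts = Counter(w for s in sentences for w in s.lower().split())
    return {w for w, c in counts.items() if c >= THRESHOLD}

def apply_unk(sentence, vocab):
    return " ".join(w if w in vocab else "UNK" for w in sentence.lower().split())

corpus = ["the dog runs"] * 5 + ["a zebra sleeps"]   # toy corpus
vocab = build_vocab(corpus)

print(apply_unk("the zebra runs", vocab))   # the UNK runs
```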
Morfessor Training. We use Morfessor CAT-MAP (Creutz and Lagus, 2007) to perform the morphological analysis needed for the morph-based neural models. Morfessor does not rely on any linguistic knowledge; instead it relies on the minimum description length principle to construct a set of stems, affixes and paradigms that explains the data. Each word form is then represented as (prefix)*(stem)+(suffix)*.
We ran Morfessor on the entire initial datasets, i.e. before filtering out long sentences. The word perplexity is the only Morfessor parameter that has to be adjusted. The parameter depends on the vocabulary size: a larger vocabulary requires a higher perplexity value, while setting the perplexity threshold to a small value results in over-splitting. We experimented with various thresholds and tuned these to yield the most reasonable morpheme inventories. Table 1 presents the percentage of unknown words in the test set for each source language. For reconstruction we considered only words from the native alphabet. The recovery rate depends on the model. With characters, all the words could be easily rebuilt. In the case of the morpheme-based approach, the quality mainly depends on the Morfessor output and the level of word segmentation. In terms of morphemes, Estonian presents the highest reconstruction rate, and we therefore expect it to benefit the most from the morpheme-based models. Romanian, on the other hand, presents the lowest unknown-word rate, being the most morphologically simple of the three languages. Morfessor quality for Russian was the worst, so we expect that Russian should mainly benefit from character-based models.
Extrinsic Evaluation: MT
Training. We annotate the training sentence pairs with their sequence of operations to train the neural OSM model. We first run a word aligner to align each target word to a source word. We then read off the sequence of operations by scanning the target words in a left-to-right order. As a result, the training objective consists of maximising the joint probability of target words and their alignments (eqn 1), which is performed by stochastic gradient descent (SGD). Training stops when the likelihood objective on the development set starts decreasing.
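The left-to-right scan that reads operations off a word alignment can be sketched as below. This is a hypothetical simplification: the operation names and the handling of NULL alignments are illustrative, and the actual OSM inventory is richer than what is shown here.

```python
# Hypothetical sketch of reading off an operation sequence from a word
# alignment: scan target words left to right, emit a jump to the aligned
# source position (or a NULL alignment) before each generated word, then
# FINISH. Operation names are illustrative, not the paper's exact inventory.
def read_operations(target, alignment):
    """alignment[i] = source index of target word i, or None for NULL."""
    ops = []
    for word, src in zip(target, alignment):
        ops.append(("JUMP", src) if src is not None else ("ALIGN_NULL",))
        ops.append(("GENERATE", word))
    ops.append(("FINISH",))
    return ops

ops = read_operations(["das", "ist", "gut"], [0, 1, None])
print(ops[-1])   # ('FINISH',)
```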
For the re-ranker, we use the standard features generated by Moses as the underlying phrase-based MT system, plus two additional features coming from the neural MT model. The neural features are based on the generated alignment and the translation probabilities, which correspond to the first and second terms in eqn 1, respectively. We train the re-ranker using MERT (Och, 2003) with 100 restarts.
Translation Metrics. We use BLEU (Papineni et al., 2002) and METEOR9 (Denkowski and Lavie, 2014) to measure the translation quality against the reference. BLEU is purely based on the exact match of n-grams between the generated and reference translations, whereas METEOR also takes into account matches based on stems, synonyms, and paraphrases. This is particularly suitable for our morphology representation learning methods, since they may result in using the translation of paraphrases. We train the paraphrase table of METEOR on the entire initial bilingual corpora based on pivoting (Bannard and Callison-Burch, 2005).

7 We made use of fast_align in our experiments: https://github.com/clab/fast_align.
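The n-gram matching at the core of BLEU can be sketched as a clipped-precision function (METEOR's stem/synonym/paraphrase matching is not shown here); this is an illustrative fragment, not a full BLEU implementation (no brevity penalty or geometric averaging):

```python
from collections import Counter

def ngram_precision(hyp, ref, n):
    """Clipped n-gram precision, the core quantity behind BLEU:
    each hypothesis n-gram is credited at most as often as it
    occurs in the reference."""
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    matched = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    total = max(sum(hyp_ngrams.values()), 1)
    return matched / total

p1 = ngram_precision("the cat sat".split(), "the cat sat down".split(), 1)  # 1.0
```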
Results. Table 3 shows the translation and alignment perplexities of the development sets when the models are trained. As seen, the CNN char model leads to lower word and alignment perplexities in almost all cases. This is interesting, and shows the power of this model in fitting morphologically complex languages using only their characters. Table 2 presents BLEU and METEOR scores, where the re-ranker is optimised by METEOR and by BLEU when reporting the corresponding score. As seen, re-ranking based on the neural models' scores outperforms the phrase-based baseline. Furthermore, the translation quality of the BILSTM morph model outperforms the others for Romanian and Estonian, whereas the CNN char model outperforms the others for Russian, which is consistent with our expectations. We expect that replacing Morfessor with a real morphological analyser for each language would improve the performance of the morpheme-based models, but leave this for future research. However, the translation quality of the neural models is not significantly different, which may be due to the convoluted contributions of high- and low-frequency words to BLEU and METEOR. Therefore, we investigate our representation learning models intrinsically in the next section.
Intrinsic Evaluation
We now take a closer look at the embeddings learned by the models, based on how well they capture semantic and morphological information in the nearest-neighbour words. Learning representations for low-frequency words is harder than for high-frequency words, since the former cannot capitalise as reliably on their contexts. Therefore, we split the test lexicon into 6 subsets according to frequency in the training set: [0-4], [5-9], [10-14], [15-19], [20-50], and 50+. Since we set our word-frequency threshold to 5 for the training set, all words in the frequency band [0-4] are in fact OOVs for the test set. For each word of the test set, we take its top-20 nearest neighbours from the whole training lexicon (without threshold) using the cosine metric.
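The frequency banding and nearest-neighbour retrieval can be sketched as follows; the embedding values in the example are hypothetical:

```python
import numpy as np

def top_k_neighbours(query_vec, vocab_vecs, k=20):
    """Indices of the k nearest vocabulary embeddings by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    V = vocab_vecs / np.linalg.norm(vocab_vecs, axis=1, keepdims=True)
    sims = V @ q
    return np.argsort(-sims)[:k]

def frequency_band(count):
    """Map a training-set frequency to the bands used in the evaluation."""
    for lo, hi in [(0, 4), (5, 9), (10, 14), (15, 19), (20, 50)]:
        if lo <= count <= hi:
            return f"[{lo}-{hi}]"
    return "50+"

vocab = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
nn = top_k_neighbours(np.array([1.0, 0.0]), vocab, k=2)  # indices 0 and 2
```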
Semantic Evaluation. We investigate how well the nearest neighbours are interchangeable with a query word in the translation process. We therefore formalise the notion of semantics of the source words based on their translations in the target language.
We use pivoting to define the probability of a candidate word e′ being a synonym of the query word e, p(e′|e) = Σ_f p(f|e) p(e′|f), where f is a target-language word, and the translation probabilities inside the summation are estimated using a word-based translation model trained on the entire bilingual corpora (i.e., before splitting into train/dev/test sets). We then take the top-5 most probable words as the gold synonyms for each query word of the test set.10 We measure the quality of predicted nearest neighbours using the multi-label accuracy (1/|S|) Σ_{w∈S} 1[G(w) ∩ N(w) ≠ ∅], where G(w) and N(w) are the sets of gold-standard synonyms and nearest neighbours for w, respectively; the function 1[C] is one if the condition C is true, and zero otherwise. In other words, it is the fraction of words in S whose nearest neighbours and gold-standard synonyms have non-empty overlap.
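Both the pivoted synonym extraction and the multi-label accuracy can be sketched directly from the definitions above; the toy lexical translation tables are hypothetical:

```python
def pivot_synonyms(e, p_f_given_e, p_e_given_f, top=5):
    """p(e'|e) = sum_f p(f|e) * p(e'|f), pivoting through target words f."""
    scores = {}
    for f, pf in p_f_given_e.get(e, {}).items():
        for e2, pe2 in p_e_given_f.get(f, {}).items():
            scores[e2] = scores.get(e2, 0.0) + pf * pe2
    scores.pop(e, None)  # the query word itself is not its own synonym
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top]]

def multilabel_accuracy(test_words, gold, neighbours):
    """Fraction of words whose nearest neighbours overlap their gold synonyms."""
    hits = sum(1 for w in test_words if set(gold[w]) & set(neighbours[w]))
    return hits / len(test_words)

# Hypothetical tables: English "dog" pivots through German "hund".
p_f_given_e = {"dog": {"hund": 1.0}}
p_e_given_f = {"hund": {"dog": 0.6, "hound": 0.4}}
syns = pivot_synonyms("dog", p_f_given_e, p_e_given_f)  # ["hound"]
```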
Table 4 presents the semantic evaluation results. As seen, on words with frequency ≤ 50, the CNN char model performs best across all three languages. Its superiority is particularly interesting for the OOV words (i.e., the frequency band [0-4]), where the model has constructed the representations completely from the characters. For high-frequency words (> 50), the BILSTM word model outperforms the other models.
Morphological Evaluation. We now turn to evaluating the morphological component. For this evaluation, we focus on Russian, since it has a notoriously hard morphology. We run another morphological analyser, mystem (Segalovich, 2003), to generate linguistically tagged morphological analyses for a word, e.g. POS tag, case, person, plurality, etc. We represent each morphological analysis with a bit vector indicating the presence of these grammatical features. Each word is then assigned a set of bit vectors corresponding to the set of its morphological analyses. As the morphology similarity between two words, we take the minimum of the Hamming similarity12 between the corresponding two sets of bit vectors. Table 5(a) shows the average morphology similarity between the words and their nearest neighbours across the frequency bands. Likewise, we represent the words based on their lemma features; Table 5(b) shows the average lemma similarity. We can see that both character-based models capture morphology far better than the morpheme-based ones, especially for OOV words. It is also clear that the CNN tends to outperform the bi-LSTM when we compare lemmas, while the bi-LSTM seems to be better at capturing affixes. We now take a closer look at the character-based models. We manually created a set of non-existing Russian words of three types. Words in the first set consist of a known root and affixes, but their combination is atypical, although one might guess the meaning. The second type corresponds to words with a non-existing (nonsense) root but meaningful affixes, so one might guess the part of speech and some other properties, e.g. gender, plurality, case. Finally, the third type comprises words whose root and morphemes are all known, but whose combination is impossible in the language and whose meaning is hard to guess.
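The set-level Hamming similarity can be sketched as follows; aggregating with the minimum over analysis pairs follows our reading of the definition above, and the bit vectors are hypothetical:

```python
def hamming_similarity(a, b):
    """Fraction of positions at which two equal-length bit vectors agree."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def morph_similarity(analyses_a, analyses_b):
    """Minimum pairwise Hamming similarity between two words' sets of
    morphological-analysis bit vectors."""
    return min(hamming_similarity(a, b)
               for a in analyses_a for b in analyses_b)

# One analysis for word A, two ambiguous analyses for word B:
sim = morph_similarity([[1, 0, 1]], [[1, 1, 1], [1, 0, 0]])  # 2/3
```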
Table 6 shows that the CNN is strongly biased towards longest-substring matching from the beginning of the word, and it yields better recall in retrieving words sharing the same lemma. The bi-LSTM, on the other hand, mainly matches patterns at both ends of the word regardless of its middle, which results in higher recall of words sharing the same grammatical features.
Figure 1: Illustration of the neural operation sequence model for an example sentence-pair.
Figure 2: Model architecture for the several approaches to learning word representations, showing, from left: bag-of-morphs, BiLSTM over morphs, and the character convolution. Note that the BiLSTM is also applied at the character level. The input word, täppi-de-ga, is Estonian for "speckled", bearing plural (de) and comitative (ga) suffixes.
Table 1: Corpus statistics for parallel data between Russian/Romanian/Estonian and English. The OOV rate is the fraction of word types in the source language that appear in the test set but are below the frequency cut-off or unseen in training.
Table 2: BLEU and METEOR scores for re-ranking the test sets.
Table 4: Semantic evaluation of nearest neighbours using multi-label accuracy on words in different frequency bands.
Table 5: Morphology analysis for nearest neighbours based on (a) Grammar tag features, and (b) Lemma features.
Driving the clean energy transition in Cameroon: A sustainable pathway to meet the Paris climate accord and the power supply/demand gap
The Intergovernmental Panel on Climate Change (IPCC) 2021 report has noted the perceived rise in severe weather phenomena such as heat waves, hurricanes, flooding, and droughts, and the rising scientific evidence attributing these events to anthropogenic sources of climate change. Cameroon is equally exposed to these climate vulnerabilities, and contributing to global climate efforts is imperative. The country has earmarked the integration of 25% renewables in its electricity production mix and a 32% emission reduction, all as part of its commitment to global climate action. These fresh commitments, coupled with rapidly growing power demand, have paved the way for a revolutionized approach to electricity generation in Cameroon. However, the imminent changes, as well as their implications, remain uncertain. This study explores how these emission reduction targets can be achieved through the adoption of a more sustainable power transition, which provides realistic solutions for emission reduction while escaping high-carbon pathways. The extent to which long-term electricity generation scenarios in Cameroon could be renewable-energy intensive was assessed using the Low Emissions Analysis Platform (LEAP) tool following a backcasting approach. The study found an implementation gap between earmarked policy ambitions and existing measures. It recommends several opportunities, in aspects such as the suitable share of technologies, administrative reforms, and required adjustments within the Nationally Determined Contributions (NDCs), which the government could exploit in the electricity sector to navigate the challenging trade-offs needed to become a sustainable economy in a carbon-constrained world. It equally examines actions that could help close the gap between earmarked policy ambitions and existing pathways and proposes cost-effective methods identified as priorities.
KEYWORDS: climate action, climate change, backcasting, carbon pathways, emission reduction, LEAP tool, long-term scenarios

… Climate Agreement, where various countries have set targets to transition to greener and environmentally friendly energy generation (UNFCCC, 2016a). The government has adopted the National Development Strategy for the period to 2030 (MINEPAT, 2020; Bodjongo et al., 2021) with the main objectives of improving economic growth, accumulating national wealth, and advancing the structural reforms needed to facilitate the country's industrialization. To meet this goal, Cameroon intends to increase its installed power capacity to 5,000 megawatts (MW) by 2030 through diversified generation comprising hydroelectric, solar, thermal (fuelled by natural gas), and biomass-powered plants (Power Africa, 2019). The installed capacity during the first phase of "Vision 2035" (2010-2020) improved from 933 to 1,650 MW, a shortfall of 1,350 MW against the envisaged target of 3,000 MW in 2020 (MINEPAT, 2020). The national electricity access rate in Cameroon is 70% (IEA, 2020b) and has grown considerably in recent years thanks to a series of projects such as the newly constructed Memve'ele hydroelectric station, the rehabilitation of the Limbe power plant, and the implementation of several solar photovoltaic (PV) projects. The urban electricity access rate is 98%, against 32% in rural areas (IEA, 2020b), implying that an estimated 68% of rural communities are without electricity access in Cameroon. These statistics, which reflect the situation in other countries of sub-Saharan Africa (SSA), convey only a narrow part of the reality, as they conceal the repeated power outages caused by the outdated and extremely unstable power grids in this region (Cole et al., 2018). This stresses the crucial need for more reliable and efficient unconventional energy systems.
This shortfall in power production retards the expansion of economic activities and private investment.
Cameroon, in its NDCs, has earmarked a 25% RE share (excluding hydropower capacity exceeding 5 MW) in the power generation mix and a 32% reduction of GHG emissions (Cameroon Ministry of External Relations, 2015). This is quite an urgent step, especially as several countries are setting fresh long-term net-zero targets or updated Nationally Determined Contributions (NDCs) (UNFCCC, 2016b) in the wake of the 26th Conference of Parties (COP26) held in November 2021. Advancing the renewable energy transition in Cameroon will simultaneously reduce emissions and narrow the wide gap between rural and urban electrification rates in the country. The dynamics of power demand and supply in Cameroon are badly mismatched, and the sector presents huge prospects for meeting the Paris Climate Accord through widespread renewable energy deployment. The extent to which the various RE sources (hydro, wind, biomass, and solar) could contribute to Cameroon's electricity generation mix has not been adequately studied. While the earmarked 25% renewable power injection has huge potential to revolutionize Cameroon's power generation mix, the government's current energy plans have not adequately captured the extent to which this is achievable, since the imminent changes, as well as their implications, remain uncertain. Therefore, this study aims to fill the literature gap in the Cameroonian power sector by answering three fundamental questions facing the electricity sector:
• Do the current electricity generation expansion pathway and the RE deployment trend meet the 25% RE target in the generation mix by 2035?
• Is the current generation expansion pathway the most cost-effective pathway or technology mix for attaining the 25% RE target?
• Which alternative generation expansion pathway can meet the stated 25% RE target given recent RE deployment trends?
Consequently, this study explores how the planned 25% renewable injection in the Cameroon NDCs could be achieved through the development of a backcasting energy model using the LEAP tool. This model, to the best of our knowledge, is the first integrated energy model for Cameroon available in the energy studies literature. This approach is pertinent for Cameroon as most of the available literature (Bautista, 2012;Lkhagva, 2014;McPherson and Karney, 2014;Senshaw, 2014) has used LEAP models in different countries. This study adds to the literature on LEAP-based country-level studies on power sector planning scenarios in Ghana (Awopone et al., 2017a), Ethiopia (Senshaw, 2014), Greece (Roinioti et al., 2012), and Taiwan (Huang et al., 2011). The study conducted RE allocation-based analysis in the power network and evaluated their potential economic and environmental impacts on Cameroon's power sector. The model used local energy statistics and data on the recent and future plans regarding energy production and transformation in Cameroon. These data were supported by economic and demographic information as well as the GDP. The study finally suggests the need to review and update the existing RE policies and master plans as well as the current NDC document with readjustment in the renewable quotas and timelines allocated to the various technologies as outlined in the modified scenario. The article highlights the RE deployment trends in the country to trigger policy discussions among stakeholders.
This study starts with a presentation of the Cameroon country profile and an overview of the existing situation in the Cameroonian energy sector. The article then reviews the state of energy supply and consumption in Cameroon. Part of the study involves a quantitative description of climate change vulnerability in Cameroon and a synopsis of initiatives and policy decisions undertaken in the country. Moreover, the study describes the LEAP tool adopted here, covering aspects such as scenario setting, future electricity demand and supply, GHG emissions assessment, cost-benefit analysis, and cost-optimized power generation. In addition, the study discusses the results obtained from the various scenarios, with emphasis on the impact of sustainable policy in the country. The study concludes with some recommendations, citing available options that the government could use to advance clean energy in the country.
Methods, materials, and the rationale

This study is an analytical and informed evaluation of the state of the electricity sector in Cameroon and of some proposed reforms to drive the renewable energy (RE) transition in the power sector. The LEAP tool was used to model and simulate the scenarios following an energy backcasting approach. Backcasting starts from a stated desired future and works backward to determine the strategies and policies that link that future to the existing situation (Brandes and Brooks, 2005). The forecasting approach, by contrast, explores future scenarios with no prior knowledge, using an assessment of existing trends (Holmberg and Robèrt, 2000), and is seemingly the most commonly used modeling method. LEAP is an acronym for Low Emissions Analysis Platform, formerly called Long-range Energy Alternatives Planning, a tool of the US-based Stockholm Environment Institute (2021) used in energy policy and climate change evaluation. It has seen applications in energy consumption monitoring, sectoral energy resource management, and energy-based greenhouse-gas emission accounting (Lee et al., 2008; Huang et al., 2011; Taoa et al., 2011; Bautista, 2012; Lkhagva, 2014; McPherson and Karney, 2014; Senshaw, 2014; Kemausuor et al., 2015). The present study concerns a developing country, Cameroon, where data are often too scanty to build a comprehensive energy model.
Hence, the LEAP tool is appropriate for this research because it is effective in monitoring energy consumption and conversion in developing countries; it can set up energy integration scenarios with a rich environmental assessment database; it covers many energy technologies, both clean power technologies for industrialized nations and conventional systems common in emerging nations; and, above all, it requires relatively little initial data, with the possibility of improving the model when complete data for the research area become available. The power transition model takes into consideration all the existing and future power plants from the reference year of 2015 to the target year of 2035. Three scenarios are developed, and the allocation of power generators to meet the electricity demand at a particular time is based on the 25% renewable energy target in the Cameroon NDCs by 2035, the GDP, the load duration curve, and the mean plant capacity factor, all of which are stated exogenously. The cost of power generation from specific technologies over time, the carbon tax, and power losses from transmission and distribution (T&D) networks are also considered. The key outputs obtained from the scheme are the growth of electricity generation by technology, the comprehensive economic implications of the modeled scenarios, and the trend of emissions from the base year (2015) through 2035. The Cameroonian LEAP model offers a backcasting energy approach to Cameroon's energy sector and is, so far, the first attempt in the Cameroonian context. The three alternative scenarios explore the huge prospects in Cameroon's energy future. With many sources of uncertainty about the future, using several modifications in the scenarios increases the chance of identifying a potential energy pathway for achieving the Cameroon NDC target.
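The backcasting arithmetic behind such scenarios can be illustrated with a minimal sketch. Only the 25% share and the 23,815.2 GWh target-year generation come from the NDC figures cited later in this study; the 35% average capacity factor, the 50 MW of existing RE capacity, and the 13-year build-out window are illustrative assumptions, not figures from the study:

```python
def backcast_re_capacity(total_gwh_target, re_share, capacity_factor,
                         existing_re_mw=0.0, years=13):
    """Work backward from a target-year RE generation share to the average
    annual capacity (MW) that must be added to reach it. Illustrative
    sketch of the backcasting arithmetic, not the LEAP model itself."""
    re_gwh = total_gwh_target * re_share
    required_mw = re_gwh * 1000 / (capacity_factor * 8760)  # GWh -> MWh -> MW
    return (required_mw - existing_re_mw) / years

# 25% of 23,815.2 GWh by 2035; capacity factor and start point assumed.
annual_mw = backcast_re_capacity(23815.2, 0.25, 0.35,
                                 existing_re_mw=50, years=13)  # ~145 MW/year
```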
This research augments the policy literature on the available options for improving the deployment of renewable energy, specifically by analyzing Cameroon's energy supply and consumption, climate change vulnerability, and the state of the power sector, and by proposing measures to mitigate these challenges. This is done through an evaluation of policy impact (NDC assessment), a cost-benefit study, and cost-optimized power generation.
Data for this study were obtained from the Cameroon Ministry of Economy, Planning and Regional Development (MINEPAT, 2009a), the Cameroon Ministry of Energy and Water Resources (MINEE, 2015), Cameroon's National Institute of Statistics (Institut National de la Statistique, 2016), annual reports of the country's energy utility (ENEO, 2019) and of the Electricity Sector Regulatory Agency (2015), and country reports from international institutions such as the International Renewable Energy Agency (IRENA, 2020b), the World Bank (2021), and the IEA (2021). Supplementary information was obtained from energy policy-related official documents, desk studies, and online literature. A clear methodology for developing these pathways would help inform policy, evaluate the modifications required to the current trajectory, enhance comprehension and acceptance of the proposed ways of meeting the NDC renewable energy targets, and assure investors that a renewable energy transition can be achieved in Cameroon. Figure 1 shows the methodology for the analysis.
Focusing on the electricity sector makes an interesting case, since firm- and country-level research in Africa shows the heavy consequences of an unreliable electricity network: when the occurrence of power outages increases by 1%, firm output is estimated to fall by 3.3% in the short run, and gross domestic product (GDP) per capita by 2.9% in the long run (Andersen and Dalgaard, 2013; Mensah, 2018). The disruption of services such as electricity and water heavily impacts small firms with a low ability to cope, which limits entrepreneurship and competition (Alby et al., 2013; Poczter, 2017). Moreover, defective utility infrastructure reduces the number of industries a country can host and hence weakens its attractiveness to international investors (World Bank, 2019). These dynamics usually fall back on citizens through reduced employment and increased consumer costs. The Cameroon Ministry of Economy, Planning and Regional Development reports losses of 5% of annual GDP as a result of insufficient and unreliable power supply. Hence, there is an urgent need for a reliable and sufficient power supply: the distressing rationing system, reduced industrial activity, job losses, and disturbances to public life reveal what presently seems to be a recurrent hindrance to Cameroon's development program.
Conception of scenarios
The study built scenarios from a common reference scenario, from which three additional (alternative) scenarios were generated to achieve an RE-intensive, low-carbon power system in Cameroon: the stated scenario, the modified scenario, and the optimized scenario. The reference scenario was conceived from the Cameroon government's energy policy ambitions in the Electricity Sector Development Plan, the RE Master Plan, and the Rural Electrification Master Plan (REMP) (MINEE, 2006; AER, 2017; Korea Energy Economics Institute, 2017), which runs from 2015 to 2035. The three alternative scenarios assess the possibilities and costs of meeting Cameroon's NDC targets by 2035. Based on the methodology for scenario planning originally presented by Schwartz (1991), a five-step scenario development approach was adopted, consisting of (i) identification of the focal issue to be examined; (ii) determination of the key factors influencing the focal issue; (iii) …
… (Cameroon Ministry of External Relations, 2015), and the "Vision 2035" (MINEPAT, 2009b). This policy document shows the country's 20-year power generation plan, where investments in power generation will be guided by the pillars in the report. The BAU scenario presents the evolution of Cameroon's power generation from 2015 (base year) up to 2035, with no substantial new power-sector policies beyond the pre-existing ones considered in the scenario. New capacity under this scenario and the alternative scenarios is assumed to be added from 2022. Electricity demand projections are mostly driven by projections of GDP and population growth. In the base year (2015), power generation in Cameroon was dominated by hydropower (55%), followed by fossil fuel (44%) and, finally, renewables (1%). The total installed capacity used in the base year was 1,361.50 MW, and the individual capacities are shown in Table 1. These data were obtained from the Cameroon Ministry of Energy and Water Resources.
The stated scenario
Cameroon's NDC targets aim to achieve a 25% share (5,953.8 GWh out of 23,815.2 GWh) of RE in the generation mix by 2035, as stated in the NDCs (Cameroon Ministry of External Relations, 2015). Unlike the BAU scenario, the stated scenario assumes the renewable energy target is met in the generation ratio originally specified in the policy document submitted by the government to the UNFCCC secretariat. This scenario is the basis for the backcasting approach used in this study. The renewable energy ambitions within the Cameroon NDCs anticipate power generation by 2035 from non-renewable large hydro (15,607 GWh), small hydro (2,579 GWh), wind energy (464 GWh), solar PV (1,345 GWh), biomass (1,611 GWh), and natural gas (1,882 GWh). In the same scenario, there are plans to generate up to 928 GWh of electricity from petroleum products, with no intention to use coal-fired power plants. Figure 2 shows the percentage of the various sources of power used in this scenario.
The modified scenario
The modified scenario was conceived on the basis that the renewable energy mix in the BAU and stated scenarios (the official NDC targets) does not represent the ideal renewable energy mix for Cameroon. For example, an 11% share of small hydro (Figure 2) in the total generation mix by 2035, amid abundant solar resources, raises concerns about the applicability and sustainability of small hydro installations in Cameroon. A 7% biomass share (Figure 2) also raises economic concerns regarding the food-energy nexus, in addition to the unfamiliarity of the processes involved in biomass-powered plants (such as pyrolysis, gasification, fermentation, and combustion), especially in developing countries like Cameroon. Depending on feedstock properties, power generation from biomass also raises environmental issues such as terrestrial acidification and particulate matter formation (Paletto et al., 2019). The NDCs made no attempt to identify the generation technologies with the potential to deliver the least-cost and most adaptable power in the Cameroonian context. This aspect is essential because it guides decisions on the most economical generation expansion plan, since investment in power technologies depends mostly on cost and emission capacity. Thus, the various technology targets in the NDCs do not express the least-cost pathway for renewable energy development in Cameroon, and there is therefore a need to redevelop the Cameroon NDC scenario to consider actual realities such as resource availability, ease of use of the technology, access to system components, and economic viability. The modified scenario differs from the stated scenario in that the various renewable energy technology share targets were developed based on the researchers' knowledge of Cameroon's power system structure, the levelized costs of energy (LCOE), the applicability of each technology, and the country's rural electrification targets.
The optimized scenario
The optimized scenario seeks the least-cost pathway to achieving the 25% renewable energy target by 2035. The main difference between the optimized and the other scenarios is that the LEAP and OSeMOSYS optimization framework was enabled to endogenously add the capacity required to meet demand and the 25% renewable energy target. The optimized scenario aims to identify the generation expansion pathway that adequately meets electricity demand at the lowest discounted net present value (NPV) of costs over the entire study period. The cost-minimization objective in OSeMOSYS was subject to constraints on capital costs, fixed and variable operation and maintenance costs, and externality costs. Dispatch and capacity additions were also determined by the LEAP/OSeMOSYS framework (Awopone et al., 2017a; Stockholm Environment Institute, 2022). OSeMOSYS is an independent optimization tool that has been integrated into the LEAP environment and used for long-term energy planning. It is modular and capable of building sophisticated energy models with relatively small data requirements. This scenario assumes more aggressive policy instruments, with more ambitious goals for economic growth and for the country's commitment to cheap, environmentally friendly energy technologies than the modified scenario. It also assesses the total cost of achieving the 25% renewable energy target from 2015 to 2035, unlike the modified scenario, whose assessment starts in 2022.
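The discounted-cost comparison that drives the least-cost selection reduces to NPV arithmetic. In the sketch below only the fixed 10% discount rate and the 2015 base year come from the study; the cost stream is hypothetical:

```python
def npv(costs_by_year, discount_rate=0.10, base_year=2015):
    """Discounted net present value of a stream of annual costs,
    using the study's fixed 10% discount rate."""
    return sum(c / (1 + discount_rate) ** (year - base_year)
               for year, c in costs_by_year.items())

# Hypothetical cost stream (million USD); candidate expansion pathways
# would be compared on such NPVs.
total = npv({2015: 100.0, 2016: 50.0, 2025: 200.0})  # ~222.56
```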
Main assumptions in the model
The base year of 2015 and a bottom-up scenario modeling methodology were used for this study. The year 2015 was chosen because of data availability and because it overlaps with most of the country's future energy expansion documents. This also provided an opportunity to validate the model results against subsequent past years. Hydropower installations of ≤5 MW in Cameroon are classified as RE systems. The RE sources considered in this study are small hydro, solar PV, wind, and biomass. A fixed discount rate of 10% was applied for the duration of the study.
Cameroon's GDP under all scenarios was assumed to grow from USD 32.21 billion in 2015 to USD 82.06 billion, an average growth rate of about 5% annually (World Bank, 2021). The population was assumed to grow at 2.5% annually, i.e., from 23.3 million in 2015 to 38.18 million by 2035 (World Bank, 2021). A household size of 5 was used for this study (United Nations, 2014; World Bank, 2021). Total national electricity needs were projected to rise at an annual average of 6.7% from 2015 to 2035, according to official projections from the RE master plan and the PDSEN (MINEE, 2006; Korea Energy Economics Institute, 2017), and this value was used in the model. Household electricity access rates were also assumed to grow according to national projections. In addition, T&D losses increased from 36.27% in 2015 to 38.93% in 2019 (ENEO, 2017, 2019) and were then assumed to fall to 25% by 2035. This assumed reduction in power losses is based on the utility company's earmarked measures, such as the deployment of prepaid meters and Supervisory Control and Data Acquisition (SCADA) systems intended to reduce system losses. The reserve margin was assumed to grow from 5% in 2015 to 10% by 2035. The total electricity demand in Cameroon was 5.41 TWh in 2015 (ENEO, 2017), with an overall installed power capacity of 1,315.79 MW (MINEE, 2015) consisting mainly of large hydro (55%), fossil fuel (44%), and renewables (1%). An hourly temporal resolution was used to account for the intermittency of variable RE sources. Thus, a yearly shape consisting of Cameroon's total electricity demand for each hour of the year (8,760 h in total) was used. Figure 3 shows the annual shape of electricity demand used in the study.
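The population assumption above can be checked with simple compound-growth arithmetic:

```python
def project(value, annual_rate, years):
    """Compound-growth projection, as used for the GDP and population
    assumptions in the model."""
    return value * (1 + annual_rate) ** years

# Population: 23.3 million in 2015 growing at 2.5%/year over 20 years
# gives ~38.18 million, matching the figure assumed for 2035.
pop_2035 = project(23.3, 0.025, 20)
```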
The costs of electricity generation by technology considered in this study are the capital costs and the operation and maintenance (O&M) costs.

Frontiers in Sustainable Cities frontiersin.org
FIGURE 3: Annual shape of electricity demand in Cameroon (source: authors' construct from data provided by ENEO).

Table 2 shows the cost of generation technology, while Table 3 shows the fuel cost projections used in this study.
For the characteristics of the generation technologies used in the model, Table 4 summarizes the capacity factors and the power factors of the plants. The study assumes that there is sufficient transmission and distribution capacity, with grid improvements over the course of the generation expansion period.
The seasonal profiles for solar PV, wind, and hydropower in the LEAP model were considered as annual average values, and they were modeled in the form of availability. Availability or the availability factor is the percentage of time a power plant can generate electricity in a specified period (8,760 h for this study) (IEA, 2020b; Stockholm Environment Institute, 2021).
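In this formulation, a plant's deliverable annual energy is its capacity multiplied by the 8,760 h in a year and by its availability factor. A minimal illustration (the helper name and the example plant are ours, not values from the model):

```python
HOURS_PER_YEAR = 8760  # hourly temporal resolution used in the study

def annual_energy_gwh(capacity_mw, availability):
    """Maximum annual energy (GWh) a plant can deliver when it is able to
    generate for the given fraction of the 8,760 hours in a year."""
    return capacity_mw * HOURS_PER_YEAR * availability / 1000.0

# Illustrative: a 10 MW small-hydro plant available 50% of the year
print(annual_energy_gwh(10, 0.5))  # → 43.8 GWh
```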
Sensitivity analysis
Although perfect foresight is assumed in this study, electricity generation costs depend on several factors, such as discount rates, technology capital costs, O&M costs, and feedstock fuel costs. These factors are largely uncertain and cannot be accurately forecasted. Feedstock fuel costs are arguably the most volatile, given that they depend largely on external stimuli. Thus, a fuel price sensitivity analysis was conducted to assess the impact of feedstock price changes on the cumulative costs of the scenarios. A fuel price uncertainty of ±50% was adopted, an assumption that stems from the high instability of fossil-fuel prices in recent years.
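To first order, the ±50% sensitivity scales the fuel component of each scenario's cumulative cost; in the actual model, capacity choices also respond, so the full effect differs from this linear sketch. All cost figures below are hypothetical placeholders:

```python
def scenario_cost(capital, om, fuel, externality, fuel_multiplier=1.0):
    """Cumulative scenario cost (USD millions) with the feedstock fuel
    component scaled: 0.5 = low-fuel case, 1.5 = high-fuel case."""
    return capital + om + fuel * fuel_multiplier + externality

# Hypothetical cost breakdown for one scenario (USD millions)
base = scenario_cost(5000, 1200, 2000, 30)
low = scenario_cost(5000, 1200, 2000, 30, fuel_multiplier=0.5)
high = scenario_cost(5000, 1200, 2000, 30, fuel_multiplier=1.5)
print(base, low, high)  # → 8230 7230 9230
```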
Cameroon country profile
Cameroon lies within the Central Africa sub-region, at a latitude of 6° North and a longitude of 12° East (Kwaye et al., 2015). The country enjoys fairly good solar radiation, with an average value of 4.9 kWh/m²/day (IEEE, 2018). Electricity generated from hydro resources dominates the generation mix, yet <5% of the country's hydro potential is under exploitation (ANDRITZ Hydro, 2019). Despite these enormous RE potentials, Cameroon has no clear RE policy to harness them for power generation (Njoh et al., 2019). The country's institutional energy policy structure gives only faint indications of the government's position toward renewables as a possible source of power generation, and even this is limited to some renewables such as solar, hydro, and wind, with no details on how private stakeholders' interests can be guaranteed should they decide to invest. These modern energy services offer a huge potential for local community empowerment, since the resources can be exploited locally and at a small scale, supporting rural development and electrification. Realizing this, however, requires strong policy objectives, enabling regulations, and a balanced institutional setup, in addition to viable business models, to accelerate renewable energy deployment.
At the national level, according to the Ministry of Finance, the economic growth rate was estimated at 3.9% in 2019, down from the 4.1% recorded in 2018. This drop in economic activity occurred in a context marked by the persistent sociopolitical crisis in the North West/South West Regions, terrorist threats in the Far-North Region, and the fire incident at the National Oil Refinery in May 2019, alongside a significant increase in oil and gas production. Inflation rose to 2.5% in 2019, up from 1.1% in 2018 (ENEO, 2019).
Cameroon energy supply/consumption
The primary supply of energy in Cameroon comes from biofuels and waste (70.58%), followed by crude oil (20.17%), natural gas (5.34%), hydropower (3.90%), and other renewable sources (0.01%) such as solar, geothermal, and wind. Historical trends show a steady rise in energy supply from 1990 to 2005, a recession from 2005 to 2007, and then a steady rise up to 2018 (Figure 6) (IEA, IRENA, UNSD, World Bank, WHO, 2020). The most used forms of energy in the country are biomass (74.22%), petroleum products (18.48%), and electric power (7.30%). The country's overall energy usage in 2018 was estimated at 7.41 Mtoe, made up mostly of traditional biomass.
Overall energy consumption by sector is 63.68% for the residential sector, 14.92% for the public service and commercial sectors combined, 13.82% for transport, 5.15% for industry, 0.07% for agriculture/forestry, 0.88% for non-energy use, and 1.48% for other sectors (IEA, IRENA, UNSD, World Bank, WHO, 2020). Figure 4 shows the percentage of energy consumption by sector, Figure 5 shows the trend of primary energy usage by source in Cameroon from 1990 to 2019, Figure 6 shows the trend of primary energy supply by source over the same period, and Figure 7 shows the percentage of primary energy supply in Cameroon in 2019.
Climate change vulnerability and carbon dioxide (CO2) emission in Cameroon
The United Nations Development Programme's (UNDP's) climate change report on Cameroon (UNDP, 2008) indicates a mean annual temperature rise of 0.7°C between 1960 and 2007, an average rise of 0.15°C per decade. The report further shows that the temperature increase was highest (0.19°C per decade) in April and May, while in the Northern part of the country the largest rises (0.2-0.4°C per decade) occurred in January, September, November, and December. Temperature forecasts based on UNDP data predict an average annual temperature rise of 1.0-2.9°C by 2060 and 1.5-4.7°C by 2090 (Ngnikam and Tolale, 2009), with the projected rise greater in the Northern and Eastern parts of Cameroon and smaller in the coastal areas.
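The per-decade rate quoted above follows directly from the reported total rise over the 1960-2007 window:

```python
# 0.7 °C total rise over 1960-2007, from the UNDP report cited above
rise_c = 0.7
span_years = 2007 - 1960          # 47 years
per_decade = rise_c / span_years * 10
print(round(per_decade, 2))       # → 0.15 °C per decade, as stated
```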
The annual rainfall in Cameroon has dropped by 2.9 mm per month, a decrease of 2.2% per decade since 1960. The years 2003 and 2005 witnessed particularly low rainfall in the country, and forecasts expect average annual rainfall to change by −12 to +20 mm per month (−8 to +17%) by 2090, implying changes of +1 to +3 mm per month (0-2%) on average (UNDP, 2008). There are concerns that Cameroon's lowland coastal zones could be susceptible to rising sea levels (Ngnikam and Tolale, 2009). These projected vulnerabilities justify targeting sectors such as the power sector, which has the potential to mitigate these future problems.
Cameroon has a generally insignificant emission history, with the power sector contributing only a small share (∼3.95%) of national emissions; Figure 8 shows the emission history of Cameroon. Due to the growing electricity demand, a low-carbon power network is crucial to decreasing CO2 emissions from other sectors such as industry, transport, and buildings.
Developments in the power sector in Cameroon
Power generation and distribution in Cameroon are managed by Energy of Cameroon (ENEO). This para-statal company became operational on 12 September 2014 through the purchase of part of the power company previously shared between the state of Cameroon and an American company, la Société de l'Electricité
FIGURE 6: Trend of primary energy supply by source in Cameroon from 1990 to 2019.
Simulation results and discussion
The electricity demand, generation capacity, associated costs, and corresponding environmental emissions are presented in this section. For clarity, the outputs of the various scenarios are discussed in terms of power demand, installed capacity, cost-benefit analysis, and environmental assessment. The analysis covers the power sector, with a backcast from the 25% RE target stated in Cameroon's NDCs. The results of the various scenarios are presented using charts for simplicity.
Electricity demand
The Cameroon electricity demand from 2015 to 2035, identical under all four scenarios, increased from 5.41 TWh in 2015 to 19.79 TWh in 2035, as shown in Figure 9. The 2035 value of 19.79 TWh is lower than the official projection of 23.73 TWh, an indication that the stated ambitions did not reflect actual trends in generation expansion: capacity has been added more slowly than originally planned by the government. The official projection was further premised on 15% of traditional biomass use being replaced with electricity and liquefied petroleum gas (LPG), which involves the deployment of widespread energy systems, especially in rural areas, to improve the livelihoods of inhabitants.
Installed generation capacities
With demand growth and a reserve margin rising from 5 to 10%, the installed capacity increased from 1.32 GW in 2015 to 4.68 GW by 2035 under the BAU scenario; generation capacity thus more than tripled within the 20-year period. This increase corroborates a similar study by Awopone et al. (2017b) in Ghana, where installed capacity was predicted to more than triple over a comparable period. From the simulation, the installed capacities in 2035 under the stated, modified, and optimized scenarios were 4.89, 5.30, and 4.66 GW, respectively. Power generation in all scenarios comes from large hydro, thermal plants, and renewables consisting of biomass, small hydro (<5 MW), solar, and wind. The fuels presently used in thermal plants are heavy fuel oil (HFO), natural gas, and light fuel oil (LFO); however, plants using HFO and LFO will be phased out by 2035, leaving natural gas as the sole thermal fuel. Power generated from biomass sources includes municipal solid waste and agricultural waste (timber, sugar cane, oil palm, and cocoa waste), with prospective generation technologies involving combustion and anaerobic digestion. Power from these RE technologies has the advantage that it can be produced and utilized around load centers (distributed generation) or fed into the power grid. The total installed capacity under the various scenarios is shown in Figure 10.
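The "more than tripled" claim for the BAU scenario can be checked from the quoted endpoints (1.32 GW in 2015, 4.68 GW in 2035):

```python
bau_2015_gw, bau_2035_gw = 1.32, 4.68   # installed capacity from the results

growth_ratio = bau_2035_gw / bau_2015_gw
cagr = growth_ratio ** (1 / 20) - 1     # implied average annual growth

print(round(growth_ratio, 2))  # → 3.55, i.e. capacity more than triples
print(round(cagr * 100, 1))    # → 6.5 (% per year over the 20-year period)
```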
From Figure 10, additional capacities of 0.21 and 0.62 GW are needed in the stated and modified scenarios, respectively, while 0.21 GW less is needed under the optimized scenario. The modified scenario has the highest installed capacity because it relies on more solar PV to achieve the 25% renewable energy mix. The higher total installed capacities in the alternative scenarios relative to the BAU scenario result from the relatively low capacity credit of renewable energy sources compared with conventional generation. This agrees with findings from studies in Ghana (Awopone et al., 2017a,b) and Panama (McPherson and Karney, 2014).
Large hydropower remains the dominant source under all scenarios, with 2035 shares of 58.7, 50, 46.1, and 55.4% under the BAU, stated, modified, and optimized scenarios, respectively. This reflects Cameroon's huge hydropower potential (23 GW) awaiting exploitation, especially as the country intends to export electricity to neighboring Chad (AfDB, 2017), as highlighted in her National Development Strategy (2020-2030). Renewable energy correspondingly constitutes 12.9, 27.8, 33.3, and 20.2% of the respective mixes. Energy generated under all scenarios increased from 6.5 TWh in 2015 to 26.38 TWh by 2035, a roughly four-fold increase over the period. The growing impacts of climate change in sub-Saharan Africa, causing periodic droughts, drops in reservoir water levels, and, subsequently, electricity crises, especially in the dry season, have led many countries to reduce their dependence on hydropower. This is illustrated in generation expansion studies for Nigeria (Aliyu et al., 2013; Emodi et al., 2017), Ghana (Awopone et al., 2017a,b), and Africa as a whole (Ouedraogo, 2017), which project greater increases in the shares of natural gas and renewables. Although Cameroon has a huge hydropower potential poised to play a key role in the Central African Power Pool (CAPP), the nation ought to reconsider her generation expansion objectives in the context of her energy security (Kenfack et al., 2021), and renewables should be prioritized for development as in the alternative scenarios. The percentage share of the various technologies under the different scenarios in 2035 is shown in Figure 11.
Under the optimized scenario, the main endogenously added renewable is biomass (New Biomass), which occupies the entire 25% share of renewable injection. This is because biomass is considered the least-cost renewable energy resource in Cameroon: the country is part of the Congo basin forest, with 25,000,000 hectares of forest cover (three-fourths of her landmass), ranking her third in sub-Saharan Africa in terms of biomass potential. In general, this scenario would have been the best option; however, given that energy projects in developing countries are influenced by government ambitions, accessibility of technology, and contextual adaptability, this scenario is not suitable for Cameroon.
Cameroon generates billions of tons of waste yearly from agricultural and agro-industrial activities as well as hospitals, causing huge environmental problems. This waste has no commercial value in Cameroon, and transforming it into electricity could address both the environmental pollution issue and the huge power deficit, for instance through biogas production used as fuel for electricity generation. This would also increase employment opportunities and create new revenue streams within the country. It is worth noting that Cameroon is the breadbasket of the CEMAC sub-region, with commendable agricultural prowess; the farming sector drives the economy and safeguards the food security of inhabitants. Major crops include maize, cassava, millet, rice, sweet and Irish potatoes, macabo, taro, yam, peanut, sorghum, bean, and soy (Vintila et al., 2019), all of which generate substantial waste that could be transformed into biofuels to power generation systems. Cameroon is the biggest coffee and cocoa producer in the Central Africa sub-region (Vintila et al., 2019) and also has an annual livestock production of 7 million cattle, 8 million small ruminants, 2 million pigs, and 50 million poultry (Tagne et al., 2021). In addition, Cameroon is one of Africa's main palm oil producers (Gelder and German, 2011), with a production of ∼210,000 tons of palm oil in 2011 (Feintrenie, 2012). All these activities generate large amounts of waste that could be used for power generation without indulging in indiscriminate forest exploitation. Table 5 shows the annual agricultural biomass residues in Cameroon.
More attention is given to petroleum and large hydropower, with gross neglect of the huge amount of waste that could be used for power generation. There is no national policy on the conversion of biofuels into useful electricity, and the regulatory framework, from policy formulation to effective implementation, does not exist. Given that strategic policy is the driver of positive change, putting appropriate policies and regulatory frameworks in place would help scale up waste-to-electricity in the country. This is why the optimum scenario, dominated by biomass, remains elusive for Cameroon at the moment.
Economic analysis
The cumulative net discounted costs (fuel costs, capital costs, and O&M costs) at a 10% discount rate, in 2015 US dollars, under all scenarios are shown in Figure 12. Under the BAU scenario the cumulative cost is the lowest of the grid pathways, and additional investments are required under the stated and modified scenarios to meet the 25% renewable energy mix in the NDCs. Conversely, the optimized scenario shows a cost reduction of $87.6 million, indicating that extra financial investment in the alternative scenarios (barring the optimized scenario) will be needed for Cameroon to achieve her 25% renewable energy target. Figure 12 shows the net present value of the various energy technologies under all scenarios from 2015 to 2035. An environmental externality cost of $5/MTCO2e in 2022, rising to $15/MTCO2e by 2035, was introduced in this study to highlight the environmental impacts of attaining the 25% renewable energy target under different pathways and to quantify the potential revenue if carbon pricing were introduced in Cameroon. Under the BAU scenario, a cumulative carbon revenue of up to $27 million is achievable; the stated, modified, and optimized scenarios show potential carbon revenues of $7.1, $3.3, and $16.4 million, respectively. The differing costs of the alternative scenarios, despite the same 25% renewable energy target, stem from the different shares of capacity additions by technology. The externality cost analysis shows that the country has only a small potential for carbon taxation, partly due to the low contribution of fossil fuels to the generation mix and the low electricity demand (generation) in Cameroon. These are some of the reasons why the implementation of carbon taxation in sub-Saharan Africa has stalled: although studies are ongoing in Senegal and Cote d'Ivoire, only South Africa presently implements carbon taxation in Africa (World Bank, 2020).
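The discounting and the externality-price ramp can be sketched as follows; the emission path used here is a hypothetical flat 1 MTCO2e/yr placeholder, not the model's trajectory:

```python
def present_value(value, year, base_year=2015, rate=0.10):
    """Discount a cash flow to base-year (2015) dollars at the study's 10%."""
    return value / (1 + rate) ** (year - base_year)

def carbon_price(year):
    """Externality cost: $5/MTCO2e in 2022 rising linearly to $15 by 2035."""
    if year < 2022:
        return 0.0
    return 5 + (15 - 5) * (year - 2022) / (2035 - 2022)

# Discounted carbon revenue for a hypothetical flat 1 MTCO2e/yr path
emissions_mt = 1.0
revenue = sum(present_value(carbon_price(y) * emissions_mt, y)
              for y in range(2022, 2036))
print(round(revenue, 1))  # USD millions, for this placeholder path only
```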
Higher carbon revenue could be obtained with increased carbon taxes, such as South Africa's average carbon tax of R120/ton (∼6.6 USD) or the $20/ton assumed in a study in Ghana (Awopone et al., 2017b). However, higher taxes would increase fuel costs, electricity costs, and eventually the cost of living in the short term, even though they would drive a faster uptake of renewables in the long term (Konrad-Adenauer-Stiftung, 2020; Organization for Economic Cooperation and Development, 2021).
Another policy consideration that could encourage the deployment of renewables in Cameroon is a green credit policy, a financial instrument under which banks are instructed to grant loans only to companies with strict environmental compliance. However, a study in China reported that this scheme had negative consequences for companies' ability to innovate (Zhang et al., 2022). The study raised three concerns: (i) highly polluting companies found it challenging under this policy to acquire additional credit and favorable loan interest rates in the short term, leaving insufficient funds for research on innovative, environmentally friendly products; (ii) the policy restricts highly polluting companies from obtaining green credits from commercial banks, pushing them toward commercial credit instead of debt financing; and (iii) green projects usually carry high risk, long cycles, and huge assessment costs, which is a barrier.
Environmental analysis
The LEAP tool categorizes energy emissions into two constituents: emissions from the demand side, i.e., emissions arising at the point of energy usage (refrigerators, waste disposal, land use, cars), which were ignored in this study; and emissions from the point of energy transformation, such as power generation, which is the only category considered here.
Greenhouse gas emissions under the BAU, stated, modified, and optimized scenarios increased from 0.82 MTCO2e in 2015 to 1.2, 0.9, 0.8, and 0.4 MTCO2e, respectively, in 2035. Over the 20-year study period, cumulative emissions of 28.1 MTCO2e were observed. Emissions rise persistently in all scenarios because of the expected increase in power generation capacity; however, cumulative emission savings of 4.2, 1.7, and 19.2 MTCO2e relative to the BAU scenario were observed under the stated, modified, and optimized scenarios, respectively, as shown in Figure 13. According to her NDCs, Cameroon is committed to reducing GHG emissions by 32 MTCO2e between 2010 and 2035 (Cameroon Ministry of External Relations, 2015). This reduction is supposed to come from all sectors, including agriculture, buildings, and transport. For the electricity generation sector, the NDCs mandate that GHG emissions increase by only ∼84% in the Mitigation scenario, compared to over 175% in the Reference scenario, both relative to 2010 values. The study finds instead that emissions relative to base-year values increased by 146%, indicating that Cameroon's current generation expansion trajectory has digressed from the intended path and that her NDC commitments cannot be met on this course. The expected emission reductions do appear in the stated, modified, and optimized scenarios, where emission growth is limited to 110, 98, and 49%, respectively, relative to 2015 values. Thus, it is important that the country increase its uptake of renewables if it intends to effectively decarbonize the generation sector and meet its net-zero-carbon commitments.
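Cumulative savings are simply the year-by-year gap between a scenario and BAU, summed over the study period. The sketch below uses a linear interpolation between the quoted 2015 and 2035 endpoints purely for illustration; the model's actual annual trajectories differ, which is why this toy total falls short of the reported 19.2 MTCO2e:

```python
def linear_path(start, end, n_years=21):
    """Illustrative linear emission path between 2015 and 2035 values."""
    return [start + (end - start) * i / (n_years - 1) for i in range(n_years)]

bau = linear_path(0.82, 1.2)         # MTCO2e endpoints from the results
optimized = linear_path(0.82, 0.4)

cumulative_savings = sum(b - s for b, s in zip(bau, optimized))
print(round(cumulative_savings, 1))  # → 8.4 MTCO2e under this toy path
```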
Sensitivity results
The results from the feedstock fuel sensitivity analysis (±50% variation) are presented in Table 6.
As expected, Table 6 shows that only the feedstock fuel cost component is directly affected. Changes in fuel cost have a significant impact on the optimized scenario (up to 80%), in contrast to the other scenarios. Notably, lower fuel costs lead to a reduction in capital and O&M costs while resulting in additional externality and fuel costs. This is due to the LEAP/OSeMOSYS least-cost framework favoring the addition of fossil-fuel generation plants over renewables: greater fossil-fuel power plant development results in higher emissions and, consequently, higher externality costs. Under higher fuel costs, this trend is reversed owing to the high costs of owning and operating fossil-fuel power plants. This emphasizes the impact of fossil-fuel prices and carbon taxes on the uptake of renewable energy technologies.

FIGURE: Cumulative GHG emission savings of the different scenarios with respect to the BAU scenario. Low fuel cost = 50% decrease in fuel costs; high fuel cost = 50% increase in fuel costs.
The sensitivity of the optimized scenario to changes in fuel costs, in contrast to the other scenarios, also highlights a weakness of cost optimization models. These models, like the LEAP/OSeMOSYS framework, rely on a series of parametric and structural assumptions (such as fuel costs and discount rates) in assessing scenarios. However, these assumptions are often narrow and subject to uncertainty that is not accounted for in the scenario development process (DeCarolis et al., 2012; Poncelet et al., 2016). Thus, cost optimization models are not recommended for assessing the clean energy transition, given that they fail to appropriately capture the systemic changes in real-world transitions (Trutnevyte, 2016).
Summary of results and perspectives
From the study, Cameroon's current generation trajectory, represented by the BAU scenario, is incapable of meeting her NDC objectives. Meeting the objective per the stated scenario, though at the least cost, does not result in the highest emission savings or in sustainable development. The modified scenario, though the most expensive, represents an ideal 25% renewable energy mix for Cameroon. In terms of capacity, cost, and emission reduction, the scenarios rank in descending order as the optimized, stated, and modified scenarios. The optimized scenario relies on biomass alone to fulfill the 25% renewable target of the NDCs and ignores other sources such as solar, hydro, and wind. Energy potentials in Cameroon are unevenly distributed (huge biomass and hydro resources are concentrated in the southern part, while high wind and solar resources are in the Northern part); hence, there is a need for diversity in energy supply. Biomass power generation is still very unpopular in Cameroon, with only a few pilot projects from agro-industrial companies [such as SOSUCAM, SODECOTON, SOCAPALM, MAISCAM, and the Cameroon Development Cooperation (CDC)], and allocating a huge share of penetration to biomass amid other more competitive and accessible technologies is irrational. In addition, there are no government policies that seriously consider the development of biomass power generation, which is why this sector has been inactive since the country's independence. This makes the optimum and stated scenarios unsuitable for Cameroon. The suitable scenario is therefore the modified scenario, since it rationally allocates the various RE technologies based on the accessibility of technology, the availability of RE resources, and the country's economic readiness.
In Cameroon, policy support for solar PV is increasing through a wave of economic incentives, such as a 10-year tax break for solar PV developers (Ngalame, 2022) and the exemption of imported solar accessories from value-added tax (VAT) (Cameroon National Assembly, 2011; Electricity Sector Regulatory Agency, 2015). In addition, solar PV paired with battery storage can compete in off-grid settings, substituting for the diesel generators used in some communities or offering backup supply where the grid is unreliable.
Due to the high cost of grid extension to some remote areas, the government, in the rural electrification master plan (AER, 2017), plans to develop local mini-grids using RE technologies. Since this initiative started, the government has continued to electrify off-grid communities with solar PV. No off-grid community in Cameroon is powered by a biofuel-based electricity system, and there are no signals that this is happening in the short term as solar and hydro are currently given more priority. This shows the government's reluctance to the advancement of biomass-based generation despite the enormous resource potentials. Table 2, solar PV has a lower capital cost (2,052 $/kW in 2015) than biomass generation (3,000 $/kW in 2015). However, biomass has a higher capacity factor (60%) than solar PV (25%), as shown in Table 4. However, the O&M costs of solar PV technology are lower than that of biomass generation. If the system LCOE is considered the basis for assessment, biomass with a lower LCOE will be the selected option because (i) the higher capacity factor of 60% compared to 25% for solar PV and (ii) the low costs of biomass feedstock in Cameroon which is almost free. It is worth noting that biomass feedstock is not free in developed countries. Nonetheless, the proposed "modified scenario" considered more solar PV than biomass. This is due to the maturity of the technology and the public perception in Cameroon. This is evidenced by the uptake of 14.19 MW solar PV, 0.3 MW small hydropower, and 0 MW biomass since 2015, despite a higher target in the other scenarios. This study equally supports the call for Cameroon to update her NDCs, like other countries have done during the COP26 in Glasgow, with more quantifiable statistics on the requirements for achieving this noble government ambition on combating climate change. 
Thus, the three fundamental questions facing the RE and electricity sectors of Cameroon, as stated in the introduction, have been examined by this study, and the insights are summarized as follows:
• The installed capacity by 2035 reaches 4.68 GW under the BAU scenario, which attains only a 12% RE share. This implies that Cameroon's current generation expansion trajectory is inadequate to meet her planned 25% RE target. Moreover, the current deployment trend is unlikely to achieve the planned RE technology mix; for example, the trend shows a greater focus on solar PV than on the biomass and small hydropower stated in the NDCs.
• Apart from the optimized scenario, the current generation expansion pathway (BAU scenario) is the most cost-efficient, given that an additional cost of at least $94 million is required in the alternative scenarios. However, optimal pathways are usually not ideal for decision-making (Trutnevyte, 2016).
• The modified scenario, which accounts for recent RE development trends (i.e., more solar PV and wind energy), attains the 25% RE target by 2035 at only $122.3 million in additional costs and with 6% more in emission savings compared to the BAU scenario.
From the findings, the current trajectory (BAU scenario) would be appropriate if decision-making were based exclusively on economic factors. However, economics was neither the basis for the Paris Climate Agreement nor for the country's ratification of the Paris Accord and the development of her NDCs. Thus, the current policy framework for RE development is inadequate, and a policy review incorporating the recommendations in the subsequent section is necessary to redirect the nation's development trajectory.
However, the study is subject to some limitations. Cameroon is characterized by increasing and frequent power outages due, in part, to the dilapidated nature of the power grid (ENEO, 2019). Like most countries in SSA, the country has resorted to backup generators (gensets) as a stopgap to deal with grid deficiencies.
The total installed capacity of existing gensets in Cameroon is estimated at ∼100 MW (IFC, 2019). Due to a number of factors, such as non-inclusion in official policy documents, suppressed demand, and paucity of data on the energy generation profiles caused by their spontaneous operation, the study does not integrate the energy generated from these gensets. Thus, the study results may not accurately reflect the nation's real energy generation, total emissions, or future generation expansion trajectory. Furthermore, perfect foresight was assumed in developing the scenarios in this study. However, the parametric and structural assumptions used in the scenarios, such as technology characteristics and fuel costs, are uncertain and subject to variability in time. Energy models, especially the cost-optimization models (optimized scenario), have been proven to have systematic biases, do not account for uncertainty, and consider narrow assumptions as presented in DeCarolis et al. (2012) and Trutnevyte (2016). Thus, the exact figures of this study ought to be interpreted with caution. Moreover, the findings from this study are meant to trigger stakeholder discussions and redirect policy in Cameroon.
Recommendations for sustainable power sector expansion in Cameroon
Prices of solar PV, wind energy systems, and battery storage systems continue to decrease rapidly. Data from IRENA indicate a drastic drop in the global weighted-average levelized cost of utility-scale solar PV, onshore wind, and battery storage of 77, 35, and 85%, respectively, between 2010 and 2018 (IRENA, 2018; Bloomberg, 2019). These cost trends bring fresh potential for the extensive deployment of renewables and an all-inclusive power sector decarbonization agenda that was not anticipated in previous policy formulation. Cameroon therefore needs to partner with countries that have a reputable history of manufacturing renewable energy components, sign win-win agreements on technology transfer, and have this equipment delivered at lower cost. This would drastically reduce the acquisition cost of the equipment for the utility company and private individuals, hence accelerating deployment in the country. Below are some policy proposals for a swift transition to renewables in Cameroon.
The need for all-inclusive sector planning
An appraisal of the institutional setting of the Cameroonian electricity sector, in the domain of both policy and operations, shows several gaps and possible misalignment between authority and accountability. This is most evident in the areas of power demand planning, PPA negotiation, rural electrification management, and tariff setting (Electricity Sector Regulatory Agency, 2015). In advanced power grids, utility operators usually have the duty of demand forecasting and the development of medium-term supply strategies in collaboration with the state regulator and other stakeholders. In Cameroon, the government, through policy, has imposed tough supply and access targets on the power utility (ENEO) (Castalia Advisory Group, 2015) while ironically reducing the scope of the flexibility of tariff regulation by utilities. This exposes the utility to a situation of unmet demand, leading to the risk of disrupting the utility's future expansion plans.
We recommend that the government protect ENEO from any risk that forestalls the company's future expansion plans by offering financial reparation guarantees or granting flexibility premiums sufficient to cover the impact of probable overcommitment on the electricity supply. The utility should, from time to time, conduct medium-term (5-10 years) demand forecasting studies as part of its duty for integrated RE system planning. Moreover, the overlapping authority among different ministries related to the power sector (Electricity Sector Regulatory Agency (ARSEL), 2020) has always been a source of potential conflict, which has hampered, in one way or another, the utility's ability to fully engage in both on-grid and off-grid electricity access development programs. The government should take steps toward establishing an autonomous Rural Electrification Agency or similar structure that would manage the numerous initiatives geared toward expanding power supply and, in particular, optimizing the allocation of funding. Simultaneously, the government should establish an organized approach toward the development of industrial zones in the country with the intention of matching industrial development with electricity requirements, laying emphasis on the impact on power demand. This approach could include extra support from an Industrial Zone Electricity Agency, with incentives for demand-side management from ENEO, the country's power utility. The value proposition embodied in robust, rational, and resilient plans for clean electricity generation must be widely advertised to potential international and national investors. This would further push the argument that electricity supply eases the creation of industries and hence economic growth.
Decentralizing the energy governance structure
The power sector in Cameroon operates under a highly centralized governance structure, at the top of which is the Ministry of Energy (Njoh et al., 2019), led by a minister. Even though the ministry has regional and divisional offices all over the country, all major decisions on the power sector are taken in Yaounde, the country's capital. Only routine decisions are made at the regional and district levels, ignoring the contribution of the local population to meeting their energy needs (Iweh et al., 2022b). The government should create technical working groups at regional and district levels on climate change and RE through the development of a nationwide action plan that clearly identifies the duties of stakeholders. These groups would support and build synergy among inter-sectorial technical commissions, with a regional secretariat serving as the entity authorized to coordinate energy initiatives. This would enhance collaboration between the districts and the central government's energy centers. At divisional levels, technical committees on energy initiatives could be established for inter-sectorial coordination, and divisional administrators should apply strategies that encourage community input. The communication channels and support systems of women and youth groups should be strengthened so that problems that need high-level intervention can be resolved and the progress of the program can be evaluated.
Adopt a continuous policy review and revision process
Cameroon has outlined a number of ambitions (Vision 2035) (MINEPAT, 2009b) that seek to improve economic growth, accelerate wealth creation, and improve the living standards of its citizens. Since economic development is associated with an increase in electricity supply, the obvious tendency is to pursue equally ambitious plans in the power sector. Nonetheless, there are concerns about how soon Cameroon's vision for 2035 can be realized (Ekeke and Nfornah, 2016), especially when one considers some volatile and uncertain external factors. Therefore, cautious planning in the country's energy policy that strongly supports renewable energy development through robust plans matched by measurable outcomes (reliable supply, affordable power, and improved electricity access) should be implemented. This reduces the risk of unjustifiable energy expenses that do not reflect actual outcomes.
Cameroon urgently needs to prioritize the development of programs that continuously monitor and adjust the timing of long-term expansion projects to avoid a mismatch between power production capacity and grid assets on the one hand and the actual evolving demand and load profile on the other hand. The weakness in Cameroon is that long-term projects rarely have a monitoring mechanism that evaluates progress with the intention of managing risk and delivering in a timely manner. To escape this conundrum, Cameroon should actively develop flexible and responsive power supply plans that establish demand-driven actions for capacity building with dedicated lead times of 1-3 years, with a procedure ready to monitor key indicators so as to determine which indicator needs more attention. This strategy will require the utility (ENEO) and government to liaise with energy developers to build win-win commercial models having Power Purchase Agreements (PPAs) with enhanced flexibility and clear implementation processes. These models should be realizable both in terms of timing and estimated budget. The renewable energy goals already set out, if properly monitored, will scale up the deployment of RE in the power generation mix.
The establishment of Feed-in-Tariffs (FiTs)
Countries with a successful track record of RE development have adopted various economic instruments to enhance growth. Cameroon has limited incentives geared toward improving RE penetration in the country (Njoh et al., 2019). The Cameroonian electricity market requires a FiTs scheme, which would help attract private sector involvement. The FiTs scheme makes RE projects economically feasible by compensating for the difference in the cost of energy between conventional and RE systems. This is done either through government subsidies or tax levies on consumers, with the government later shifting the financial burden directly or indirectly to users or the public. Notwithstanding the financial implication for either government or consumers, RE development would improve social welfare and lessen environmental threats to citizens.
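The compensation logic described above can be sketched in a few lines. All prices below are invented for illustration and do not reflect any actual Cameroonian tariff:

```python
def fit_premium(lcoe_re: float, lcoe_conventional: float) -> float:
    """Per-kWh premium needed to bridge the cost gap between a renewable
    generator and conventional supply; zero if renewables are already
    cheaper."""
    return max(0.0, lcoe_re - lcoe_conventional)

# Hypothetical costs in USD/kWh.
premium = fit_premium(lcoe_re=0.12, lcoe_conventional=0.08)

# The IPP would receive the contracted conventional price plus the
# premium, funded by subsidies or levies as described in the text.
ipp_revenue_per_kwh = 0.08 + premium
```

As renewable costs fall toward or below conventional costs, the premium shrinks to zero, which is why the cost trends cited earlier make such a scheme progressively cheaper to sustain.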
The initial phase could involve the government soliciting funds through the Global Energy Transfer Feed-in-Tariffs (GET-FiTs) program, a policy scheme in which developed nations fund RE projects in developing countries for the application of FiTs. The government could request funding from international institutions or foreign countries (Germany, Japan, Belgium, Canada, China, etc.) that are actively doing business in Cameroon. This would help Cameroon secure a FiTs premium while reducing the burden, and independent power producers (IPPs) with RE generators would sell electricity at a cost pre-contracted with the utility company. Producers of electricity from renewable sources are then compensated with premiums from the government and foreign funders.
Local content development for renewable power generation equipment
The cost of RE deployment is further increased in developing countries like Cameroon by the need to import generation equipment, which acts as a barrier to the rapid growth of renewable energy projects. However, there are already some community micro power projects in Cameroon, such as that of Baleng (Mungwe et al., 2016), where most of the power equipment (hydro turbine, penstock, etc.) was locally manufactured by the villagers. There is potential for developing local manufacturing competencies in the country, and scaling up the percentage of local content for renewables can help reduce costs and contribute to rapid deployment. It is anticipated that the existing local content in the electricity value chain will grow as knowledge and expertise improve, but this greatly depends on the government's commitment to supporting this initiative. Some prospects for local content development exist in inverters and components like LED lamps, as a few universities within the country are already making prototypes that could be scaled. In addition, there are substantial options for more Cameroonian content in operations and maintenance, installation, and related services. These aspects are crucial to the speedy development of RE in Cameroon.
Renewable energies are also a factor of social transformation through training and employment, while granting a degree of economic autonomy and energy security to local communities. Better supervision of this sector would make a substantial contribution over the next few years to the creation of a new economic model by allowing the development of scientific, technological, and industrial know-how, the impact of which will be perceptible in the medium and long term in Cameroon. The government should focus on technology transfer, creating the best conditions for long-term uptake of the new technologies by tailoring them to local needs and engaging local actors and business partners to build technical and market development capacity.
Emphasis on the development of off-grid systems
Cameroon aims to rapidly expand access to electricity through the deployment of renewables, targeting 25% RE penetration by 2035 compared to the current <1% (MINEE, 2015). The challenge of realizing these targets is daunting without suitable measures. Any genuine approach to meeting this access target will need to emphasize off-grid electrification solutions, which will certainly require intensive mobilization for off-grid power supply. The present off-grid development plans need further clarification in terms of choice of technology, funding models, tariff system, and relationship management among private developers. However, an increase in off-grid expansion poses some challenges with regard to off-grid generation and the pursuit of simultaneous plans for grid expansion. One pertinent issue is the possible future of off-grid users when the main grid arrives, especially when off-grid assets have not been completely paid back at the time of arrival of the main grid. The difficulty at this point is to select suitable compensation mechanisms that help repay the assets.
In addition, considerations for grid power supply will vary significantly across the country, since the various regions of Cameroon have dissimilar resource potentials, different load profiles, and variable needs for grid improvement and backup electricity supply, with different mixes of on-grid and off-grid electricity access. The relationship between grid developers and off-grid developers will require careful management at both regional and local levels. A common national strategy toward investing in the sector, requesting a connection, managing loads, and collecting tariffs is perhaps not feasible. This poses a potential challenge to the idea of applying a universal electricity tariff and combined subsidies and financial models. Other issues include power losses, mostly in the regional and local distribution networks, and the method of tariff collection in villages, which will require a robust local management scheme.
Access to carbon credits and mobilization of funds
The Clean Development Mechanism (CDM) is a funding source for Cameroon. It was set up by the United Nations through the Kyoto Protocol to finance environmentally friendly projects. Certainly, Cameroon can take advantage of the CDM in financing renewable energy projects. This economic instrument permits developed countries to meet part of their emission reduction commitments by financing projects in developing countries that reduce GHG emissions, since this is believed to be less expensive in developing countries. The fundamental requirement of the CDM is that the invested projects should be in line with the host country's development priorities. The CDM considers a wide range of technologies, among which are renewable energy technologies. Cameroon ratified the UNFCCC in October 1994, the Kyoto Protocol in July 2002, and most recently the Paris Climate Accord in 2015. In 2006, Cameroon established a national commission for the CDM, while the Ministry of Environment and Nature Protection instituted a national agency to execute CDM projects (Ngnikam, 2009). To date, only the Hygiene and Sanitation Company of Cameroon (HYSACAM), which is in charge of the collection and treatment of municipal solid waste, has benefited from this funding source. Another related funding source Cameroon can benefit from is the Carbon Finance for Sustainable Energy in Africa (CF-SEA), which is funded by joint World Bank-United Nations support and supervised by the United Nations Environment Programme (UNEP). The CF-SEA program has supported several CDM clean energy projects in Cameroon. Cameroon has benefited from ∼60 MW of CDM projects with an emission reduction potential of ∼2 million tons of CO2 (REEGLE, 2015). Another potential source of funding for Cameroon is the loss and damage fund adopted during COP27, where developing nations with high vulnerability to the climate crisis will be supported (United Nations Environment Programme, 2022).
These funds can be used in the deployment of clean energy technologies for power generation.
Conclusion
Apart from reducing emissions, renewable energy also has potentially positive impacts on sustainable development, an argument widely supported in the economic literature. If the rural-urban dichotomy in electricity access rates in Cameroon and other countries in SSA could be reduced, especially through RE technologies, it would go a long way toward improving rural productivity, alleviating poverty, and reducing unemployment and rural-urban migration.
The study is a synthesis of quantitative analysis that explores the pathway to renewable energy transition in Cameroon's power sector through a policy perspective. The appraisal of power sector initiatives shows several conflicting energy generation ambitions by different agencies in the country, a clear indication that there is sadly limited collaboration among related ministries that could cooperatively tackle the energy transition. The study posits that government policies should endeavor to reduce operational barriers to renewable energy deployment, particularly by empowering local skill acquisition through learning-by-doing, as this will increase adoption rates. Policy mechanisms that enhance the implementation of FiTs, the unbundling of the over-centralized energy governance structure, and stakeholder cooperation create a favorable environment for renewables. The Cameroon government has recognized that regional cooperation is crucial in galvanizing efficiencies and economies of scale that result in the deployment of renewables in a coordinated way and is currently working on the Chad-Cameroon power project. This approach is effective due to Cameroon's huge renewable energy potential and her willingness to be involved in the large-scale deployment of shared renewable resources for electricity generation. However, a lot still needs to be done in the area of embracing an integrated approach to transboundary issues, such as trade, regional setup, regulatory frameworks, and policies. The establishment of structures that clarify these cross-border issues would allow the countries within the sub-region to benefit from coordinated access to regional renewable resources at a reasonable cost. Projections from IRENA (2015) have indicated that RE project deals in Africa will provide power at some of the cheapest costs globally.
Therefore, the abundance of solar, biomass, and hydropower in Cameroon renders these technologies economically competitive and promising for meeting emission reduction commitments in the electricity sector, especially with the rapidly decreasing costs of renewable technologies. The institutional setup of the sector in Cameroon needs to improve its ability to plan projects properly so that future investments are executed in a harmonized and economically viable manner. The reviewed power ambitions in some policy documents, such as the REMP, the energy sector development plan, and the NDCs, show ambitious yet contradictory targets within overlapping time periods. The government needs to set up a harmonized project coordination unit to develop an updated Harmonized Energy Master Plan (HEMP), since the existing plans are obsolete and no longer representative. The HEMP should conduct an assessment of the planned generation expansion projects, with an emphasis on those that were unsuccessful within set time scales, and provide solutions to fast-track their implementation. Future research should consider energy models that account, to the greatest extent possible, for uncertain parameters such as fuel costs, energy demand, and fuel availability. Future studies could also consider grid integration studies and power system analysis to ensure the safe and reliable integration of renewables into the grid.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
"Environmental Science",
"Engineering",
"Economics"
] |
From Chickens to Humans: The Importance of Peptide Repertoires for MHC Class I Alleles
In humans, killer immunoglobulin-like receptors (KIRs), expressed on natural killer (NK) and thymus-derived (T) cells, and their ligands, primarily the classical class I molecules of the major histocompatibility complex (MHC) expressed on nearly all cells, are both polymorphic. The variation of this receptor-ligand interaction, based on which alleles have been inherited, is known to play crucial roles in resistance to infectious disease, autoimmunity, and reproduction in humans. However, not all the variation in response is inherited, since KIR binding can be affected by a portion of the peptide bound to the class I molecules, with the particular peptide presented affecting the NK response. The extent to which the large multigene family of chicken immunoglobulin-like receptors (ChIRs) is involved in functions similar to KIRs is suspected but not proven. However, much is understood about the two MHC-I molecules encoded in the chicken MHC. The BF2 molecule is expressed at a high level and is thought to be the predominant ligand of cytotoxic T lymphocytes (CTLs), while the BF1 molecule is expressed at a much lower level if at all and is thought to be primarily a ligand for NK cells. Recently, a hierarchy of BF2 alleles with a suite of correlated properties has been defined, from those expressed at a high level on the cell surface but with a narrow range of bound peptides to those expressed at a lower level on the cell surface but with a very wide repertoire of bound peptides. Interestingly, there is a similar hierarchy for human class I alleles, although the hierarchy is not as wide. It is an open question whether KIRs and ChIRs recognize class I molecules with bound peptide in a similar way, and whether the fastidious-to-promiscuous hierarchy of class I molecules affects both T and NK cell function. Such effects might be different from those predicted by the similarities of peptide binding based on peptide motifs, as enshrined in the idea of supertypes. Since the size of the peptide repertoire can be very different for alleles with similar peptide motifs from the same supertype, the relative importance of these two properties may be testable.
INTRODUCTION
Molecules encoded by the major histocompatibility complex (MHC) of jawed vertebrates play central roles in immune responses as well as other important biological processes (1). Among these molecules are the classical class I molecules, which are defined by presentation of peptides on the cell surface, high and wide expression and high polymorphism. There are also non-classical class I molecules that lack one or more of these properties; in this report, only the classical class I molecules will be considered and will be abbreviated MHC-I.
MHC-I molecules bound to appropriate peptides on a cell surface are ligands for thymus-derived (T) lymphocytes through the T cell receptor (TCR) composed of α and β chains (along with the co-receptor CD8), with the outcome generally being death of the target cell through apoptosis (2). The cytotoxic T lymphocytes (CTLs) are important agents for response to infectious pathogens (particularly viruses) and cancers. The repertoire of TCRs is formed by somatic mutational mechanisms in individual cells and is vast and cross-reactive, so that in principle any MHC molecule bound to any peptide could be recognized (3). In fact, selection of T cells in the thymus strongly affects the TCR repertoire, but, to a first approximation, it is the polymorphism of the MHC molecules along with self-peptides that determines thymic selection, presentation of peptides, and thus immune responses (4).
However, many MHC-I molecules are also ligands for natural killer (NK) cells through a variety of NK receptors (NKRs), with the potential outcomes including cytokine release and target cytotoxicity (2). Analogous to T cell education based on the MHC molecules and self-peptides present in an individual, the responses of NK cells depend on the particular MHC molecules present during development, a phenomenon referred to as education, licencing, or tuning (5). Both NKRs and MHC-I ligands are polymorphic, with the interactions of particular receptors with particular ligands varying markedly in strength.
Since the MHC and the regions encoding NKRs are located on different chromosomes, the genetic result is epistasis, which in humans and mice affects infectious disease, autoimmunity, and reproduction. Indeed, there appears to be antagonistic selection between immune responses and reproduction in humans (6).
MHC-I molecules (7) generally bind short peptides, 9-11 amino acids in length, along a groove between two α-helices above a β-pleated sheet. The peptides are tightly bound at the N- and C-termini by eight highly-conserved amino acids in pockets A and F, so that longer peptides bulge in the middle. Specificity of binding to different MHC-I alleles arises from peptide interactions with the polymorphic amino acids that line the groove, often with deeper pockets B and F being most important, but with other pockets being important in some alleles. The important pockets typically bind just one amino acid or a few amino acids with side chains that have very similar chemical properties, although some promiscuous pockets allow many different amino acids. The particular amino acids generally allowed to bind in the important pockets, the so-called anchor residues, give rise to peptide motifs for MHC-I alleles. Many alleles have been grouped into several supertypes (8) based on similarities in peptide motifs and in polymorphic amino acids lining the pockets. Some motifs are quite stringent in their requirements while others are more permissive, leading to the concepts of fastidious and promiscuous MHC-I alleles with differently sized peptide repertoires (9).
TCRs recognize the side chains of peptide residues that point up and away from the peptide-binding groove, mostly in the middle of the peptide (10). It has long been known that the particular peptides bound to MHC-I molecules could influence interaction with inhibitory NKRs (11-14), which eventually was refined to NKR interaction with side chains near the end of the peptide (typically residues 7 and 8 of a 9mer) (15,16). Moreover, both viral and bacterial peptides have been reported to affect recognition by activating NKRs (17,18).
Among the questions that will be considered in this report are the extent to which the size of the peptide repertoire may influence the binding NKRs, and the extent to which MHC-I alleles within a supertype have the same sized peptide repertoire. In order to approach these questions, it is appropriate to review what is known about peptide repertoires, beginning with chicken class I molecules.
THE CHICKEN MHC: A SIMPLE SYSTEM FOR DISCOVERY
The vast majority of what is known about the MHC and MHC molecules was discovered in humans and biomedical models like mice (1). In typical placental mammals (Figure 1), the MHC is several megabase pairs (Mbp) of DNA with hundreds of genes, separated into haplotype blocks by several centimorgans (cM) of recombination. The few MHC-I genes located in the class I region are separated from the few class II genes in the class II region by the class III region, which contains many unrelated genes. Some genes involved in the class I antigen processing and presentation pathway (APP) are also located in the MHC, including two genes for inducible proteasome components (LMPs or PSMBs), two genes for the transporter for antigen presentation (TAP1 and TAP2) and the dedicated chaperone and peptide editor tapasin (TAPBP). However, these class I APP genes are located in the class II region and are more-or-less functionally monomorphic (19-21), working well for nearly all loci and alleles of MHC-I molecules. In humans, the three loci of MHC-I molecules may not be interchangeable: HLA-A and -B present peptides to CTLs with only some alleles acting as NKR ligands, while HLA-C is less well-expressed and mostly functions as an NKR ligand (22,23). There is also evidence that HLA-A and -B may do different jobs, since HLA-B is more strongly associated with responses to rapidly evolving small (RNA) viruses, while HLA-A may be more involved with large double-stranded DNA viruses (24).
In contrast, the chicken MHC is small and simple (Figure 1), and evolves mostly as stable haplotypes (9,25). The BF-BL region of the B locus is less than 100 kb and contains two MHC-I genes (BF1 and BF2) flanking the TAP1 and TAP2 genes, with the TAPBP gene sandwiched between two class II B genes nearby, and with the class III region on the outside. There is evidence only for historic recombination within this region, with no examples of recombinants from over 20,000 informative progeny in deliberate matings, although there is clear recombination just outside (in the so-called TRIM and BG regions) (26-28). As a result, alleles of these strongly-linked genes stay together for long periods of time, so that the APP genes are all highly polymorphic and co-evolve with the BF2 gene (9,29). As an example, the peptide translocation specificity of the TAP is appropriate for the peptide binding specificity of the BF2 molecule encoded by that haplotype (30,31). Apparently as a result, the BF2 molecule is far more expressed and also more polymorphic than the BF1 molecule (32,33). Thus far, the evidence is that the BF2 molecule presents peptides to CTLs, while BF1 functions as a ligand for NK cells (34).
This simplicity of the chicken MHC can make it easier to discover phenomena that are difficult to discern in the more complicated MHC of humans and other placental mammals. For example, there are many examples of strong genetic associations of the B locus (and in some cases, the BF-BL region) with responses to economically-important diseases, including Marek's disease caused by an oncogenic herpesvirus, infectious bronchitis caused by a coronavirus and avian influenza (9,35). In contrast, the strongest associations of the human MHC are with autoimmune diseases, with the strongest associations with infectious disease being with small viruses like HIV (1). One hypothesis for this perceived difference is the fact that the human MHC has a multigene family of class I molecules which confer more-or-less resistance to most viral pathogens (reading out as weak genetic associations), while the chicken MHC has a single dominantly-expressed class I molecule, which either finds a protective peptide or not (reading out as strong genetic associations) (9,36).
Other examples of discovery from the apparent simplicity of the chicken MHC will be described below, but it has become clear that other aspects of avian immunity may be very complex, for instance the chicken NKR system.
PROMISCUOUS AND FASTIDIOUS CLASS I ALLELES IN CHICKENS
One of the discoveries that was facilitated by the presence of a single dominantly-expressed chicken class I molecule is an apparent inverse correlation between peptide repertoire and cell surface expression, along with strong correlations with resistance to infectious diseases. Some so-called promiscuous BF2 alleles bind a wide variety of peptides but have relatively low expression on the cell surface, while other so-called fastidious BF2 alleles bind a much more limited variety of peptides but have higher cell surface expression (9,32,37,38).
It is not clear whether there is a hierarchy or two general groups of alleles, or to what extent the cell surface expression levels are exactly an inverse of the peptide repertoire. The analysis of expression level by flow cytometry is quantitative, but the exact levels vary for different cell types. The peptide repertoires are far more difficult to quantify, with even immunopeptidomics, which fairly accurately counts the numbers of different peptides by mass spectrometry, suffering from the drawback that the abundance of any given peptide is laborious to establish definitively. However, for certain well-studied standard B haplotypes, the peptide motifs based on gas-phase sequencing and on immunopeptidomics, as well as the pockets defined by crystal structures, give qualitative rationales for the peptide repertoires (9,32,37-41). The peptide translocation specificities of the TAP alleles from the few haplotypes examined provide additional support (30,31).
FIGURE 1 | The chicken MHC (BF-BL region) is smaller and simpler than the human MHC (HLA locus), with a single dominantly-expressed MHC-I molecule due to co-evolution with peptide-loading genes. Colored vertical lines or boxes indicate genes, with names above; thin vertical lines indicate region boundaries, with names above or below; location is roughly to scale, with the length of approximately 100 kb indicated. Thickness of the upward-pointing arrows indicates the level of expression; co-evolution between the TAP genes and the BF2 class I gene is indicated by a curved arrow beneath the genes. Genes from the class I system, red; the class II system, blue; the class III or other regions, green; solid colors indicate classical genes, while striped colors indicate genes involved in peptide loading. Figure from (9).
The high expressing fastidious alleles typically bind peptides through three positions with only one or a few amino acids allowed (32,39-41). For instance, the BF2 allele from the B4 haplotype (BF2*004:01) binds almost entirely octamer peptides with three acidic residues: Asp or Glu at positions P2 and P5, and Glu (with very low levels of hydrophobic amino acids) at position P8, which fits the basic amino acids forming the so-called pockets B, C, and F in wire models and the crystal structure. BF2*012:01 binds octamer peptides with Val or Ile at position P5 and Val at position P8, but with a variety of amino acids at position P2, which is an anchor residue as seen by structure. BF2*015:01 binds peptides with Arg or Lys in position P1, Arg in position P2 and Tyr (with very low levels of Phe and Trp) at positions P8 or P9. In fact, these BF2 alleles with fastidious motifs can bind a wider variety of peptides in vitro than are actually found on the cell surface (31,39); the TAP translocation specificities are more restrictive than the BF2 peptide binding specificities.
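The fastidious motifs above are specific enough to express as simple sequence patterns. The Python sketch below encodes them as regular expressions, a deliberate simplification that keeps only the dominant anchors and ignores the rare alternative residues mentioned in the text; real peptide-binding prediction is far more involved:

```python
import re

# Anchor-residue motifs paraphrased from the text (positions are 1-based):
#   BF2*004:01 - octamers with Asp/Glu at P2 and P5 and Glu at P8
#   BF2*012:01 - octamers with Val/Ile at P5 and Val at P8
#   BF2*015:01 - 8-9mers with Arg/Lys at P1, Arg at P2, Tyr at the C-terminus
MOTIFS = {
    "BF2*004:01": re.compile(r"^.[DE]..[DE]..E$"),
    "BF2*012:01": re.compile(r"^....[VI]..V$"),
    "BF2*015:01": re.compile(r"^[RK]R.{5,6}Y$"),
}

def matching_alleles(peptide: str) -> list[str]:
    """Return the fastidious BF2 alleles whose motif the peptide fits."""
    return [allele for allele, motif in MOTIFS.items() if motif.match(peptide)]

print(matching_alleles("ADAADAAE"))  # hypothetical octamer; fits only BF2*004:01
```

Such stringent patterns admit only a tiny fraction of random peptides, which is one way to see why these alleles are called fastidious, whereas the promiscuous alleles described next cannot be captured by a single simple pattern.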
In contrast, it would appear that a variety of binding mechanisms can lead to low expressing alleles with promiscuous motifs. BF2*021:01 has certain positions with small amino acids leading to a wide bowl in the centre of the binding groove, within which Asp24 and Arg9 can move, remodelling the binding site to accommodate a wide variety of 10mer and 11mer peptides with covariation of P2 and Pc-2 (two from the end), along with hydrophobic amino acids at the final position. Interactions between P2, Pc-2, Asp24, and Arg9 allow a wide range of amino acid side chains in the peptide, with at least three major modes of binding (37,38). Analysis of peptide translocation in B21 cells shows that the specificity is less stringent than that of the BF2*021:01 molecule (31). In another mechanism, BF2*002:01 binds peptides with two hydrophobic pockets for P2 and Pc, but the pockets are wide and shallow, allowing a variety of small to medium-sized amino acid side chains (38). BF2*014:01 also has two pockets, accommodating medium to large-sized amino acid side chains at P2 and positive charge(s) at Pc (38). Binding many different hydrophobic amino acids allows a promiscuous motif, since hydrophobic amino acids are so common in proteins.
Another interesting feature of chicken class I molecules is C-terminal overhang of peptides outside of the groove. In placental mammals, one of the eight invariant residues that bind the peptide N- and C-termini is Tyr84, which blocks the egress of the peptide at the C-terminus. However, in chickens (and all other jawed vertebrates outside of placental mammals), the equivalent residue is an Arg (42,43), and this change allows the peptide to hang out of the groove, as has been found in crystal structures of BF2*012:01 and BF2*014:01 (28,30). At least one low expressing class I allele with an otherwise fastidious motif shows many such overhangs (C. Tregaskes, R. Martin and J. Kaufman, unpublished), suggesting that the TAP translocation specificity (or perhaps the TAPBP peptide editing) controls the extent to which overhangs are permitted. Interestingly, the equivalent position in class II molecules is also Arg, allowing most peptides to hang out of the groove, with some of these overhangs recognized by TCRs (40,43,44). Thus, the presence of such overhangs may be a third mechanism for chicken class I promiscuity, and may affect both TCR and NKR recognition, as do peptide side chains within the groove in humans (10,16).
The reason for the inverse correlation of peptide repertoire with cell surface expression is not clear. Among the possibilities are biochemical mechanisms, which are highlighted by the fact that all chicken BF2 alleles have nearly identical promoters, and that the amount of protein inside the cell does not differ much, but that the amount that moves to the cell surface is greater for fastidious than for promiscuous alleles (31). Thus, the amount of time spent associated with TAPBP and TAP in the peptide-loading complex (PLC) could be a mechanistic reason. Another potential biochemical mechanism might be stability and degradation; promiscuous alleles from cells are overall less stable than fastidious alleles in solution, but pulse-chase experiments of ex vivo lymphocytes show no obvious difference in turnover (31). As a second reason, the correlation could arise from the need to balance effective immune responses to pathogens and tumours with the potential for immunopathology and autoimmunity. A third possibility is the need to balance negative selection in the thymus with the production of an effective naïve TCR repertoire: more peptides presented would mean more T cells would be deleted, but since TCR signal depends on the number of peptide-MHC complexes, lower class I expression would mean fewer T cells would be deleted (9,45). If true, the expression level would be the important property, since it would mirror the need for an effective T cell repertoire.
What makes this inverse correlation so interesting is the association with resistance and susceptibility to economically important pathogens. A correlation with low cell surface expression was first noticed for resistance to the tumours arising from the oncogenic herpesvirus that causes Marek's disease, and later understood to correlate with a wide peptide repertoire (9,36-38). Important caveats include the fact that the association of the B locus with resistance to Marek's disease, while still true for experimental lines, has not been found for current commercial chickens (46-48); an explanation may be the fact that poultry breeders have strongly enriched for low expressing class I alleles in their flocks so that the MHC no longer has a differential effect (C. Tregaskes, R. Martin and J. Kaufman, unpublished). Another caveat may be that there are various measures of the progress of Marek's disease, and the BF-BL region correlations may not be the same for all of them. A third caveat is that the BF-BL region is composed of strongly-linked genes, so that the gene (or genes) responsible for resistance are not yet definitively identified; an example is the evidence for the effect of the BG1 gene (49). An important counter to these caveats is that there is evidence that MHC haplotypes with low-expressing class I alleles confer resistance to other infectious viral diseases, including Rous sarcoma, infectious bronchitis and avian influenza (9,50-52). Importantly, there is little recognized evidence that the high expressing alleles provide an important immune benefit to chickens.
PROMISCUOUS AND FASTIDIOUS MHC MOLECULES IN HUMANS: GENERALISTS AND SPECIALISTS
Having clear evidence of the inverse correlation of cell surface expression level with peptide repertoire of chicken BF2 alleles and infectious disease resistance, it was natural to ask whether these relationships are fundamental properties of class I molecules, as opposed to some special feature of chicken class I molecules. Such evidence for human HLA-A and B alleles with hints towards potential mechanisms was not hard to find.
Progression of human immunodeficiency virus (HIV) infection to frank acquired immunodeficiency syndrome (AIDS) is one of the best examples of an association of infectious disease with the human MHC. Some HLA alleles lead to fast progression and death, while others result in very slow progression, for which the individuals can be called elite controllers (53,54). The number of peptides from the human proteome predicted to bind four HLA-B alleles was compared to the odds ratio for AIDS, and the most fastidious alleles were found to be the most protective. Although the correlation with disease resistance was the reverse of what was found for chickens (45), flow cytometric analyses of these four alleles on ex vivo blood lymphoid and myeloid cells showed that these human class I molecules had the same inverse correlation between peptide repertoire and cell surface expression as in chickens (38).
A mechanism of resistance by such elite controlling HLA-B alleles has been reported: the presentation of particular HIV peptides to CTLs, which the virus can mutate to escape the immune response, but only at the cost of much reduced viral fitness. For such alleles, the virus is caught between a rock and a hard place (55,56). The protection afforded to the human host by binding and presenting such special peptides led to a hypothesis (9,38) in which the promiscuous class I alleles act as generalists, providing protection against many common and slowly evolving pathogens (as in chickens), while the fastidious alleles act as specialists, with particular alleles providing protection against a given new and quickly evolving pathogen (as in humans). There are some caveats to this story. One is that the predictions are only an approximation of reality, based on benchmarking the predictions made by such algorithms against experimental data from immunopeptidomics (57). Another is that other explanations are possible; a study calculating the number of peptides predicted for class II alleles concluded that promiscuous alleles would arise based on the number of pathogens in particular environments (58).
Another study determined the number of peptides from dengue virus predicted to bind 27 common HLA-A and -B alleles, concluding that there is a wide variation in peptide repertoire that is inversely correlated with stability (59), similar to what was found for chicken class I molecules. Three of the four HLA-B alleles analyzed in the human proteome study were also analyzed in this dengue study and followed the same hierarchy (Figure 2). Interestingly, more HLA-B alleles were found at the fastidious end of the spectrum and more HLA-A alleles at the promiscuous end, particularly HLA-A2 variants. It would appear that HLA-A and -B alleles have a range of peptide repertoires, but perhaps not as wide as in chickens. The fastidious chicken class I molecules typically have three fastidious anchor residues compared to two for human class I molecules, while the promiscuous HLA-A2 variants each allow two or three hydrophobic amino acids compared to five or more for BF2*002:01. Unlike chicken MHC-I molecules, peptide overhangs from human MHC-I molecules are relatively rare and require major re-adjustments of the peptide-binding site, such as movement of the α-helices that line the groove (60,61), so this is not likely to be a general mechanism for promiscuity in humans.
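The inverse relationships discussed above (repertoire size versus expression or stability) are rank correlations across alleles. As a minimal sketch, with entirely made-up numbers standing in for the published allele data, a tie-free Spearman rank correlation can be computed by hand:

```python
# Hypothetical illustration only: predicted repertoire sizes and relative
# cell-surface expression for four unnamed alleles, showing the kind of
# inverse rank correlation described in the text.

def ranks(values):
    """Rank values from 1 (smallest) upward; assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation coefficient for tie-free data."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

repertoire = [1200, 4500, 9800, 20000]   # predicted binders (made up)
expression = [100, 70, 40, 15]           # relative surface level (made up)
print(spearman(repertoire, expression))  # -1.0: perfectly inverse ranking
```

Real analyses would use measured repertoires and expression levels for named alleles, and a tie-aware implementation.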
MECHANISMS FOR ESTABLISHING PEPTIDE REPERTOIRES IN HUMAN CLASS I MOLECULES
The question arises whether the peptide motif determines the peptide repertoire for human class I molecules, given that the APP genes for human class I molecules are more-or-less functionally monomorphic, so that all class I alleles will get a wide and promiscuous set of peptides and peptide editing. As mentioned above, supertypes of MHC-I molecules have been defined based on shared peptide motifs and on amino acids lining the pockets of peptide binding sites (8). A comparison of the peptide repertoires presented in the dengue study (59) with such supertypes (Figure 2) shows that some peptide motifs correlate well with peptide repertoire (for example A2, A3, etc.); for example, the alleles falling within the A2 supertype are all found at the promiscuous end of the repertoire. However, alleles from other supertypes (for example A1, A24, and B7) are found across the spectrum of repertoires. Thus peptide motifs do not equate with peptide repertoires, giving the possibility of discriminating between the two designations in terms of contribution towards disease.

FIGURE 2 | The predictive peptide repertoires for 27 common HLA-A and -B alleles [from (59), Copyright 2013. The American Association of Immunologists, Inc.] compared to the supertypes of these alleles [from (8)] show that the peptide motifs do not correlate well with peptide repertoire for some supertypes.

Kaufman, Peptide repertoire versus peptide motif. Frontiers in Immunology | www.frontiersin.org | December 2020 | Volume 11 | Article 601089
A study on the dependence of cell surface expression of HLA-B alleles on TAPBP (also known as tapasin) may give a clue as to the discrepancy between peptide motif and peptide repertoire (62). There are many reports of particular pairs of alleles varying in TAPBP-dependence, and positions in the α2 and α3 domains have been identified that affect this dependence. A hierarchy of dependence was described for 27 HLA-B alleles (63), and a rough correlation with the hierarchy of peptide repertoire was found: fastidious alleles were by and large more dependent on TAPBP for cell surface expression, while promiscuous alleles were not (9). Such dependence would fit with the stability of class I molecules mentioned above: peptide editing by TAPBP leads to the fastidious class I molecules retaining only the peptides that have the highest affinity, while promiscuous class I molecules would bind and move to the cell surface with any peptide of minimal affinity. Moreover, the authors concluded that tapasin-independent alleles were linked to more rapid progression from HIV infection to death from AIDS (63).
Interestingly, this dependence on TAPBP correlated with the ease of refolding with peptides in vitro (in the absence of TAPBP), with both human and chicken promiscuous alleles refolding more easily (38,62). Whether chicken class I alleles have the same dependence in vivo is not yet clear, since chicken TAPBP is highly polymorphic, with the TAPBP and BF2 alleles in each haplotype likely to have co-evolved (64).
HLA-C AND BF1: FLIES IN THE OINTMENT?
The fact that there are relationships among cell surface expression, peptide repertoire and resistance to infectious disease, both for BF2 in chickens and for HLA-A and -B in humans, suggests that these are fundamental properties of MHC-I molecules. However, the evidence for HLA-C in humans and BF1 in chickens, which have some intriguing similarities, may not fit this emerging paradigm (9).
HLA-C is the result of an ancient gene duplication of HLA-B, but the two differ in several important ways (22,23). Both HLA-B and -C molecules are polymorphic, are up-regulated upon inflammation, and bind and present peptides to αβ T cells. However, HLA-B molecules are expressed at the RNA, protein and cell surface levels as highly as HLA-A molecules. HLA-B molecules are major CTL ligands on virally-infected cells, but some alleles carrying the Bw4 epitope on the α1 helix of the peptide-binding domain are also recognized by NKRs, specifically the killer-cell immunoglobulin-like receptors with three extracellular domains (3D KIRs).
In contrast, HLA-C molecules are expressed at a low RNA level and are found at about 10% of the level of HLA-A or -B molecules on the surfaces of cells where all three loci are expressed. However, they are also expressed on extravillous trophoblasts (EVT) in the absence of HLA-A and -B molecules. HLA-C alleles are known as important NKR ligands, carrying either C1 or C2 epitopes on the α1 helix of the peptide-binding domain, which are recognized by different KIRs with two extracellular domains (2D KIRs). Moreover, different HLA-C alleles have different RNA and cell surface protein levels, with higher expression correlated with slow progression from HIV infection to AIDS, and with some evidence to suggest that this correlation is due to recognition by CTLs (65,66). There have been no experiments reported to explicitly test the relationship of peptide repertoire and cell surface expression of HLA-C alleles, but the determination of cell surface expression has been reported to be very complex, including effects of promoters, miRNA, assembly, stability and peptide-binding specificity (67).
Much less is known about the chicken BF1 gene, but it has some similarities to HLA-C. BF1 molecules are expressed at a much lower level than BF2 molecules, at the level of RNA, protein and antigenic peptide (32,33). There are far fewer alleles of BF1 than of BF2, with ten-fold less BF1 RNA found in most haplotypes and with some haplotypes missing a BF1 gene altogether (32,33). BF1 is also thought to be primarily an NKR ligand (34), and most BF1 alleles carry a C1 motif on the α1 helix of the peptide-binding domain (68,69). Examination of sequences suggests that most BF1 alleles have similar peptide-binding grooves, with the few examples of other sequences likely due to sequence contributions from the BF2 locus (C. Tregaskes, R. Martin and J. Kaufman, unpublished). An unsolved question is how BF1 alleles interact effectively with the highly polymorphic TAP and TAPBP alleles, for instance accommodating the very different peptides translocated by the TAPs of different haplotypes. Perhaps the typical BF1 molecule is highly promiscuous, but there are few data for either peptide repertoire or cell surface expression among BF1 alleles.
THE OTHER SIDE OF THE COIN: RECEPTORS ON NATURAL KILLER CELLS
An enormous body of scientific literature describes the very complex evolution, structure and function of NKRs and NK cells in primates and mice (2,70,71). Two kinds of NKRs are found in humans: lectin-like receptors found in the natural killer complex (NKC) and the KIRs in the leukocyte receptor complex (LRC). The KIRs are a highly polymorphic multigene family with copy number variation, and share the human LRC with other immunoglobulin-like receptors, including leukocyte immunoglobulin-like receptors (LILRs) and a single receptor for antibodies (Fcµ/αR or CD351). Some of these transmembrane receptors have cytoplasmic tails with immunoreceptor tyrosine-based inhibitory motifs (ITIMs), others have basic residues in the transmembrane region which allow association with signaling chains bearing immunoreceptor tyrosine-based activation motifs (ITAMs), and a few have both. The polymorphic NKRs interact with polymorphic MHC-I molecules: 2D KIRs with HLA-C and 3D KIRs with certain HLA-A and HLA-B alleles. As mentioned above, the interactions of the particular alleles present in the LRC and the MHC, which are on different chromosomes, lead to differing outcomes, which read out as genetic epistasis with effects on immunity, autoimmunity and reproduction.
In chickens, almost all of the known immunoglobulin-like receptors related to KIRs are found on a single microchromosome, different from the one bearing the MHC (72,73). These chicken immunoglobulin-like receptors (ChIRs) include those with activating, inhibitory and both kinds of motifs (ChIR-A, -B, and -AB), and with 1D, 2D, and 4D extracellular regions. Sequencing studies suggest there can be haplotypes with few ChIR genes in common, suggesting both copy number variation and polymorphism (74-76). However, a gene typing method for 1D domains suggested relatively stable haplotypes, with only some examples of recombination during matings (77). The only molecules with clear functions are the many ChIR-AB molecules that bind IgY, the antibody isotype that acts somewhat like IgG in mammals (78-80). It seems very likely that there are both activating and inhibitory NKRs among these ChIRs, but thus far there are no data on NKR function. Whether such putative NKRs recognize BF1, BF2, or both is as yet unknown, and whether there is epistasis between the ChIR and MHC microchromosomes is untested.
Among the lectin-like NKR genes located in the NKC in humans and mice are one or more NKR-P1 genes (also known as NK1.1, KLRB1, or CD161) paired with genes for the lectin-like ligands (LLT1 in humans and Clr in mice). In chickens, there are only two lectin-like genes located in the region syntenic to the NKC, and neither of those appears to encode NKRs; one is expressed mainly in thrombocytes (81,82). However, there is a pair of NKR-P1/ligand genes in the chicken MHC (25,83), known as BNK (sometimes identified as Blec1) and Blec (sometimes identified as Blec2). The receptor encoded by the highly polymorphic BNK gene was assumed to interact with the nearly monomorphic Blec gene, but a reporter cell line with one BNK allele was found not to respond to BF1, BF2 or Blec, but to spleen cells bearing a trypsin-sensitive ligand (84,85). A trypsin-sensitive ligand on a particular chicken cell line was found to reproduce the result with the reporter cells, but the nature of that ligand remains unknown (E. K. Meziane, B. Viertlboeck, T. Göbel and J. Kaufman, unpublished). Possibilities include other lectin-like genes in the BG region or the Y region of the MHC microchromosome (28,86).
The effect of the peptide repertoire of class I molecules on NK recognition has not been carefully examined in either humans or chickens, but some speculations may be worth considering. A wider peptide repertoire may increase the number (although probably not the proportion) of peptides with appropriate amino acids to affect binding to KIRs and ChIRs, both at the level of response and potentially at the level of education (licensing or tuning), including the recently described phenomenon of cis-tuning (87). However, the increase in breadth of peptide repertoire may be balanced by the decrease in cell surface expression of the class I molecules, which may mean that peptide repertoire does not exert an enormous effect on inhibitory NK responses. In contrast, any increase in peptide repertoire may allow additional pathogen peptides to be recognized by activating NKRs. A special consideration is C-terminal overhangs, which may be particularly frequent in at least some alleles of chicken class I molecules. Such C-terminal overhangs in human class II molecules can directly affect T cell recognition (44), so it is possible that NKR interactions could also be affected.
CONCLUSIONS
The simplicity of the chicken MHC has allowed discoveries of phenomena that were harder to discern from analysis of the more complicated MHC of humans and mice (such as the existence of promiscuous and fastidious MHC-I alleles), and comparison between the immune systems of chickens and mammals has been fruitful (as in the development of the generalist-specialist hypothesis). For human MHC-I molecules, peptide motifs (as identified by supertypes) can be separated from peptide repertoire (as defined thus far by peptide prediction), but their impact on NKR recognition has not been tested. Moreover, careful analysis of Pc-1 and Pc-2 residues in promiscuous versus fastidious alleles with respect to peptide repertoire has not yet been carried out for either humans or chickens. Given that the most basic understanding of NKR recognition in chickens has yet to be gained, the importance of C-terminal peptide overhangs from chicken MHC-I alleles for NKR recognition or NK function has not yet been assessed. Thus, it is clear that there is much work to do to understand NK cell function in chickens, and how that function relates to what is known in typical mammals including humans and mice.
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication.
Stochastic reaction-diffusion equations on networks
We consider stochastic reaction-diffusion equations on a finite network represented by a finite graph. On each edge in the graph a multiplicative cylindrical Gaussian noise driven reaction-diffusion equation is given supplemented by a dynamic Kirchhoff-type law perturbed by multiplicative scalar Gaussian noise in the vertices. The reaction term on each edge is assumed to be an odd degree polynomial, not necessarily of the same degree on each edge, with possibly stochastic coefficients and negative leading term. We utilize the semigroup approach for stochastic evolution equations in Banach spaces to obtain existence and uniqueness of solutions with sample paths in the space of continuous functions on the graph. In order to do so we generalize existing results on abstract stochastic reaction-diffusion equations in Banach spaces.
(1.1) where (β_i(t))_{t∈[0,T]} are independent scalar Brownian motions and (w_j(t))_{t∈[0,T]} are independent cylindrical Wiener processes defined in the Hilbert space L^2(0,1; µ_j dx) for some µ_j > 0, j = 1, . . . , m. The reaction terms f_j are assumed to be odd degree polynomials, with possibly different degrees on different edges, and with possibly stochastic coefficients and negative leading terms, see (4.8). The diffusion coefficients g_i and h_j are assumed to be locally Lipschitz continuous and to satisfy appropriate growth conditions (4.11) and (4.13), respectively, depending on the maximum and minimum degrees of the polynomials f_j on the edges. These become linear growth conditions when the degrees of the polynomials f_j on the different edges coincide. The coefficients of the linear operator satisfy standard smoothness assumptions, see Subsection 2.1, while the matrix M satisfies Assumptions 2.1 and the µ_j, j = 1, . . . , m, are positive constants. While deterministic evolution equations on networks are well studied, see, e.g., [AM84, AM86, AM94, AMvBN01, vB85, vB88a, vB88b, vBN96, Cat97, CF03, EKF19, Kac66, KFMS07, KS05, LLS94, Lum80, MS07, Mug07, MR07, Mug14b, Nic85], which is, admittedly, a rather incomplete list, the study of their stochastic counterparts is surprisingly scarce despite their strong link to applications. In [BMZ08] additive, square-integrable Lévy noise is considered, with a drift that is a cubic polynomial. In [BZ14] multiplicative square-integrable Lévy noise is considered, but with globally Lipschitz drift and diffusion coefficients and with a small time-dependent perturbation of the linear operator. The paper [BM10] treats the case where the noise is an additive fractional Brownian motion and the drift is zero. In [CDP17b] a multiplicative Wiener perturbation is considered both on the edges and in the vertices, with globally Lipschitz diffusion coefficients, zero drift and a time-delayed boundary condition.
Finally, in [CDP17a], the case of multiplicative Wiener noise is treated with bounded and globally Lipschitz continuous drift and diffusion coefficients and noise both on the edges and vertices.
In all these papers the semigroup approach is utilized in a Hilbert space setting, and the only work that treats non-globally Lipschitz continuous coefficients is [BMZ08], but the noise there is additive and square integrable. In that case, energy arguments are possible using the additive nature of the equation, and these do not carry over to the multiplicative case. Therefore, we use an entirely different toolset based on the semigroup approach for stochastic evolution equations in Banach spaces [vNVW08]. For results on classical stochastic reaction-diffusion equations on domains in R^n we refer, for example, to [BG99,BP99,Cer03,CF17,Pes95]. The papers [KvN12,KvN19] introduce a rather general abstract framework for treating such equations using the above mentioned semigroup approach of [vNVW08]. Unfortunately, the framework is still not quite general enough to apply to (1.1). The reason for this is as follows. One may rewrite (1.1) as an abstract stochastic Cauchy problem of the form (SCP). The setting of [KvN12,KvN19] requires a space B which is sandwiched between some UMD Banach space E of type 2, on which the operator semigroup S generated by the linear operator A in the equation is strongly continuous and analytic, and the domain of some appropriate fractional power of A. The semigroup S is assumed to be strongly continuous on B, a property that is used in an essential way via approximation arguments (for example, Yosida approximations). The drift F is assumed to be a map from B to B with favourable properties on B. In the abstract Cauchy problem (SCPn) corresponding to (1.1), the space given by (4.27) plays the role of B. Here it is the space of continuous functions on the graph that are also continuous across the vertices (more precisely, a space isomorphic to it). But then the abstract drift F given by (4.14) does not map this space to itself unless very unnatural conditions on the coefficients of the f_j are introduced.
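For orientation, the abstract stochastic Cauchy problem referred to as (SCP) has, in the semigroup framework of [vNVW08], the generic form sketched below (a sketch only; the precise assumptions on A, F, G and the cylindrical Wiener process W_H are those stated in Section 3):

```latex
% Generic form of the abstract stochastic Cauchy problem (SCP);
% a sketch consistent with the semigroup approach of [vNVW08].
\begin{equation*}
\left\{
\begin{aligned}
dX(t) &= \bigl[ A X(t) + F(t, X(t)) \bigr]\,dt
          + G(t, X(t))\,dW_H(t), \qquad t \in [0,T],\\
X(0) &= \xi,
\end{aligned}
\right.
\end{equation*}
```

with a mild solution defined via the variation-of-constants formula involving the semigroup S generated by A.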
One may instead consider the larger space E c introduced in Definition 4.1, where continuity is only required on each edge (but not necessarily across the vertices). Then F given by (4.14) maps E c to itself, F still has favourable properties on E c , and E c is still sandwiched in the same way as B. The price to pay for considering this larger space is the loss of strong continuity on E c of the semigroup S generated by the linear operator A of (SCPn). However, the semigroup remains analytic on E c , a property that can be exploited in various approximation arguments that do not require strong continuity, see, for example, [Lun95]. Such arguments are used in the seminal paper [Cer03], where a system of reaction-diffusion equations is studied but, unlike in the present work, with a diagonal solution operator and with polynomials of the same degree in each component (see [Cer03, Remark 5.1, 2.]). While the framework of [Cer03] is less general than that of [KvN12,KvN19], the approximation arguments in the former do not use strong continuity. We therefore prove abstract results (Theorems 3.6 and 3.10) concerning existence and uniqueness of the solution of (SCPn) in the setting of [vNVW08], similar to Theorems 4.3 and 4.9 in [KvN12,KvN19], but without the requirement that S be strongly continuous on the sandwiched space, and using approximation arguments similar to those in [Cer03]. The assumption on F, in particular Assumption 3.7(5), is also more general than the corresponding assumptions in [Cer03] and [KvN12,KvN19], so that we may consider polynomials with different degrees on different edges.
The main results of the paper concerning the system (1.1) are contained in Theorems 4.7 and 4.10. In Theorem 4.7 we show that there is a unique mild solution of (1.1) with values in E c . While, as we explained above, we cannot work with the space B directly, in Theorem 4.10 we prove via a bootstrapping argument that the solution, in fact, has values in B; that is, the solution is also continuous across the vertices even when the initial condition is not.
The paper is organized as follows. In Section 2 we collect partially known semigroup results for the linear deterministic version of (1.1). For the sake of completeness, while the general approach is known, we include the proof of Proposition 2.3 in Appendix A, and the key technical results needed in the proof of Proposition 2.4, which is the main result of this section, in Appendix B. In Section 3 we prove two abstract results, Theorems 3.6 and 3.10, concerning the existence and uniqueness of mild solutions of (SCP). In Section 4 we apply the abstract results to (1.1). In order to do so, in Subsection 4.1 we first prove various embedding and isometry results and, in Proposition 4.4, we prove that the semigroup S is analytic on E c . Subsection 4.2 contains the main existence and uniqueness results concerning (1.1), see Theorems 4.7, 4.8 and 4.10, 4.11. In the latter cases we treat separately the models where stochastic noise is only present in the nodes.
The heat equation on a network
2.1. The system of equations. We consider a finite connected network, represented by a finite graph G with m edges e_1, . . . , e_m and n vertices v_1, . . . , v_n. We normalize and parameterize the edges on the interval [0,1], and we denote by e_j(0) and e_j(1) the 0 and the 1 endpoint of the edge e_j, respectively. The structure of the network is given by the n × m matrices Φ⁺ := (φ⁺_ij) and Φ⁻ := (φ⁻_ij), where φ⁺_ij = 1 if v_i = e_j(1) and 0 otherwise, and φ⁻_ij = 1 if v_i = e_j(0) and 0 otherwise, for i = 1, . . . , n and j = 1, . . . , m. We refer to [KS05] for terminology. The n × m matrix Φ := Φ⁺ − Φ⁻ is known in graph theory as the incidence matrix of the graph G. Further, let Γ(v_i) be the set of all indices of the edges having an endpoint at v_i, i.e., Γ(v_i) := { j ∈ {1, . . . , m} : e_j(0) = v_i or e_j(1) = v_i }. For the sake of simplicity, we denote by u_j(v_i) the value at the vertex v_i of a continuous function u_j defined on the (parameterized) edge e_j. We start with the problem (2.2) on the network. Note that c_j(·) and u_j(t,·) are functions on the edge e_j of the network, and the right-hand side of (2.2a) is to be read accordingly. The functions c_1, . . . , c_m are (variable) diffusion coefficients or conductances, and we assume that 0 < c_j ∈ C¹[0,1], j = 1, . . . , m. The functions p_1, . . . , p_m are nonnegative, continuous functions. Equation (2.2b) represents the continuity of the values attained by the system at the vertices at each time instant, and we denote by r_i(t) the common function value at the vertex v_i, for i = 1, . . . , n and t ≥ 0.
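The incidence-matrix bookkeeping above can be sketched in code. This is a minimal illustration under the stated conventions (φ⁻ marks the 0-endpoint, φ⁺ the 1-endpoint); the function names are our own, not from the paper:

```python
# Minimal sketch of the incidence matrices Phi+, Phi-, Phi = Phi+ - Phi-
# and of the edge-index sets Gamma(v_i) for a finite directed graph.

def incidence_matrices(n_vertices, edges):
    """edges[j] = (tail, head) means e_j(0) = v_tail and e_j(1) = v_head.
    Returns (Phi_plus, Phi_minus, Phi) as nested lists."""
    m = len(edges)
    phi_plus = [[0] * m for _ in range(n_vertices)]
    phi_minus = [[0] * m for _ in range(n_vertices)]
    for j, (tail, head) in enumerate(edges):
        phi_minus[tail][j] = 1   # v_tail = e_j(0)
        phi_plus[head][j] = 1    # v_head = e_j(1)
    phi = [[phi_plus[i][j] - phi_minus[i][j] for j in range(m)]
           for i in range(n_vertices)]
    return phi_plus, phi_minus, phi

def gamma(i, edges):
    """Indices (1-based, as in the text) of edges with an endpoint at v_i."""
    return [j + 1 for j, (tail, head) in enumerate(edges) if i in (tail, head)]

# Triangle: e_1 = v_0 -> v_1, e_2 = v_1 -> v_2, e_3 = v_2 -> v_0.
edges = [(0, 1), (1, 2), (2, 0)]
_, _, phi = incidence_matrices(3, edges)
print(phi)              # [[-1, 0, 1], [1, -1, 0], [0, 1, -1]]
print(gamma(0, edges))  # [1, 3]
```

Each column of Φ sums to zero, reflecting that every edge has exactly one 0-endpoint and one 1-endpoint.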
In (2.2c), M := (b_ij)_{n×n} is a matrix satisfying Assumptions 2.1 below. On the right-hand side, [Mr(t)]_i denotes the i-th coordinate of the vector Mr(t). The coefficients µ_j > 0, j = 1, . . . , m, are strictly positive constants that influence the distribution of the impulse in the ramification nodes according to the Kirchhoff-type law (2.2c).
We now introduce the n × m weighted incidence matrices; with these notations, the Kirchhoff law (2.2c) takes a compact matrix form. In equations (2.2d) and (2.2e) we pose the initial conditions on the edges and in the vertices, respectively.
2.2. Spaces and operators.
We are now in the position to rewrite our system (2.2) in the form of an abstract Cauchy problem, following the concept of [KFMS07]. First we consider the (complex) Hilbert space E_2 as the state space of the edges, endowed with the natural inner product. Observe that E_2 is isomorphic to L^2(0, 1)^m with equivalence of norms.
We further need the boundary space C^n of the vertices. According to (2.2b) we will consider those functions on the edges of the graph whose values coincide in each vertex, that is, which are continuous in the vertices. Therefore we introduce the boundary value operator Lu := (r_1, ..., r_n)^⊤ ∈ C^n, r_i = u_j(v_i) for some j ∈ Γ(v_i), i = 1, ..., n. (2.5) The condition u(t, ·) ∈ D(L) for each t > 0 means that (2.2b) is satisfied by the function u(·, ·). On E_2 we define the operator (A_max, D(A_max)), see (2.6)-(2.7). This operator can be regarded as maximal since no boundary condition other than continuity is imposed on the functions in its domain. We further define the so-called feedback operator, acting on D(A_max) and having values in the boundary space C^n, as in (2.8). With these notations, the Kirchhoff law (2.2c) takes an abstract form on the boundary space; compare with (2.4).
With these notations, we can finally rewrite (2.2) in the form of an abstract Cauchy problem on the product space of the state space and the boundary space, endowed with the natural inner product, where the second component is paired with the usual scalar product in C^n.
We now define the operator matrix A_2 on E_2 as in (2.11). We use the notation A_2 because this operator will later be extended to other L^p-spaces, see Proposition 2.4.
2.3. Well-posedness of the abstract Cauchy problem. To prove well-posedness of (2.12) we associate a sesquilinear form a with the operator (A_2, D(A_2)), similarly to, e.g., [CDP17a] or (for the case of diagonal M) [MR07], and verify appropriate properties of it. Define the form a as in (2.14). The next definition can be found, e.g., in [Ouh05, Sec. 1.2.3].
Definition 2.2. From the form a -using the Riesz representation theorem -we can obtain a unique operator (B, D(B)) in the following way: We say that the operator (B, D(B)) is associated with the form a.
Proof. See the proof of Proposition A.1.
In the subsequent proposition we will prove well-posedness of (2.12) not only on the Hilbert space E_2 but also on L^p-spaces, which will be crucial for our later results. Therefore we introduce the spaces E_p, defined in (2.15) and endowed with their natural norms. We now state the main result regarding well-posedness of (2.12); the proof uses a technical lemma that can be found in Appendix B. Proposition 2.4. 1. The problem (2.12) is well-posed on E_2; furthermore, (A_2, D(A_2)) generates a C_0 analytic, contractive, positive one-parameter semigroup (T_2(t))_{t≥0} on E_2. 2. The semigroup (T_2(t))_{t≥0} extends to a family of analytic, contractive, positive one-parameter semigroups (T_p(t))_{t≥0} on E_p for 1 ≤ p ≤ ∞, generated by (A_p, D(A_p)). Here and in what follows the notion of a semigroup and its generator is understood in the sense of [ABHN11, Def. 3.2.5]. That is, a strongly continuous function T : (0, ∞) → L(E) (where E is a Banach space and L(E) denotes the bounded linear operators on E) satisfying the semigroup law and admitting an analytic extension to a sector Σ_θ which is bounded on Σ_θ' ∩ {z ∈ C : |z| ≤ 1} for all θ' ∈ (0, θ). We say that an analytic semigroup is contractive when the semigroup operators considered on the positive real half-axis are contractions.
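For intuition, the semigroup law and contractivity appearing in the definition above can be checked in a finite-dimensional toy model: below, A is a symmetric graph Laplacian (negative semidefinite) and T(t) = e^{tA} is computed spectrally. This is an illustration only; the generators A_p in the text are unbounded operators.

```python
import numpy as np

# Toy analytic contraction semigroup: A symmetric, negative semidefinite.
A = np.array([[-2.0, 1.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, 1.0, -2.0]])
w, V = np.linalg.eigh(A)                    # real spectrum, w <= 0

def T(t):
    """Semigroup operator e^{tA} via the spectral decomposition."""
    return V @ np.diag(np.exp(t * w)) @ V.T

# semigroup law T(t+s) = T(t) T(s) and contractivity ||T(t)|| <= 1
assert np.allclose(T(0.7), T(0.3) @ T(0.4))
assert np.linalg.norm(T(1.0), ord=2) <= 1.0 + 1e-12
```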
We can also prove - analogously to [MR07, Lem. 5.7] - that the generators (A_p, D(A_p)) for p < ∞ have in fact the same form as in E_2, with the appropriate domain, see (2.17). As a summary we obtain the following theorem.
Theorem 2.6. The first order problem (2.2), considered with A_p instead of A_2, is well-posed on E_p, p ∈ [1, ∞), i.e., for all initial data ( u r ) ∈ E_p the problem (2.2) admits a unique mild solution that depends continuously on the initial data.
3. Abstract results for a stochastic reaction-diffusion equation
Let (Ω, F, P) be a complete probability space endowed with a right-continuous filtration F = (F_t)_{t∈[0,T]} for a given T > 0. Let (W_H(t))_{t∈[0,T]} be a cylindrical Wiener process, defined on (Ω, F, P), in some Hilbert space H with respect to the filtration F; that is, (W_H(t))_{t∈[0,T]} is (F_t)_{t∈[0,T]}-adapted and for all t > s, W_H(t) - W_H(s) is independent of F_s. First we prove a generalized version of the result of M. Kunze and J. van Neerven concerning the abstract equation (SCP). In what follows let E be a real Banach space. Occasionally - without stressing it - we have to pass to an appropriate complexification (see e.g. [MnST99]) when we use sectoriality arguments. If we assume that (A, D(A)) generates a strongly continuous, analytic semigroup S on the Banach space E with ‖S(t)‖ ≤ M e^{ωt}, t ≥ 0, for some M ≥ 1 and ω ∈ R, then for ω' > ω the fractional powers (ω' - A)^α are well-defined for all α ∈ (0, 1). In particular, the fractional domain spaces are Banach spaces. It is well known (see e.g. [EN00, §II.4-5]) that, up to equivalent norms, these spaces are independent of the choice of ω'. For α ∈ (0, 1) we define the extrapolation spaces E_{-α} as the completion of E under the norms ‖u‖_{-α} := ‖(ω' - A)^{-α} u‖, u ∈ E. These spaces are independent of ω' > ω up to an equivalent norm.
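The fractional powers (ω' - A)^α can be made concrete in a finite-dimensional sketch, where A is a symmetric dissipative matrix and the power is computed via the spectral theorem; this is a stand-in for the abstract definition, not the operators of the paper.

```python
import numpy as np

# Toy fractional powers (w' - A)^alpha for a symmetric dissipative A.
A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
w_prime = 1.0                                       # any w' > omega = 0 here
lam, V = np.linalg.eigh(w_prime * np.eye(2) - A)    # spectrum of w' - A, all > 0

def frac_power(alpha):
    """(w' - A)^alpha via the spectral decomposition."""
    return V @ np.diag(lam ** alpha) @ V.T

# the powers compose: (w'-A)^a (w'-A)^(1-a) = w' - A
assert np.allclose(frac_power(0.5) @ frac_power(0.5), w_prime * np.eye(2) - A)
```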
Remark 3.1. If ω = 0 (hence the semigroup S is bounded), then by [Haa06, Proposition 3.1.7] we can choose ω' = 0. Let E_c be a Banach space; ‖·‖ will denote ‖·‖_{E_c}. For u ∈ E_c we define the subdifferential of the norm at u as the set ∂‖u‖ := {u* ∈ E_c* : ‖u*‖ ≤ 1 and ⟨u*, u⟩ = ‖u‖}, which is not empty by the Hahn-Banach theorem. We introduce the following assumptions for the operators in (SCP).
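A toy example of the subdifferential of the norm: for the sup-norm on R^3 (a finite-dimensional stand-in for E_c), a functional u* belongs to ∂‖u‖ iff its dual (ℓ^1) norm is at most 1 and it norms u. The sketch below exhibits one such functional.

```python
import numpy as np

# Subdifferential of the sup-norm at u: sign(u_i) e_i for any coordinate i
# attaining the maximum modulus is a norming functional.
u = np.array([3.0, -1.0, 2.0])
i = int(np.argmax(np.abs(u)))          # a coordinate where |u_i| = ||u||_inf
u_star = np.zeros_like(u)
u_star[i] = np.sign(u[i])

assert np.isclose(u_star @ u, np.max(np.abs(u)))   # <u*, u> = ||u||
assert np.abs(u_star).sum() <= 1.0                 # dual-norm bound
```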
Assumptions 3.2.
(1) Let E be a UMD Banach space of type 2 and (A, D(A)) a densely defined, closed and sectorial operator on E.
(3) The strongly continuous analytic semigroup S generated by (A, D(A)) on E restricts to an analytic, contractive semigroup on E_c, denoted by S_c.
Moreover, for all u ∈ E c the map (t, ω) → F (t, ω, u) is strongly measurable and adapted. Finally, for suitable constants a ′ , b ′ ≥ 0 and N ≥ 1 we have
Moreover, for all u ∈ E_c and h ∈ H the map (t, ω) → G(t, ω, u)h is strongly measurable and adapted.
Finally, for some c' ≥ 0 we have the growth estimate (3.4). For a thorough discussion of UMD Banach spaces we refer to [Bur01]. Banach spaces of type p ∈ [1, 2] are treated in depth in [AK16, Sec. 6]. In particular, any L^p-space with p ∈ [2, ∞) has type 2. However, the space of continuous functions on any locally compact Hausdorff space is not a UMD space.
Remark 3.3. Assumptions 3.2(1)-(4) and (6) are - in the first three cases slightly modified versions of - Assumptions (A1), (A5), (A4), (F') and (G') in [KvN12]. Assumption 3.2(5) is the assumption of [KvN12, Prop. 3.8] on F. The main difference is that here the semigroup S is not necessarily strongly continuous on E_c but is analytic, and that the embedding E_θ ֒→ E_c is not necessarily dense.
Instead of (3.4) in Assumption 3.2(6) one may assume a slightly improved estimate, holding for some small ε > 0 depending on the parameters, as stated in (G') of [KvN12]. For simplicity we chose not to include the small ε explicitly, because it will not be needed to prove our main results.
We use the fact that E is of type 2 in a crucial way, e.g. in the first step of the proof of Theorem 3.6 and in (3.33) and (4.41), obtaining that the simple Lipschitz and growth conditions on the operator G in Assumption 3.2(6) suffice, see [vNVW08, Lem. 5.2].
Remark 3.4. In Assumption 3.2(3) we use the fact that, since S is analytic on E and by Assumption 3.2(2) D(A) ⊂ E_θ ֒→ E_c holds, S leaves E_c invariant. Hence, the restriction S_c of S to E_c makes sense, and by assumption S_c is an analytic contraction semigroup on E_c. Using [ABHN11, Prop. 3.7.16] we obtain that this is equivalent to the fact that the generator A_c of S_c is sectorial and dissipative. Note that since S_c is not necessarily strongly continuous, A_c is not necessarily densely defined.
However, one can easily prove that the corresponding integral representation remains valid; by Assumptions 3.2(2), the last integral also converges in the norm of E. Recall that a mild solution of (SCP) is a solution of the integral equation (3.5), where * denotes the "usual" convolution and ⋄ denotes the stochastic convolution with respect to W_H. We also implicitly assume that all the terms on the right-hand side of (3.5) are well-defined.
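The variation-of-constants structure of a mild solution can be checked numerically in a deterministic scalar toy problem u' = a u + f with S(t) = e^{at} and constant forcing f = 1 (the stochastic convolution term of (3.5) is omitted in this sketch; all values are illustrative assumptions).

```python
import numpy as np

# Mild solution u(t) = S(t) u0 + (S * f)(t) for u' = a u + f, f = 1.
a, u0, t = -1.0, 2.0, 1.5

s = np.linspace(0.0, t, 20001)
g = np.exp(a * (t - s))                         # kernel S(t - s) applied to f
h = s[1] - s[0]
conv = h * (g.sum() - 0.5 * (g[0] + g[-1]))     # trapezoid rule for (S * f)(t)
mild = np.exp(a * t) * u0 + conv

# closed-form solution of the same ODE for comparison
exact = np.exp(a * t) * u0 + (1.0 - np.exp(a * t)) / (-a)
assert np.isclose(mild, exact, atol=1e-6)
```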
The following result is analogous to the statement of [KvN12, Lem. 4.4], but with the semigroup S_c being an analytic contraction semigroup on E_c which is not necessarily strongly continuous. The main difference in the proof is the use of a different approximation argument, since the one in [KvN12] uses the strong continuity of S_c on E_c (the latter denoted by B there) in a crucial manner.
Lemma 3.5. Let S_c be an analytic contraction semigroup on E_c.
(3.6) Then the estimate (3.7) holds. We now set a = δ and take u_{λ,δ}(δ) instead of x_λ in (3.8). Since u_{λ,δ}(δ) satisfies (3.9) with t = δ and S_c is analytic, we obtain that u_{λ,δ}(δ) belongs to D(A_c). Hence, by [Lun95, Thm. 7.1.3(i)], there exist ε > 0 and a unique local mild solution of (3.8) with a = δ; concatenation with the solution on [0, δ] yields a solution u_{λ,α} ∈ C([0, α], E_c) of (3.8) with a = 0. Again, u_{λ,α}(α) can be taken as initial value for problem (3.8) with a = α, and the above procedure may be repeated indefinitely, so as to construct a noncontinuable solution defined on a maximal time interval I(x_λ). As in [Lun95, Def. 7.1.7] we define I(x_λ) as the union of all intervals [0, α] such that (3.8) has a mild solution u_{λ,α} on this interval belonging to C([0, α], E_c). Denote by u_{λ,max} the corresponding maximal solution, which is well defined thanks to the uniqueness part of [Lun95, Thm. 7.1.3(i)].
In the following we first show that the desired norm estimate (3.7) holds for the maximal solution u_{λ,max} on I(x_λ); at the end we will be able to prove that I(x_λ) is the whole interval. Fix now t ∈ I(x_λ). Then by definition there exists α > 0 such that t ∈ [0, α] and u_{λ,max}(t) = u_{λ,α}(t) holds for the mild solution u_{λ,α} ∈ C([0, α], E_c) of (3.8). For the sake of simplicity, we denote u_λ := u_{λ,α}.
Rewriting (3.9) for u_λ we obtain the corresponding identity for t ∈ [0, α] in the limit as n goes to infinity. Hence, for all u_{λ,n}(t)* ∈ ∂‖u_{λ,n}(t)‖, using the assumption on F, we obtain (3.12). Observe that by the continuity of F and (3.10), letting n → ∞ in (3.12) and using (3.10), we obtain (3.13). Since A_c generates a contraction semigroup, ‖x_λ‖ ≤ ‖x‖ holds, and we obtain the desired bound. Since t ∈ I(x_λ) was arbitrary and the right-hand side of (3.13) does not depend on t, we may pass to the lim sup. Following [Cer03], for fixed T > 0 and q ≥ 1, we define the space V_{T,q} in (3.14), a Banach space with the norm given there. This Banach space will play a crucial role for the solutions of (SCP).
Theorem 3.6. Let T > 0, let 2 < q < ∞ and suppose that Assumptions 3.2 hold. Then for all ξ ∈ L^q(Ω, F_0, P; E_c) there exists a unique global mild solution X ∈ V_{T,q} of (SCP). Moreover, for some constant C_{q,T} > 0 we have (3.15). Proof. We only sketch the proof, as it is analogous to the proofs of [KvN12, Thm. 4.3] and [Cer03, Thm. 5.3], highlighting the necessary changes. We set F_n(t, ω, u) := F(t, ω, u) if ‖u‖ ≤ n, and F_n(t, ω, u) := F(t, ω, nu/‖u‖) otherwise. We argue in the same way as in the proof of [KvN12, Prop. 3.8(1)], which uses implicitly that, according to Assumption 3.2(1), the Banach space E is of type 2, see also [vNVW08, p. 978]. With the solution space V_{T,q} defined in (3.14) in place of L^q(Ω; C([0, T]; E_c)) and with F_n + F in place of F_n, we obtain that for each n there exists a mild solution X_n ∈ V_{T,q} of the problem (SCP) with F_n instead of F (see also the proof of [Cer03, Thm. 5.3]). The mild solution X_n satisfies, for all t ∈ [0, T], the corresponding integral equation. Using that by Assumption 3.2(2) E_θ ֒→ E_c holds, it follows from [vNVW08, Lem. 3.6] with α = 1, λ = 0, η = θ, θ = κ_F that S * F(·, X_n(·)) ∈ C([0, T], E_c) is satisfied and (3.17) holds. Proceeding as in the proof of [KvN12, Thm. 4.3], we obtain that for each T > 0 there exists a constant C_{q,T} > 0 such that (3.18) holds. We remark that the estimates needed use only the continuity of the embedding E_θ ֒→ E_c. Once (3.18) has been established we can conclude, in the same way as in the proof of [Cer03, Thm. 5.3], the existence and uniqueness of a process X ∈ L^q(Ω; L^∞(0, T; E_c)) such that (3.5) holds for t ∈ [0, T] almost surely, and thus X is the unique mild solution of (SCP) in L^q(Ω; L^∞(0, T; E_c)).
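The radial truncation used in the proof can be sketched in one dimension: F_n agrees with F on the ball of radius n and freezes it radially outside, so each F_n is globally Lipschitz while F itself is only locally Lipschitz. The cubic below is a placeholder nonlinearity, not the paper's F.

```python
# Toy radial cutoff F_n of a locally Lipschitz map F on R.
F = lambda u: -u ** 3

def F_n(u, n):
    """F on the ball |u| <= n, and F evaluated at n*u/|u| outside it."""
    return F(u) if abs(u) <= n else F(n * u / abs(u))

assert F_n(0.5, 2) == F(0.5)        # inside the ball: F_n = F
assert F_n(10.0, 2) == F(2.0)       # outside: evaluated at n u / |u|
assert F_n(-10.0, 2) == F(-2.0)
```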
To prove the continuity of the trajectories of X we first note that the analyticity of S on E_c immediately implies that (0, T] ∋ t → S(t)ξ ∈ E_c is continuous. Next, we use [Lun95, Prop. 4.2.1] and the continuity of F to obtain that t → F(t, X(t)) is continuous. Using [vNVW08, Lem. 3.6] as above, we obtain that S * F(·, X(·)) ∈ C([0, T], E_c) holds. Applying estimates analogous to those in the proof of [KvN12, Thm. 4.3], we may conclude that there exists C(T) > 0 such that the stochastic convolution is continuous almost surely. Thus, by (3.20) and the already established fact that X ∈ L^q(Ω; L^∞(0, T; E_c)), we obtain that X ∈ V_{T,q}, and by (3.19) the estimate (3.15) holds.
For our next result, we introduce the following set of assumptions on the operators in (SCP).
Assumptions 3.7.
(1) Let E be a UMD Banach space of type 2 and (A, D(A)) a densely defined, closed and sectorial operator on E.
(2) We have continuous (but not necessarily dense) embeddings for some θ ∈ (0, 1). Moreover, for all u ∈ E_c the map (t, ω) → F(t, ω, u) is strongly measurable and adapted. Finally, for suitable constants a', b' ≥ 0 and N ≥ 1 we have, for all t ∈ [0, T], ω ∈ Ω, u, v ∈ E_c and u* ∈ ∂‖u‖, the estimates (3.21). Moreover, for all u ∈ E_c and h ∈ H the map (t, ω) → G(t, ω, u)h is strongly measurable and adapted. Finally, for a suitable constant c', the corresponding growth estimate holds.

Remark 3.8. Assumptions 3.7(1)-(5) and (7) are - in the first three cases slightly modified versions of - Assumptions (A1), (A5), (A4), (F') and (G') in [KvN12]. Assumption (6) is the assumption of [KvN12, Prop. 3.8] on F. The main difference, besides the lack of strong continuity of S on E_c and that the embedding E_θ ֒→ E_c is not necessarily dense, is that instead of (F") we impose a possibly asymmetric growth condition (3.21) on F. This is necessary so that later, when we apply the abstract theory to (1.1), we may consider polynomial reaction terms with different degrees on different edges of the graph. The growth condition on G in Assumption 3.7(7) is also different from the linear growth condition on G in (G") of [KvN12], as it reflects the possibly asymmetric growth condition on F; it becomes a linear growth condition when k = K.
The following result is analogous to the statement of [KvN12, Lem. 4.8], but with the semigroup S_c being an analytic contraction semigroup on E_c which is not necessarily strongly continuous, and with the asymmetric growth condition (3.21) on F. Again, the main difference in the proof is the use of a different approximation argument, since the Yosida approximation argument in [KvN12] uses the strong continuity of S_c on E_c (the latter denoted by B there) in a crucial manner. (3.23) Proof. We proceed similarly as in Lemma 3.5. We denote by A_c the generator of S_c, which is sectorial and dissipative (see Remark 3.4), and fix v ∈ C([0, T], E_c) satisfying (3.23).
Hence, for all u_{λ,n}(t)* ∈ ∂‖u_{λ,n}(t)‖, using now the dissipativity of A_c (see Remark 3.4) and the assumptions on F, we obtain (3.27). By the continuity of F and (3.26), we have that ε_{λ,n} → 0 as n → ∞. (3.28) We now fix n ∈ N and define ϕ(t) := ‖u_{λ,n}(t)‖. Then ϕ is absolutely continuous, and by (3.27) the corresponding differential inequality holds almost everywhere. We will prove that (3.29) holds, where x_ϕ = u_{λ,n}(0). Assume to the contrary that (3.30) holds for some t_0 ∈ [0, T]. Since ϕ(0) = ‖x_ϕ‖, we have that t_0 ∈ (0, δ]. Let ψ : I → R be the unique maximal solution of the corresponding scalar problem. We can use [KvN12, Cor. 4.7] with u_+(t) = ψ(t), u_-(t) = ϕ(t). Using the same arguments as in the proof of [KvN12, Lem. 4.8] we obtain that 0 ∈ I. This implies, by the definition of ψ, that ψ'(t) < 0, hence ψ is decreasing. Combining this with (3.31) and (3.30), we obtain a contradiction. Hence, we have proved (3.29). Since n was arbitrary, we obtain that for all n ∈ N, ‖u_{λ,n}(t)‖ ≤ ‖u_{λ,n}(0)‖. Letting n → ∞, by (3.26) and (3.28) we obtain the analogous bound for u_λ. Finally, using the same argument as at the end of the proof of [Cer01, Prop. 6.2.2], we obtain that for any t the net {u_λ(t)}_{λ∈ρ(A_c)} is a Cauchy net in E_c; hence it is convergent, and the limit is u(t). This yields (3.24).
The next result is a generalized version of that of Kunze and van Neerven, which was first proved in [KvN12, Thm. 4.9] with a typo in the statement, later corrected in the recent arXiv preprint [KvN19, Thm. 4.9].
Theorem 3.10. Let T > 0, 2 < q < ∞ and suppose that Assumptions 3.7 hold. Then for all ξ ∈ L^q(Ω, F_0, P; E_c) there exists a global mild solution X ∈ V_{T,q} of (SCP). Moreover, for some constant C_{q,T} > 0 the corresponding estimate holds. Proof. We can proceed similarly as in the proofs of [KvN12, Thm. 4.9] and [Cer03, Thm. 5.9]. We set G_n(t, ω, u) := G(t, ω, u) if ‖u‖ ≤ n, and G_n(t, ω, u) := G(t, ω, nu/‖u‖) otherwise. We obtain by Theorem 3.6 that for each n there exists a global mild solution X_n ∈ V_{T,q} of the problem (SCP) with G_n instead of G (see also the proof of [Cer03, Thm. 5.5]). Using (3.16) for X_n and setting u_n = X_n - S ⋄ G_n(·, X_n(·)), v_n = S ⋄ G_n(·, X_n(·)), we obtain that ‖u_n‖^q_{V_{T,q}} = ‖S(·)ξ + S * F(·, X_n(·)) + S * F(X_n(·))‖^q_{V_{T,q}} can be estimated accordingly, where ≲ denotes that the expression on the left-hand side is less than or equal to a constant times the expression on the right-hand side. In the last inequality we have used estimate (3.17), with C(T) → 0 if T ↓ 0, and Lemma 3.9 with u = u_n and v = v_n. As in the proof of [KvN12, Thm. 4.3] with E_c instead of B, we obtain that for each T > 0 there exists a constant C'_T > 0 such that the corresponding estimate holds, where in the second inequality we have used Assumption 3.7(7) and we have C'(T) → 0 if T ↓ 0. Combining this with (3.22) and (3.32), we obtain that there are positive constants C_0, C_1 and C_2(T), with C_2(T) → 0 as T ↓ 0, such that the corresponding a priori bound holds. The proof can be finished as that of Theorem 3.6.
4.1. Preparatory results.
In order to apply the abstract result of Theorem 3.10 to the stochastic reaction-diffusion equation we need to prove some preliminary results regarding the setting of Section 2. We make use of the fact that the semigroups involved here all leave the corresponding real spaces invariant (this follows from the first bullet in the proof of Lemma B.1 and the corresponding Beurling-Deny criterion).
The norm is defined as usual with
This space will play the role of the space E c in our setting. We recall that for p ∈ [1, ∞] the operators (A p , D(A p )) are generators of analytic semigroups (see Proposition 2.4) on the spaces E p defined in (2.15).
For 0 ≤ θ < 1 let E θ p be defined as in (3.1) for the operator A p on the space E p . (4.1) We will need the following result on the fractional power spaces E θ p .
Lemma 4.2.
For the fractional domain spaces E^θ_p defined in (4.1) and 1 < p < ∞ arbitrary, we have that (2) if θ > 1/(2p), then the stated embedding holds. Proof. By Proposition 2.4 the operator (A_p, D(A_p)) generates a positive contraction semigroup on E_p. Hence, we can use [Are04, Thm. in §4.7.3] (see also [Duo90]), obtaining that for any ω' > 0, ω' - A_p has a bounded H^∞(Σ_ϕ)-calculus for each ϕ > π/2. Proposition 2.4 implies that ω' - A_p is injective and sectorial, thus it has bounded imaginary powers (BIP). Therefore, by [Are04, Prop. in §4.4.10] (see also [MCSA01, Thm. 11.6.1]), it follows that the identification with the complex interpolation spaces holds with equivalence of norms. Denote by W_0(G) the space with components W^{2,p}_0(0, 1; µ_j dx) = W^{2,p}(0, 1; µ_j dx) ∩ W^{1,p}_0(0, 1; µ_j dx), j = 1, ..., m. Hence, W_0(G) consists of those vectors of functions that are twice weakly differentiable on each edge and continuous in the vertices, with Dirichlet boundary conditions. By [Mug14a, Cor. 3.6], the corresponding identification holds, where the isomorphism is established by a similarity transform of E_p. Using general interpolation theory, see e.g. [Tri78, Sec. 4.3.3], we obtain the claim if θ < 1/(2p). In the following we will prove that each of the semigroups (T_p(t))_{t≥0} restricts to the same analytic semigroup of contractions on E_c.
Proof. First we will show that for each p ∈ [1, ∞], D(A_p) ⊂ E_c holds. If p ∈ [1, ∞), this follows easily from (2.17) and the Sobolev embedding. For p = ∞ take U ∈ D(A_∞). Then for any λ > 0 there exists V ∈ E_∞ such that R(λ, A_∞)V = U. Using that the semigroup T_2 is the extension of T_∞ to E_2 by (2.16) and that E_∞ ֒→ E_2 holds, by a similar argument as in Remark 3.4 we obtain that V ∈ E_2 and R(λ, A_2)V = U ∈ D(A_2). The claim now follows by observing D(A_2) ⊂ E_c.
From Proposition 2.4 we know that for each p ∈ [1, ∞] the semigroup T_p is analytic and contractive. Hence, using the inclusion D(A_p) ⊂ E_c and [ABHN11, Thm. 3.7.19], we obtain that T_p leaves E_c invariant. By (2.16) we also have that the restrictions to E_c all coincide, thus we may use S_c to denote this common restriction. It is straightforward that S_c is a contraction semigroup on E_c, since T_∞ is a contraction semigroup on E_∞ and the norms on E_c and E_∞ coincide.
Using the same argument as in Remark 3.4 and the fact D(A_p) ⊂ E_c, we obtain the generator of S_c. It remains to prove that S_c is analytic. We now use that T_∞ is analytic on E_∞; that is, by [ABHN11, Cor. 3.7.18], there exists r > 0 such that {is : s ∈ R, |s| > r} ⊂ ρ(A_∞) and the resolvent bound (4.6) holds.

Let (Ω, F, P) be a complete probability space endowed with a right-continuous filtration F = (F_t)_{t∈[0,T]} for some given T > 0. We consider the problem (4.7). Here β̇_i(t), i = 1, ..., n, are independent noises, written as formal derivatives of independent scalar Brownian motions (β_i(t))_{t∈[0,T]}, defined on (Ω, F, P) with respect to the filtration F.
In contrast to Section 2, we add a first-order term d_j(x) · u'_j(t, x) to the first equation of (2.2), assuming d_j ∈ Lip[0, 1], j = 1, ..., m.

Remark 4.5. The functions f_j(η) = η(η - a_j)(1 - η) coming from the classical FitzHugh-Nagumo problem, with a_j ∈ (0, 1), satisfy the conditions above.
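For such a FitzHugh-Nagumo-type cubic, the one-sided polynomial estimate f(η + ζ) · sgn(η) ≤ a'(1 + |ζ|^{2k+1}) with k = 1 (used in the proof of Theorem 4.7) can be sanity-checked on a grid. The parameter a = 0.3 and the constant a' = 60 are illustrative choices for this toy check, not sharp values from the paper.

```python
import numpy as np

# f(u) = u (u - a)(1 - u): cubic with negative leading coefficient, so the
# product f(eta + zeta) * sgn(eta) is dominated by a'(1 + |zeta|^3).
a = 0.3
f = lambda u: u * (u - a) * (1.0 - u)

eta = np.linspace(-50.0, 50.0, 2001)
zeta = np.linspace(-5.0, 5.0, 201)
E, Z = np.meshgrid(eta, zeta)
lhs = f(E + Z) * np.sign(E)

a_prime = 60.0                      # one admissible constant on this grid
assert np.all(lhs <= a_prime * (1.0 + np.abs(Z) ** 3))
```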
where the constants k and K are defined in (4.9). We further require that the functions g i are jointly measurable and adapted in the sense that for each i and t ∈ [0, T ], g i (t, ·) is F t ⊗ B R -measurable, where B R denotes the sigma-algebra of the Borel sets on R.
We suppose that the functions h_j : [0, T] × Ω × [0, 1] × R → R, j = 1, ..., m, are locally Lipschitz continuous in the fourth variable, uniformly with respect to the first three variables, and that (4.12) holds. We further assume that the functions h_j are jointly measurable and adapted in the sense that for each j and t ∈ [0, T], h_j(t, ·, ·) is F_t ⊗ B_{[0,1]} ⊗ B_R-measurable, where B_{[0,1]} and B_R denote the sigma-algebras of the Borel sets on [0, 1] and R, respectively. We rewrite system (4.7) in an abstract form (SCPn) analogous to (SCP). The operator (A, D(A)) is (A_p, D(A_p)) for some large p ∈ [2, ∞), where p will be chosen later in (4.22), (4.25), (4.35) and (4.43). Hence, by Proposition 2.4, A is the generator of the strongly continuous analytic semigroup S := (T_p(t))_{t≥0} on the Banach space E_p, and E_p is a UMD space of type 2.
We now define F and G, and after that we prove that they map between the appropriate spaces, as assumed in Section 3. Let F be defined as a map from (C^1[0, 1])^m × R^n to E_p for any p > 1, with (4.16). To define the operator G we argue in analogy with [KvN19, Sec. 5]. First define H := E_2, the product L^2-space, see (2.9), which is a Hilbert space. We further define the multiplication operator Γ as in (4.17). Because of the assumptions on the functions h_j and g_i, Γ clearly maps into L(H). Let (A_2, D(A_2)) be the generator on H = E_2, see Proposition 2.4, and pick κ_G ∈ (1/4, 1/2). Using Lemma 4.2(2) we have that there is an isomorphism defining the space H_1. By Corollary 4.3, H_1 ֒→ E_c holds. Using Corollary 4.3 again, we have that there exists a continuous embedding H_1 ֒→ E_p for p ≥ 2 arbitrary.
The driving noise process W is defined by . . .
and thus (W(t)) t∈[0,T ] is a cylindrical Wiener process, defined on (Ω, F , P), in the Hilbert space H with respect to the filtration F.
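The vertex noise can be sampled directly: below is a seeded toy simulation of n independent scalar Brownian motions β_i on a uniform time grid (an illustration of the driving noise only; grid sizes are arbitrary choices).

```python
import numpy as np

# n independent Brownian motions on [0, T] via cumulative Gaussian increments.
rng = np.random.default_rng(0)
n, T, N = 3, 1.0, 10_000
dt = T / N
dB = rng.normal(0.0, np.sqrt(dt), size=(N, n))          # increments
beta = np.vstack([np.zeros((1, n)), np.cumsum(dB, axis=0)])

assert beta.shape == (N + 1, n)
assert np.allclose(beta[0], 0.0)                        # beta_i(0) = 0
assert abs(dB.var() - dt) < 0.1 * dt                    # increment variance ~ dt
```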
Similarly to (3.14), for fixed T > 0 and q ≥ 1 we define the corresponding space, a Banach space with its natural norm. This Banach space will play a crucial role for the solutions of (SCPn). We now state the result regarding system (SCPn).
Proof. The condition q > 4 allows us to choose 2 ≤ p < ∞, θ ∈ (0, 1/2) and κ_G ∈ (1/4, 1/2) such that θ > 1/(2p) (4.22) holds. We will apply Theorem 3.10 with θ and κ_G having the properties above. To this end we have to check Assumptions 3.7 for the mappings in (SCPn), taking A = A_p for the p chosen above. To check Assumptions (4) and (5), we first remark that the locally Lipschitz continuity of F follows from (4.8). In the following we have to consider vectors U* ∈ ∂‖U‖ for U = ( u r ) ∈ E_c. It is easy to see that there exists U* ∈ ∂‖U‖ of the form U* = ( u* r* ) with u* ∈ ∂‖u‖_{(C[0,1])^m} and r* ∈ ∂‖r‖_{ℓ^∞}. Using that the functions f_j are polynomials in the fourth variable (see (4.8)), a computation similar to [KvN12, Ex. 4.2] shows that for all j = 1, ..., m and for a suitable constant a' ≥ 0, f_j(t, ω, x, η + ζ) · sgn(η) ≤ a'(1 + |ζ|^{2k_j+1}) holds. Using techniques from [DPZ92, Sec. 4.3], we obtain the corresponding estimate with K defined in (4.9), for all U ∈ D(A_p|_{E_c}), V ∈ E_c and U* ∈ ∂‖U‖. Following the computation of [KvN12, Ex. 4.5], we obtain that for suitable positive constants a, b, c and for all (t, ω, x) ∈ [0, T] × Ω × [0, 1] and j = 1, ..., m the analogous bound holds. Using again techniques from [DPZ92, Sec. 4.3] (see also [Cer03, Rem. 5.1.2 and (5.19)]), we obtain that for k and K defined in (4.9), K ≥ k holds, and the required estimates hold for all t ∈ [0, T], ω ∈ Ω, U, V ∈ E_c and U* ∈ ∂‖U‖, as well as for all V ∈ E_c. (e) To check Assumption (6) we refer to Lemma 4.6. This implies that F : E_c → E^{-κ_F} with κ_F = 1/2. Since F is a continuous linear operator, the rest of the statement also follows. (f) To check Assumption (7), note that by Lemma 4.6, G takes values in γ(H, E^{-κ_G}_p) with H = E_2 and κ_G chosen above. We apply a computation similar to the one in the proof of [vNVW08, Thm. 10.2].
We fix U, V ∈ E_c and denote the matrix from (4.17) by Γ(U). For R > 0 we denote the corresponding local Lipschitz constant, where the positive constants L_{g_i}(R) and L_{h_j}(R) are the Lipschitz constants of the functions g_i and h_j, respectively, on the ball of radius R, see (4.10) and (4.12). From the right-ideal property of the γ-radonifying operators and (4.18), we have that G is locally Lipschitz continuous. Using the assumptions (4.11) and (4.13) on the functions g_i and h_j and a computation analogous to the one above, we obtain that G grows as required in Assumption (7).

In the following we treat the special case when h_j ≡ 0, j = 1, ..., m, that is, when there is stochastic noise only in the vertices of the network. To rewrite the equations (4.7) in the form (SCPn), we define the operator G in a different way than was done in (4.18).
Instead of the operator in (4.17), we define Γ directly on E_p. Because of the assumptions on the functions g_i, Γ clearly maps into L(E_p).
Now, let
R := ( 0_{m×m}  0_{m×n} ; 0_{n×m}  I_{n×n} ) ∈ C^{(m+n)×(m+n)}. Then for all p ≥ 2, R ∈ γ(H, E_p) with H = E_2 holds, since R has finite-dimensional range. (4.24) In this case we obtain better regularity than in Theorem 4.7.
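The block operator R above can be written down concretely, here for hypothetical sizes m = 2 (edges) and n = 3 (vertices): it annihilates the edge components and keeps the vertex components, so its range is n-dimensional.

```python
import numpy as np

# R as an (m+n) x (m+n) block matrix: zero on the edge block, identity on
# the vertex block.
m, n = 2, 3
R = np.block([[np.zeros((m, m)), np.zeros((m, n))],
              [np.zeros((n, m)), np.eye(n)]])

assert R.shape == (m + n, m + n)
assert np.linalg.matrix_rank(R) == n      # finite (n-dimensional) range
assert np.allclose(R @ R, R)              # R is a projection
```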
Proof. We first choose p ≥ 2 such that (4.25) holds. Hence, we can take θ satisfying θ > 1/(2p). We will apply Theorem 3.10 with θ having this property and κ_G = 0. To this end we have to check Assumptions 3.7 again for the mappings in (SCPn), taking A = A_p for the p chosen above. This can be done in the same way as in the proof of Theorem 4.7, up to Assumption (7). That assumption can be easily checked for κ_G = 0 for the operator G, where the relevant constant is the maximum of the Lipschitz constants of the functions g_i on the ball {x ∈ R : |x| ≤ r} (see (4.10)), and ‖R‖_{γ(H,E_p)} is finite.
Furthermore, applying (4.11), the last statement of Assumption (7) follows similarly as above; hence there exists a constant c' > 0 such that the corresponding estimate holds. In the following theorem we will state a result regarding the regularity of the mild solution of (SCPn) that exists according to Theorem 4.7. We will show that the trajectories of the solutions are actually continuous in the vertices of the graph, hence they lie in the space B introduced below. It is easy to see that B ≅ D(L) and B ⊂ E_c. (4.28) We can again prove the following continuous embeddings; in contrast to Corollary 4.3, also the first embedding will be dense. Proposition 4.9. Let E^θ_p be defined as in (4.1) for p ∈ [1, ∞). Then for θ > 1/(2p) the following continuous, dense embeddings are satisfied. Proof. According to Lemma 4.2(2), we have that for θ > 1/(2p) the corresponding identification holds. In [KS21a, Lem. 3.6] we have proved the remaining density. In analogy with V_{T,q}, we define for fixed T > 0 and q ≥ 1 the corresponding space of B-valued processes. In the following we will show that the trajectories of the solution of (SCPn) lie in B.
Proof. By Theorem 4.7 there exists a global mild solution X ∈ V_{T,q}. This solution satisfies the following implicit equation (see (3.5)): X(t) = S(t)ξ + S * F(·, X(·))(t) + S * F(X(·))(t) + S ⋄ G(·, X(·))(t), (4.32) where S denotes the semigroup generated by A_p on E_p for some p ≥ 2 large enough, * denotes the usual convolution, and ⋄ denotes the stochastic convolution with respect to W. We only have to show that (4.33) holds for the trajectories for almost all ω ∈ Ω. Then the claim is satisfied, since the norms on E_c and B coincide and (4.31) is true. We will show (4.33) by showing it for the terms on the right-hand side of (4.32).
(3) For the deterministic convolution term with F in (4.32) we proceed similarly as before.
We apply [vNVW08, Lem. 3.6] with α = 1, λ = 0, θ = 1/2, q instead of p, and a suitable η. We obtain that there exist constants C ≥ 0 and ε > 0 such that (4.40) holds. Taking the qth power on the right-hand side of (4.40) and using (4.20), we obtain the corresponding moment bound. By Proposition 4.9 and (4.37), we obtain that S * F(X(·)) ∈ C([0, T]; B) holds and, for a positive constant C', the corresponding estimate is satisfied. Since we know by (4.31) that for almost all ω ∈ Ω the right-hand side is finite, we obtain that the left-hand side is almost surely finite. (4) We now prove that the stochastic convolution term satisfies S ⋄ G(·, X(·)) ∈ C([0, T]; B) almost surely, by showing that the corresponding norm is finite. By (4.34) we can apply [vNVW08, Prop. 4.2] with λ = 0, θ = κ_G, q instead of p, and suitable α and η, and we have that there exist ε > 0 and C ≥ 0 such that the analogous estimate holds. In the following we proceed similarly as in the proof of [KvN12, Thm. 4.3], with N = 1 and q instead of p. Since E^{-κ_G}_p is a Banach space of type 2 (because E_p is of that type), the continuous embedding holds. Using this, Young's inequality and the growth property of G (see the proof of Theorem 4.7), respectively, we obtain the required estimates. Hence, for each T > 0 there exists a constant C_T > 0 such that the corresponding bound holds. Using that k/K < 1, we conclude. By Proposition 4.9 and (4.35), we obtain that for a positive constant C_T > 0 the claimed estimate is satisfied.
We again treat the case h_j ≡ 0, j = 1, ..., m, separately, defining the operator G as in (4.24), to obtain better regularity for the solutions.
Proof. The claim can be proved analogously to Theorem 4.10, except for step (4). To show that in this case S ⋄ G(·, X(·)) ∈ C([0, T]; B) almost surely holds, we first fix 0 < α < 1/2 and p ≥ 2 such that 1/(2p) < α - 1/q (4.43) holds (this is possible since q > 2). We further choose η > 0 such that (4.44) is satisfied. Applying [vNVW08, Prop. 4.2] with θ = λ = 0 and q instead of p, we have that there exist ε > 0 and C ≥ 0 such that the corresponding estimate holds. In the following we proceed similarly as in the proof of [KvN12, Thm. 4.3], with N = 1 and q instead of p. Since E_p is a Banach space of type 2, we can use the continuous embedding, Young's inequality and (4.26), respectively, to obtain the required estimates. Using that k/K < 1, we have that there exists a constant C''_T such that the corresponding bound holds. By Proposition 4.9 and (4.44), we obtain that for a positive constant C̃_T > 0, E‖S ⋄ G(·, X(·))‖^q_{C([0,T];B)} ≤ C̃_T · (1 + ‖X‖^q_{V_{T,q}}) is satisfied, and thus S ⋄ G(·, X(·)) ∈ C([0, T]; B) almost surely.
Using the entries (2.1) of the incidence matrix Φ, the first term above can be rewritten accordingly. Observe now that the definition of V applies. The second term in (A.1) is, by the definition (2.6)-(2.7) of (A_max, D(A_max)), well-defined because A_max f ∈ E_2. Hence, the proof of the inclusion A_2 ⊂ B is completed.
To check the converse inclusion B ⊂ A_2, take U = ( u r ) ∈ D(B). By definition, there exists V = ( v q ) ∈ E_2 such that a(U, H) = ⟨V, H⟩_{E_2} for all H = ( h d ) ∈ V, and BU = -V. In particular, this holds for all H = ( h d ) ∈ V, hence also for all h_j of the corresponding test-function form. By the definition of the weak derivative, this means that c_j · u'_j ∈ H^1(0, 1) for all j = 1, ..., m. Since 0 < c_j ∈ H^1(0, 1), it follows that in fact u'_j ∈ H^1(0, 1) for all j = 1, ..., m. We conclude that u ∈ H^2(0, 1)^m; hence, by U ∈ V, also U ∈ D(A_2) holds. Moreover, integrating by parts as in (A.1), we see - analogously to the first part of the proof - that if (A.2) holds for some H = ( h d ) ∈ V, then a(U, H) = -⟨A_2 U, H⟩_{E_2} = ⟨V, H⟩_{E_2} for arbitrary H ∈ V; hence A_2 U = -V = BU, and this completes the proof.
Appendix B. Proof of Proposition 2.4
We defined the spaces E_p for p ∈ [1, ∞] in (2.15). In the subsequent lemma we prove crucial properties of the semigroup (T_2(t))_{t≥0} that will imply its extendability to the spaces E_p. The proof is similar to that of [MR07, Lem. 4.1 and Prop. 5.3] except for the fact that we have a non-diagonal matrix M; therefore we give it in detail.
Lemma B.1. If Assumption 2.1 holds for M, then the semigroup (T_2(t))_{t≥0} on E_2, associated with a, is sub-Markovian, i.e., it is real, positive, and contractive on E_∞.
By definition we have u_j = (u)_j, 1 ≤ j ≤ m. It follows from the above arguments that f ∈ H^1(0, 1)^m, and one can see that Hence, U ∈ V. Moreover, the first two sums of a(Re U, Im U) are sums of m integrals.
Recall that all the weights are real-valued, nonnegative functions. Since all the integrated functions are real-valued, and the third sum is a sum of real numbers, it follows that a(Re U, Im U) ∈ R. Thus, the first criterion has been checked. Moreover, if U is real-valued, then |u_j| = |u|_j, 1 ≤ j ≤ m, and |r_i| = |r|_i, 1 ≤ i ≤ n, and one sees as above that |U| ∈ V. In particular, ||u|'|^2 = |u'|^2, and a(|U|, |U|) = a(U, U) holds. We have thus also checked the third criterion, and the claim follows.
Λ hyperons and the neutron drip line
Xian-Rong Zhou,1 A. Polls,2 H.-J. Schulze,3 and I. Vidaña2 — 1Department of Physics and Institute of Theoretical Physics and Astrophysics, Xiamen University, Xiamen 361005, People's Republic of China; 2Departament d'Estructura i Constituents de la Matèria, Universitat de Barcelona, E-08028 Barcelona, Spain; 3INFN Sezione di Catania, Via Santa Sofia 64, I-95123 Catania, Italy (Received 31 July 2008; published 7 November 2008)
I. INTRODUCTION
New experimental facilities under construction at GSI, JLAB, J-PARC, and other sites will soon allow a much more precise determination of the properties of hyperon-nucleon and hyperon-hyperon forces than is currently available (see Ref. [1] for a recent account of experimental data). Initial investigations of hypernuclear physics were mainly focused on the spectroscopy of single-Λ hypernuclei. The quantitative information obtained and its theoretical analysis became one of the most relevant tools to constrain ΛN interactions. Complementary to these studies, in this article we consider a particular feature of hypernuclear physics that might be accessible in the future [2]: the change of nuclear structure due to the effect of added hyperons. In particular we focus on the modification of states close to the neutron drip line when adding one or more Λ hyperons to the system. A detailed study of this effect in infinite hypernuclear matter will be complemented by the consideration of some typical hypernuclei.
This subject has generated some theoretical interest in the past and, apart from the exploration of hypernuclear bulk matter [3-6], several studies of neutron-rich hypernuclei have been performed in different theoretical frameworks. We mention the relativistic mean-field treatments of Refs. [7] and [8], the Skyrme-Hartree-Fock approach of Ref. [9], and the use of a generalized mass formula in Ref. [10].
Obviously the results depend, apart from the theoretical scheme, on the nucleon-nucleon and hyperon-nucleon interactions that are used. The purpose of our study is to employ a microscopically derived hyperon-nucleon force together with recent reliable nucleonic interactions suitable for neutron-rich environments. More precisely, we use for this purpose a microscopic in-medium ΛN force without adjustable parameters, derived from Brueckner-Hartree-Fock (BHF) calculations of isospin-asymmetric hypernuclear matter [5,11] with the Nijmegen soft-core hyperon-nucleon potential NSC89 [12] and the Argonne V18 nucleon-nucleon interaction [13], including explicitly the coupling of the ΛN to the ΣN states. This ΛN force is combined with a standard Skyrme force for the nucleon-nucleon interaction to calculate the properties of homogeneous hypernuclear matter, while hypernuclei are treated in a Skyrme-Hartree-Fock (SHF) model employing the same interactions and including quadrupole deformations and (nucleonic) pairing. This methodology gives access to more refined information than just using a generalized mass formula [10]. Furthermore, the microscopically founded ΛN interaction that we use avoids the uncertainties of the parametrizations of the ΛN Skyrme forces used in Ref. [9].
In the next section, we briefly review the necessary formalism. The results for hypernuclear matter and hypernuclei are presented in Sec. III and the main conclusions are summarized in the last section.
II. FORMALISM
For the nucleonic energy density functional ε_N to be used for infinite hypermatter or finite nuclei we choose a standard Skyrme functional with the modern Skyrme forces SkI4 [14] or SLy4 [15], which have been specifically devised with attention to the description of neutron-rich systems. For the calculations in the homogeneous system, we also consider alternatively a simple analytical energy density functional developed in Ref. [16] (hereafter referred to as Av18+3BF), which parametrizes the results of the variational calculation in the framework of correlated basis functions with the Argonne V18 potential plus a Urbana three-body force and relativistic boost corrections of Ref. [17]. These variational results are in very good agreement with BHF calculations [16]. This energy density functional is expressed in terms of a compressional and a symmetry term. In this expression, ρ_N = ρ_n + ρ_p is the total nucleonic density, α = (ρ_n − ρ_p)/ρ_N the nucleon asymmetry, and u = ρ_N/ρ_0 the ratio of the nucleonic density to the nuclear saturation density. The best fit of the variational calculations of Ref. [17] with this simple functional is obtained with ρ_0 = 0.16 fm^-3, E_0 = 15.8 MeV, S_0 = 32 MeV, γ = 0.6, and δ = 0.2. The contribution to the energy density functional due to the presence of Λ hyperons, ε_Λ, is written as in Ref. [18], with τ_Λ/2m_Λ being the Λ kinetic energy density, C = (3/5)(3π²)^{2/3} ≈ 5.742, and The last term in Eq.
(2) vanishes in homogeneous hypermatter. These energy functionals are obtained from a fit to the binding energy per baryon, B/A(ρ_n, ρ_p, ρ_Λ), of asymmetric hypermatter, as generated by BHF calculations [5,11]. The adequate Λ effective mass (used also in the SHF Schrödinger equation) is computed from the BHF single-particle potentials U_Λ(k) obtained in the same calculations. In practice we use the following parametrizations of energy density and effective mass in terms of the partial densities ρ_n, ρ_p, ρ_Λ (ρ_N and ρ_Λ given in units of fm^-3, ε_N in MeV fm^-3), where y = ρ_Λ/ρ_N.
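The bookkeeping quantities defined in this section can be collected in a small helper. This is a sketch, not the authors' code; only the definitions ρ_N = ρ_n + ρ_p, α = (ρ_n − ρ_p)/ρ_N, u = ρ_N/ρ_0 with ρ_0 = 0.16 fm^-3, and y = ρ_Λ/ρ_N are taken from the text, and the function and variable names are ours.

```python
# Sketch: bulk variables used throughout Sec. II, from the partial
# densities rho_n, rho_p, rho_L (all in fm^-3).
RHO0 = 0.16  # nuclear saturation density in fm^-3 (value quoted in the text)

def bulk_variables(rho_n, rho_p, rho_L):
    """Return (rho_N, alpha, u, y) as defined in the text."""
    rho_N = rho_n + rho_p             # total nucleonic density
    alpha = (rho_n - rho_p) / rho_N   # nucleon asymmetry
    u = rho_N / RHO0                  # density in units of saturation density
    y = rho_L / rho_N                 # Lambda fraction relative to nucleons
    return rho_N, alpha, u, y

rho_N, alpha, u, y = bulk_variables(0.10, 0.06, 0.02)
```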
From the total energy density of hypernuclear matter, ε = ε_N + ε_Λ, one can then calculate the chemical potentials and the pressure of the system. Regarding the description of hypernuclei, we use the SHF formalism developed in Refs. [18-20], using the ΛN energy density functional of Eq. (2) together with the same nucleonic Skyrme force as in infinite hypermatter. This formalism reproduces fairly well the Λ binding energies and single-particle levels of hypernuclei and can thus be considered sufficiently reliable for our purpose. Microscopic calculations of the Λ self-energy for finite nuclei in certain simple cases (closed shells) and using realistic NN [13] and YN [12] interactions are also available and provide good agreement with the Λ single-particle levels of hypernuclei [21].
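The step from the total energy density ε to chemical potentials and pressure follows the standard thermodynamic relations µ_i = ∂ε/∂ρ_i and P = Σ_i µ_iρ_i − ε. A minimal numerical sketch, assuming central finite differences are acceptable; the quadratic toy functional below is purely illustrative and not any of the functionals used in the paper:

```python
# Hedged sketch: chemical potentials mu_i = d(eps)/d(rho_i) and pressure
# P = sum_i mu_i * rho_i - eps, evaluated by central differences for an
# arbitrary energy-density function eps(rho_n, rho_p, rho_L).

def chem_potentials(eps, rhos, h=1e-6):
    mus = []
    for i in range(len(rhos)):
        up, dn = list(rhos), list(rhos)
        up[i] += h
        dn[i] -= h
        mus.append((eps(*up) - eps(*dn)) / (2 * h))  # central difference
    return mus

def pressure(eps, rhos):
    mus = chem_potentials(eps, rhos)
    return sum(m * r for m, r in zip(mus, rhos)) - eps(*rhos)

# Toy quadratic functional (assumption, MeV fm^-3) just to exercise the code:
toy_eps = lambda rn, rp, rl: 100 * rn**2 + 100 * rp**2 + 50 * rl**2
mu_n, mu_p, mu_L = chem_potentials(toy_eps, [0.08, 0.08, 0.0])
```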
In Ref. [20], the energy density functional has been extended to arbitrary nuclear asymmetry, since we are now interested in very neutron-rich nuclei. Furthermore, modern nucleonic Skyrme forces (SkI4 [14] or SLy4 [15]) suitable for this situation are now used, and we include, as already said, the effects of quadrupole deformation and (nucleonic) pairing.
A. Hypernuclear matter
We are mainly interested in the neutron drip properties, determined by the vanishing of the neutron chemical potential, of asymmetric hypermatter characterized by the total baryonic density ρ_N, the nucleon asymmetry α, and the presence of a certain amount of Λ's described by ρ_Λ. We consider the case of saturated hypermatter with vanishing pressure, as it is the most significant situation for the analysis of finite nuclei.
Figure 1 shows the different chemical potentials, µ_n and µ_p in the left panel and µ_Λ in the right panel, as a function of the nucleon asymmetry for different fixed Λ densities (ρ_Λ = 0.0, 0.02, 0.04 fm^-3) under the condition of vanishing pressure, obtained with the SLy4 force together with the parametrization of Eq. (5) to describe the nucleonic and the hyperonic contributions, respectively. As expected, the proton chemical potential decreases with the nuclear asymmetry, while the neutron chemical potential increases, and the crossing with zero defines the maximum (neutron drip) asymmetry for a given value of ρ_Λ. The neutron drip asymmetry increases with the presence of Λ's, which act as an additional source of attraction for the neutrons. One thus expects that for finite nuclei the presence of one or more Λ's should translate into an increment of the number of neutrons that a nucleus can support.
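The drip asymmetry read off Fig. 1 is simply the zero crossing of µ_n(α) along the zero-pressure curve, which can be found by bisection. The linear µ_n(α) used below is a made-up monotonic stand-in for illustration, not the SLy4 result:

```python
# Sketch: locate the neutron drip asymmetry, i.e. the alpha at which
# mu_n(alpha) crosses zero (neutrons bound for smaller alpha).
def drip_asymmetry(mu_n, lo=0.0, hi=1.0, tol=1e-10):
    assert mu_n(lo) < 0 < mu_n(hi)  # bracket: bound at lo, unbound at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mu_n(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Toy linear mu_n(alpha) in MeV (assumption): crosses zero at alpha = 1/3.
alpha_drip = drip_asymmetry(lambda a: 60.0 * a - 20.0)
```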
When the nucleon asymmetry vanishes, µ_n and µ_p coincide if ρ_Λ = 0. However, a finite ρ_Λ produces an isospin-breaking asymmetry (induced by the underlying NSC89 potential [12]) and the chemical potentials of neutrons and protons become different even at α = 0, µ_p being slightly more attractive than µ_n, which changes very little with ρ_Λ for low asymmetries.
The Λ chemical potential at ρ_Λ = 0 (impurity case) has typical values of about −27 MeV (the hyperon well depth), and becomes more repulsive when ρ_Λ increases, mainly due to the increase of the Fermi motion of the Λ's and the fact that some of the ΛN bonds are replaced by ΛΛ bonds. For all values of ρ_Λ the dependence of µ_Λ on the nucleon asymmetry is very smooth, presenting a shallow maximum at low asymmetries. For ρ_Λ ≈ 0.04 fm^-3 the Λ chemical potential becomes positive and no more Λ's can be bound by the matter. However, this drip point is in practice unreachable, because the conversion of Λ's into other species sets in before [4,5].
One should keep in mind the condition of zero pressure along the curves, such that each asymmetry corresponds to a different total density. In addition, for a given asymmetry, the different values of ρ_Λ correspond also to different values of the nucleonic density. Figure 2 illustrates this condition by showing the saturation baryon density as a function of the nucleon asymmetry for several values of ρ_Λ, obtained with SLy4 for the nucleonic energy density. One observes that the total baryon density at zero pressure decreases with the asymmetry, due to the increment of the Fermi motion and pressure and the concurrent reduction of the attractive interaction energy with the asymmetry. Therefore the total density decreases to keep the pressure equal to zero. On the other hand, the increase of the total density with the partial Λ density can be understood using the same type of arguments, i.e., the presence of Λ's decreases the Fermi motion and therefore one needs to increase the density in order to keep the pressure constant. For each value of ρ_Λ, the curve is shown up to the nucleon asymmetry corresponding to the neutron drip condition.
As expected, this asymmetry increases with ρ_Λ, as can be seen in Fig. 3, where we show the maximum nucleon asymmetry corresponding to the neutron drip condition as a function of ρ_Λ for the three different interactions considered in the paper. The results provided by Av18+3BF and SLy4 are very similar, while SkI4 produces a slightly larger maximum asymmetry. The relevant fact is that the maximum asymmetry increases with ρ_Λ; this increment at the drip point ρ_Λ ≈ 0.04 fm^-3 is about 18%, 16%, and 23% for Av18+3BF, SLy4, and SkI4, respectively. We remark that qualitatively similar results were obtained in Ref. [3], employing various hyperonic Skyrme forces.
Since in our case all calculations are performed using the same hyperon-nucleon interaction, the differences are due to the different NN interactions. The larger values of the maximum nucleon asymmetry associated with SkI4 can be easily understood by examining the symmetry energy of nuclear matter shown in Fig. 4. In fact, in the region of interest, up to ρ_N ≈ 0.2 fm^-3, SLy4 and Av18+3BF provide very similar results for the symmetry energy, which are systematically larger than those of SkI4. Therefore the latter interaction facilitates the creation of a larger nuclear asymmetry.
It is not easy from these bulk matter results to make quantitative predictions on how the presence of one or more Λ's will affect the neutron drip line in the case of finite nuclei, due to the presence of shell structure, pairing, and deformation in that more complex environment. In any case the effect in homogeneous matter seems sizable enough to motivate explicit calculations with finite nuclei. They will be discussed in the next section.
B. Hypernuclei
The influence of a few Λ hyperons on the nucleonic structure of a nucleus is usually small: from Fig. 1 we deduce shifts (additional attraction) of the order of 1 MeV for the neutron and proton chemical potentials in the presence of typical Λ densities of the order of 0.01 fm^-3 in a hypernucleus. In order to find appreciable effects we therefore focus on nuclei close to the neutron drip line, where the highest (partially) occupied neutron single-particle level is very weakly bound or where additional bound states can be expected with little added attraction; in particular we perform an exploratory study of the neutron-rich isotopes of beryllium and oxygen. In this case the addition of Λ's might stabilize an otherwise unbound neutron level and thus allow the existence of new isotopes or extend the neutron drip point. [By existence we mean the fulfillment of two simultaneous conditions in our model: (i) a solution of the SHF Schrödinger equation with negative single-particle energies for all occupied neutron levels exists; (ii) this solution lies in a (local) minimum of the energy vs. deformation plot.] Furthermore, the lifetime of very short-lived isotopes (neutron emitters beyond the drip line) might be increased (up to the typical hypernucleus lifetime of the order of 100 ps) and neutron halo features might be either augmented or reduced with respect to the parent nucleus.
In the theoretical treatment the same features might be caused by relaxing the constraint of spherical symmetry and performing deformed SHF calculations instead of spherical ones. It is thus essential to properly take into account this competing effect. In order to remain realistic, we compare in the following ordinary nuclei and double-Λ hypernuclei, which is the maximum Λ number that is perhaps experimentally feasible.
We begin in Fig. 5 with the complete chain of Be isotopes and their properties obtained with the SLy4 force, namely the neutron single-particle energies (upper panel), the total binding energies (middle panel), and (lower panel) the quadrupole deformation parameter β_2 = √(π/5) ⟨2z² − r²⟩/⟨z² + r²⟩ in cylindrical coordinates. With the SLy4 Skyrme force we find beryllium isotopes with N ≤ 8 and N = 10, 11, 12, 14, 16. The neutron drip point defined by the minimum of the B vs. N curve lies at 12Be, and the heavier isotopes (N > 8) are thus unstable with respect to one- or two-neutron emission. Experimentally the isotopes up to 16Be are known [22]; 15Be and 16Be may decay via neutron emission and are therefore very short-lived. The heaviest isotopes are extremely unstable due to the very small binding energies of the highest occupied neutron single-particle state. (The neutron Fermi energy is indicated by a red dashed line in the figure.) We find in fact that the isotopes in the range 8 < N < 16 only exist due to the deformation of the nucleus; in spherical calculations the highest neutron level is unbound in these cases. In addition, N = 14 and N = 16 are metastable deformed states, i.e., they lie in local minima of the energy vs. deformation plot, whereas in the global minima their highest neutron levels would be unbound. Finally, the odd nuclei with N = 9, 13, 15 do not exist due to pair breaking.
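The deformation parameter quoted above can be evaluated directly from expectation values of the coordinates. A minimal sketch, assuming a discretized density represented by weighted sample points (r, z) with r the cylindrical radius; the function name and point representation are ours:

```python
# Sketch: beta2 = sqrt(pi/5) * <2 z^2 - r^2> / <z^2 + r^2>, with r the
# cylindrical radius, evaluated over weighted sample points of the density.
import math

def beta2(points, weights):
    """points: iterable of (r, z); weights: matching density weights."""
    num = sum(w * (2 * z * z - r * r) for (r, z), w in zip(points, weights))
    den = sum(w * (z * z + r * r) for (r, z), w in zip(points, weights))
    return math.sqrt(math.pi / 5.0) * num / den
```

For a spherical distribution ⟨r²⟩ = 2⟨z²⟩, so the numerator vanishes and β_2 = 0; mass concentrated along the z axis gives β_2 > 0 (prolate).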
The plot thus demonstrates the importance and interplay of deformation and pairing for weakly bound isotopes close to the drip line. In order to study these aspects and the effect of added hyperons in more detail, we show in Fig. 6 the energy of the highest (partially) occupied neutron 1d5/2 single-particle level in several Be isotopes without Λ's and with two Λ's, with or without deformation, obtained using the SLy4 force. We note that in general the addition of Λ's, as well as allowing deformation, clearly increases the neutron binding energy. More precisely, in the undeformed case without Λ's only the isotopes N ≤ 8 and N = 16 exist, while the addition of two Λ's stabilizes also N = 14. With deformation the isotopes N ≤ 8 and N = 10, 11, 12, 14, 16 exist, and the addition of Λ's substantially augments their binding energies and allows also the N = 9, 13 nuclei, overcoming the pair-breaking effect. Thus, even if in this case the neutron drip point N = 8 (minimum of the B vs. N curve for Be) is not shifted, the two short-lived isotopes N = 9, 13 are obtained by the addition of two Λ's.
On the contrary, the same calculations using the SkI4 force yield the neutron drip at N = 8, and with two Λ's also N = 10 exists. No other isotopes with N > 8 are found in this case. This demonstrates the strong dependence of the predictions on the nuclear Skyrme forces used, which are mainly constrained by nuclear data far from the drip line. Nevertheless, the main qualitative effect of added hyperons is clearly demonstrated: nuclei close to the drip line are stabilized and new isotopes are potentially made available.
In Fig. 7 we show the equivalent results for oxygen nuclei. In this case all isotopes up to N = 20 exist with both Skyrme interactions. The deformations are very small and not shown here. One observes clearly the attractive effect of the two Λ's on the neutrons, even if no new isotopes are made available. The shifts of the neutron single-particle levels are slightly smaller than for the Be nuclei, because the partial Λ densities are smaller in the larger O nucleus.
The lower panels of the figure show the one-neutron separation energies computed from the binding energies, S_n = B(N, Z) − B(N − 1, Z). The difference between S_n and −e_n is due to the rearrangement of the core (including the change of deformation) of the two nuclei involved. In general the separation energies are slightly smaller in magnitude than the single-particle energies; in particular, S_n can become negative while the valence neutrons are still all bound. For our purpose, however, the changes of both quantities due to the addition of two Λ's are very similar and of the order of some hundreds of keV. Comparing with the experimental values (red dashed lines) one sees that also in this case neither of the two Skyrme forces gives a really satisfactory description of the neutron-rich isotopes. In particular, the theoretical predictions give positive separation energies up to N = 20, whereas experimentally the neutron drip point is N = 16 [22,23]. (Note, however, that for N = 17, . . ., 20 the "experimental" data involve systematic extrapolations, see Ref. [22].) However, this deficiency is not thought to affect significantly the energy gain due to the addition of two Λ's extracted from the plotted results.
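The separation-energy definition above is a one-liner over a table of binding energies. A sketch with a made-up illustrative table (the values below are not the paper's results):

```python
# Sketch: one-neutron separation energies S_n = B(N, Z) - B(N-1, Z)
# from a table of binding energies keyed by (N, Z).
def separation_energies(B):
    """B: dict {(N, Z): binding energy in MeV}; returns {(N, Z): S_n}."""
    return {(N, Z): B[(N, Z)] - B[(N - 1, Z)]
            for (N, Z) in B if (N - 1, Z) in B}

# Illustrative (made-up) binding energies for an oxygen-like chain:
B = {(14, 8): 99.0, (15, 8): 99.8, (16, 8): 100.1}
S = separation_energies(B)
```

A negative S_n signals that removing one neutron is energetically favorable, which is how the drip point is read off the B vs. N curve.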
Confronting the results obtained with the two Skyrme forces, one notes that the valence neutrons in the heaviest isotopes (N > 16) are slightly more bound with the SkI4 force, whereas the opposite is true for the lighter isotopes. This demonstrates the importance of finite-size effects beyond the indications given by the nuclear matter results, where the SkI4 symmetry energy is the smaller one. In particular the spin-orbit parts (and their isospin dependence) of the two Skyrme forces are very different; they have no influence on the nuclear matter results, but play an important role in light nuclei.
IV. CONCLUSIONS
We studied the effect of adding Λ hyperons to nuclear matter or finite nuclei, in particular in view of the modification of the neutron drip properties. We used a deformed Hartree-Fock approach with a microscopic in-medium ΛN force derived from BHF calculations of hypernuclear matter, together with the modern nucleonic SkI4 or SLy4 Skyrme forces, including nucleonic pairing correlations. The effect is particularly strong in light nuclei due to the relatively high partial Λ densities involved, and might stabilize otherwise unbound isotopes, or increase the lifetime of existing ones beyond the neutron drip line. This has been demonstrated explicitly in exploratory calculations of neutron-rich beryllium and oxygen isotopes. Clearly, the quantitative results depend on the ΛN force and in particular on the nucleonic force used, which should be refined in the future for these exotic situations in order to allow more precise predictions, possibly within a more sophisticated theoretical framework suited for the delicate problem of paired, weakly bound halo states [24].
FIG. 1. (Color online) Neutron, proton, and lambda chemical potentials at zero pressure as a function of the nucleon asymmetry for different values of ρ_Λ, obtained with the SLy4 interaction together with the parametrization of Eq. (5). µ_n and µ_p are shown in the left panel, while µ_Λ is plotted in the right panel. The solid, dotted, and dashed lines correspond to ρ_Λ = 0.0, 0.02, and 0.04 fm^-3, respectively.
FIG. 3.
FIG. 2. (Color online) Total baryon density at zero pressure as a function of the nucleon asymmetry for different values of ρ_Λ, obtained with the SLy4 interaction for the nucleonic energy density. Going from low to high densities, the different curves correspond to ρ_Λ = 0.0, 0.01, 0.02, 0.03, 0.04 fm^-3.
FIG. 5. (Color online) Neutron single-particle levels (upper panel), binding energies (middle panel), and quadrupole deformations (lower panel) of several Be isotopes, obtained with the SLy4 force.The (red) dashed line indicates the neutron Fermi energy.
FIG. 6. (Color online) Upper panel: energy of the highest (partially) occupied neutron 1d5/2 single-particle level of several beryllium isotopes containing no (solid lines) or two (dotted lines) Λ's. Deformed (black) and undeformed (green) SHF calculations with the SLy4 force are compared. Lower panel: quadrupole deformation of the (hyper)nucleus.
FIG. 7. (Color online) Energy of the highest (partially) occupied neutron single-particle level (upper panels) and one-neutron separation energies (lower panels) of several oxygen isotopes containing no (solid lines) or two (dotted lines) Λ's, using the SLy4 (left panels) and the SkI4 (right panels) nucleonic Skyrme forces. The (red) dashed lines indicate experimental data from Ref. [22].
Automatic detection and segmentation of evolving processes in 3D medical images: Application to multiple sclerosis
The study of temporal series of medical images can be helpful for physicians to perform pertinent diagnoses and to help them in the follow-up of a patient: in some diseases, lesions, tumors or anatomical structures vary over time in size, position, composition, etc., either because of a natural pathological process or under the effect of a drug or a therapy. It is a laborious and subjective task to visually and manually analyze such images. Thus the objective of this work was to automatically detect regions with apparent local volume variation with a vector field operator applied to the local displacement field obtained after a non-rigid registration between two successive temporal images. On the other hand, quantitative measurements, such as the volume variation of lesions or segmentation of evolving lesions, are important. By studying the information of apparent shrinking areas in the direct and reverse displacement fields between images, we are able to segment evolving lesions. Then we propose a method to segment lesions in a whole temporal series of images. In this article we apply this approach to automatically detect and segment multiple sclerosis lesions that evolve in time series of MRI scans of the brain. At this stage, we have only applied the approach to a few experimental cases to demonstrate its potential. A clinical validation remains to be done, which will require important additional work.
INTRODUCTION
Speech recognition has two major applications [1]: transcribing ubiquitous speech documents such as presentations, lectures and broadcast news, and dialogue with computer systems. Since speech is the most natural and effective way of communication between human beings, the former application is expected to become very important in the IT era. Although high recognition accuracy can easily be obtained for speech read from text, such as anchor speakers' broadcast news utterances, it is still very difficult to recognize spontaneous speech. Spontaneous speech is ill-formed and very different from written text. Spontaneous speech usually includes redundant information such as disfluencies, filled pauses, repetitions, repairs and word fragments. In addition, irrelevant information in a transcription caused by recognition errors is usually inevitable. Therefore, an approach in which all words are transcribed is not effective for spontaneous speech. Instead, speech summarization, which extracts important information and removes redundant and incorrect information, is necessary for recognizing spontaneous speech.
Techniques for automatically summarizing written text have been actively investigated in the field of natural language processing. However, many of these techniques are not applicable to speech, and techniques for speech summarization have only recently started to be investigated. We have proposed a sentence-compaction-based statistical speech summarization technique, in which a set of words maximizing a summarization score indicating the appropriateness of the summarization is extracted from automatically transcribed speech and then concatenated to create a summary according to a target compression ratio [2][3]. The proposed technique can be applied to each sentence utterance as well as to whole speech documents consisting of multiple utterances. This technique has been applied to Japanese as well as English documents, and its effectiveness has been confirmed. However, when multiple spontaneous utterances including many recognition errors and disfluencies are summarized with a high compression ratio (a small summarization ratio), the summary sometimes includes unnatural, incomplete sentences consisting of a small number of words, and it becomes difficult to read. This paper proposes a new two-stage summarization method, consisting of important sentence extraction and sentence compaction, to cope with this problem. In the new method, relatively well-structured and important sentences, containing important information and fewer speech recognition errors, are extracted, and sentence compaction is applied to the set of extracted sentences. The remainder of the paper is organized as follows. In the next section, the two-stage summarization method is described. Section 3 provides results of evaluation experiments on automatically summarizing spontaneous presentation utterances. The paper concludes with a general discussion and issues related to future research.
Important sentence extraction
The important sentence extraction is performed according to the following score for each sentence W = w_1, w_2, . . ., w_N obtained as a result of speech recognition:

S(W) = (1/N) Σ_{n=1}^{N} { L(w_n) + λ_I I(w_n) + λ_C C(w_n) },   (1)

where N is the number of words in the sentence W, and L(w_n), I(w_n) and C(w_n) are the linguistic score, the significance score, and the confidence score of word w_n, respectively. The three scores are a subset of the scores originally used in our sentence compaction method and are considered to be useful also as measures indicating the appropriateness of including the sentence in the summary. λ_I and λ_C are weighting factors for balancing the scores. Details of the scores are as follows.
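The extraction criterion can be sketched in a few lines. This is a hedged reconstruction: the per-word sum of the three scores, weighted by λ_I and λ_C and normalized by sentence length, with the score tables represented as plain dictionaries (an assumption of this sketch):

```python
# Sketch: per-sentence extraction score = length-normalized sum of the
# linguistic (L), significance (I), and confidence (C) word scores,
# with weighting factors lam_I and lam_C.
def sentence_score(words, L, I, C, lam_I=1.0, lam_C=1.0):
    """words: list of word tokens; L, I, C: dicts mapping word -> score."""
    total = sum(L[w] + lam_I * I[w] + lam_C * C[w] for w in words)
    return total / len(words)

# Sentences are then ranked by this score and the top ones are kept.
```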
Linguistic score
The linguistic score L(w_i) indicates the linguistic likelihood of word strings in the sentence and is measured by the n-gram probability

L(w_i) = log P(w_i | w_{i−2}, w_{i−1}).   (2)

In our experiment, the trigram probability is calculated using transcriptions of presentation utterances in the CSJ (Corpus of Spontaneous Japanese) [4] consisting of 1.5M morphemes (words). This score de-weights linguistically unnatural word strings caused by recognition errors.
Significance score
The significance score I(w_i) indicates the significance of each word w_i in the sentence and is measured by the amount of information it carries. The amount of information is calculated for content words, such as nouns, verbs and adjectives, from word occurrence counts in a corpus, as shown in Eq. (3). A flat score is given to all other words.
I(w_i) = f_i log (F_A / F_i),   (3)

where f_i is the number of occurrences of w_i in the recognized utterances, F_i is the number of occurrences of w_i in a large-scale corpus, and F_A is the number of all content words in that corpus, that is, Σ_i F_i. For measuring the significance score, the number of occurrences of 120k kinds of words is calculated in a corpus consisting of transcribed presentations (1.5M words), proceedings of 60 presentations, presentation records obtained from the WWW (2.1M words), NHK (Japanese broadcast company) broadcast news text (22M words), Mainichi newspaper text (87M words) and text from the speech textbook "Speech Information Processing" (51k words). Important keywords are weighted, and words unrelated to the original content, such as recognition errors, are de-weighted by this score.
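The significance measure is built from the three counts the text defines: f_i (occurrences in the recognized utterances), F_i (occurrences in the large corpus), and F_A (all content words in that corpus). A hedged sketch of that amount-of-information score:

```python
# Sketch: significance score from occurrence counts. A word that is rare
# in the background corpus (small F_i) but frequent in the talk (large f_i)
# receives a high score; a word as common in the talk as in the corpus
# contributes little.
import math

def significance(f_i, F_i, F_A):
    """f_i: count in recognized utterances; F_i: corpus count; F_A: total."""
    return f_i * math.log(F_A / F_i)
```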
Confidence score
The confidence score C(w_i) is incorporated to weight acoustically as well as linguistically reliable hypotheses. Specifically, the logarithm of the posterior probability of each transcribed word, that is, the ratio of the probability of the word hypothesis to that of all other hypotheses, is calculated from a word graph obtained by the decoder and used as the confidence score.
Sentence compaction
After removing sentences having relatively low recognition accuracy and/or low significance, filled pauses are removed from the remaining transcription, and sentence compaction is performed using the method that we have proposed [3]. In this method, all the remaining sentences are combined together, and the linguistic score, the significance score, the confidence score and a word concatenation score are given to each transcribed word. The word concatenation score is incorporated to weight concatenations between words with a dependency relation in the transcribed sentences. The dependency is measured by a phrase structure grammar, an SDCFG (Stochastic Dependency Context-Free Grammar). A set of words that maximizes a weighted sum of these scores is selected according to a given compression ratio using a two-stage dynamic programming (DP) technique. Specifically, each sentence is summarized according to all possible compression ratios, and then the best combination of summarized sentences is determined according to a target total compression ratio.
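The word-selection core of the compaction step can be illustrated with a simplified DP. This sketch is ours and deliberately reduced: it selects K of the N words to maximize word scores plus a pairwise concatenation score between consecutive kept words, omitting the SDCFG dependency modeling and the second DP over per-sentence compression ratios described in the text:

```python
# Simplified sketch of DP-based word selection for sentence compaction.
# dp[i][k] = best total score of a compacted sentence that keeps k words
# and ends with word i; back[i][k] records the previous kept word.
def compact(score, concat, K):
    """score: list of word scores; concat(j, i): concatenation score; K: words kept."""
    N = len(score)
    NEG = float('-inf')
    dp = [[NEG] * (K + 1) for _ in range(N)]
    back = [[None] * (K + 1) for _ in range(N)]
    for i in range(N):
        dp[i][1] = score[i]
        for k in range(2, K + 1):
            for j in range(i):
                if dp[j][k - 1] > NEG:
                    cand = dp[j][k - 1] + score[i] + concat(j, i)
                    if cand > dp[i][k]:
                        dp[i][k], back[i][k] = cand, j
    # Backtrack from the best final word keeping exactly K words.
    i = max(range(N), key=lambda i: dp[i][K])
    sel, k = [], K
    while i is not None:
        sel.append(i)
        i, k = back[i][k], k - 1
    return sel[::-1]  # indices of kept words, in order
```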
Ideally, the linguistic score should be calculated using a word concatenation model based on a large-scale summary corpus.Since such a summary corpus is not yet available, the transcribed presentations used to calculate the word trigrams for the important sentence extraction are automatically modified into written editorial style articles and used together with the proceedings of 60 presentations to calculate the trigrams for sentence compaction.
The significance score is calculated using the same corpus as that used for calculating the score for important sentence extraction.The word dependency probability is estimated by the Inside-Outside algorithm, using a manually parsed Mainichi newspaper corpus having 4M sentences with 68M words.
Summarization experiments
One of the presentations in the CSJ, given by a male speaker and roughly 12 minutes long, was summarized at summarization ratios of 70% and 50%. The word recognition accuracy of this presentation is 70% on average. The specification of the recognition system is as follows.
Feature extraction
The speech waveform is digitized at 16 kHz sampling and 16-bit quantization, and a 25-dimensional feature vector, consisting of normalized logarithmic energy, 12-dimensional Mel-cepstrum, and their derivatives, is extracted using a 24 ms frame applied every 10 ms. Cepstral mean subtraction (CMS) is applied for each utterance.
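The framing and normalization steps can be sketched as follows; this toy front end computes only the log-energy track with cepstral mean subtraction, whereas the real system also extracts 12 Mel-cepstral coefficients and their derivatives per frame:

```python
import numpy as np

def frame_features(signal, sr=16000, frame_ms=24, hop_ms=10):
    """Cut a waveform into 24 ms frames every 10 ms and compute the
    log-energy of each frame, then apply cepstral mean subtraction (CMS)
    by removing the per-utterance mean."""
    flen = sr * frame_ms // 1000          # 384 samples per frame
    hop = sr * hop_ms // 1000             # 160 samples between frames
    n = 1 + (len(signal) - flen) // hop
    frames = np.stack([signal[i * hop:i * hop + flen] for i in range(n)])
    log_e = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
    return log_e - log_e.mean()           # mean-normalized feature track

rng = np.random.default_rng(0)
feats = frame_features(rng.standard_normal(16000))  # 1 s of noise
```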
Acoustic and linguistic models
Speaker-independent context-dependent phone HMMs with 3000 states and 16 Gaussian mixtures per state are built using a part of the CSJ consisting of 338 presentations, 59 hours in total, spoken by male speakers different from the speaker of the test presentation. The transcribed presentations in the CSJ, comprising 1.5M words, are automatically split into words (morphemes) by the JTAG morphological analysis program, and the most frequent 20k words are selected to calculate word bigrams and trigrams.
Decoder
A word-graph-based two-pass decoder is used for recognition. In the first pass, a frame-synchronous beam search is performed using the above-mentioned HMMs and the bigram language model. The word graph generated by the first pass is rescored in the second pass using the trigram language model.
Summarization accuracy
To automatically evaluate the summarized sentences, the correctly transcribed presentation speech was manually summarized by nine human subjects and used as the correct targets. The variations among the manual summarization results are merged into a word network, as shown in Fig. 2, which is considered to approximately express all possible correct summarizations covering subjective variation. The word accuracy of the automatic summarization against this word network is calculated as the summarization accuracy [3].
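A simplified version of this evaluation, scoring the hypothesis against each manual summary separately and taking the best match rather than aligning against the merged word network, might look like:

```python
def word_accuracy(hyp, ref):
    """Standard word accuracy: 1 - edit_distance(hyp, ref) / len(ref)."""
    m, n = len(hyp), len(ref)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution/match
    return 1.0 - d[m][n] / n

def summarization_accuracy(hyp, references):
    """Score against the closest manual summary; a simplification of
    scoring against the merged word network of all nine subjects."""
    return max(word_accuracy(hyp, r) for r in references)

refs = [["speech", "is", "summarized"], ["speech", "summarized"]]
acc = summarization_accuracy(["speech", "summarized"], refs)  # 1.0
```

The word network of the paper is strictly more permissive than this max-over-references scheme, since it also accepts new combinations of subsequences taken from different subjects.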
A sentence removed at the sentence extraction stage is evaluated as follows: if there exists a direct path from the sentence beginning <s> to the sentence ending </s> in the word network, the summarization accuracy for that sentence is 100% (no error); if no such direct path exists, the removal is counted as a deletion error for all the words in that sentence.
Evaluation results
The results of the evaluation experiments are shown in Figs. 3 and 4. Under all automatic summarization conditions, both our previous one-stage method without sentence extraction and our new two-stage method including sentence extraction achieve better results than random word selection. At both the 70% and 50% summarization ratios, the two-stage method achieves higher summarization accuracy than the one-stage method. In these experiments, the division of the summarization ratio between the two stages was optimized experimentally.
Figure 5 shows the summarization accuracy as a function of the proportion of the total compression performed by sentence extraction, at the 50% and 70% summarization conditions. This result indicates that the best summarization accuracy is obtained when 2/3 and 1/2 of the compression is performed by sentence extraction under the 50% and 70% summarization ratios, respectively.

Fig. 3. Summarization at 50% summarization ratio.
Comparing the three scores for sentence extraction, the significance score (I) or the confidence score (C) achieves better results than the linguistic score (L), improving the summarization accuracy by 2% over the one-stage method. By combining the two scores (I+C) in the sentence extraction, the improvement over the one-stage method reaches 3%. Since the linguistic score is much less effective than the other two scores, the combination of all three scores gives only a minor improvement over the combination of the significance and confidence scores alone.
CONCLUSION
This paper has proposed a new two-stage automatic speech summarization method consisting of important sentence extraction and sentence compaction. In this method, inadequate sentences, including those with recognition errors and less important information, are automatically removed before word-based sentence compaction. It was confirmed that, in spontaneous presentation speech summarization, combining sentence extraction with sentence compaction is effective, and the new method achieves better summarization performance than our previous one-stage method. It was also found that the word significance score and the word confidence score are effective for extracting important sentences. The two-stage method avoids producing short, unreadable sentences, one of the problems of the one-stage method.
Future research includes evaluation on a larger test set with manual summaries, investigation of other useful information/features for important sentence extraction, and automatic optimization of the division of the compression ratio between the two summarization stages.
Fig. 1. The new two-stage summarization method consisting of important sentence extraction and sentence compaction. From the speech recognition results, a set of relatively important sentences is extracted, and sentence compaction is then performed.
Fig. 5. Summarization accuracy as a function of the ratio of compression by sentence extraction in the total summarization ratio.
Causality & holographic entanglement entropy
We identify conditions for the entanglement entropy as a function of spatial region to be compatible with causality in an arbitrary relativistic quantum field theory. We then prove that the covariant holographic entanglement entropy prescription (which relates entanglement entropy of a given spatial region on the boundary to the area of a certain extremal surface in the bulk) obeys these conditions, as long as the bulk obeys the null energy condition. While necessary for the validity of the prescription, this consistency requirement is quite nontrivial from the bulk standpoint, and therefore provides important additional evidence for the prescription. In the process, we introduce a codimension-zero bulk region, named the entanglement wedge, naturally associated with the given boundary spatial region. We propose that the entanglement wedge is the most natural bulk region corresponding to the boundary reduced density matrix.
Introduction
One of the remarkable features of the holographic AdS/CFT correspondence is the geometrization of quantum-field-theoretic concepts. While certain aspects of recasting field-theory quantities into geometric notions have become ingrained in our thinking, we have yet to fully come to grips with new associations between QFT and bulk geometry. A case in point is the fascinating connection between quantum entanglement and spacetime geometry. The genesis of this intricate and potentially deep connection harks back to the observation of Ryu-Takayanagi (RT) [1,2], and its subsequent covariant generalization by Hubeny-Rangamani-Takayanagi (HRT) [3], that the entanglement entropy of a quantum field theory is holographically computed by the area of a particular extremal surface in the bulk. In recent years, much effort has been expended in trying to flesh out the physical implications of these constructions and in promoting the geometry/entanglement connection to a deeper level [4-7], which can be summarized rather succinctly by the slogans "entanglement builds bridges" and "ER = EPR". While any connection between entanglement and geometry is indeed remarkable, further progress is contingent on the accuracy and robustness of this entry in the holographic dictionary. Let us therefore take stock of the status quo. 1 The RT proposal is valid for static states of a holographic field theory, which allows one to restrict attention to a single time slice Σ̃ in the bulk spacetime M. The entanglement entropy of a region A on the corresponding Cauchy slice Σ of the boundary spacetime B is computed by the area of a certain bulk minimal surface which lies on Σ̃.
In this case we have a lot of confidence in this entry to the AdS/CFT dictionary; firstly the RT formula obeys rather non-trivial general properties of entanglement entropies such as strong subadditivity [8][9][10], and secondly a general argument has been given for it in the context of Euclidean quantum gravity [11].
However, it should be clear from the outset that restricting oneself to static states is overly limiting. Not only is the field theory notion of entanglement entropy valid in a broader, time-dependent, context, but more importantly, one cannot hope to infer all possible constraints on the holographic map without considering time dependence.
The HRT proposal, which generalizes the RT construction to arbitrary time-dependent configurations by promoting a minimal surface on Σ̃ to an extremal surface E_A in M, allows one to confront geometric questions in complete generality. However, this proposal has passed far fewer checks, and an argument deriving it from first principles is still lacking. This presents a compelling opportunity to test the construction against field-theory expectations and see how it holds up. Since the new ingredient in HRT is time-dependence, the crucial property to check is causality. The present discussion therefore focuses on verifying that the HRT prescription is consistent with field-theory causality. 2 Let us start by considering the implications of CFT causality on entanglement entropy, in order to extract the corresponding requirements to be upheld by its putative bulk dual. As we will explain in detail in section 2, there are two such requirements. First, the entanglement entropy is a so-called wedge observable. This means that two spatial regions A, A′ that share the same domain of dependence, D[A] = D[A′], have the same entanglement entropy, S_A = S_{A′}; this follows from the fact that the corresponding reduced density matrices ρ_A, ρ_{A′} are unitarily related [13]. Second, fixing the initial state, a perturbation to the Hamiltonian with support contained entirely inside D[A] ∪ D[A^c] (where A^c is the complement of A on a Cauchy slice) cannot affect S_A. The reason is that we can choose a Cauchy slice Σ′ that lies to the past of the support and contains a region A′ with D[A′] = D[A]; since the perturbation cannot change the state on Σ′, it cannot affect S_{A′}, which by the previous requirement equals S_A. Time-reversing the argument shows that, similarly, S_A cannot be affected by a perturbation in D[A] ∪ D[A^c] when we consider time evolution toward the past with a fixed final state.
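These two requirements can be written compactly; the following display is a restatement of the argument above in formulas (U denotes the unitary relating the states on the two slices):

```latex
% (1) Wedge observable: regions with equal domains of dependence have
%     unitarily related reduced density matrices, hence equal entropy:
D[A] = D[A'] \;\Longrightarrow\; \rho_{A'} = U \rho_A U^\dagger
\;\Longrightarrow\; S_{A'} = S_A \,.
% (2) Hamiltonian perturbations localized in the two domains of
%     dependence cannot change the entanglement entropy:
\operatorname{supp}\delta H \subseteq D[A] \cup D[A^c]
\;\Longrightarrow\; \delta S_A = 0 \,.
```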
Having specified the implications of causality for the entanglement entropy in the field theory, let us now translate them into requirements on its holographic dual.

1 We will focus exclusively on local QFTs with conformal UV fixed points which are holographically dual to asymptotically AdS spacetimes in two-derivative theories of gravity.

2 As we elaborate in the course of our discussion, this result follows from Theorem 6 of [12]. As this is, however, not widely appreciated, we focus on proving the result from a different perspective, highlighting certain novel bulk constructs in the process.

JHEP12(2014)162

First, in order to ensure that the HRT formula in general gives the same entanglement entropy for A and A′, they should have the same extremal surface, E_A = E_{A′}. Second, in order for E_A to be safe from influence by perturbations of the boundary Hamiltonian in D[A] and D[A^c] (when evolving either toward the future or toward the past), it has to be causally disconnected from those two regions. This means that the extremal surface has to lie in a region which we dub the causal shadow, denoted Q_∂A and defined in (2.7) as the set of bulk points which are spacelike-separated from both D[A] and D[A^c]. This causality requirement takes an interesting guise in the case where A is an entire Cauchy slice for a boundary. If this is the only boundary, and the bulk is causally trivial, then there is no causal shadow; indeed, E_A = ∅, corresponding to the fact that the entanglement entropy of the full system vanishes in a pure state. However, if the state is not pure, the bulk geometry is causally nontrivial: typically the bulk black-hole spacetime has two boundaries, dual to two field theories in an entangled state (which can be thought of as purifying the thermal state of the theory on one boundary). If we take the region A to be a Cauchy slice for one boundary and A^c a Cauchy slice for the other, then the extremal surface whose area, according to HRT, measures the amount of entanglement between the two field theories must lie in a region out of causal contact with either boundary. 3 How trivial or expected is the claim that the extremal surface resides in the causal shadow? It is interesting to note that for local CFT observables, analogous causality violation is in fact disallowed by the gravitational time-delay theorem of Gao and Wald [14]. This theorem, which assumes that the bulk satisfies the null energy condition, implies that a signal from one boundary point to another cannot propagate faster through the bulk than along the boundary, ensuring that bulk causality respects boundary causality.
However, since entanglement entropy is a more nonlocal quantity, which according to HRT is captured by a bulk surface that can go behind event and apparent horizons [15,16] and penetrate into causally disconnected regions from the boundary, it is far less obvious whether CFT causality will survive in this context.
Let us first consider a static example. Although it is guaranteed to be consistent with CFT causality, since it is covered by the RT prescription which is "derived" from first principles, it is useful to gain an appreciation for how innocuous or far-fetched causality violation would appear in the more general case. Intriguingly, already the simplest case of pure AdS reveals the potential for things to go wrong. As illustrated in figure 1, the null congruence from a single boundary point (which bounds the bulk region that a boundary source at that point can influence) is simultaneously foliated by spacelike geodesics {E_A}. So a signal that can influence a given extremal surface E_A in that set can also influence ∂A, thereby upholding CFT causality. However, note that here causality is maintained only marginally: if the extremal surface were deformed away from A by an arbitrarily small amount, one would immediately be in danger of causality violation.
Another, less trivial, test case is the static eternal Schwarzschild-AdS black hole. The extremal surface that encodes entanglement between the two boundaries is the horizon bifurcation surface. Again, an arbitrarily small deformation of this surface would shift it into causal contact with at least one of the boundaries, thereby endangering causality; in particular, the entanglement entropy of one CFT should not be influenced by deformations in the other CFT. For static geometries we are in fact safe, because extremal surfaces do not penetrate event horizons [17]; however, this is no longer the case in dynamical situations [15,16,18-20]. Moreover, as illustrated in [21], in the Vaidya-AdS geometry E_A can be null-related to the past tip of D[A], again upholding causality only marginally: an arbitrarily small outward deformation of the extremal surface would render it causally accessible from D[A]. These considerations demonstrate that the question of whether the HRT prescription is consistent with field-theory causality is a highly nontrivial one.
The main result of this paper is a proof that, if the bulk spacetime metric obeys the null energy condition, then the extremal surface E A does indeed obey both of the above requirements. We conclude that the HRT formula is consistent with field-theory causality. This theorem can be viewed as a generalization of the Gao-Wald theorem [14]. We regard it as a highly nontrivial piece of evidence in favor of the HRT formula. Along the way, we will also slightly sharpen the statement of the HRT formula, and in particular clarify the homology condition on E A .
Partial progress towards this result was achieved in [22,23], which showed that the extremal surface E_A generically lies outside of the "causal wedge" of D[A], the intersection of the bulk causal future and causal past of D[A]. (However, these works did not make the connection to field-theory causality.) A stronger statement, equivalent to our theorem, was proved in [12] (cf. Theorem 6), where it is noted in passing that this ensures field-theory causality. We present an alternate proof which brings out some of the bulk regions more cleanly and makes the connections with boundary causality more manifest.
As a byproduct of our analysis, we will identify a certain bulk spacetime region, which we call the entanglement wedge and denote W E [A], which is bounded on one side by D[A] and on the other by E A . Apart from providing a useful quantity in formulating and deriving our results, the entanglement wedge is, as we will argue, the bulk region most naturally associated with the boundary reduced density matrix ρ A .
The outline of this paper is as follows. We begin in section 2 with an overview of the causal domains of interest on each side of the gauge-gravity duality, and motivate and state the core theorem of the paper, which shows that the HRT proposal is consistent with boundary causality. We motivate one of the major implications of our theorem by considering spherically symmetric deformations of the eternal black hole containing a region out of causal contact with both asymptotically AdS boundaries, the causal shadow, and showing that the HRT surface lies in this causal shadow. In section 3, we begin to develop some intuition used in the proof of our main theorem, by considering classes of null geodesic congruences in AdS 3 . In section 4 we prove the general theorem which establishes the main result of the paper. We conclude in section 5 with a discussion of the physical implications of our result and open questions.
Note added: while this paper was nearing completion [24] appeared on the arXiv, which has some overlap with the present work. It introduces the notion of quantum extremal surfaces and argues that for bulk theories that satisfy the generalized second law such surfaces satisfy the causality constraint.
Causal domains and entanglement entropy
In this section we will state our basic results and discuss some of their implications. The specific proof, and some additional results, will be presented in section 4. In section 5 we will suggest some further interpretations of our results, particularly regarding the dual of the reduced density matrix.
We will open in section 2.1 by deriving the causality properties of entanglement entropy in a QFT, and setting up some notation regarding causal domains which will be useful in the sequel. In section 2.2, we will review the HRT formula and discuss various causal regions in the bulk. In section 2.3, we state the basic theorem and some implications for the bulk causal structure relative to specific regions arising in the HRT conjecture. section 2.4 spells out a particular consequence of our results for spacetimes with multiple boundaries.
Causality of entanglement entropy in QFT
Consider a local quantum field theory (QFT) on a d-dimensional globally hyperbolic spacetime B. The state on a given Cauchy slice 4 Σ is described by a density matrix ρ_Σ; this could be a pure or mixed state. We are interested in the entanglement between the degrees of freedom in a region 5 A ⊂ Σ and its complement A^c. Following established terminology, we call the boundary ∂A the entangling surface.

4 Throughout this paper we will require all Cauchy slices to be acausal (no two points are connected by a causal curve). This is slightly different from the standard definition in the general-relativity literature, in which a Cauchy slice is merely required to be achronal. The reason is to ensure that different points represent independent degrees of freedom, which is useful when we decompose the Hilbert space according to subsets of the Cauchy slice.

5 Technically, A is defined as the interior of a codimension-zero submanifold-with-boundary in Σ, ∂A is the boundary of that submanifold, and A^c := Σ \ (A ∪ ∂A).
The entanglement entropy is defined by first decomposing the Hilbert space H of the QFT into H_A ⊗ H_{A^c}, after imposing some suitable cutoff. 6 The reduced density matrix ρ_A := Tr_{H_{A^c}} ρ_Σ captures the entanglement between A and A^c; in particular, the entanglement entropy is given by its von Neumann entropy, S_A := −Tr(ρ_A ln ρ_A). For holographic theories, we expect that this quantity has good properties in the large-N limit, 7 unlike the Rényi entropies S_{n,A} := −(1/(n−1)) ln Tr(ρ_A^n) [10,31]. Note that both quantities are determined by the eigenvalues of ρ_A, and are thus insensitive to unitary transformations of ρ_A. Now, since Σ is a Cauchy slice, the future (past) evolution of initial data on it allows us to reconstruct the state of the QFT on the entirety of B. In other words, the past and future domains of dependence of Σ, D^±[Σ], together make up the background spacetime on which the QFT lives, i.e., B = D^+[Σ] ∪ D^−[Σ]. Similarly, the domain of dependence of A, D[A], is the region where the reduced density matrix ρ_A can be uniquely evolved once we know the Hamiltonian acting on the reduced system in A. 8 A^c similarly has its domain of dependence D[A^c]. If Σ′ is another Cauchy slice containing a region A′ with D[A′] = D[A] (see figure 2), then the state ρ_{Σ′} is related by a unitary transformation to the state ρ_Σ. It is clear that such a transformation can be constructed from operators localized in A, and so does not change the entanglement spectrum of ρ_A. Furthermore, if we fix the state at t → −∞, then a perturbation to the Hamiltonian with support R cannot affect the state on a Cauchy slice to the past of R (i.e. one that does not intersect J^+[R]). Such a perturbation can therefore not affect S_A.

6 In the case of gauge fields, this decomposition is not possible even on the lattice. Instead, one must extend the Hilbert spaces H_A, H_{A^c} to each include degrees of freedom on ∂A, so that H ⊂ H_A ⊗ H_{A^c} [26-29].

7 Technically, by "large-N" we mean large c_eff, where c_eff is a general count of the degrees of freedom (see [30] for the general definition of c_eff).
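For a concrete finite-dimensional illustration of these definitions (a toy two-qubit example, not related to the holographic setup), the reduced density matrix and its von Neumann entropy can be computed with NumPy:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

def reduced_density_matrix(psi, dim_A, dim_Ac):
    """rho_A = Tr_{H_Ac} |psi><psi| for a pure state on H_A (x) H_Ac."""
    m = psi.reshape(dim_A, dim_Ac)
    return m @ m.conj().T

# Bell-like pure state on two qubits: maximal entanglement.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_A = reduced_density_matrix(psi, 2, 2)
S_A = von_neumann_entropy(rho_A)          # ln 2, about 0.693
```

Since the global state is pure, tracing out A instead of A^c gives the same spectrum, so S_A = S_{A^c}, consistent with the insensitivity to unitaries noted above.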
8 We remind the reader that D[A] := D^+[A] ∪ D^−[A].

These are the crucial causality requirements that entanglement (and Rényi) entropies are required to satisfy in any relativistic QFT. The essential result of this paper is that the HRT proposal for computing S_A satisfies these causality constraints. In the conclusions we will revisit the question of what the dual of ρ_A, and thus of the data in D[A], might be.
Bulk geometry and holographic entanglement entropy
Let us now restrict attention to the class of holographic QFTs, which are theories dual to classical dynamics in some bulk asymptotically AdS spacetime. To be precise, we only consider strongly coupled QFTs in which the classical gravitational dynamics truncates to that of Einstein gravity, possibly coupled to matter which we will assume satisfies the null energy condition.
The dynamics of the QFT on B is described by classical gravitational dynamics on a bulk asymptotically locally AdS spacetime M with conformal boundary B, the spacetime where the field theory lives. We define M̃ := M ∪ B. M̃ is endowed with a metric g̃_ab which is related by a Weyl transformation to the physical metric g_ab on M, g̃_ab = Ω^2 g_ab, where Ω → 0 on B. 9 Causal domains on M̃ will be denoted with a tilde to distinguish them from their boundary counterparts; e.g., J̃^±(p) will denote the causal future and past of a point p in M̃, and D̃[R] will denote the domain of dependence of a set R ⊂ M̃.
It will also be useful to introduce a compact notation to indicate when two points p and q are spacelike-separated; for this we adopt the notation p ∼ q, i.e.
p ∼ q ⇔ there exists no causal curve between p and q. (2.2)
Moreover, to denote the regions that are spacelike-separated from a point, we will use S(p) and S̃(p) on the boundary and in the bulk, respectively. Just as for other causal sets, we can extend these definitions to any region R, namely S[R] := ∩_{p∈R} S(p) is the set of points which are causally disconnected from the entire region R, etc.
Having established our notation for general causal relations, let us now specify the notation relevant for holographic entanglement entropy. As before, we fix a region A on the boundary. The HRT proposal [3] states that the entanglement entropy S_A is holographically computed by the area of a bulk codimension-two extremal surface E_A that is anchored on ∂A; specifically, S_A = Area(E_A)/(4 G_N). In the static (RT) case, it is known that the extremal surface is required to be homologous to A, meaning that there exists a bulk region R_A such that ∂R_A = A ∪ E_A. So far, it has not been entirely clear what the correct covariant generalization of this condition is. In particular, should it merely be a topological condition, or should one impose geometrical or causal requirements on R_A, for example that it be spacelike? (A critical discussion of the issues involved can be found in [32].) In this paper, we will show that a clean picture, consistent with all aspects of field-theory causality, is obtained by requiring that R_A be a region of a bulk Cauchy slice. 10 We will call this the "spacelike homology" condition. 11 The homology surface R_A naturally leads us to the key construct pertaining to entanglement entropy, which we call the entanglement wedge of A, denoted 12 W_E[A]. This can be defined as a causal set, namely the bulk domain of dependence of R_A,
W_E[A] := D̃[R_A]. (2.5)
Note that the entanglement wedge is a bulk codimension-zero spacetime region, which can be equivalently identified with the region defined by the set of bulk points which are

10 Technically, similarly to A, we define R_A to be the interior of a codimension-zero submanifold-with-boundary of a Cauchy slice Σ̃ of M̃ (with Σ̃ ∩ B = Σ). Since Σ̃ itself has a boundary (namely its intersection with B), the interior of a subset (in the sense of point-set topology) includes the part of its boundary along B. Thus, R_A includes A (but not E_A).
11 If there are multiple extremal surfaces obeying the spacelike homology condition, then we are to pick the one with smallest area. However, in this paper we will not use this additional minimality requirement; all our theorems apply to any spacelike-homologous extremal surface.

12 While we have associated it notationally with the region A, it depends only on D[A].
spacelike-separated from E_A and connected to D[A]. The latter definition has the advantage of absolving us of having to specify an arbitrary homology surface R_A rather than just E_A and D[A]. As we shall see below, the bulk spacetime can be naturally decomposed into four regions analogously to the boundary decomposition (2.1); the entanglement wedge is then the region associated with (and ending on) D[A]. While we have focused on the regions in the bulk which enter the holographic entanglement entropy constructions, we pause here to note two other causal constructs that can be naturally associated with A. First of all, we have the causal wedge W_C[A], which is the set of all bulk points which can both send signals to and receive signals from boundary points contained in D[A], i.e., 13
W_C[A] := J̃^+[D[A]] ∩ J̃^−[D[A]]. (2.6)
(The entanglement wedge W_E[A] and causal wedge W_C[A] are in fact special cases of the "rim wedge" and "strip wedge" introduced recently in [33] as bulk regions associated with residual entropy.) The second bulk causal domain which will play a major role in our discussion below is a region we call the causal shadow Q_∂A associated with the entangling surface ∂A. We define this region as the set of points in the bulk M that are spacelike-related to both D[A] and D[A^c], i.e.,
Q_∂A := S̃[D[A]] ∩ S̃[D[A^c]]. (2.7)
For a generic region A in a generic asymptotically AdS spacetime, the causal shadow is a codimension-zero spacetime region; see figure 3 for an illustrative example. 14 In certain special (but familiar) situations, such as spherically symmetric regions in pure AdS (where ρ_A is unitarily equivalent to a thermal density matrix), it can degenerate to a codimension-two surface. In such special cases, the entanglement wedge and the causal wedge coincide [22].
In general, the causal information surface for A and that for A^c comprise the edges of the causal shadow. For a generic pure state, these causal information surfaces each recede from E_A towards their respective boundary region but approach each other near the AdS boundary. Hence the geometrical structure of Q_∂A, described in the language of a three-dimensional bulk, is a "tube" (connecting the two components of ∂A) with a diamond cross-section, which shrinks to a point where the tube meets the AdS boundary at ∂A. For topologically trivial deformations of AdS, in the absence of E_A (i.e. when the state is pure and A = Σ) the causal shadow disappears, but intriguingly, even when A is the entire boundary Cauchy slice, the causal shadow can be nontrivial. This occurs for example in the AdS_3-geon spacetimes 15 [34] and in perturbations of the eternal AdS black hole, such as those studied by [35]. In such a situation we simply define the causal shadow of the entire boundary (dropping the subscript) as Q := S̃[B].

13 Following [22], we can also define a particular bulk codimension-two surface Ξ_A, the causal information surface, to be the rim of the causal wedge; in fact, it is the minimal-area codimension-two surface lying on ∂W_C[A].

14 The matter supporting the bulk geometry used in the plot for figure 3 satisfies the null energy condition, as can be checked explicitly.

Figure 3. Example of a causally trivial spacetime and a boundary region A whose causal shadow is a finite spacetime region. We have engineered an asymptotically AdS_3 geometry sourced by matter satisfying the null energy condition (see footnote 14) and taken A to be nearly half the boundary, φ_A = 1.503, at t = 0 (thick red curve). The shaded regions on the boundary cylinder are D[A] and D[A^c] respectively. The extremal surface is the thick blue curve, while the purple curves are the rims of the causal wedge (causal information surfaces) for A and A^c respectively. A few representative generators are provided for orientation: the blue null geodesics generate the boundary of the causal wedge for A, while the green ones do likewise for A^c. The orange generators in the middle of the spacetime generate the boundary of the causal shadow region Q_∂A.
Here B is understood generally to include multiple disconnected components; the causal shadow is the region spacelike separated from points on all the boundaries.
Causality constraints on extremal surfaces
Having developed the various causal concepts which we require, let us now ask what the constraints of field-theory causality concerning entanglement entropy translate to in the bulk.

15 Since these describe pure states, the presence of a causal shadow region does not necessarily guarantee the presence of an extremal surface whose area gives the entanglement entropy contained within it. However, there will be some extremal surface spanning this region.

The first constraint is that S_A should be a wedge observable, i.e. if D[A] = D[A′] then S_A = S_{A′}. For this to hold in general, we need E_A = E_{A′}. The second concerns perturbations of the field-theory Hamiltonian. Such perturbations will source perturbations of the bulk fields, including the metric, that travel causally with respect to the background metric. In particular, disturbances originating in D[A] will be dual to bulk modes propagating in J̃^+[D[A]] (if we fix the state in the far past) or in J̃^−[D[A]] (if we fix the state in the far future). If either of these bulk regions intersected E_A, the dual of local operator insertions in D[A] could change the area of E_A, meaning that the HRT proposal would be inconsistent with causality in the QFT. By the same token, the extremal surface must be causally disconnected from D[A^c] as well; that is, it must be spacelike-separated from both domains of dependence,
E_A ⊂ S̃[D[A]] ∩ S̃[D[A^c]]. (2.9)
In other words, using (2.7), we can say that E_A has to lie in the causal shadow of ∂A. It is known, based on properties of extremal surfaces, that E_A lies outside the causal wedges W_C[A] and W_C[A^c]; however, this by itself does not preclude E_A from being causally related to D[A] or D[A^c]. A theorem of Wall [12] (Theorem 6 of that reference) guarantees that this does not occur (modulo some assumptions).
We will prove an essentially equivalent statement in section 4, directly for extremal surfaces in an asymptotically AdS spacetime. The main result, however, can be stated in terms of three simple causal relations, (2.11) below. In other words, the causal split of the bulk into spacelike- and timelike-separated regions from E_A restricts to the boundary at precisely the boundary split (2.1). Given the decomposition (2.1), these causal relations imply that perturbations in D[A] ∪ D[A c ] are not in causal contact with E_A. So, as required, the extremal surface lies in the causal shadow.
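Written out, the three relations plausibly take the following form (our transcription, consistent with theorem 11 and theorem 13 proved later in the paper; J̃^± denote causal future and past with respect to the bulk spacetime):

```latex
\tilde J^{+}[E_A] \cap B \;=\; J^{+}[\partial A], \qquad
\tilde J^{-}[E_A] \cap B \;=\; J^{-}[\partial A], \qquad
B \setminus \bigl(\tilde J^{+}[E_A] \cup \tilde J^{-}[E_A]\bigr) \;=\; D[A] \cup D[A^{c}] .
```

The first two say that the bulk causal sets of E_A reach the boundary only through the causal sets of ∂A; the third is the complementary statement for the spacelike-separated region.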
As a consequence of this theorem, we will also show that, if a second boundary region A′ satisfies D[A′] = D[A], then E_A is spacelike-homologous to A′ as well. Thus, the HRT formula gives the same entanglement entropy for A and A′, as required on the field-theory side.
Entanglement for disconnected boundary regions
A striking consequence of the theorems discussed above emerges when we consider spacetimes with two boundary components, and let A be (a Cauchy slice for) all of one component.
As a starting point, consider the eternal Schwarzschild-AdS_{d+1} black hole in the Hartle-Hawking state, with the Penrose diagram shown in figure 4(a) below. The left and right boundaries of the diagram each have the topology S^{d−1} × R. This geometry is believed to be dual to the CFT on the product spatial geometry S^{d−1}_L × S^{d−1}_R, in the entangled "thermofield double" state [36-39]: |Ψ⟩ = (1/√Z) Σ_i e^{−β E_i /2} |E_i⟩_L |E_i⟩_R, (2.12) where |E_i⟩_{R,L} is the energy eigenstate of the CFT on S^{d−1}_{R,L}. Let Σ_R lie on the t = 0 slice of the right boundary, and consider the reduced density matrix for some region A ⊂ Σ_R. Since this is a static geometry, its entanglement entropy S_A is computed by a minimal surface E_A which never penetrates past the bifurcation surface X of the black hole [17]. 16 If we let A be the full Cauchy slice of one of the boundaries, say A = Σ_R, the extremal surface precisely coincides with the black hole bifurcation surface, as indicated in figure 4. Note that E_A lies on the edge of the causally acceptable region, since X sits at the boundary of both W_C[A] and W_C[A^c], and therefore constitutes the entire causal shadow for this special case.
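To make the structure of the state (2.12) concrete, here is a small numerical toy computation (ours, not from the paper; a random 4-level Hamiltonian stands in for the CFT Hamiltonian) verifying the standard fact that tracing the thermofield-double state over the left factor yields the thermal density matrix on the right factor:

```python
import numpy as np

# Toy illustration (ours): for the thermofield-double state
# |Psi> ~ sum_i e^{-beta E_i/2} |E_i>_L |E_i>_R, tracing out the left
# factor yields the thermal density matrix e^{-beta H}/Z on the right.
rng = np.random.default_rng(0)

# A hypothetical 4-level Hamiltonian standing in for the CFT on S^{d-1}.
H = rng.normal(size=(4, 4))
H = (H + H.T) / 2                        # make it Hermitian
E = np.linalg.eigvalsh(H)                # energy eigenvalues E_i

beta = 1.3
amps = np.exp(-beta * E / 2)
amps /= np.linalg.norm(amps)             # normalize |Psi>

# In the energy eigenbasis, Psi[i, j] = (<E_i|_L <E_j|_R) |Psi> is diagonal.
Psi = np.diag(amps)

# Reduced density matrix on the right factor: trace over the left index.
rho_R = Psi.conj().T @ Psi

# The thermal density matrix in the same basis.
Z = np.sum(np.exp(-beta * E))
rho_thermal = np.diag(np.exp(-beta * E) / Z)

assert np.allclose(rho_R, rho_thermal)
```

The entanglement entropy of the right factor therefore equals the thermal entropy, consistent with the bulk statement above that for A = Σ_R the extremal surface is the bifurcation surface of the black hole.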
One might now wonder what happens if we deform the state (2.12). This is not an innocuous question. In time-dependent geometries, the global (teleological) nature of the event horizon implies that extremal surfaces anchored on the boundary can pass through this horizon [15]. Furthermore, as first explicitly shown in [16], even apparent horizons do not form a barrier to the extremal surfaces. Hence we see that, a priori, in a state which is a deformation of (2.12), the extremal surface could end up in causal contact with one of the boundaries. The theorems we have stated above indicate that this does not happen. The question is: how precisely does the extremal surface E_A avoid doing so? As a first step to answering this, consider a deformation of the static eternal case localized along a null shell emitted from the right boundary at some time. The corresponding metric is given by the global Vaidya-SAdS geometry, where both the initial (prior to the shell) and final (after the shell) spacetime regions describe a black hole. Figure 4(b) presents a sketch of the Penrose diagram of such a geometry, contrasted with the standard static eternal Schwarzschild-AdS black hole (figure 4(a)). The diagonal brown line represents the shell, which is sourced at some time on the right boundary and implodes into the black hole (terminating at the future singularity), and the blue lines represent the various (future and past, left and right) event horizons. The solid parts of these lines indicate where these event horizons coincide with apparent horizons (as well as isolated horizons); the dashed parts are parts of the event horizon which are not apparent horizons. In such a geometry, let us again consider A = Σ_R. Then our theorems guarantee that the extremal surface must lie on the null sheet separating regions R^c and P^c: it is again spacelike-separated from both D[Σ_L] and D[Σ_R].
(In fact, since the spacetime prior to the shell is identical to the eternal static case, the extremal surface remains in the same location as for the static case, namely the bifurcation surface where regions R c and L touch.) The situation is again marginal, much like the original undeformed case. Indeed, any perturbation to Schwarzschild-AdS which emanates from (or reaches to) the right boundary cannot change the location of the original extremal surface by causality; it could at most generate a new extremal surface.
A less marginal case occurs when we symmetrically perturb both copies of the CFT as above. Consider a perturbation at t = 0 such that spherically symmetric null shells are emitted both to the past and future on both sides of the diagram. One then obtains the Penrose diagram shown in figure 5; this has time-reflection symmetry about t = 0, symmetry under exchanging the left and right sides, and the SO(d) rotational symmetry.
According to the theorems above, the extremal surface must be spacelike-separated from both boundaries, when we take A = Σ R . Using both time and space reflection symmetry, it is clear that E A must sit in the center of the causal shadow Q of the two boundaries, spacelike separated from both.
In the general case of a spherically symmetric spacetime (even in the absence of time or space reflection symmetry) there is an easy proof of our claim that E_A must lie in the causal shadow. We proceed by contradiction: suppose that a spherical extremal surface E_A lies in J̃^+[Σ_L]. This means that on a Penrose diagram it lies somewhere in the top-left region; say it is the surface F_A indicated in figure 5 (which by rotational symmetry is a copy of S^{d−1}). Let us then consider the past congruence of null normal geodesics from F_A towards B_L. Since we assume that the candidate surface F_A lies in J̃^+[Σ_L], past-directed null congruences from the surface intersect B_L on a spacelike codimension-one surface. In other words, the area of the spheres grows without bound along this past-directed congruence.

Figure 5. Sketch of Penrose diagram for a symmetric Vaidya-Schwarzschild-AdS geometry obtained by imploding null shells to the past and future from both boundaries. The crucial new feature of note is the presence of a causal shadow region that is spacelike separated from both boundaries. We have also indicated the extremal surface E_A for the region A = Σ_R in red at the center of the figure; F_A is an S^{d−1} of finite area in the causal future of the left boundary. The lightly shaded regions are the causal wedges associated with A and A^c, respectively.
However, by definition, for an extremal surface the initial expansion vanishes. Moreover, if the matter in the spacetime satisfies the null energy condition, then it also follows that the area along the congruence is guaranteed not to grow. Nor can the area go to zero along the congruence, since the area of the S^{d−1} represented by each point on the Penrose diagram is finite. It therefore follows that our assumption about E_A penetrating J̃^+[Σ_L] must be erroneous; F_A cannot be an extremal surface. Running a similar argument for the other unshaded regions in figure 5, we learn that the extremal surface must indeed lie in the causal shadow region, as denoted by the red surface E_A.
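The focusing step invoked here is the standard Raychaudhuri argument (a textbook computation, not specific to this paper): along the congruence leaving the candidate surface F_A, with affine parameter λ and null tangent k^a,

```latex
\frac{d\theta}{d\lambda}
  \;=\; -\frac{\theta^{2}}{d-1} \;-\; \sigma_{ab}\sigma^{ab} \;-\; R_{ab}k^{a}k^{b}
  \;\le\; 0
  \qquad \text{(null energy condition: } R_{ab}k^{a}k^{b}\ge 0\text{)} ,
```

which together with θ(0) = 0 from extremality gives θ(λ) ≤ 0 everywhere: the cross-sectional area of the congruence is non-increasing, contradicting the unbounded growth required for the congruence to reach B_L.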
Indeed, in this particular case, the extremal surface lies at the point on the Penrose diagram where the future and past apparent horizons meet -the "apparent bifurcation surface". The fact that it lies in the causal shadow is a consequence of the familiar fact that the apparent horizon can never be outside the event horizon, applied to both future and past horizons.
While the above result relied on the special properties of spherical symmetry (both of the spacetime and of the null congruences therein), the theorems we prove in section 4 will establish this in full generality.
In the next two sections we set out to prove the theorems stated in section 2.3. The proof in our spherically symmetric case indicates that understanding null congruences leaving the extremal surface might play a key role. We will therefore spend some time in section 3 examining null congruences emanating from bulk codimension-two surfaces in AdS 3 , in order to develop a picture of the relevant causal domains, before embarking on a general proof in section 4.
Null geodesic congruences in AdS 3
In this section, we consider null geodesic congruences emanating from curves in AdS 3 that are anchored at the boundary. Our aim is to build some intuition about such congruences in a simple setting, since their properties will play a crucial role in the proofs in what follows. Readers familiar with the general statements are invited to skip ahead to the abstract discussion.
We work in the Poincaré patch of AdS 3 with the standard metric ds² = (−dt² + dx² + dz²)/z². Since our aim is to understand specifically the (causal) boundary of bulk causal domains, we are going to examine properties of null geodesic congruences. In particular, for a spacelike codimension-one region R ⊂ M which is anchored on the AdS boundary, the domain of dependence D̃[R] is bounded by a family of outgoing null geodesics emanating from ∂R, up to the point where each geodesic encounters a caustic or intersects another generator. 17 To gain intuition for how these null congruences behave in the context of the extremal surfaces of interest, we examine a more general one-parameter family of codimension-two surfaces (these are curves in AdS 3 ), parameterized by a. Note that all of these are anchored on the boundary R^{1,1} at the ends of the interval A = {(t, x) ∈ R^{1,1} | t = 0, x ∈ [−1, 1]}. (For orientation, see the bottom set of curves in figure 7.) When a = 1, the surface is a semi-circle, which is simultaneously the causal information surface Ξ_A defined in [22] and the extremal surface E_A for the region A under consideration. Surfaces with a < 1 lie inside the causal wedge W_C[A], while those with a > 1 lie outside, i.e., they are spacelike related to D[A]. We wish to study the family of null congruences leaving these surfaces as we vary a. The geodesics will be labelled by their starting position x_0 and parameterized by an affine parameter λ (fixed such that we have unit energy along each geodesic).
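One family with all of the stated properties (curves in the t = 0 slice anchored at x = ±1, a semi-circle at a = 1, flatter and inside the causal wedge for a < 1, taller and outside it for a > 1) is the following; we flag this as our own reconstruction, offered only for orientation:

```latex
t = 0, \qquad z(x) = a\sqrt{1 - x^{2}}, \qquad x \in [-1, 1],
```

so that a is the maximal bulk depth reached by the curve, and a = 1 reproduces the semi-circular extremal surface for an interval in Poincaré AdS 3 .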
Explicit solutions for geodesic congruences
Since the a = 1 surface is extremal, the null expansion Θ(λ; a = 1) = 0 for each generator.
For the surfaces with a < 1, closer to the boundary, we expect that the expansion is positive and the congruence intersects the boundary in a spacelike curve inside D^+[A]. For curves with a > 1 (a long ellipse), we expect the expansion to be negative; the resulting congruence should develop a caustic before reaching the boundary. Due to the relative simplicity of the set-up, we can confirm these expectations explicitly. Since everything is time-symmetric, let us consider just the future-directed outgoing congruence, given explicitly in (3.3); the endpoints of these generators at λ = ∞ can likewise be written in closed form. A representative plot of the generators is given in figure 6 for a = 0.5 (left) and a = 1.5 (right). We see that when a < 1, the generators do not intersect each other before reaching the boundary, and they reach within D^+[A]. On the other hand, when a > 1, the generators intersect in a seam (drawn as the thick blue curve, whose explicit expression is given below in (3.5)) before reaching the boundary (with the geodesic endpoints indicated by the red curves in figure 6). We call the points on this seam the cross-over points; non-neighbouring geodesics intersect at these points. This seam terminates in a caustic, which as always refers to the locus where neighbouring geodesics intersect.
Intersections within congruences
We can determine the intersection between distinct geodesics in the bulk using the explicit expressions from (3.3). By symmetry of the set-up, we know that geodesics with opposite values of x_0 necessarily intersect, and they must do so at x = x_× = 0. Solving for the intersection of the pair of geodesics starting from x_0 and −x_0, we find the locus (3.5). This generates the seam of cross-over points depicted in the right panel of figure 6, and plotted for various values of a in figure 7 (the top set of curves, color-coded by a, corresponding to the initial surface indicated by the thick horizontal curve of the same color). It is easy to see from (3.5) that the cross-over points terminate on the boundary at the future tip of D^+[A], i.e., at z = 0, x = 0, t = 1, corresponding to the intersection of the boundary geodesics x_0 = ±1. On the other hand, the cross-over seams for different a start at the point in the bulk where neighbouring geodesics from x_0 ≈ 0 intersect, which happens at the location given in (3.6).

To summarize, depending on whether a is greater or less than 1, the congruence has qualitatively different behaviour, as illustrated in figure 7. For a < 1 (depicted by colors from red toward green), the congruence reaches the boundary inside D^+[A], while for a > 1, the generators intersect each other at the seam of crossover points (depicted by colors from red toward purple). At precisely a = 1, all generators reach the boundary at the future tip of D^+[A], namely z = 0, x = 0, t = 1.
Expansion of congruences and caustics
Let us now analyze the expansion along this congruence. It can be computed as the logarithmic rate of change of the area element along the wavefront, Θ = ∂_λ ln A(λ, x_0), with A built from the derivatives t′(λ, x_0) ≡ ∂t(λ; x_0)/∂x_0 etc., using the expressions given in (3.3). While one can numerically solve for Θ(λ), it is easier to obtain the solution for small λ and evolve using the Raychaudhuri equation.
Near λ = 0, the leading-order expression for Θ, which we denote Θ_0(x_0), is given in (3.9); it is plotted in the left panel of figure 8 (with the same color-coding by a as employed in figure 7). At the ends of the interval, x_0 = ±1, Θ_0 vanishes (which is to be expected, since there the congruence approximates that of the a = 1 surface), while Θ_0 reaches its extremum at the midpoint, x_0 = 0 (again expected by symmetry), where Θ_0(x_0 = 0) = a(1 − a²). Furthermore, Θ_0 is positive for a < 1 and negative for a > 1; that is, the congruences are expanding for a < 1 and converging for a > 1. The former make it out to the boundary without intersecting, while the latter have a seam of cross-overs. As we will see below, the geodesics end in a curve of caustics, which touches the seam of cross-overs at the endpoint of the latter. Given Θ_0 as our initial condition, it is straightforward to solve the Raychaudhuri equation, dΘ/dλ = −Θ² − σ_{ab}σ^{ab} − R_{ab}ξ^aξ^b (3.10), to find the expansion along the geodesics. Here ξ^a is the tangent vector to the null geodesics and σ_{ab} is the shear of the congruence. For a one-dimensional congruence the shear trivially
vanishes, and the Ricci tensor contracted with null tangents likewise vanishes upon using the bulk equations of motion R_ab = −2 g_ab, so (3.10) simplifies to dΘ/dλ = −Θ². Using (3.9), we find Θ(λ; x_0) = Θ_0(x_0) / (1 + Θ_0(x_0) λ). (3.12) In figure 8 we have plotted this as a function of λ for x_0 = 0, at which Θ = a(1 − a²) / (1 + a(1 − a²) λ). For a > 1, we expect the congruence to develop a caustic where the expansion diverges. This occurs when infinitesimally nearby geodesics intersect each other. Eq. (3.12) shows that this can only occur for a > 1, where the second term in the denominator is negative for positive λ. In this case Θ(λ) → −∞ at a finite value of λ = λ_c, for any x_0. The spacetime coordinates of the points along the congruence where this happens form a pair of parametric curves: viewed as curves parametrized by x_0, starting at x_0 = 0 and ending at x_0 = ±1, the caustic seams are null curves, starting at the intersection point (3.6) and ending on the boundary at z_c = 0, x_c = ±(1 − a²), and t_c = a². Note that this is a finite distance on the boundary. The divergence Θ → −∞ signifies the presence of conjugate points, but their geometric meaning is a bit obscure in our discussion so far. The reason is as follows: as we see in figure 6 and can check explicitly, we generically have caustics in the neighbourhood of x_0 ≈ 0, but more generally encounter cross-over points from the intersection of geodesics symmetrically placed about x_0 = 0. The expansion is finite along the cross-over seam (3.5) for x_0 ≠ 0. This can be understood by realizing that the expansion is a local property of the nearby geodesics, which does not know about any other piece of the congruence. So nothing special ought to happen at the cross-over points, which are non-local in the congruence, and indeed these are not conjugate points.
The clue as to the geometric meaning of Θ → −∞ comes from plotting this locus on the surface of the null congruence (continued through the cross-over seam). This is presented in figure 9 by the thick red curves. We see that the surface intersects itself at the cross-over seam, beyond which the constant-λ wavefronts form closed loops. On the sharp flank, these wavefronts turn around and locally become null; this is precisely where A(λ, x 0 ) vanishes and therefore Θ → −∞.
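The dichotomy between a < 1 and a > 1 is easy to check numerically. The sketch below (our own illustration; the helper names are ours) takes the midpoint initial expansion Θ_0 = a(1 − a²) quoted above, verifies that Θ(λ) = Θ_0/(1 + Θ_0 λ) indeed satisfies the simplified Raychaudhuri equation dΘ/dλ = −Θ², and confirms that a caustic (Θ → −∞ at finite positive λ) occurs only for a > 1:

```python
def theta0(a):
    # Midpoint (x0 = 0) initial expansion quoted in the text: positive for
    # a < 1, zero at a = 1 (extremal surface), negative for a > 1.
    return a * (1.0 - a**2)

def theta(lam, a):
    # Solution of the simplified Raychaudhuri equation dTheta/dlambda = -Theta^2
    # with initial condition Theta(0) = theta0(a).
    return theta0(a) / (1.0 + theta0(a) * lam)

# Finite-difference check that theta solves dTheta/dlambda = -Theta^2.
for a in (0.5, 1.5):
    lam, h = 0.2, 1e-6
    deriv = (theta(lam + h, a) - theta(lam - h, a)) / (2.0 * h)
    assert abs(deriv + theta(lam, a) ** 2) < 1e-4

def lambda_caustic(a):
    # Theta diverges where the denominator 1 + theta0 * lambda vanishes;
    # this happens at positive affine parameter only when theta0 < 0, i.e. a > 1.
    return -1.0 / theta0(a)

assert lambda_caustic(1.5) > 0   # a > 1: caustic at finite affine parameter
assert lambda_caustic(0.5) < 0   # a < 1: no caustic on the future congruence
assert theta0(1.0) == 0.0        # a = 1: extremal, Theta vanishes identically
```

This reproduces the statement above that for a > 1 the expansion blows down at finite λ_c, while for a ≤ 1 the congruence reaches the boundary unimpeded.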
Summary
The upshot of our calculations can be summarized as follows. Consider the null geodesic congruence emanating from a codimension-two spacelike surface F_A ⊂ M anchored on the boundary of a region A, with ∂A = F_A ∩ B. This gives a clear picture of the causal domains for regions bounded by curves inside and outside of W_C[A]. As we will see in our explicit proof, the extremal surface will in general lie outside of W_C[A]; in special cases it can at best lie on the boundary, but never in the interior, of the causal wedge. 18

18 In higher-dimensional settings, D[A] itself may terminate in a crossover seam rather than a single point, which occurs when the null generators of ∂D[A] on the boundary themselves cross over.
4 Theorem and proof
We now come to the main part of the paper, where we prove that the extremal surface E_A satisfies the causality requirements discussed in section 2.3. Our main goal will be to establish the causal relations quoted there in (2.11); these establish the consistency of the HRT proposal for computing holographic entanglement entropy. In section 4.1, we remind the reader of the holographic set-up and of our assumptions. In section 4.2, we study null geodesic congruences in the bulk and their intersections with the boundary. In particular, since a geodesic that reaches the boundary travels an infinite affine parameter, a non-expanding congruence that reaches the boundary without hitting a caustic must have vanishing shear, and therefore must intersect the boundary at a single point. This allows us to show, using the null energy condition, that the intersection with the boundary of the causal future of an extremal bulk surface equals the causal future of its intersection with the boundary. As a warm-up, we prove a version of the Gao-Wald theorem [14]. Finally, in section 4.3, we carefully define what we mean by a region and by the spacelike homology condition. We prove that a region A induces a natural decomposition of the spacetime into the four regions D[A], D[A^c], and J^±[∂A]. Then, given the spacelike homology condition, and using the results of section 4.2, we establish the compatibility of the boundary and bulk decompositions, (2.11), and prove that the extremal surface is a wedge observable.
Holographic setup
In this subsection we will describe our holographic setup and assumptions. 19 Let (M, g_ab) be a connected spacetime, of dimension greater than or equal to 3, that can be embedded in a spacetime (M̃, g̃_ab), such that the boundary B of M in M̃ is a smooth timelike hypersurface in M̃, and such that g̃_ab = Ω² g_ab, where Ω is a smooth function on M that vanishes on B. (We do not assume that B is connected.) We define M̃ := M ∪ B. On M̃ we have a causal structure induced from g̃_ab, which in M agrees with that induced from g_ab. We make the following assumptions: (i) (M, g_ab) obeys the null energy condition.
(ii) M̃ is globally hyperbolic.
(iii) Every null geodesic in (B, g̃_ab) is a geodesic in (M̃, g̃_ab). 20

19 We largely follow the setup and assumptions of section 3 of [14], with two exceptions: we remove the null generic condition and we add the condition that the boundary is totally geodesic for null geodesics (assumption (iii) below).

20 Assumption (iii) is equivalent to the following property of the extrinsic curvature K_ab of B in M̃: for any point p ∈ B and any null vector k^a in the tangent space to B at p, K_ab k^a k^b = 0. That it holds for an asymptotically AdS spacetime can be seen by working in Fefferman-Graham coordinates. If we set Ω = 1/z, where z is the standard radial coordinate, then K_ab = 0 (so all geodesics in B are geodesics in M̃, i.e., B is totally geodesic). The property K_ab = 0 is not preserved by Weyl transformations, and so does not hold for a general choice of Ω, but the weaker condition K_ab k^a k^b = 0 does (as can be seen either from a direct calculation or from the fact that the set of null geodesics is invariant under Weyl transformations).
We begin by showing that B is globally hyperbolic. We omit the proofs, which are very simple; cf. [40]. (For brevity, we will only indicate one time direction for each statement below, but the time-reversed statements are clearly equally valid.)
Congruences of null geodesics
In this subsection, we will study null geodesics in M̃. Assumption (iii) has the following useful implication:

Lemma 5. No null geodesic passing through M can intersect B tangentially; every null geodesic in M̃ either lies entirely in B or lies in M except possibly at its endpoints.

Proof. Given a point p in B and a non-zero null vector in the tangent space to B at p, there exists a null geodesic in B passing through p with that tangent vector. By assumption (iii), it is a geodesic in M̃, and by the uniqueness of geodesics it is the only one. Therefore no null geodesic passing through M can intersect B tangentially. Finally, since B is the boundary of M̃ and is smooth, any smooth curve that intersects B at some point without ending there must be tangent to it.

Now we constrain the behavior of congruences of null geodesics that pass through M, using the fact that the metric g_ab obeys the null energy condition and the fact that a geodesic that reaches B travels an infinite affine parameter.

Lemma 6. Consider a codimension-one congruence of future-directed null geodesics in M̃, each of which lies entirely in M except possibly at its endpoints. Suppose that the part of the congruence in M has the following properties: (1) its expansion with respect to the metric g_ab is nowhere positive; (2) at each point, every deviation vector is spacelike and orthogonal to the tangent vector. Then the congruence intersects B on a set of isolated points.
Proof. We begin by working in the metric g ab . Since the deviation vectors are everywhere spacelike, the expansion Θ is finite everywhere. On any geodesic that reaches B, the affine parameter goes to infinity, so, by the null energy condition, Θ is nowhere negative, and therefore vanishes everywhere. Again using the null energy condition, the shear therefore vanishes everywhere also. Therefore, for any one-parameter family of geodesics that reach B, the norm of the deviation vector X a is a positive constant along each geodesic.
We now return to M̃ and switch to the metric g̃_ab. On B, X^a has vanishing norm; being also orthogonal to the geodesic's tangent vector T^a, it is proportional to T^a (since orthogonal null vectors are proportional). Without loss of generality, we choose the affine parameter λ on each geodesic so that it intersects B at λ = 0; hence, at λ = 0, X^a is tangent to B. However, by lemma 5, T^a is not tangent to B. So X^a = 0. Since this holds for every one-parameter family of geodesics, every connected set of geodesics that reach B intersects it at a point.
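The step "Θ is nowhere negative" in the proof of lemma 6 is the usual focusing argument, which can be spelled out as follows (a standard computation; n denotes the dimension of the congruence's cross-section):

```latex
\frac{d\Theta}{d\lambda} \;\le\; -\frac{\Theta^{2}}{n}
\quad\Longrightarrow\quad
\Theta(\lambda) \;\le\; \frac{\Theta_{0}}{\,1 + \Theta_{0}\,(\lambda-\lambda_{0})/n\,} ,
```

so if Θ(λ_0) = Θ_0 < 0 anywhere, then Θ → −∞ by affine parameter λ_0 + n/|Θ_0|, which is incompatible with the geodesic reaching B only at infinite affine parameter.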
As a warm-up for our main theorem of this subsection, we will now use lemma 6 to prove a version of the Gao-Wald theorem [14] and a version of the topological censorship theorem [41].

Theorem 7. For any point p ∈ B, J̃^+(p) ∩ B = J^+(p).

Proof. Clearly J^+(p) ⊂ J̃^+(p) ∩ B. Let t be a global time function on M̃. Then if t(q) < t(p) we have q ∉ J̃^+(p). Therefore, each connected component of B contains some points not in J̃^+(p). Therefore, if J̃^+(p) ∩ B ≠ J^+(p), then ∂J̃^+(p) ∩ B includes a hypersurface S in B that is not in J^+(p). We will now show that S cannot exist.
∂J̃^+(p) consists of future-directed null geodesics starting at p on which, except at the endpoints, every deviation vector is spacelike and orthogonal to the tangent vector. By lemma 5, each such geodesic either lies entirely in B or lies entirely in M except at its endpoints. In particular, the points in S must lie on geodesics that are entirely in M except at their endpoints. We thus consider the congruence of such geodesics in M starting at p. Reversing its direction, every geodesic in this congruence reaches B (at p), so the expansion is nowhere negative. Therefore, in the forward direction, its expansion is nowhere positive. Thus the conditions of lemma 6 apply. Hence S consists of isolated points, contradicting the fact that it is a hypersurface in B.

Corollary 8 rules out traversable wormholes through the bulk connecting different boundary components, and is thus closely related to topological censorship. (A simple argument establishing this can be found in [42].) Our goal for the rest of this subsection is to generalize Theorem 7 to codimension-two surfaces that are extremal with respect to g_ab. First, we need two lemmas:

Lemma 9. Let E be a compact codimension-two submanifold-with-boundary of M̃, with boundary N. Then every point p ∈ ∂J̃^+[E] is on a future-directed null geodesic lying entirely in ∂J̃^+[E] that either (1) starts orthogonally from E and has no point conjugate to E between E and p, or (2) starts orthogonally from N, moving away from E (i.e., U_a T^a > 0, where T^a is the tangent vector to the geodesic at its starting point, and U^a is a vector at the same point that is tangent to E, normal to N, and outward-directed from E).
Proof. This is a generalization of theorem 9.3.11 in [25]. Every p ∈ ∂J̃^+[E] lies on a null geodesic starting from E. If neither condition (1) nor (2) is met, then the geodesic can be deformed to a timelike curve, and therefore p ∈ Ĩ^+[E].
Lemma 10. Let E be a spacelike submanifold-with-boundary of M̃ whose restriction to M is extremal with respect to the metric g_ab. Then E intersects B orthogonally, i.e., every normal vector to E is tangent to B.
Proof. A short calculation shows that, in M, the mean curvature K̃_a of E with respect to g̃_ab is related to that with respect to g_ab, K_a, by a conformal transformation involving the projection of the gradient of ln Ω normal to E; here Q̃_ab := Q_a{}^c g̃_bc and Q_a{}^c is the projector normal to E. Since E is extremal, K_a = 0, so K̃_a is proportional to that normal projection. Since E is smooth, K̃² remains finite on B, where ln Ω → −∞. This requires that every normal vector to E be tangent to B.
Theorem 11. Let E be a compact smooth spacelike codimension-two submanifold-with-boundary in M̃, whose only boundary is where it intersects B, and whose restriction to M is extremal with respect to the metric g_ab. Then J̃^±[E] ∩ B = J^±[E ∩ B].

Proof. The proof is largely a repetition of that of Theorem 7. Clearly J^+[E ∩ B] ⊂ J̃^+[E] ∩ B. Let t be a global time function on M̃. Since E is compact, it has a minimum time t_min. Clearly if for some point q ∈ B, t(q) < t_min, then q ∉ J̃^+[E]. Therefore, each connected component of B contains some points not in J̃^+[E]. Therefore, if J̃^+[E] ∩ B ≠ J^+[E ∩ B], then ∂J̃^+[E] ∩ B includes a hypersurface S in B that is not in J^+[E ∩ B]. We will now show that S cannot exist.
By lemma 10, E intersects B orthogonally. Therefore, in lemma 9, the second type of null geodesic in ∂J + [E] does not exist. The first type of geodesic forms a codimension-two congruence starting orthogonally from E on which, except possibly at the endpoints, every deviation vector is spacelike and orthogonal to the tangent vector. By lemma 5, each such geodesic either lies entirely in B or lies entirely in M except at its endpoints. In particular, the points in S must lie on geodesics that are entirely in M except where they end. We thus consider the congruence of geodesics in M starting orthogonally from E ∩ M. Since E ∩ M is extremal, its expansion (with respect to g ab ) is initially zero. By the null energy condition, its expansion is nowhere positive. Thus the conditions of lemma 6 apply. Hence S consists of isolated points, contradicting the fact that it is a hypersurface in B.
Note that theorem 7 is a special case of theorem 11, in which we take E to be a small (in the metric g̃_ab) hemisphere centered on p and take the limit in which its radius goes to 0.
Spatial regions and causal decompositions
Let Σ be a Cauchy slice of B. Given a codimension-zero submanifold of Σ, let A be its interior, ∂A its boundary, and A^c its complement; these three sets do not overlap and cover Σ. They naturally induce a causal decomposition of the spacetime B into four non-overlapping regions (except that J^±[∂A] both include ∂A): B = D[A] ∪ D[A^c] ∪ J^+[∂A] ∪ J^−[∂A].
Theorem 12. Let Σ be a Cauchy slice of a globally hyperbolic spacetime, decomposed as above into A, A^c, and ∂A. Then the four regions D[A], D[A^c], J^+[∂A], and J^−[∂A] cover the spacetime and do not overlap (except that J^±[∂A] both include ∂A).

Proof. Suppose, for contradiction, that p ∈ J^+(Σ) lies in none of D[A], D[A^c], and J^+[∂A]. Consider the past-directed causal curves from p: since p ∉ J^+[∂A], none of them intersects Σ in ∂A, and since p lies in neither domain of dependence, not all of them can intersect Σ in A, nor all in A^c. So some must intersect Σ in A and others in A^c. Let λ_1 be in the first set and λ_2 in the second. Join λ_1 and λ_2 at p to make a continuous curve λ from A to A^c. Now, in any globally hyperbolic spacetime there exists a global timelike vector field; its integral curves can be used to construct a continuous map f from J^+(Σ) to Σ. f(λ) is a continuous curve in Σ from A to A^c. There therefore exists a point q ∈ λ such that f(q) ∈ ∂A, and therefore q ∈ I^+[∂A]. Since p ∈ J^+(q), p ∈ J^+[∂A], which is a contradiction.

Now let E_A be a surface in M̃ that satisfies the conditions of theorem 11 and is spacelike-homologous to A. The precise meaning of the latter condition is as follows: there exists a Cauchy slice Σ̃ for M̃ such that Σ̃ ∩ B = Σ, containing a codimension-zero submanifold with boundary A ∪ E_A; we call its interior R_A. Since Σ̃ is itself a manifold-with-boundary (namely Σ̃ ∩ B), one has to be careful about the definitions of "interior" and "boundary" for a submanifold. We mean "interior" in the sense of point-set topology; thus R_A includes A but not E_A. The "boundary" can be either in the sense of "submanifold-with-boundary" (which is what we call ∂R_A), or in the sense of point-set topology. In the latter sense, the boundary is just E_A. 22 As with A, we define R_A^c := Σ̃ \ (R_A ∪ E_A). To summarize, in parallel to the decomposition of Σ into A, A^c, and ∂A, we have a decomposition of Σ̃ into R_A, R_A^c, and E_A. We can now apply theorem 12 to obtain a decomposition of M̃ into the four spacetime regions D̃[R_A], D̃[R_A^c], and J̃^±[E_A]. The central result of this section is that this decomposition reduces on the boundary precisely to its decomposition into D[A], D[A^c], and J^±[∂A]:

Theorem 13. D̃[R_A] ∩ B = D[A] (4.6a), D̃[R_A^c] ∩ B = D[A^c] (4.6b), and J̃^±[E_A] ∩ B = J^±[∂A] (4.6c).

Proof. Equation (4.6c) is Theorem 11 (and its time reverse). Using Theorem 12 both in B and in M̃ to take the complement of both sides, Lemma 1 then implies (4.6a) and (4.6b).
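As a toy sanity check of this four-region decomposition (our own illustration, in 1+1 Minkowski space rather than on the boundary geometry B; the helper function `regions` is ours), one can verify numerically that D[A], D[A^c], and J^±[∂A] tile the plane:

```python
# Toy check of the four-region causal decomposition in 1+1 Minkowski space.
# Cauchy slice: t = 0; A = (-1, 1), A^c = {|x| > 1}, dA = {x = -1, x = +1}.

def regions(t, x):
    """Return which of D[A], D[A^c], J^+[dA], J^-[dA] contain the point (t, x)."""
    d = min(abs(x - 1.0), abs(x + 1.0))   # distance to the nearest edge point
    out = []
    if abs(t) < 1.0 - abs(x):             # every causal curve through p hits A
        out.append("D[A]")
    if abs(x) > 1.0 and abs(t) < abs(x) - 1.0:
        out.append("D[A^c]")
    if t >= d:                            # future light cone of an edge point
        out.append("J^+[dA]")
    if t <= -d:                           # past light cone of an edge point
        out.append("J^-[dA]")
    return out

# Every sampled point of the plane lies in at least one of the four regions;
# the domains of dependence are open, the J^± are closed, so together they tile.
pts = [(0.1 * i, 0.1 * j) for i in range(-30, 31) for j in range(-30, 31)]
assert all(len(regions(t, x)) >= 1 for t, x in pts)
assert regions(0.0, 0.0) == ["D[A]"]
assert regions(0.0, 2.0) == ["D[A^c]"]
```

The same bookkeeping, with causal sets of the curved boundary metric in place of light cones, is what theorem 12 establishes in general.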
22 The point-set-topology boundary can be shown to equal the "edge" of the submanifold, in the sense used in the general-relativity literature (see e.g. [25]).
Theorem 13 immediately implies that E A is outside of causal contact with D[A] and D[A c ], as required by field-theory causality.
The spacelike-homology condition raises the following practical question: given a codimension-one submanifold of M̃ with boundary A ∪ E_A, under what circumstances is it contained in a Cauchy slice? Obviously, it must be acausal. However, this is not sufficient; for example, a spacelike hypersurface in Minkowski space that approaches null infinity is not contained in a Cauchy slice. The following lemma, which will also be needed in theorem 15, shows that compactness is a sufficient additional condition. (This lemma applies in any globally hyperbolic spacetime.)

Lemma 14. If R is a compact acausal set, then there exists a Cauchy slice containing it.
Proof. Let t be a global time function, and define t_max := max_R(t), t_min := min_R(t) (these exist since R is compact). Define Υ := {p : t(p) > t_max} and Υ′ := Υ ∪ I^+[R], and set Σ := ∂Υ′. (4.8) Next we show that Σ is achronal. The maximum value of t on Σ is t_max, so there can be no future-directed timelike curve from ∂Υ to Σ. Further, ∂I^+[R] is itself achronal. Finally, if there is a future-directed timelike curve from p ∈ ∂I^+[R] to q ∈ ∂Υ, then q ∈ I^+[R] and hence q ∉ Σ. So Σ is achronal. Next, we show that every inextendible future-directed timelike curve intersects Σ. On such a curve, t increases monotonically and continuously from −∞ to +∞. For t ≤ t_min, the curve is not in Υ′; for t > t_max, it is. Therefore for some value of t it intersects Σ.
While Σ is achronal, it is not quite a Cauchy slice (in the sense used in this paper) because it is not acausal. However, since R is acausal, Σ can be deformed outside of R to be acausal.

Proof of Theorem 15. Since E A and A′ ∪ ∂A′ are both compact, E A ∪ A′ is compact as well. (Recall that ∂A′ = ∂A ⊂ E A .) E A and A′ are acausal, since each sits on a Cauchy slice. Furthermore, by Theorems 12 and 11, there are no causal curves connecting them; hence E A ∪ A′ is acausal. Therefore, by Lemma 14, there is a Cauchy slice Σ̃′ containing both E A and A′ .
Choosing a global timelike vector field on M̃, its integral curves define a diffeomorphism f : Σ̃ → Σ̃′ ; then R A ′ := f (R A ) is a region in Σ̃′ with ∂R A ′ = A′ ∪ E A . (Strictly speaking, we also need to define a new Cauchy slice for B, Σ′ := Σ̃′ ∩ B, and to consider A′ to be a region in Σ′ , since the equality Σ = Σ̃ ∩ B is part of the definition of the spacelike-homology condition.) Theorem 15 shows that the HRT formula gives the same value for the entanglement entropy of A and A′ , as required by field-theory causality.
Discussion
The main result of this paper, Theorem 13, shows that the HRT prescription for computing holographic entanglement entropy [32] is consistent with the requirements of field-theory causality. As we have explained with various simple examples and gedanken experiments in section 2.4, this result was by no means a priori obvious, since there are several marginal cases in which an arbitrarily small deformation of the bulk extremal surface would place it in the causal future of a boundary deformation that nevertheless cannot affect the entanglement entropy. With the primary result in hand, we now take stock of the various physical consequences it implies for holographic field theories.
Causality constraints on holography: let us start by asking what we can learn about holography from causality considerations. Recall that we proved our result for extremal surfaces in the context of two-derivative theories of gravity satisfying the null energy condition. This was crucial for us to be able to use the Raychaudhuri equation in order to ascertain properties of null geodesic congruences. Thus the domain of validity of our statements was strong coupling in a planar (large-N ) field theory. This translates to demanding a macroscopic spacetime with ℓ s ≪ ℓ AdS in a perturbative string (g s ≪ 1) regime. Let's see what happens as we move away from this corner of moduli space.
Firstly, consider classical stringy corrections, which we can encapsulate in an effective higher-derivative theory of gravity. In such a theory, as long as higher-derivative operators are suppressed by powers of ℓ s , our conclusions will hold, since the dominant effect will come from the leading two-derivative Einstein-Hilbert term in the bulk. When the higher-derivative operators are unsuppressed we have little to say, for two reasons: (a) the holographic entanglement prescription in such theories has so far only been given for static situations (or those with time-reversal symmetry) [43,44], and (b) even assuming a covariant generalization, one is stymied by the absence of clean statements regarding the dynamics of null geodesic congruences (even, for example, in Lovelock theories). 23 One could, however, use the causality constraint to rule out certain higher-derivative theories from having unitary relativistic QFT duals (see e.g. [45]); this is similar in spirit to the recent discussions of causality constraints on the three-graviton vertex [46]. Turning next to 1/N , or bulk quantum, corrections: while we have less control in general, we can make some observations about the leading 1/N correction, which has been proposed to be given by the entanglement of bulk perturbative quantum fields across E A [47]. Since the bulk theory itself is causal, it follows that entanglement across the extremal surface satisfies the desired causality conditions.

Does causality prove the HRT conjecture? One intriguing possibility, given the importance of causality, is whether we can use it to constrain the location of the extremal surface in the bulk, and thus prove the HRT conjecture. 24 Unfortunately, causality

23 The family of f (R) theories can be brought to heel, since here we can map the theory to Einstein-Hilbert via a suitable Weyl transformation. Causality constraints can be discerned here so long as the Weyl transformation (which is non-linear in the curvature) is well-behaved.
24 We thank Vladimir Rosenhaus for inspiring us to think through this possibility.
alone is not strong enough to pin down the location of the extremal surface. What we can say is that the extremal surface E A has to lie inside the causal shadow Q ∂A . In a generic asymptotically AdS spacetime, for a generic region A, the causal shadow is a codimension-zero volume of the bulk spacetime M. It is only in some very special cases that we zero in on a single bulk codimension-two surface uniquely (e.g., spherical regions in pure AdS or in the eternal Schwarzschild-AdS black hole). 25

Causality constraints on other CFT observables: our discussion has exclusively focused on the causality properties of a particular non-local quantity in the field theory, namely the entanglement entropy. However, causality places restrictions on other physical observables we can consider on the boundary as well. For instance, correlation functions of (time-ordered) local operators, Wilson loop expectation values, etc., should all obey appropriate constraints which we can infer from basic principles. Indeed, for correlation functions this can be shown by noting that the bulk computation involves solving a suitable boundary initial-value problem for fields in the bulk, which can be checked to manifestly satisfy causality. However, this is less clear when we approximate, say, two-point functions of heavy local operators using the geodesic approximation [48]. Similar issues arise for the semi-classical computation of Wilson loop expectation values [49,50] using the string worldsheet area. In these cases, one generically encounters some tension between the use of extremal surfaces - geodesics, two-dimensional worldsheets, etc. - for the bulk computation, and field-theory expectations regarding causality (cf. [51] for an earlier discussion of this issue).
Indeed, it appears that codimension-two extremal surfaces are special in this regard, for we can rely on the boundary of the entanglement wedge being generated by a codimension-one null congruence, and thus apply the Raychaudhuri equation. Understanding the proper application of the WKB approximation for other observables is an interesting question; we hope to report on it in the near future [52].
Entanglement wedges: one of the key constructs in our presentation, naturally associated with a given boundary region A, has been the entanglement wedge W E [A]. This is the domain of dependence of the homology surface R A (recall that R A forms part of a Cauchy surface which interpolates between A and E A ). Equivalently, it comprises the set of points spacelike-separated from E A that are connected to A, one of the four regions in the natural decomposition of the bulk spacetime.
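In display form, the definition just described reads as follows; this is purely a restatement of the text, with D[·] denoting the bulk domain of dependence:

```latex
% Entanglement wedge of a boundary region A:
% the bulk domain of dependence of its homology surface R_A
W_E[A] \;\equiv\; D\!\left[ R_A \right],
\qquad
\partial R_A \;=\; A \,\cup\, E_A .
```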
Given A, one might ask how unique this decomposition is. Since W E [A] is a causally defined set, its specification only requires the specification of the (oriented) extremal surface E A (possibly consisting of multiple components when so required by the homology constraint). The prescription for constructing the null boundary of W E [A] is unambiguous: simply follow all null normals (emanating from E A in the requisite direction, towards D[A]) until they encounter another generator (i.e. a crossover seam) or a caustic. However, there is a possibility that the extremal surface itself is not uniquely determined from A. This happens when multiple (sets of) extremal surfaces satisfy (2.4) but have the same area. Since entanglement entropy itself cares only about the area, the HRT (as well as RT and maximin) prescription is to take any of these. However, which we take does matter for the entanglement wedge. We propose that, just as for the extremal surfaces, in such cases we may have multiple entanglement wedges W E [A] associated to the same boundary region A. The most "obvious" class of examples where this can happen is the case of A consisting of multiple regions, or, in higher dimensions, where the entangling surface ∂A consists of multiple disjoint components. As we vary the parameters describing the configuration, the extremal surfaces involved typically exchange dominance, so at some point their areas must agree. Applying continuity from both sides, at the transition point both entanglement wedges should be naturally associated with A. However, in complicated states, there can actually be multiple extremal surfaces even when A and ∂A are both connected.

25 The examples are all cases where, by a suitable choice of conformal frame, the extremal surface can be mapped onto the bifurcation surface of a static black hole. The black funnel and droplet solutions (see [30] for a review) provide nontrivial examples, cf. [23].
In such cases, we could have candidate entanglement wedges which are proper subsets of (rather than merely overlapping with) other candidate entanglement wedges.
It is also interesting to note that the decomposition of the bulk into four spacetime regions causally defined from E A need not coincide with the bulk decomposition defined from E A c , despite there being a unique boundary decomposition defined from ∂A. For pure states, where the homology constraint trivializes and we have E A = E A c , we can write the bulk decomposition (5.1) equivalently with respect to both A and A c . 26

Dual of ρ A ? Within the class of CFTs and states with a geometrical holographic dual, it has often been asked, 27 for a given region A, what the bulk "dual" of the reduced density matrix ρ A is. One way to formulate what one means by this is as follows: suppose we fix ρ A and vary over all compatible density matrices for the full state ρ. What is the maximal bulk spacetime region which coincides for all such ρ's? By "coinciding bulk regions" one means having the same geometry, i.e. the same bulk metric modulo diffeomorphisms. Another way to define the dual of ρ A is to ask what is the maximal bulk region wherein we can uniquely determine the bulk metric (again modulo diffeomorphisms). In fact there are several (generally distinct) bulk regions that might be naturally associated with the density matrix; in nested order:

• The bulk region that ρ A is sensitive to; in other words, regions wherein a deformation of the metric affects ρ A . 28

• The bulk region that ρ A determines, i.e. where we can uniquely reconstruct all the components of the metric (up to diffeomorphisms).
• The bulk region that ρ A affects, i.e. where by changing ρ A one can change the bulk metric.
Here we focus on the second case, following [53,54]. Based on lightsheet arguments, the authors of [53] proposed the causal wedge as the correct dual. On the other hand, [54], as well as [12,22], argued that the requisite region should contain more than the causal wedge. In particular, [54] presented a number of criteria that such a region should satisfy, and explored several possibilities, most notably the region they denoted ŵ(D A ), which corresponds to the bulk domain of dependence of the spacetime region spanned by all codimension-two extremal surfaces anchored within D[A]. If every point of R A lies on at least one of these, then this region coincides with our entanglement wedge W E [A]. On the other hand, as [54] pointed out, there may be "holes" in such a set, i.e., regions of R A which do not lie along any least-area extremal surface anchored on a given region A′ ⊂ A. 29 We propose that, since the most "natural" causal set associated with ρ A from the bulk point of view is the entanglement wedge, this is indeed the most appropriate region to be

26 Note however that if we purify a mixed state by additional boundaries, such as in the deformed eternal black hole example illustrated in figure 10, then the decomposition (5.1) does hold.
27 In recent years this question has been invigorated by e.g. [53,54].
28 In fact there is a further subdivision here based on whether any geometrical deformation of the metric should change ρ A or merely whether there should exist some deformation of the metric which changes ρ A . We thank Mark Van Raamsdonk for discussions on this issue.
29 The example given in [54] involves a region through which traversing surfaces are not the smallest-area ones anchored on the given region, but a simpler physical example would be a point sufficiently close to the event horizon of an eternal spherical black hole, with A = Σ of one side, as considered in section 2.4.
identified with the "dual" of the reduced density matrix ρ A (even in the presence of such entanglement "holes"). In this context, we should note that we can strip away the rest of the boundary spacetime, and consider the field theory just on D[A], which is a globally hyperbolic spacetime in its own right, in the state ρ A . Whether this state in general admits a holographic description is not known, but, if it does, then a natural candidate would seem to be the entanglement wedge: this is, in its own right, a globally hyperbolic, asymptotically AdS spacetime, whose conformal boundary (according to Theorem 13) is precisely D[A], and the area of whose edge E A gives the entropy of ρ A . Here the word "natural" should be qualified, especially in light of the arguments in [22] that the causal wedge W C [A] is a natural bulk codimension-zero region associated with A. The latter can be obtained more minimally: it suffices to know the causal structure of the bulk to define W C [A]. On the other hand, the density matrix clearly encodes much more than the bulk causal structure, since at the very least it knows the entanglement entropy (as well as the entanglement entropies of all subregions, apart from other observables). Since, in the bulk, the corresponding extremal surface is defined only once we know the bulk geometry, the entanglement wedge W E [A] it defines is a less minimal construct than the causal wedge W C [A]. Nevertheless, once E A is identified, the rest of the bulk construction of the entanglement wedge is purely causal, and therefore defined fully robustly for any time-dependent asymptotically AdS spacetime.
The statement that the entanglement wedge is the natural dual of the reduced density matrix (which implies that the boundary observer in D[A] can learn about the bulk geometry in the entire W E [A]) has a profound consequence. We have shown that the extremal surface E A has to lie in the causal shadow. This set can however be quite large, and so E A can lie very deep inside the bulk (as indicated by the shaded region in figure 10). In fact, a simple example supports the idea that the entanglement wedge represents the state in such a case (see figure 11). We start with a deconfined thermal state at t = 0 on a single S d−1 , represented holographically by the exterior Schwarzschild-AdS solution. We add an outgoing null shell that reaches the boundary at t < 0 and an ingoing one that leaves it at t > 0. At t = 0 we still have the thermal state. The bulk solution is also unchanged between the past and future shells. However, these shells move the singularity and therefore have the effect of bringing the future and past event horizons closer to the boundary, leaving the previous bifurcation surface hidden behind both horizons. While this surface is no longer the bifurcation surface of a global Killing vector, it remains the extremal surface whose area gives the entropy of the state of the field theory on the right boundary. Presumably the holographic description of the state extends all the way down to this extremal surface, as it does in the absence of the shells, and thus consists of the entire entanglement wedge. Another (related) example where the separation between entanglement wedge and causal wedge is particularly striking is the eternal (two-sided) black hole deformed by many shocks considered in [35,55]. 
The Einstein-Rosen bridge is highly elongated and the extremal surface probably lies somewhere in the middle of it, so that the entanglement wedge for the entire right boundary is substantially larger than the causal wedge, which in this case is simply the right exterior (domain of outer communication) of the black hole.

Figure 11. Left: exterior AdS-Schwarzschild solution, dual to a deconfined thermal state on S d−1 .
The extremal surface for the entire boundary (red dot) coincides with the bifurcation surface and the causal information surface. Right: Vaidya solution with an outgoing null shell that reaches the boundary at t < 0 and an ingoing one that leaves it at t > 0 (brown); the geometry between the shells is unchanged, but the past and future event horizons (blue) have moved closer to the boundary, leaving the extremal surface (red dot) hidden behind them. The entanglement wedge in both cases is the entire spacetime (with a homology surface shown in green), while the causal wedge in the right figure is strictly smaller.

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Sulfur Ice Astrochemistry: A Review of Laboratory Studies
Sulfur is the tenth most abundant element in the universe and is known to play a significant role in biological systems. Accordingly, in recent years there has been increased interest in the role of sulfur in astrochemical reactions and planetary geology and geochemistry. Among the many avenues of research currently being explored is the laboratory processing of astrophysical ice analogues. Such research involves the synthesis of an ice of specific morphology and chemical composition at temperatures and pressures relevant to a selected astrophysical setting (such as the interstellar medium or the surfaces of icy moons). Subsequent processing of the ice under conditions that simulate the selected astrophysical setting commonly involves radiolysis, photolysis, thermal processing, neutral-neutral fragment chemistry, or any combination of these, and has been the subject of several studies. The in-situ changes in ice morphology and chemistry occurring during such processing have been monitored via spectroscopic or spectrometric techniques. In this paper, we have reviewed the results of laboratory investigations concerned with sulfur chemistry in several astrophysical ice analogues. Specifically, we review (i) the spectroscopy of sulfur-containing astrochemical molecules in the condensed phase, (ii) atom and radical addition reactions, (iii) the thermal processing of sulfur-bearing ices, (iv) photochemical experiments, (v) the non-reactive charged particle radiolysis of sulfur-bearing ices, and (vi) sulfur ion bombardment of and implantation in ice analogues. Potential future studies in the field of solid phase sulfur astrochemistry are also discussed in the context of forthcoming space missions, such as the NASA James Webb Space Telescope and the ESA Jupiter Icy Moons Explorer mission.
Introduction
Sulfur is the tenth most abundant element in the universe. Atomic sulfur has an abundance of 1.32×10⁻⁵ relative to hydrogen (Asplund et al. 2009), while the unipositively charged ion has a relative abundance of 1.66×10⁻⁵ (Esteban et al. 2004). The most common isotope, ³²S, accounts for ~95% of all sulfur in the universe and is produced via silicon burning within stars at temperatures of > 2.5×10⁹ K (McSween and Huss 2010). This nuclear reaction forms part of the so-called alpha ladder, which produces its elements in relatively high abundance, thus explaining the natural relative ubiquity of sulfur. Nuclear fusion of oxygen atoms may also account for the formation of ³²S.
ices by other projectiles. Given the theorised presence of subsurface liquid oceans on these moons, the products of such chemical reactions and processes are of great interest to the astrobiology community. The global flux of incoming energetic charged projectiles for each of the Galilean moons, as given by Johnson et al. (2004), is shown in Table 1. The chemical and energetic compositions of the magnetospheric charged particles bombarding the icy moons have also been studied using data collected by the NASA Galileo mission (Paranicas et al. 2002;Mauk et al. 2004). Briefly summarised, a plethora of species can be found in the Jovian magnetosphere, but at distances of > 7 RJ (where RJ is taken to be the radius of Jupiter, 71,492 km), sulfur and oxygen ions dominate the energetic (> 50 keV) ion density, while at distances of 20-25 RJ protons dominate both the integral number and energy densities (Mauk et al. 2004). Indicative energy spectra for protons, oxygen ions, and sulfur ions near the orbits of Europa and Ganymede are provided below (Fig. 2).
The sulfur radiochemistry of the Galilean moons is thus of astrochemical and astrobiological significance, particularly in light of the presence of liquid oceans beneath their icy surfaces. As such, this astrophysical setting is often simulated during laboratory investigations so as to elucidate radiolytic products and likely reaction mechanisms.
The Need for Laboratory Experiments
Laboratory astrochemistry studies offer planetary and space scientists the opportunity to simulate the conditions at a particular astrophysical setting (such as the interstellar medium or planetary surfaces) and investigate chemical reactions induced via some processing technique.
In the case of sulfur ice astrochemistry, this generally involves the formation of an ice of known chemical composition which is deposited onto a cold (5-200 K) substrate transparent to certain wavelengths of light, thus allowing for spectroscopic monitoring of ice composition and morphology. The ice is then processed thermally, photochemically, radiochemically, or by some other means and reaction products are deduced. A fuller explanation of the range of experimental techniques used is available in the review by Allodi et al. (2013).
The data generated from such laboratory investigations are necessary for the interpretation of data collected by both past and future space missions. Two upcoming missions which may be of particular interest to sulfur astrochemistry are the NASA James Webb Space Telescope (JWST) and the ESA Jupiter Icy Moons Explorer (JUICE) mission (Gardner et al. 2006;Grasset et al. 2013). JWST will allow infrared (IR) observations of both gas phase and solid phase sulfur molecules in several extra-terrestrial environments, including interstellar and extragalactic space, outer Solar System objects (including the Galilean moons of Jupiter), and exoplanetary atmospheres. JUICE will focus on a comparative characterisation of Europa, Ganymede, and Callisto and will delve into compositional mapping of their surfaces. In the case of Europa, there will be a particular focus on ice chemistry and organic molecules relevant to prebiotic chemistry.
It must be noted, however, that contemporary laboratory astrochemistry experiments are largely non-systematic, often reporting the results of very specific temperature, pressure, and processing conditions. Although such experiments contribute to our scientific understanding of likely astrochemical reactions, this does raise some questions as to how widely applicable the results of such studies are. For instance, a study looking into the photolytic or radiolytic processing of an ice at 20 K is applicable to the interstellar medium, but perhaps not to the surfaces of the Galilean moons, where temperatures are often much higher.
In reality, there is a need for systematic studies in which a single experimental factor (e.g. ice chemical composition) is held constant while other factors (e.g. ice morphology, temperature, processing type, processing energy, etc.) are varied. Such studies will provide more detailed information on the dependence of reaction products on such parameters, and will be useful in assessing the applicability of previous experiments to different astrophysical settings. Additionally, there is also a need to increase characterisation of reaction kinetics and yield analysis so as to determine whether or not reaction products may accumulate in the studied astrophysical setting over geological or astronomical time-scales.
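As a hedged illustration of the kind of kinetic bookkeeping such accumulation questions require, the sketch below integrates the simplest possible production-destruction balance, dN/dt = P − kN, whose steady state N_ss = P/k determines whether a product can build up over astronomical time-scales. The production and destruction rates used here are arbitrary placeholder values for illustration, not measured quantities from any study.

```python
import math

def product_abundance(P, k, t):
    """Column density N(t) for dN/dt = P - k*N with N(0) = 0.

    P : production rate (molecules cm^-2 s^-1), e.g. from a radiolysis yield
    k : first-order destruction rate (s^-1), e.g. from photodissociation
    Closed-form solution: N(t) = (P/k) * (1 - exp(-k*t)).
    """
    return (P / k) * (1.0 - math.exp(-k * t))

# Placeholder rates, purely for illustration (not measured values):
P = 1.0e6    # production, molecules cm^-2 s^-1
k = 1.0e-12  # destruction, s^-1 (destruction time-scale ~30 kyr)

steady_state = P / k
# After many destruction time-scales the abundance saturates at P/k:
late = product_abundance(P, k, 1.0e14)  # ~3 Myr
print(f"steady state: {steady_state:.2e}, at ~3 Myr: {late:.2e}")
```

Whether a product "accumulates" in a given setting then reduces to comparing the destruction time-scale 1/k against the geological or astronomical time-scale of interest; this is exactly the comparison that systematic kinetic and yield measurements would enable.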
An Overview of this Review
This paper will concern itself with reviewing the results of laboratory investigations in the field of sulfur ice astrochemistry. Though it is important to mention that sulfur astrochemistry is also an explored theme in mineralogy, isotope geochemistry, cosmochemistry, and gas phase studies, these subjects will not be tackled here. Furthermore, a complete review of the laboratory techniques employed during experiments on sulfur ice astrochemistry goes beyond the scope of this review, although an extensive summary was provided by Allodi et al. (2013).
The motivation behind this review is to serve as a repository for solid phase experiments in sulfur astrochemistry. A thorough understanding of this subject is important, particularly in light of the forthcoming JWST and JUICE missions, as data from laboratory experiments would assist these missions in addressing some major questions in sulfur space chemistry, such as the depleted sulfur problem in dense interstellar clouds and the potential of the Galilean moons to host some form of life.
This review describes and summarises the results of studies in six parts according to experiment type: (i) spectroscopic studies of candidate solid phase interstellar sulfur molecules, (ii) atom (or radical) addition reactions, (iii) thermally-induced chemistry, (iv) photochemical processing, (v) radiolytic processing using chemically inert charged projectiles, and (vi) reactive sulfur ion bombardment and implantation. The ordering of these subjects follows an energy gradient, starting at one end with atom and radical addition reactions which have no activation energy requirement and finishing at the other with sulfur ion radiolysis which can involve high energy (> 1 MeV) projectiles.
The reader will appreciate that there may be some overlap between spectroscopic and photochemical studies (given in Sections 2 and 5, respectively). However, an effort has been made such that each section may be viewed as a stand-alone review of a particular aspect of sulfur ice astrochemistry.
Spectroscopy of Candidate Sulfur-Bearing Molecules
Of the 23 sulfur-bearing molecules known to exist in interstellar and circumstellar settings, many have thiol (S-H), thioketone (C=S), and sulfinyl (S=O) functional groups. This provides valuable information when deciding on which species are likely candidates for future detection in interstellar space. Positive detections of molecules in interstellar environments have largely been made via radio astronomy, a technique which requires the target molecule to possess a sufficiently large dipole moment. As such, several molecules which lack this property cannot be detected by radio astronomy, and spectroscopic methods are more appropriate in this regard.
However, for successful detections of interstellar molecules to be made spectroscopically, it is necessary to have their corresponding laboratory-generated spectra at relevant temperatures. For example, recent studies have explored in great detail the IR and ultraviolet (UV) spectra of thiol compounds, including methanethiol, ethanethiol, 1-propanethiol, and 2-propanethiol (Hudson 2016;Pavithraa et al. 2017a;Hudson and Gerakines 2018). Although detections of the two smaller molecules in interstellar environments have been confirmed (Linke et al. 1979;Kolesniková et al. 2014), the latter two remain candidate molecules (Gorai et al. 2017).
These studies have provided additional information related to different solid phases encountered at low temperatures as well as the influence of conformational isomer effects. For instance, methanethiol exists as an amorphous ice at temperatures < 65 K. However, upon heating to just above this temperature, crystallisation occurs and two phases are produced (Hudson and Gerakines 2018): a thermodynamic product (α-phase) and a kinetic product (β-phase). These phases produce slightly different IR spectra (Fig. 3), so knowledge of their absorbance peaks would aid in the identification of this molecule when observed in an astrophysical setting and in the determination of which phase is present.
In the case of ethanethiol, Pavithraa et al. (2017a) showed that warming of a low-temperature ice resulted in a phase change from amorphous to crystalline at 110 K. Interestingly, further warming of the ice caused the phase to switch back to amorphous at 125 K. These phase changes were found to be reversible, also occurring during cooling of the ice.
When the length of an aliphatic carbon chain increases sufficiently (as in the case of 1-propanethiol), there exists the possibility that molecules of the same species may differ in structure only by rotation around a C-C bond at very low temperatures. These individual structures are referred to as conformational isomers or rotamers. As an amorphous ice containing such isomers is warmed, less stable isomers rotate so as to adopt a more stable structure. This re-orientation results in changes in the corresponding IR spectra. Such spectral changes have been reported for 1-propanethiol (Hayashi et al. 1966;Torgrimsen and Klaeboe 1970).
Computational analysis has also been used in deciphering the spectral signatures of molecules of astrophysical relevance. To continue using methanethiol as an example: microwave, IR, and UV spectra have been extensively studied by modelling rotational, vibrational, and electronic excitement of this molecule (May and Pace 1968;Schlegel et al. 1977;Mouflih et al. 1988;Zakharenko et al. 2019;etc.). Furthermore, computational studies have also been used to understand conformational isomerism in thiols and thioesters (Fausto et al. 1987).
Aside from the abovementioned thiols, other candidate molecules containing sulfur atoms have been studied using spectroscopic techniques. These molecules include ethenethiol, ethynethiol, ethanethial, ethenethione, thiirane, and thiirene. They are of particular interest either because their oxygen analogues have already been identified in interstellar environments, or because they produce highly abundant molecular fragments.
Further spectroscopic studies of a variety of sulfur-bearing molecular species will thus increase our spectral repository, and will assist space-based telescopes in positive identifications of new extra-terrestrial molecules. However, future studies should emphasise the importance of mid-IR characterisations, especially in light of the forthcoming JWST mission (λ = 0.6-28.3 μm) and data from the recently retired Spitzer Space Telescope (λ = 3.6-160 μm), both of which cover the mid-IR in their operational ranges.
Neutral-Neutral Addition Experiments
Neutral-neutral additions refer to the process of bond formation between a target molecule and another neutral species, generally an atom or radical. Such additions do not require the input of energy to overcome an activation energy barrier, and so can occur efficiently at very low temperatures (~10 K). These reactions are most evident within the interiors of dense molecular clouds in the interstellar medium, which are cold (10-20 K) and dark as they are shielded from impinging light or radiation (Linnartz et al. 2015). As such, thermochemistry, photochemistry, and radiochemistry are disfavoured compared to neutral-neutral additions.
In their review, Linnartz et al. (2015) highlighted the findings of some laboratory atom addition experiments (particularly hydrogenation reactions), as well as three processes by which these reactions are thought to occur: the Langmuir-Hinshelwood, Eley-Rideal, and Harris-Kasemo (or hot atom) mechanisms. Briefly explained, the Langmuir-Hinshelwood mechanism involves two atoms adsorbing onto an ice grain, reaching thermal equilibrium, and then migrating towards each other and reacting.
The Eley-Rideal mechanism involves an atom impacting an already-adsorbed atom and reaction occurring before thermal equilibrium can take place. Finally, the Harris-Kasemo mechanism involves an atom adsorbing onto an ice grain and migrating towards a second atom which is in thermal equilibrium and subsequently reacting with that atom before it itself can reach equilibrium.
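The relative efficiency of these mechanisms at a given temperature is often estimated by comparing thermally activated hopping and desorption rates. The short Python sketch below illustrates this; the attempt frequency and barrier values are generic illustrative assumptions, not parameters from any experiment discussed here:

```python
import math

NU0 = 1e12          # attempt (trial) frequency, s^-1 -- a commonly assumed value

def arrhenius_rate(barrier_K: float, T: float) -> float:
    """Thermally activated rate nu0 * exp(-E/T), with the barrier E and
    temperature T both expressed in kelvin."""
    return NU0 * math.exp(-barrier_K / T)

# Illustrative barriers (in K); real values are species- and surface-dependent.
E_DIFF = 250.0      # diffusion (hopping) barrier
E_DES = 800.0       # desorption (binding) energy

for T in (10.0, 20.0, 50.0):
    hop = arrhenius_rate(E_DIFF, T)
    des = arrhenius_rate(E_DES, T)
    # An adsorbate scans roughly hop/des sites before desorbing; a large ratio
    # favours Langmuir-Hinshelwood encounters even at very low temperatures.
    print(f"T = {T:5.1f} K  hop = {hop:9.3e} s^-1  hop/des = {hop/des:9.3e}")
```

Because both rates are exponential in the barrier, even at 10 K a mobile adsorbate can visit an enormous number of sites before it desorbs, which is why barrierless neutral-neutral additions remain efficient in cold clouds.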
It appears that, although sulfur neutral-neutral reactions in the gas phase have been studied rigorously, analogous solid phase experiments are extremely scarce. As the focus of this paper is the solid phase, we will not review the results of gas phase experiments, but rather we direct the interested reader to a series of papers on this subject, the first three of which are cited herein (Strausz and Gunning 1962; Knight et al. 1963a; 1963b). Other papers (Becker et al. 1974; Prasad and Huntress 1980; Kaiser et al. 1999; etc.) are also available.
When it comes to solid phase neutral-neutral reactions involving sulfur, the exiguity of laboratory-based studies means that, in order to provide an adequately thorough overview of the subject, our review must also extend to modelling experiments. To avoid any ambiguity, the results of computational work will be labelled as such in the following review. The recent review by Cuppen et al. (2017) provides a good overview on the treatment of astrochemical reactions, including neutral-neutral reactions, on dust grains by computational simulations.
Computational assessments by Laas and Caselli (2019) showed that sulfur atom additions on grains at interstellar temperatures can result in catenation reactions to produce Sn (n = 2, 3, 4), which in turn can partake in diradical ring-closure reactions to form cyclic Sm (m = 5, 6, 7, 8).
Although the activation energy barriers for such reactions are essentially zero, the binding energies to the grain for the product allotropes are assumed to increase with the number of sulfur atoms involved. As such, the heavier allotropes have a higher binding energy which does not permit efficient thermal roaming and so they can be destroyed by alternative methods before any further reaction can take place.
In their experimental investigation, Jiménez-Escobar and Muñoz-Caro (2011) discussed the formation of S8 as a result of cryogenic elongation reactions mediated by the HS radical (Barnes et al. 1974). The addition of two HS radicals to each other was proposed to explain the observed presence of H2S2. Results from modelling have shown that this formation mechanism is possible (Zhou et al. 2008), but unlikely, as it would need to compete with the hydrogenation of HS to H2S, which is known to be highly efficient. Instead, Zhou et al. (2008) proposed that sulfur atom addition to H2S could yield H2S2 or its isomer H2SS; however, such a reaction is probably inefficient due to the high binding energies of the reactants.

Deeyamulla and Husain (2006) showed that a neutral-neutral insertion reaction between atomic carbon and H2S could result in the formation of HCS and a hydrogen atom. A further hydrogenation reaction between the products would result in the formation of H2CS, although this reaction is limited by the amount of HCS within the ice available for reaction. Atomic carbon is also known to react with HS radicals to yield CS, which is abundant in both the gas and solid phases in interstellar space. However, within the solid phase, the presence of CS is also thought to be dependent upon the direct accretion of gas phase CS onto dust grains.
Solid phase CS adsorbed onto icy interstellar dust grains may go on to react with CH to form C2S and a hydrogen atom, the reaction benefitting from the high abundance of each reactant (Kaiser 2002). Longer carbon-sulfur chains can also be formed through neutral-neutral reactions, with modelling work by Laas and Caselli (2019) showing that carbon atom addition to C2S results in C3S, while sulfur atom addition to C4H yields C4S and a hydrogen atom.
The sulfur-analogue of methanoic acid, dithiomethanoic acid, may also be produced via solid phase neutral-neutral reactions. In fact, its synthesis in the solid phase has been computationally predicted to be more efficient than that in the gas phase (Laas and Caselli 2019). This process involves the combination of CS and HS radicals to yield CSSH, which can then be hydrogenated to give the final acid product.
The neutral-neutral combination formation mechanism for methanethiol is interesting since the solid phase formation route for the analogous alcohol, methanol, has been extremely well-described (Kaiser 2002;Linnartz et al. 2015). The formation of methanethiol within astrophysical ices is thought to be similar, proceeding through barrierless radical-radical reactions between H and either CH2SH or CH3S (Gorai et al. 2017). An alternative formation mechanism involving the hydrogenation of H2CS through a series of high activation energy reactions has also been proposed (Vidal et al. 2017).
Other thiol molecules have also been investigated: Gorai et al. (2017) and Gorai (2018) studied, via computational means, the solid phase neutral-neutral reactions that lead up to the formation of ethanethiol, 1-propanethiol, and 2-propanethiol. These reactions were observed to often involve the addition of S, H, CH3, and (less commonly) C2H5 fragments, and were previously considered in the modelling and observational work of Hasegawa and Herbst (1993) and Müller et al. (2016).
The addition of a nitrogen atom to HS results in the formation of NS and a hydrogen atom. This reaction is interesting for two reasons: firstly, it is thought to be an important sink for solid phase sulfur, accounting for up to 10% of the total sulfur budget (Laas and Caselli 2019). Secondly, polymeric compounds comprised of sulfur-nitrogen bonds have been noted to display a wide variety of useful and interesting properties at very low temperatures, such as superconductivity and metallicity (Chivers 2005). The ubiquity of NS in space environments, however, contrasts somewhat with the scarcity of laboratory studies on nitrogen-sulfur bond formation in astrophysical ice analogues.
With respect to SO, which is more commonly found in the icy mantles of dust grains than in the gas phase, this molecule may be formed via the direct combination of a sulfur atom and an oxygen atom, via the reaction of a sulfur atom with OH, or via the reaction between an oxygen atom and SH (Laas and Caselli 2019). SO may be oxygenated to yield SO2; however, this reaction is known to be inefficient if SO reacts with an oxygen atom, due to the high binding energy of the latter species. Instead, it is more likely to occur via the reaction between SO and O2. The addition of excited atomic oxygen, O(1D), to SO2 was observed to yield SO3 in the laboratory work of Schriver-Mazzuoli et al. (2003).
OCS can also be formed through atom addition reactions in solid ices, mainly through the reaction of sulfur atoms with CO (Laas and Caselli 2019). Although primarily destroyed via photo-dissociation reactions, OCS can also be destroyed by scavenging sulfur atoms which react with it to form S2 and CO. It is important to note that such a formation mechanism is applicable only to the solid phase, as gas phase formation of OCS via atom addition usually occurs as a result of the addition of an oxygen atom to HCS or the addition of a sulfur atom to HCO.
As can be seen, neutral-neutral sulfur chemistry in astrophysical settings is potentially rich and varied, yet laboratory-based experiments in this field are hard to come by. Future research should thus attempt to fill this gap in knowledge, especially since a better understanding of low temperature and pressure sulfur atom (or radical) reactions could go some way in helping to answer the question of depleted sulfur in dense interstellar clouds as the products of such reactions are often refractory. Furthermore, additional experimental data would be invaluable to the modelling community as it would allow for the results of computational simulations to be tested against empirical evidence.
Thermal Processing
Thermal studies in astrophysical ice analogues constitute an important aspect of research. For example, thermal desorption studies are key to understanding the chemical availability of species which partake in star and planet formation, as well as in the synthesis of prebiotic molecules. A review of such studies was provided by Burke and Brown (2010). Aside from such desorption studies, thermally-induced chemistry has also been investigated, as such reactions are known to occur in various astrophysical environments where temperatures are high enough to overcome the relevant activation energy barriers (for a review, see Theulé et al. 2013). The results of those experiments which are pertinent to sulfur ice astrochemistry will be discussed in this section.

Kaňuchová et al. (2017) investigated the thermochemistry of both pure SO2 and SO2:H2O mixed ices in the temperature range 16-160 K. Their findings showed that pure SO2 ice begins to sublimate very efficiently at 120 K. Like Collings et al. (2004) and Jiménez-Escobar and Muñoz-Caro (2011) before them, Kaňuchová et al. (2017) also noticed that SO2 sublimates at a higher temperature when included in a H2O matrix.
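Sublimation behaviour of this kind is routinely analysed with the Polanyi-Wigner equation. The sketch below integrates a first-order desorption trace under a linear heating ramp; the attempt frequency, heating rate, and desorption energy are illustrative assumptions chosen so that the peak lands near the ~120 K sublimation of pure SO2 quoted above — they are not fitted values from Kaňuchová et al. (2017):

```python
import math

def tpd_peak(E_des: float, nu: float = 1e12, beta: float = 1.0) -> float:
    """Peak temperature of a first-order Polanyi-Wigner desorption trace,
    dN/dT = -(nu/beta) * N * exp(-E_des/T), under linear heating at beta K/s.
    E_des is the desorption energy in kelvin; forward-Euler integration in T."""
    T, N, dT = 20.0, 1.0, 0.001
    best_T, best_rate = T, 0.0
    while T < 200.0 and N > 1e-9:
        rate = nu * N * math.exp(-E_des / T)   # desorption rate (arbitrary units)
        if rate > best_rate:
            best_T, best_rate = T, rate
        N -= (rate / beta) * dT                # dN = -(rate/beta) dT
        T += dT
    return best_T

# Illustrative: a desorption energy of ~3500 K puts the peak near 120 K.
print(f"desorption peak near {tpd_peak(3500.0):.1f} K")
```

A higher desorption energy shifts the peak to higher temperature, which is the qualitative behaviour seen when a volatile is trapped in (or bound more strongly by) a H2O matrix.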
When studying SO2:H2O ice mixtures, it was noted that thermochemical reactions took place and the main reaction product was HSO3 -, with smaller amounts of S2O5 2- also being produced (Kaňuchová et al. 2017). These results complemented the previous findings of Loeffler and Hudson (2010; 2013), who also showed that in the presence of H2O2 (which is the main irradiation product of H2O ice and thus of significance to many astrophysical contexts), HSO3 - goes on to form HSO4 - and H3O +. Subsequent deprotonation of HSO4 - by H2O yields SO4 2- and H3O + ions.
Recent work by Bang et al. (2017) has also shown that thermally-driven reactions between H2O ice and gas phase SO2 are possible. Their study showed that SO2 molecules adsorbed at the surface of a H2O ice can react with the ice at temperatures of > 90 K. The primary reaction products are SO2 -, HSO3 -, and OH -. Programmed heating of the physisorbed gas to 120 K causes desorption from the ice surface and thus separates it from the chemisorbed hydrolysis products. Quantum chemical calculations suggest that these products form via a hydrolysis mechanism similar to the reactions outlined above (Bang et al. 2017).
Building on their previous work in which they showed that thermally-driven reactions occur in mixed ices of H2O:H2O2:SO2, Loeffler and Hudson (2016) went on to show that such reactions are also possible if the oxidant is changed from H2O2 to O3, which is also a radiolysis product of H2O. The results of this study showed that O3 ice is consumed producing HSO4 -, although the reaction sequence begins with a reaction between SO2 and H2O which is not too dissimilar to that which occurs in the Earth's atmosphere (Erickson et al. 1977;Penkett et al. 1979).
H2O + SO2 → H+ + HSO3-

2 HSO3- → H2O + S2O5 2-

HSO3- + O3 → HSO4- + O2

Given that the experiments of Loeffler and Hudson (2016) were conducted in the temperature range 50-120 K, they are of direct relevance to the Galilean satellites Europa, Ganymede, and Callisto, where mean surface temperatures are ~100 K and where sulfur-bearing ices are mixed with H2O and O3 (or its precursor O2). Detections of O3 have been made on the trailing side of Ganymede (Noll et al. 1997). This is to be expected, as this side contains surface O2 and is subject to preferential bombardment by Jovian magnetospheric ions.
SO2 has also been detected on the Galilean moons, although its distribution is somewhat more elusive (McCord et al. 1998a). The results of the study by Loeffler and Hudson (2016) suggest that SO2 should be depleted on the trailing side of Ganymede if mixed in ices along with both H2O and O3. However, given that the consumption of O3 is dependent on a prior reaction between SO2 and H2O (as outlined in the reaction sequence above), it is possible that pure frosts of SO2 may co-exist alongside pure O3 ices.
In the case of Europa, SO2 has been detected on the trailing hemisphere, but seems to be absent on the leading hemisphere (Hendrix and Johnson 2008;Hendrix et al. 2011). The fact that O2 is present on both hemispheres should lead one to believe that O3 should only be detected on the leading side. However, to date, no detections of this molecule have been made on the surface of Europa, meaning that there may be some unknown reaction consuming O3 in this hemisphere.
The case of Callisto is somewhat more difficult to interpret. Although O2 has been detected in the trailing hemisphere (Spencer and Calvin 2002), no O3 has been identified. SO2 has only been detected in the leading hemisphere, which is unexpected as this side is less susceptible to magnetospheric ion bombardment. The identification of SO2 on Callisto has, however, been challenged, with carbonate species being suggested as an alternative for the spectral observation (Johnson et al. 2004).
There is also the potential for these results to be applied to cometary chemistry. The Rosetta mission identified the presence of SO2 in the coma of comet 67P/Churyumov-Gerasimenko, but did not detect O3 (Bieler et al. 2015). It is possible that the formation of sulfur oxyanions via the reaction of cometary SO2 and H2O ices is responsible for the absence of O3.
Other sulfur-bearing molecules have also been the focus of research with regards to thermally-driven reactions in astrophysical environments. The thermal reaction between oxygen atoms and CS2 to produce OCS is one example (Ward et al. 2012). Mahjoub and Hodyss (2018) investigated the reaction between OCS and methylamine over temperatures of 12-300 K. These molecules have been identified in comets, which are known to experience thermal processing as they orbit the Sun. Their results showed that, at temperatures exceeding 100 K, methylammonium methylthiocarbamate is formed as a result of the nucleophilic attack of methylamine on OCS (Fig. 5). This product molecule may be an intermediate in the formation of peptides, and is thus relevant to studies in astrobiology and prebiotic chemistry.
Photochemical Processing
Photochemical reactions are among the most important in astrochemistry and planetary science, and their investigation has enabled the atmospheric chemistry and dynamics of the interstellar medium and of many planetary systems to be quantified. Several studies and reviews on sulfur chemistry within the atmospheres of Earth, Solar System planets and moons, and exoplanets are available (Sze and Ko 1980; Moses et al. 1995; 2002; Colman and Trogler 1997; Zahnle et al. 2009; Tian et al. 2010; Whitehall and Ono 2012; Hu et al. 2013; Hickson et al. 2014; Ono 2017; etc.).
Although of great importance to our understanding of planetary science, atmospheric studies largely deal with gas phase chemistry. In this review, we are more concerned with the solid phase and so this section will limit itself to the discussion of sulfur ice photochemistry and the results of laboratory investigations.
The Importance of Spectroscopic Studies in Photochemistry
In Section 2, reference was made to the fact that detailed knowledge of the absorption spectra of candidate interstellar molecules would aid in their search and identification. Molecular photochemistry usually begins via the absorption of a photon which results in excitation of the absorbing species to higher electronic energy states, or in the dissociation of the molecule through the process of bond fission (Wells 1972).
As such, characterisation of the absorption spectra of astrochemically relevant molecules is required because it provides a starting point for understanding their photochemical reactivity. This is especially true in the case of extreme-and vacuum-UV photons, as well as X-rays, which initiate much of the photochemistry in interstellar environments and planetary surfaces (Pilling and Bergantini 2015; Öberg 2016).
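In practice, such absorption spectra feed directly into rate estimates: a photodissociation rate is obtained by integrating the absorption cross-section against the local photon flux and the dissociation quantum yield. A minimal sketch of that integration follows; all arrays are placeholder numbers chosen only to illustrate the bookkeeping, not measured data for any molecule discussed here:

```python
# Placeholder wavelength grid and spectral data (NOT measured values).
wavelengths = [120.0, 140.0, 160.0, 180.0, 200.0]   # nm
sigma = [5e-17, 8e-17, 6e-17, 2e-17, 5e-18]          # absorption cross-section, cm^2
flux = [1e8, 1e8, 1e8, 1e8, 1e8]                     # photons cm^-2 s^-1 nm^-1
phi = [0.9, 0.7, 0.5, 0.2, 0.05]                     # dissociation quantum yield

def photodissociation_rate(wl, sig, F, qy):
    """Trapezoidal integration of sigma(l) * F(l) * phi(l) over wavelength,
    giving a first-order photodissociation rate in s^-1."""
    integrand = [s * f * q for s, f, q in zip(sig, F, qy)]
    rate = 0.0
    for i in range(len(wl) - 1):
        rate += 0.5 * (integrand[i] + integrand[i + 1]) * (wl[i + 1] - wl[i])
    return rate

k = photodissociation_rate(wavelengths, sigma, flux, phi)
print(f"photodissociation rate ~ {k:.2e} s^-1")
```

The inverse of this rate gives a characteristic photochemical lifetime, which is why better-characterised cross-sections translate directly into better astrochemical models.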
As a brief example, the vacuum-UV absorption spectrum of SO2 ice has been studied in some detail, and distinct spectral signatures have been detected which allow for phase discrimination between amorphous and crystalline ices. These features may be used to glean further information as to the structure of the ice as a function of substrate temperature and rate of deposition: for example, rapid deposition rates at low temperatures are amenable to the formation of amorphous ices.
This has important astrochemical implications, as deposition time-scales on dust grains in the interstellar medium are likely to be much longer than those which can be reproduced in the laboratory. Thus, there is good reason to suggest that SO2 ice in the interstellar medium is in fact crystalline, or at least a mixture of crystalline and amorphous ice. Reflectance spectra of astrophysical SO2 do not show any evidence of crystallinity (Nash et al. 1980; Hapke et al. 1981); however, these spectra include only a few data points, and their limited resolution may not allow for the detection of such crystalline features.
Given that radiolysis induced by X-ray absorption can occur in interstellar and circumstellar environments, such as near T-Tauri phase stars (Gullikson and Henke 1989; Pilling and Bergantini 2015; Pilling 2017), there is also some scope for recording the X-ray absorbance spectra of astrophysical ice analogues. X-ray absorption studies have been carried out in the past in order to determine the speciation of sulfur in coal and petroleum (Spiro et al. 1984; George and Gorbaty 1989; Huffmann et al. 1991; Waldo et al. 1991), soils (Morra et al. 1997; Xia et al. 1999), microbial biochemical products (Pickering et al. 1998; 2001; Prange et al. 2002), and batteries (Cuisinier et al. 2013; Pascal et al. 2014).
These studies largely made use of either X-ray absorption near edge structure (XANES) spectroscopy or X-ray absorption fine structure (XAFS) spectroscopy. Despite the established use of X-ray spectroscopy in other fields, to the best of the authors' knowledge such a technique has yet to be used to study laboratory generated astrophysical sulfur ice analogues and so there is some potential for future studies in this regard. A review on the use of XANES in the determination of sulfur oxidation states and functionality in complex molecules was provided by Vairavamurthy (1998).
Laboratory Photochemistry Experiments
With respect to investigating the photochemical reactivity of sulfur-bearing ice analogues, SO2 and H2S ices have received the most attention (Cassidy et al. 2010). UV photolysis of solid phase SO2 was examined in detail by Schriver-Mazzuoli et al. (2003) who considered the irradiation of pure SO2 ice, as well as SO2 ice trapped in an excess of amorphous H2O ice, with photons of λ = 156, 165, and 193 nm.
Results showed that photolysis of the pure ice resulted in the formation of SO3, while the major photolysis product of the SO2:H2O ice mixture was H2SO4. In the case of pure SO2 ice photolysis, there is also some evidence to support the photo-reaction occurring after the formation of SO2 dimers (Sodeau and Lee 1980).
Photolysis of SO2:H2O ice mixtures using far-UV photons of λ = 147, 206, 254, and 284 nm has recently been performed, and results showed that the main photolysis products were the sulfur oxyanions SO4 2-, HSO4 -, and HSO3 - (Hodyss et al. 2019). Interestingly, although photons with λ > 219 nm are not energetic enough to cause the dissociation of SO2, these products were also observed in both the λ = 254 and 284 nm photolysis experiments. These observations were explained by an electronically excited SO2 molecule reacting with a ground-state SO2 molecule (Hodyss et al. 2019). This reaction mechanism is thought to be similar to that in the gas phase, where the reaction products are SO and SO3 (Chung et al. 1975); these may then go on to react with H2O to produce the sulfur oxyanions observed. Alternatively, the excited-state SO2 molecule may react directly with ground-state H2O.
These studies are of great importance in the context of planetary science, particularly in the case of the icy Galilean satellites, upon the surfaces of which SO2 has been detected (Lane et al. 1981). Hence, photolytic reactions represent a method of SO2 depletion and H2SO4 production at the surfaces of these moons. Although there is good evidence to suggest that the major formation mechanism of H2SO4 is via magnetospheric sulfur ion implantation (discussed in more detail in Section 7), these photochemical results are nonetheless important and may be extended to other icy Solar System bodies.
Experiments have also been carried out in order to determine the chemical effect of soft X-rays (< 2 keV) on SO2 ices. Such experiments are entirely appropriate in the context of interstellar astrochemistry, as SO2 ices have been detected near young stellar objects where X-ray intensities are higher (Boogert et al. 1997). Laboratory results showed that SO3 is the major product of such radiolysis (de Souza Bonfim et al. 2017).

Soft X-ray radiolysis experiments of SO2 mixed ices have also been performed. Irradiation of H2O:CO2:NH3:SO2 ices at 50 K and 90 K resulted in the formation of SO3, H2SO4, and associated sulfur oxyanions (Pilling and Bergantini 2015). Interestingly, OCN - was also detected among the radiolysis products, and is likely to be formed as a result of the dissociation of CO2 and NH3 and the recombination of the resultant fragments. However, radiolytic dissociation and fragment recombination did not result in sulfur bonding to any new elements.
The photolysis of H2S ices represents another major aspect of sulfur astrochemistry. Comprehensive work in this regard was performed by Jiménez-Escobar and Muñoz-Caro (2011), who irradiated H2S ice with UV light at 7 K. The results of this experiment showed that H2S photolysis is fairly rapid and produces a large variety of species including HS, H2S2, HS2, and S2. This work complemented previous findings which showed that photolysis of H2S adsorbed on LiF at 110 K resulted in the dissociation to H and HS radicals and the formation of H2 gas (Harrison et al. 1988;Liu et al. 1999;Cook et al. 2001).
The UV irradiation of H2S ice has most recently been revisited by Zhou et al. (2020), who wished to address the apparent depletion of HS/H2S ratios observed in the interstellar medium relative to those predicted by contemporary models. Their experimental results revealed a wavelength dependence for the quantum yield of HS formation via H2S photo-dissociation. Also taking into account H2S parent molecule absorption and the interstellar radiation field, such a result implies that just ~26% of interstellar photo-excitations result in the successful production of HS radicals. It is therefore necessary to reconsider some of the outcomes of computational sulfur astrochemistry studies.
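The headline figure reported by Zhou et al. (2020) is, in effect, an absorption-weighted average of a wavelength-dependent quantum yield over the interstellar radiation field. The toy calculation below shows only the bookkeeping of such an average; the wavelength grid, absorption weights, and per-bin yields are invented placeholders, not their data:

```python
# Placeholder values (NOT data from Zhou et al. 2020): relative numbers of
# photons absorbed by H2S in each wavelength bin, and the HS quantum yield
# assumed for that bin.
wavelengths = [121.6, 140.0, 160.0, 180.0]   # nm
absorbed = [4.0, 2.0, 1.5, 0.5]              # relative photons absorbed per bin
yield_hs = [0.15, 0.25, 0.40, 0.30]          # HS quantum yield per bin

# Effective yield = sum(absorbed_i * yield_i) / sum(absorbed_i)
effective = sum(a * y for a, y in zip(absorbed, yield_hs)) / sum(absorbed)
print(f"effective HS yield from these placeholder inputs ~ {effective:.0%}")
```

The point of the exercise is that an effective yield well below unity can emerge even when individual bins have high yields, because the flux-weighted average is dominated by the most strongly absorbing wavelengths.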
When in a matrix of H2O ice, the UV irradiation of H2S yielded products such as SO2, SO4 2-, HSO3 -, HSO4 -, H2SO2, H2SO4, and H2S2 (Jiménez-Escobar and Muñoz-Caro 2011). An interesting observation made during this study was that, although the sublimation temperature of pure H2S was noted to be 82 K, this value rose significantly when H2S was mixed in a matrix of H2O molecules. The reason for this is the trapping of H2S molecules by the less volatile H2O molecules, in an analogous fashion to clathrates. Thus, H2S only co-sublimates with H2O at higher temperatures. Such results are in agreement with previous findings (Collings et al. 2004).
Photo-irradiation of H2S ices mixed with other species have also been studied. Chen et al. (2015) irradiated H2S:CO and H2S:CO2 ices at 14 K with UV and extreme-UV photons and determined the nature of the sulfur-bearing product molecules. They found that both ice mixtures produce OCS, with the formation efficiencies being greater for the H2S:CO mixture and when lower starting concentrations of H2S were used. Other sulfur-bearing molecules were detected among the photolysis products: CS2 was produced after the photo-processing of the H2S:CO mixture, and SO2 was produced after the photo-processing of the H2S:CO2 ice.
An interesting study of vacuum-UV photolysis of gas phase H2S mixed with either ethene or 1,3-butadiene found that the main products are refractory thiol compounds (Kasparek et al. 2016). Though not an investigation into sulfur ice chemistry, this result carries interesting implications for interstellar sulfur chemistry, as it reveals a method of locking sulfur-bearing material away as refractories and thus aids in our understanding of the sulfur depletion problem discussed in Section 1 (Jenkins 2009;Laas and Caselli 2019).
The photo-processing of more complex H2S mixed ices has also been performed.

Aside from H2S and SO2, investigations into other simple astrochemically relevant molecules, namely OCS and CS2, have been carried out. Photo-irradiation of monolayers of these species adsorbed on both amorphous and polycrystalline H2O ices was carried out at 90 K by Ikeda et al. (2008), who showed that the photolysis of these species by UV photons of λ = 193 nm led to the formation of S2, either via the combination of atomic sulfur with OCS or CS2, or via the combination of dissociated sulfur atoms. The work of Ikeda et al. (2008) built upon previous efforts by Dixon-Warren et al. (1990), who also observed S2 formation when irradiating OCS adsorbed on LiF at 166 K using λ = 222 nm photons.
Cryogenic photochemistry between OCS and halogen species has also been investigated: the reactions with Cl2, Br2, and BrI have been performed in a solid Ar matrix at 15 K using broadband UV photons (Romano et al. 2001;Tobón et al. 2006). Various reaction products were formed, the most chemically interesting of which were syn-halogenocarbonylsulfenyl halides based on X-C(=O)-S-Y backbones, where X and Y are the constituent atoms of the dihalide originally incorporated into the ice (Fig. 6). The corresponding anti-rotamer products were not detected, and are known to be less stable (Romano et al. 2001;Tobón et al. 2006).
When similar experiments were used to investigate the photochemistry between CS2 and the dihalides Cl2, Br2, and ClBr in an Ar ice matrix, both syn- and anti-halogenothiocarbonylsulfenyl halides based on X-C(=S)-S-Y backbones were detected (X and Y once again being the constituent atoms of the original dihalide), among other products (Fig. 6; Tobón et al. 2007).
Solid phase photochemistry of more complex, exotic molecules may yield significant insights into the chemistry of different functional groups found in the interstellar medium. For instance, Zapała et al. (2019) recently examined the photolysis of 2-sulfanylethanenitrile in an Ar matrix at 6 K. A related compound, sulfanylmethanenitrile, has already been detected in the interstellar medium (Halfen et al. 2009). Their results showed that, among some photo-dissociation products formed via the loss of -CN and -SH groups, several isocyano compounds were produced as a result of photo-isomerisation processes (Zapała et al. 2019; Fig. 7).

Pharr et al. (2012) investigated the individual, solid phase 10 K UV photolysis of diazo(2-thienyl)methane and diazo(3-thienyl)methane in a N2 or Ar ice matrix. Experiments using photons of different wavelengths were carried out, and resulted in a wealth of product molecules containing several interesting structural features and functional groups (Fig. 8), such as: carbenes, sulfur heterocycles, cyclopropenes, alkynes, S=C-C=C bond systems, and S=C=C bond systems. Several of these functionalities may be of astrochemical interest.
Non-Reactive Charged Particle Radiolysis
In discussing the role of non-reactive charged particles in astrochemistry, it is largely protons and electrons which are considered, although noble gas ions have also been studied. Proton- and electron-irradiation studies are perhaps some of the most explored areas within astrophysical radiochemistry due to their applicability to a wide variety of contexts and scenarios within the space sciences. These include the interaction of interstellar ices with cosmic rays, as well as the processing of Solar System ices by the solar wind or magnetospheric ions. With regards to sulfur radiochemistry, early experiments by Moore (1984) found that proton-irradiation of cryogenic SO2 produced several different compounds, and showed that there is a correlation between the colour of the resultant product mixture (which is dependent on its composition) and the temperature of the reaction.
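Results from such irradiation experiments are commonly compared through radiation-chemical yields (G-values, molecules formed per 100 eV of absorbed energy). A minimal sketch of that bookkeeping follows; every numerical value in it is an illustrative assumption, not a quantity taken from the experiments discussed in this section:

```python
def product_column_density(fluence: float, energy_per_ion_eV: float,
                           g_value: float) -> float:
    """Estimate the column density of product molecules (cm^-2) formed
    during an irradiation experiment.

    fluence           -- ions cm^-2 delivered to the ice
    energy_per_ion_eV -- energy each ion deposits in the film (eV), which
                         depends on the stopping power and film thickness
    g_value           -- molecules produced per 100 eV absorbed
    """
    energy_per_area = fluence * energy_per_ion_eV     # eV cm^-2
    return g_value * energy_per_area / 100.0

# Illustrative: 1e14 protons cm^-2, each depositing 1e5 eV in the ice,
# with an assumed G-value of 0.5 molecules per 100 eV.
n_prod = product_column_density(1e14, 1e5, 0.5)
print(f"~{n_prod:.1e} product molecules cm^-2")
```

Quoting yields this way normalises away beam energy and fluence, which is what makes proton, electron, and ion experiments on different ices comparable.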
Electron-irradiation of native sulfur in H2O ice was observed to produce H2SO4 (Carlson et al. 2002), mirroring results previously obtained by Johnston and Donaldson (1971) and Della Guardia and Johnston (1980), who had irradiated sulfur grains in liquid H2O. Continued processing results in the radiolysis of SO4 2- to form SO2 (Hochanadel et al. 1955), among other compounds. Moore et al. (2007a) later showed that proton-irradiation of SO2 in H2O ice also resulted in the formation of H2SO4 which, when subsequently warmed, yielded a variety of hydrates of the acid.
Extensive efforts have been made to understand proton- and electron-irradiation of SO2, either as a pure ice or mixed with H2O, due to the fact that SO2 has been recognised (along with OCS) as a dominant sulfur-bearing molecule in interstellar ices (Maity and Kaiser 2013). Seminal work by Moore et al. (2007b) investigated the 800 keV proton-irradiation of pure SO2 ice and SO2:H2O mixed ices of varying compositions at 86-132 K. When the pure SO2 ice was irradiated, SO3 was observed, as in the photolysis experiments of Schriver-Mazzuoli et al. (2003), but this was not the case when a mixed SO2:H2O ice underwent radiolysis. Instead, other molecules such as HSO3 -, HSO4 -, SO4 2- (possibly from an acid molecule), and H2O2 were observed. These results are somewhat analogous to those obtained during the thermal processing of this mixed ice (Loeffler and Hudson 2010; 2013; Kaňuchová et al. 2017).
An interesting interpretation of these results obtained by Moore et al. (2007b) is the fact that proton implantation in pure SO2 ice does not result in the formation of S-H bonds. Garozzo et al. (2008) confirmed these results by implanting 30 keV protons at 16 K and 50 keV protons at 80 K in pure SO2 ice. The main products of such radiolysis were observed to be SO3 (both as a single molecule and in polymeric form) and O3. Their study also involved the irradiation of SO2 ice by 30 keV He + ions at 16 K, with similar results being reported.
By showing that no S-H bonds are formed in this way, these studies challenged a previous hypothesis made by Voegele et al. (2004), who suggested that the proton-irradiation of SO2 may be a viable method of synthesising H2SO3 in a manner analogous to the proton-irradiation of CO2 forming H2CO3 (Brucato et al. 1997). A good summary of these results may also be found in the work of Strazzulla (2011).
Work by Ferrante et al. (2008) and Garozzo et al. (2010) identified OCS as the main product of the proton-irradiation of ices containing both sulfur (SO2 or H2S) and carbon (CO or CO2) source molecules. Several scenarios were considered, including water-dominated and water-free ices, as well as mixed and layered structures. The overall yield of OCS is determined by the mixing ratio of the ice, as well as by the nature of the sulfur and carbon source molecules, with H2S and CO being the most amenable to OCS formation.
The mechanism of formation of OCS is believed to involve fragmentation of parent source molecules, after which sulfur atoms combine with CO directly (Ferrante et al. 2008). An interesting result of these investigations is the fact that, although OCS forms readily, it is destroyed fairly easily upon prolonged irradiation. It should also be noted that, during the radiolysis of CO:H2S mixed ices, CS2 is also a relatively abundant product (Garozzo et al. 2010).
The irradiation of CS2:O2 mixed ices with high-energy electrons has also been studied. Maity and Kaiser (2013) found that, at 12 K, this radiolysis readily converts CS2 to OCS, much as does the irradiation of CS2 mixed with H2O, CO2, or CH3OH. Other sulfur-bearing products were also observed, including SO2 and SO3.

Loeffler et al. (2011) performed radiolysis experiments in which H2SO4, H2SO4 • H2O, and H2SO4 • 4H2O were irradiated with 0.8 MeV protons. Such experiments are of importance given that sulfuric acid hydrates are known products of the radiolysis of SO2:H2O mixed ices and are distributed widely on Europa (Carlson et al. 1999; 2002; 2005). The main irradiation products were SO2, S2O3, H3O +, HSO4 -, and SO4 2-. An interesting result of this study, however, was the observed radiolytic stability of the monohydrate as compared to that of the pure acid.
Furthermore, destruction of the tetrahydrate was found to be strongly temperature dependent: losses were greater at lower temperatures due to a combination of radiolysis and amorphisation, which changed the number of water molecules associated with each acid molecule. This is an interesting result in the context of Europa and the other Galilean satellites, where it is hypothesised that the monohydrate is stable over geological time, while the tetrahydrate will only be stable in warmer regions (Loeffler et al. 2011).
The irradiation of H2S has also been investigated, perhaps as a result of the abundance of this compound in cometary materials (Rodgers and Charnley 2006). Irradiating H2S poses a new challenge because it sublimates at a relatively low temperature of 86 K at pressures of interstellar relevance. Experiments by Moore et al. (2007b) investigated the irradiation of H2S as a pure ice and in a mixture with H2O by 0.8 MeV protons at temperatures between 86 and 132 K. Irradiation of the pure ice resulted in the formation of H2S2. This result is of some consequence in the context of astrobiology, as it reveals a fairly facile route to disulfide bond (S-S) formation; such bonds are important in several protein structures. When H2O:H2S mixed ices were irradiated, H2S2 was still observed, but in lower abundance, primarily because SO2 is also a product of this radiolysis and so competes for sulfur atoms (Moore et al. 2007b).
Electron-irradiation of complex ice mixtures containing H2S has also been performed recently. Mahjoub et al. (2016) irradiated an H2S:CH3OH:NH3:H2O mixture (in a compositional ratio of 7:35:17:41) at 50 K with 10 keV electrons for 19 hours, and then warmed the ice to 120 K, at which temperature it was maintained for another hour, all the while being irradiated. After the completion of electron-irradiation, the ice was warmed to 300 K. Various sulfur-bearing molecules were detected as products of this combined thermal and radiolytic processing, including OCS, CS, CS2, SO, SO2, S2, S3, S4, CH3SCH3, CH3S2CH3, CH3S(O)OCH3, and possibly SO3, S2O, and H2CSO.
The ice composition, processing methodology, and sulfur chemistry observed in the experiments by Mahjoub et al. (2016) are of direct relevance to the Jovian Trojan asteroids. These bodies display a distinct colour bimodality, which is thought to be the result of combined radiolytic and thermal processing of sulfur compounds, particularly short-chain sulfur allotropes. These allotropes are known to be formed by the radiolysis of native sulfur (among other sulfur-bearing compounds) and are highly coloured, exhibiting strong absorption at long wavelengths in the visible spectrum (Meyer 1976; Brabson et al. 1991).
However, such allotropes are also known to be unstable with regards to thermal processing, and combine to form cyclical geometries (e.g. S8) at higher temperatures. As the processing methodology used in these experiments is analogous to the temperature fluxes and irradiation regimes to which the Jovian Trojans are subjected as they traverse around the Sun (Mahjoub et al. 2016), these results suggest that the observed colour bimodality of these asteroids is at least partly the result of the thermal processing of sulfur allotropes formed radiolytically through interaction with the solar wind.

An interesting set of studies looked into the irradiation of H2O ice deposited over a refractory sulfurous residue with 200 keV He + ions at 80 K, in order to determine whether or not this is a possible source of SO2 (Strazzulla et al. 2009). The residue was obtained via the prior irradiation of SO2 ice at 16 K, and was used as an approximation for sulfur-bearing solid materials in astrophysical environments, such as on the surfaces of the Galilean satellites. The studies did not find any evidence of efficient SO2 production, and thus concluded that the radiolysis of mixtures of H2O ice and refractory sulfurous materials cannot be the primary formation mechanism of the SO2 detected at the surface of the Galilean moons.
Electron-irradiation studies of ionic solids, which may be representative of mineral assemblages at the surfaces of planets and moons, have also been conducted. Sasaki et al. (1978) irradiated Li2SO4 with 0.3-1.6 keV electrons and detected Li2SO3, Li2S, Li2O, and elemental sulfur as products. Prolonged irradiation showed that the final radiolysis products were Li2S and Li2O. Johnson (2001) considered the irradiation of hydrated Na2SO4 and MgSO4, both minerals that have been suggested to be present at the geologically younger surfaces of Europa as a result of tidal flexing or volcanism (Kargel 1991;McCord et al. 1998b;1998c). Ion bombardment of hydrated MgSO4 is thought to produce MgO, MgS, and Mg(OH)2, as well as O2 and SO2 (Sieveka and Johnson 1985;Johnson et al. 1998;Johnson 2001). In the case of hydrated Na2SO4, irradiation is thought to represent a facile method of sodium loss (Benninghoven 1969;Wiens et al. 1997), and indeed some atomic sodium is a component of the tenuous atmosphere on Europa (Brown and Hill 1996). The net products of irradiation are thought to be NaOH, Na2O2, SO2, and H2SO4 (Johnson et al. 1998;Johnson 2001).
Irradiation of these hydrated sulfates also represents a potential formation mechanism for H2SO4 and SO2 on the surfaces of the icy Galilean satellites. This is of significant consequence because, as will be described in Section 7, there is currently some debate as to whether the latter is formed by magnetospheric sulfur ion implantation or from sulfur-bearing compounds already present on the moons (possibly even via a non-radiolytic formation mechanism). However, reports of laboratory irradiations of ionic solids and minerals remain sparse, and so there is a need for more of these experiments to be performed.
Sulfur Ion Bombardment and Implantation in Ice Analogues
Sulfur astrochemistry and molecular astrophysics play an important role in understanding the chemistry and surface processes of the icy Galilean satellites of Jupiter. Sulfur-bearing ices on these satellites were detected by the International Ultraviolet Explorer (IUE) mission (Lane et al. 1981), and since the reporting of these early findings, much work has been devoted to understanding the origin and chemistry of these ices.
Such work led to the proposed existence of a radiolytic sulfur cycle (Carlson et al. 1999;2002;2005) which includes chemical alteration of the surface by energetic ions and electrons from Jovian magnetospheric plasma (Fig. 9). Laboratory experiments have shown that H2SO4 and its hydrates may be produced through energetic particle bombardment of mixed ices at the surface of the satellites (Strazzulla et al. 2007;2009). Indeed, maps of acid production rate show a clear bulls-eye pattern which would be expected as a result of Iogenic sulfur ion bombardment (Carlson et al. 2005;2009). Other sulfur-bearing compounds are also thought to arise from the radiolytic sulfur cycle, with Hendrix et al. (2011) finding a strong correlation between surface SO2 abundance and sulfur ion implantation. A review of the radiolytic sulfur cycle may be found in Dalton et al. (2013).
Because such chemical complexity arises from sulfur ion impacts and implantations, many experiments have been undertaken to further understand the products of the implantation of sulfur ions of various energies and charge states, as well as the mechanisms by which they form. The simplest (and perhaps most investigated) experiments involve sulfur ion implantation in H2O ice. Such experiments have not found much in the way of evidence to support the formation of H2S or SO2 from these impacts, despite the findings of Hendrix et al. (2011). Instead, these implantations have largely resulted in the formation of H2SO4 and its hydrates.
Irradiation of H2O ice with 200 keV S + ions at 80 K resulted in the formation of H2O2 as well as hydrated H2SO4 (Strazzulla 2011). This acid hydrate was produced with a very high yield of ~0.65 molecules per ion (Strazzulla et al. 2007). These results led to a new question with regards to the H2SO4 hydrates present on the surfaces of the icy Galilean satellites (especially Europa): are these acid hydrates the result of Iogenic ion implantation into H2O ice (i.e. an exogenic sulfur source), or are they the result of some other chemical process involving sulfur compounds native to the icy moon (i.e. an endogenic sulfur source) which forms SO3, in turn rapidly producing the observed acid hydrates?
Discriminating between exogenic and endogenic sulfur sources is possible by observing the spatial distribution of the H2SO4 hydrates. On Europa, the global distribution of such hydrates shows an enhancement on the trailing hemisphere, which is suggestive of an ion implantation (exogenic) source. It is interesting to note that SO2 ice also displays a similar distribution. However, since the concentration of the acid hydrate at the surface is much greater than that of SO2, and because calculations have shown that radiolysis can produce the observed amount of acid hydrate within ~10^4 years, it is likely that the overall sulfur source for the H2SO4 hydrates on Europa is an exogenic one (Strazzulla 2011).
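The ~10^4 yr figure quoted above can be checked with an order-of-magnitude estimate from the measured yield. In the sketch below the ion flux and target column density are assumed round numbers chosen for illustration, not values taken from Strazzulla (2011); only the yield (~0.65 molecules per ion) comes from the text:

```python
SECONDS_PER_YEAR = 3.156e7

def production_timescale(n_target, yield_per_ion, flux):
    """Years needed for an ion flux (ions cm^-2 s^-1) with a given
    radiolytic yield (molecules/ion) to build up a column density
    n_target (molecules cm^-2), ignoring destruction."""
    return n_target / (yield_per_ion * flux) / SECONDS_PER_YEAR

# Yield of ~0.65 molecules/ion from the text; the flux and target column
# density below are assumed illustrative values.
t = production_timescale(n_target=1.0e17, yield_per_ion=0.65, flux=5.0e5)
print(f"~{t:.0f} years")  # order 10^4 yr under these assumptions
```

Even crude inputs of this kind reproduce the order of magnitude, which is why such timescale arguments are considered robust.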
Further investigations into the irradiation of H2O ice at 80 K were made using multiply charged S n+ ions (n = 7, 9, 11) over an energy range of 35-176 keV (Ding et al. 2013). Results showed that the dominant products were pure H2SO4 as well as its monohydrate and tetrahydrate forms. The nature of the products was noted to be independent of the charge state of the projectile ion.
This is consistent with the fact that such projectiles abstract electrons from the target surface upon approaching within several tens of angstroms. This so-called resonant neutralisation process allows for the formation of transient, neutral hollow atoms, in which inner electron shells are not occupied (Arnau et al. 1997; Winter and Aumayr 1999). Electron abstraction leaves a positively charged region at the surface of the solid ice which can explosively expand, leading to so-called potential sputtering and the formation of surface hillocks (Wilhelm et al. 2015). Decay of the transient hollow atoms can occur via radiative electron cascade or Auger electron emission, though time constraints mean that it is often the case that only a few electrons are able to de-excite to the inner shells before impact with the target ice (Herrmann et al. 1994). Once the projectile ion enters the bulk ice, it achieves an effective charge state dependent only upon its impact velocity within a single monolayer, thus losing memory of the original, incoming charge state (Herrmann et al. 1994; Ding et al. 2013).
The yields of the acid products formed were dependent upon the energies of the incoming sulfur ions, with higher energies being responsible for higher yields. Interestingly, no SO2 or H2S was detected during post-irradiative spectroscopic analysis. Hence, the results of Ding et al. (2013) confirm and extend those obtained by Strazzulla et al. (2007). Ding et al. (2013) also state that their results, combined with the fact that there is a clear correlation between H2SO4 hydrate concentration and magnetospheric sulfur ion flux, support an exogenic source for sulfur in these molecules on Europa.
The implantation of multiply charged sulfur ions in other ices has also been studied. It has been shown that implantation of 176 keV S 11+ ions in CO ice at 15 K results in the formation of SO2 and OCS, while implantation of 90 keV S 9+ ions in CO2 ice covered with a thin layer of H2O ice at the same temperature yields SO2 and CS2 (Lv et al. 2014a). Given the known presence of CO2 on Europa, Lv et al. (2014a) calculated that a time-scale of ~10^4 years is required to produce the amount of SO2 present at the surface via sulfur ion bombardment.
However, this calculation relies on the assumption that the yield measured at 15 K is applicable at temperatures more relevant to the icy surface of Europa (60-100 K). Lv et al. (2014a) also showed that the radiolysis of the CO ice resulted in the formation of a large number of carbon-based chains, while chemical reactions and mixing at the interface of the CO2 and H2O ices caused the formation of H2CO3.
Follow-up studies by Boduch et al. (2016), who used UV rather than IR spectroscopy for product identification, showed that no SO2 was detected among the radiolysis products when pure CO2, O2, and H2O ices, as well as mixed CO2 (or O2):H2O ices, were irradiated with 144 keV S 9+ ions at 16 K, in contrast to the results reported by Lv et al. (2014a). Boduch et al. (2016) rationalised that, at the ion fluence used in their experiments, the amount of SO2 produced (if any) may have been below the detection limit of their spectroscopic instruments.
Additionally, the UV features of SO2 would have been hidden by the strong bands of HSO3 - and SO3 -, which were detected. These results, coupled with the fact that no detections of SO2 were made when irradiating H2O ice deposited above refractory sulfurous materials, led Boduch et al. (2016) to propose the possibility of an endogenic sulfur source for the SO2 observed at the surface of Europa, rather than it being the result of magnetospheric sulfur ion bombardment.
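Detection-limit arguments of this kind ultimately reduce to the standard band-strength relation between integrated absorbance and column density, N = 2.303 ∫Abs dν / A. The sketch below uses assumed values for the minimum measurable integrated absorbance and for the band strength (the latter is a typical order of magnitude for a strong fundamental, not a number from Boduch et al.):

```python
def column_density_ir(integrated_absorbance, band_strength):
    """Column density N (molecules/cm^2) from a base-10 integrated
    absorbance (cm^-1) and a band strength A (cm/molecule):
    N = 2.303 * int(Abs) dnu / A."""
    return 2.303 * integrated_absorbance / band_strength

# Assumed values: a minimum measurable integrated absorbance of 1e-4 cm^-1
# and a band strength of 1.5e-17 cm/molecule (typical order for a strong
# fundamental such as an S=O stretching mode).
n_min = column_density_ir(1e-4, 1.5e-17)
print(f"detection limit ~ {n_min:.1e} molecules/cm^2")
```

Any product column density below this threshold would be spectroscopically invisible, which is the essence of the rationalisation quoted above.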
Laboratory irradiations of a binary 1:1 CO2:NH3 ice mixture with 144 keV S 9+ ions have been performed, considering scenarios in which the projectile ion was implanted and in which it travelled through the ice (Lv et al. 2014b). Results showed that, in both cases, molecules of astrobiological relevance such as ammonium methanoate and dimeric carbamic acid were formed, as well as simple oxides such as N2O and CO. Similar qualitative results were obtained when considering the irradiation of a ternary ice mixture containing NH3, CO2, and H2O (Lv et al. 2014b).
Computational studies have also been used to simulate sulfur ion bombardments of astrophysical ice analogues. Molecular dynamics simulations of a 20 MeV S + ion impacting a complex multi-component ice, whose composition is relevant to the Europan surface, revealed that this collision results in a net loss of SO2 molecules (which were initially present in the ice mixture) due to their oxidation to HSO3 - and SO3 - (Anders and Urbassek 2019a).
These results therefore parallel the laboratory findings of Boduch et al. (2016) from their sulfur ion radiolysis of simpler ices, as well as those of Kaňuchová et al. (2017), who considered the ion irradiation and thermal processing of SO2:H2O mixed ices. However, this computational simulation did not consider the implantation of the sulfur ion into the ice (Anders and Urbassek 2019a), and so any implantation chemistry that could potentially lead to the formation of novel sulfur-bearing compounds was not captured.
The implantation of a 20 MeV S + ion into an ice containing H2O, CO2, NH3, and CH3OH was considered in a separate computational study (Anders and Urbassek 2019b). Results showed that the energy imparted by the projectile ion as it traverses through the ice causes fragmentation of the original molecular constituents, which go on to form a wealth of organic species including methanal, methane, methanoic acid, and methoxymethane. More exotic species, such as cyclopropenone and cyclopropanetrione, were also detected.
An interesting result of this simulation was that the projectile sulfur ion did not react after coming to rest within the ice (Anders and Urbassek 2019b). The authors suggest that the lack of sulfur-bearing product molecules may be due to their use of a low fluence of 1 ion per simulation. Thus, the possibility of SO2 observed at the surface of Europa and Ganymede having an exogenic (Iogenic) sulfur source remains a somewhat open question.
Although complex ice mixtures such as that considered by Anders and Urbassek (2019a) are most easily (and cost-effectively) studied via computational means, experimental attempts have also been made. Ruf et al. (2019) irradiated a mixture of 2:1:1 H2O:NH3:CH3OH with 105 keV S 7+ ions at 9 K with the aim of characterising organosulfur molecules formed as a result of the radiolysis.
Overall, they identified over 1,100 organosulfur compounds (12% of all assigned signals) through a combination of IR spectroscopic and mass spectrometric techniques. Though perhaps not directly related to the presence of SO2 on the icy surfaces of the Galilean satellites of Jupiter, this finding suggests that sulfur ion implantation could be an impetus for a rich organic chemistry which is significant to several astrophysical contexts, and is undoubtedly of interest from the perspectives of astrobiology and prebiotic chemistry.
Future Directions
The work reviewed in Sections 2-7 provides a basis for future investigations, and there are several worthwhile and interesting routes that these investigations may follow. For instance, further investigations into the low-temperature IR and UV absorption spectra of relevant molecules will reveal important diagnostic features, and thus aid with their identification in interstellar and circumstellar media. This is perhaps most relevant now that the planned launch date for the JWST mission is approaching (Gardner et al. 2006), as the data collected by this telescope will no doubt increase our repository of known astrochemical molecules.
Laboratory experiments at purpose-built facilities (such as the new Ice Chamber for Astrophysics-Astrochemistry at the ATOMKI Institute for Nuclear Research in Debrecen, Hungary) will also be useful in addressing problems in astrophysical chemistry. The paucity of empirical sulfur neutral-neutral surface reaction experiments, for instance, is one gap that should be tackled. As previously discussed in Section 3, neutral-neutral reactions are most significant in the context of dense molecular clouds, into which UV photons or cosmic rays cannot penetrate to induce chemistry. Several molecular species have been detected in the cores of such environments, and these molecules are most likely formed via surface reactions in ice grains, with a lesser amount formed through gas phase chemistry. Sulfur is likely an important factor here, but is not often considered in laboratory investigations because it is a known contaminant in ultra-high vacuum systems. Thus, dedicated laboratory experiments are required to fill this gap in knowledge, since the results of such investigations would contribute greatly to our understanding of sulfur depletion in dense interstellar clouds.
Further experiments in solid phase thermal chemistry and photochemistry should also be performed. Such reactions are near ubiquitous in space environments and can occur in any region which is warm enough or is subject to sufficient photon irradiation. Future experiments could therefore reveal much with regards to the sulfur chemistry occurring within diffuse interstellar media and on the surfaces of icy worlds, especially when combined with other processing types such as electron-irradiation. In the case of thermal processing, several interesting prebiotic molecules could be produced via relatively simple chemical steps, such as nucleophilic or electrophilic additions and substitutions. In spite of this, solid phase sulfur thermal chemistry remains poorly characterised.
As it stands, there is contrasting evidence as to whether sulfur ion implantation in CO2 ice can even form SO2 (Lv et al. 2014a; Boduch et al. 2016), with the study arguing in favour having used too low a temperature regime to be directly applicable to the Europan surface. Dedicated experiments could yield more information as to whether SO2 is a product of sulfur ion implantation in CO or CO2 ice, and also whether it is produced with sufficient yields to explain the observed quantities of SO2 on Europa. Such experiments should be systematic in nature, observing the results of sulfur ion implantations of various projectile charge states and energies in CO and CO2 ices at a range of temperatures.
As discussed in Section 1, there is a severe lack of systematic investigations in the field of astrochemistry, with many studies simply reporting the results of a very specific set of reaction conditions (particular temperatures, projectile natures and energy states, ice morphology and composition, etc.). Although such studies surely contribute to our knowledge of chemical reactivity in extra-terrestrial environments, it is difficult to gauge the applicability of their results to different astrophysical settings.
For instance, the results of a study conducted at 20 K may be relevant to ices in the interstellar medium, but not to icy planetary and lunar surfaces where temperatures are higher. Another example is ice thickness: experiments which look into the astrochemical processing of thin ices may not be applicable to icy worlds, where ices are kilometres thick. Radiolysis of a thin ice would likely allow the projectile to travel through the ice, whereas implantation would certainly occur on an icy world. For the reasons outlined above, there is a genuine need for systematic investigations of sulfur chemistry in space and planetary environments. Furthermore, such studies will be of enormous benefit in assessing the data collected by upcoming space missions, including JUICE and JWST.
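The thin-film versus thick-ice distinction can be made concrete by comparing a projectile's stopping range with the ice thickness. The constant stopping power used below is an assumed round value chosen only for illustration (a real estimate would come from a stopping-power code such as SRIM):

```python
def stops_in_ice(energy_ev, stopping_ev_per_nm, thickness_nm):
    """Crude implantation check: compare a constant-stopping range with the
    ice thickness. Returns (implanted?, range in nm)."""
    range_nm = energy_ev / stopping_ev_per_nm
    return range_nm <= thickness_nm, range_nm

# 200 keV sulfur ion with an assumed mean stopping of ~100 eV/nm in water ice,
# fired at a 500 nm laboratory film.
implanted, r = stops_in_ice(200e3, 100.0, thickness_nm=500.0)
print(f"range ~ {r:.0f} nm, implanted in a 500 nm film: {implanted}")
# Even a micrometre-scale range is vanishingly small next to the
# kilometres-thick ice shells of icy moons, where implantation is certain.
```

Under these assumed numbers the ion would traverse a thin laboratory film entirely, illustrating why thin-ice radiolysis results cannot be transferred directly to icy worlds.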
Another potential avenue of investigation involves aligning more closely the fields of astrochemistry and cosmochemistry. Isotope studies are important in the space sciences and underpin many meteoritic and Solar System studies (McSween and Huss 2010). An example of a potential astrochemical study with cosmochemical implications involves the vacuum-UV photo-dissociation of H2S to yield elemental sulfur. In the gas phase, this is known to be accompanied by sulfur isotope fractionation which is dependent upon the irradiation wavelength used (Chakraborty et al. 2013). However, to the best of the authors' knowledge, no analogous studies have been conducted in the solid phase. Thus, a future investigation may well look into any sulfur isotope fractionations occurring during the photolysis of H2S ice at interstellar temperatures. The radiolysis of mineral assemblages could also be another interesting avenue of investigation.
Finally, we note that we have not discussed the sputtering of material that often occurs concomitantly during the radiolysis and photolysis of astrophysical ice analogues (Muntean et al. 2015). Nevertheless, such a phenomenon is important, as it occurs in most laboratory experiments and is thought to contribute to the transient atmospheres of the Galilean moons (Shematovich et al. 2005; Plainaki et al. 2012), and so an effort at furthering our understanding of the topic should be made. Given that sputtering is a well-established subject within the field of astrophysical chemistry and molecular astrophysics, we direct the interested reader to the reviews by Baragiola et al. (2003) and Famá et al. (2008).
Conclusions
This review has highlighted the major and recent findings of laboratory and (to a lesser extent) computational studies in condensed phase sulfur astrochemistry. Potential future directions have also been discussed. Although perhaps not a complete survey of the field, several important points have been communicated, in particular:
• Neutral-neutral reactions relevant to extra-terrestrial sulfur chemistry have yet to be rigorously explored and are sorely lacking in experimental data compared to analogous gas phase studies.
• The cryogenic thermal chemistry of mixed ices containing sulfur-bearing molecules is potentially rich and has not yet been fully explored.
• Photolysis of pure SO2 ice yields SO3, while that of pure H2S yields products such as HS, HS2, H2S2, and S2.
• Photolysis of mixed SO2:H2O ice results in the formation of H2SO4 and related sulfur oxyanions.
• When mixed with CO or CO2, photolysis of H2S yields OCS. CS2 is also produced if the carbon-bearing molecule is CO, while SO2 is a by-product when it is CO2.
• Proton-irradiation of SO2 has not been shown to form S-H bonds, rather SO3 is formed in both molecular and polymeric form.
• Proton-irradiation of mixed SO2:H2O ices results in the formation of H2SO4 acid hydrates and related molecules and fragments.
• Proton-irradiation of ices containing both sulfur and carbon source molecules allows for the formation of OCS and CS2.
• SO2, HSO4 -, and SO4 2- are the major radiolysis products of the proton-irradiation of H2SO4 and its hydrates.
• The main product of sulfur ion radiolysis of H2O ice is H2SO4 and its hydrates. The charge state of the projectile ion does not influence the resultant chemistry, but the projectile energy is correlated with the yield of the acid product.
• There is evidence for an exogenic (Iogenic) sulfur source for H2SO4 on the surface of Europa and possibly also the other icy Galilean satellites.
• There is still a need for further investigation into the sulfur source for SO2 on the surface of Europa (i.e. whether it is endogenic or exogenic).
• It is important that future studies make use of a systematic experimental design in which multiple factors (e.g. different temperatures, ice morphologies, projectile ion charge states and energies, etc.) are considered, as this would allow experimental results to be applied to several astrophysical environments.
It is also important to once again draw attention to the fact that sulfur astrochemistry is by no means limited to the condensed phase, and that gas phase chemistry is likely to be a major contributor in settings within and beyond the Solar System. Furthermore, the space chemistry of sulfur may also be investigated mineralogically and isotopically through cosmochemical experiments, and such experiments may also have implications for geochemical processes on Earth. Table 2 summarises in brief the laboratory solid phase sulfur astrochemistry investigations reviewed in this paper. Although full details are found in the text, this summary may be used as a quick reference guide.

Ip et al. (1997; 1998). Copyrighted AGU. Reproduced with permission

Fig. 3 IR absorbance spectra for methanethiol: peaks at lower wavenumbers (~2550 cm-1) correspond to S-H modes, while peaks at higher wavenumbers (~2850-3000 cm-1) correspond to C-H stretching modes. Individual spectra are shifted vertically for clarity. Data originally from Hudson and Gerakines (2018). Copyrighted AAS. Reproduced with permission

Fig. 4 Conformational isomerism exhibited using 1-propanethiol as an example. Note that, when looking down the C1-C2 single bond, the thiol and methyl functional groups are on opposite sides (the anti-periplanar rotamer) in structure A. In structure B, however, they face each other directly (the syn-periplanar rotamer). The other rotamers in this system, the gauche and eclipsed rotamers, are not shown for clarity
Fig. 5 Nucleophilic attack occurs during which the lone electron pair on the amine nitrogen bonds with the electron-deficient carbonyl carbon. This causes electron re-distribution from the π-electrons and the development of a formal negative charge on the oxygen atom. Proton abstraction by a second methylamine molecule furnishes the final ionic product

Fig. 6 The cryogenic solid-phase reaction of OCS with Cl2, Br2, or BrI yields syn-halogenocarbonylsulfenyl halides (Romano et al. 2001; Tobón et al. 2006), while the reaction of CS2 with Cl2, Br2, or ClBr under similar conditions yields both syn- and anti-halogenothiocarbonylsulfenyl halides (Tobón et al. 2007). Note that, in the former reaction, iodine atoms bond directly to the carbonyl group only

Fig. 8 The solid phase photolysis of diazo(3-thienyl)methane results in the formation of molecules with functional groups which may be of astrochemical interest (Pharr et al. 2012)

Fig. 9 A qualitative representation of the Europan radiolytic sulfur cycle, wherein arrows indicate radiolysis pathways. The full cycle is completed in ~4000 years. Further details on process rates and species lifetimes may be found in Carlson et al. (2002)
Separation of Molybdenum Isotopes at Supercritical Fluid Extraction with Carbon Dioxide in a Vertical Gradient Field of Temperatures
Separation of molybdenum isotope complexes by supercritical fluid extraction (SFE) with carbon dioxide was studied experimentally. The extraction of molybdenum isotope complexes was carried out in the updated extraction chamber (reactor) of the SFE-U installation, which provided an initial pressure of P ≤ 20 MPa at constant temperatures of the upper (T1 = 35˚C) and bottom (T2 = 45˚C) flanges. The device through which the eluent was discharged comprised a set of four thin tubes of different lengths located inside the reactor. The axes of the tubes and the reactor are parallel, and the tubes are equally spaced circumferentially inside the reactor. The extract was removed from each tube through channels isolated from each other and located in the bottom flange, each with a cylindrical expansion in which several layers of filter paper were placed. After passing through the filters, the extract entered a restrictor designed to remove the eluent from the reactor. The initial pressure of carbon dioxide and the holding time of the extract were specified in the experiments. The level of eluent sampling was set by the lengths of the tubes relative to the reactor height. A method of producing the molybdenum complexes is described. It was shown experimentally that, at an initial pressure of 20 MPa and a given holding time, the Mo isotope content of the extracts removed through the filters deviated from the natural abundance, depending on the sampling height within the reactor. The ranges of deviation of the molybdenum isotope content in the extracts from the natural one were determined.
Introduction
In view of various practical applications, there is a need for the separation and extraction of isotopes of chemical elements, and a large body of scientific and technical literature is devoted to this problem. First of all, we should mention the separation of natural uranium isotopes, which is of great practical interest for nuclear fuel production [1]-[7].
Recently, alongside the problem of separating uranium isotopes, there has arisen the problem of extracting isotopes of chemical elements for use in nuclear medicine, which deals with the application of radionuclide pharmaceuticals in the diagnosis and treatment of various diseases [8].
To assess the scale of the global use of radioisotopes, consider the United States [9], where about 130 in vivo and about 60 in vitro radiodiagnostic methods are used, several times more than the domestic capabilities. The number of radionuclide studies per 1000 people per year is 7 in Russia, 19 in Austria, 25 in Japan, and 40 in the USA. In Ukraine, the number of studies per 1000 people per year is up to 20 [10], which is only the lower border of the European level (20-50).
Thus, with the number of radionuclide studies growing by 7% - 14% per year [11], there is a need to increase the production of the radioisotope 99mTc obtained from 99Mo [12] [13].
The unstable isotope 99Mo occupies one of the first places in the list of radioisotopes, and its use in medical practice is well studied [12]. The isotope 99mTc, produced from 99Mo by β decay, is included in various pharmaceuticals used in medical diagnostics to visualize the internal organs: the thyroid and salivary glands; the heart and large vessels; the skeleton; brain tumors; the genitourinary organs, etc.
The start of production and the organization of the application of the 99mTc isotope in Ukraine are relevant objectives due to the large number of cancers compared to world standards [14]. The isotope 99mTc is therefore an extremely necessary means for detecting cancer at an early stage.
Along with traditional methods for producing the isotope 99mTc in Ukraine [13], there is an alternative method based on new physical principles. It rests on the recently developed technology for producing molybdenum complexes and its isotope complexes by supercritical fluid extraction with carbon dioxide (SFE-CO2) [15] [16]. In that technology, molybdenum was extracted with tributyl phosphate from nitric acid solutions [15] [16], which gives a rather low extraction efficiency because of the low solubility of molybdenum in dilute nitric acid, as well as because molybdenum is present in nitric acid solutions in the form of molybdenum acids of various compositions.
In [17], the redistribution of the isotopic composition of natural uranium (235U) in supercritical carbon dioxide (SC-CO2) was investigated experimentally. A diagram and description of the experimental reactor are given, together with the procedure for preparing samples of granite containing natural uranium and the order of extraction. From the analysis of the gamma spectra of the extracts, it was concluded that the 235U isotope is spatially redistributed in SC-CO2: for certain parameters of SC-CO2, the concentration of 235U is distributed unevenly over the reactor height, with a maximum near the heated lower flange that decreases toward the colder upper flange. It was concluded that the separation factor of the 235U isotope in SC-CO2 could be about 1.2.
In [18], models and analytical solutions for the extraction of 235U and 238U isotope complexes in the presence of water microdroplets, in a homogeneous temperature field or in a layer of SC-CO2 heated from below, are suggested. For the homogeneous temperature field, the effectiveness of SFE with carbon dioxide of uranium isotope complexes reaches a maximum at the limit of water solubility in SC-CO2. Theoretical calculations have shown that the maximum concentration of 235U isotope complexes in the SC-CO2 layer heated from below reaches 1.2% and is observed near the heated bottom of the layer. The procedure of SFE with carbon dioxide in which the concentration of the 235U isotope complexes exceeds the natural value is described.
In [19], the extraction of molybdenum acetylacetonate and its isotopes by the SFE-CO2 method was studied. In this paper, we continue that study using a new extract sampling device that allows us to obtain test samples differentially according to the reactor height.
The Idea, Materials and Updated Reactor for Extraction of Molybdenum Isotopes
The idea of the experiments is to establish the fact of an uneven distribution of molybdenum isotopes in a vertical gradient temperature field under SFE-CO2, which is set up in the reactor after holding the fluid for 0.5 hours.
Materials. Molybdic acid (LRW grade) was used as the original source of molybdenum. The extraction solution was obtained in a Soxhlet apparatus using 100 ml of acetylacetone (CP grade) and 10 g of molybdic acid powder. The powder was poured into a filter paper bag and placed in a glass sleeve located in the center of the apparatus. After 20 cycles of Soxhlet operation, the upper vessel was completely filled with distilled acetylacetone, and the extract remaining in the lower flask was taken for further experiments. The concentration of the molybdenum solution in acetylacetone obtained for SFE-CO2 was about 25 mg/ml.
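As a rough consistency check on the reported ~25 mg/ml, the sketch below compares it with the upper bound set by the 10 g of powder, assuming the powder is molybdic acid, H2MoO4 (the exact compound and its purity are not stated in the paper):

```python
# Mass-balance sketch for the Soxhlet extraction step.
# Assumption: "molybdic acid" is H2MoO4 (not stated explicitly in the paper).
M_H, M_Mo, M_O = 1.008, 95.95, 16.00          # atomic masses, g/mol
mm_h2moo4 = 2 * M_H + M_Mo + 4 * M_O          # ~161.97 g/mol
mo_fraction = M_Mo / mm_h2moo4                # mass fraction of Mo, ~0.59

powder_g = 10.0        # g of molybdic acid loaded into the Soxhlet thimble
solvent_ml = 100.0     # ml of acetylacetone

# Upper bound if every Mo atom ended up in the extract:
max_conc = powder_g * 1000.0 * mo_fraction / solvent_ml   # mg Mo per ml
reported_conc = 25.0                                       # mg/ml, from the paper
yield_fraction = reported_conc / max_conc

print(f"theoretical maximum: {max_conc:.1f} mg/ml")
print(f"implied transfer of Mo into solution: {yield_fraction:.0%}")
```

Under this assumption, the reported concentration corresponds to roughly 40% of the available molybdenum ending up in the extract, which is plausible for an incomplete Soxhlet transfer.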
Reactor for Extraction of Molybdenum Isotopes
The experiments on the SFE-CO2 of complexes of molybdenum isotopes were carried out on a laboratory facility for supercritical fluid extraction (SFE-U), in which, unlike [17], a different type of device was used for extract removal. The general schematic arrangement of the reactor elements is shown in Figure 1. The general view of the bottom flange, with the device for removing the extract from the reactor screwed to it, is shown in Figure 2.
The pressure of supercritical carbon dioxide in the installation could be raised to 20.0 MPa, and the temperatures of the bottom and upper flanges of the reactor could be maintained at given levels within the range 30˚C - 50˚C using adjustable heaters. In the experiments, the temperature of the bottom flange was set above that of the upper one.
The device for removing the extract from the reactor (14) comprises a set of four thin tubes of different lengths (6 in Figure 2) located inside the reactor. The tubes are made of stainless steel with a diameter of 2 mm and a wall thickness of 0.5 mm. The axes of the tubes are parallel to the axis of the reactor and evenly spaced around a circle of radius 6.4 mm (see cross-section B in Figure 1).
The extract was removed from each tube through cylindrical extensions 4 (Figure 2), isolated from each other and located at the bottom of the extract-removal device 2 (Figure 2). Several layers of filter paper were placed in each of these extensions. After passing through the filters, the extract entered the restrictor hole 5 through the radial channels 7 and then a receiving tank at atmospheric pressure.
In the experiments, the initial pressure of carbon dioxide was 20 MPa and the extract holding time was of the order of 0.5 hours. The level of eluent sampling over the reactor height was set by the lengths of the tubes a, b, c, d; the reactor height was L = 21.8 cm.
Procedure for Extraction of Molybdenum Complexes by the SFE-CO2 Method
The extraction at the SFE-U installation with the reactor shown schematically in Figure 1 and Figure 2 was carried out as follows:
- A filter paper with the applied and dried original extract in the amount of 0.2 ml or more was placed in the reactor;
- The upper flange of the reactor was held at T1 = 35˚C, the bottom flange at T2 = 45˚C;
- The pressure of carbon dioxide in the reactor was set at 20.0 MPa;
- The extract was held in the reactor for 0.5 hours;
- The extract was discharged over about 1 min, from the initial pressure down to atmospheric.
The extract was removed through the extract-removal device 14, the inlet holes of whose tubes were located at specified distances from the bottom flange of the reactor.
In the first series of experiments, the lengths of the tubes were a-30 mm, b-60 mm, c-90 mm, d-120 mm.
In the second series of experiments: a-60 mm, b-90 mm, c-120 mm, and d-150 mm.
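Assuming a linear temperature profile between the flanges (the paper fixes only the two flange temperatures, so the profile is an assumption), the sampling levels set by the two tube series map to heights and local temperatures as follows:

```python
# Sampling heights for both tube series, with the local fluid temperature
# estimated from an assumed linear profile between the flanges
# (T1 = 35 C at the top, T2 = 45 C at the bottom, reactor height L = 21.8 cm).
T_top, T_bottom, L = 35.0, 45.0, 21.8
grad = (T_bottom - T_top) / L          # ~0.46 C/cm, the value quoted in the conclusions

series = {
    "first":  [30, 60, 90, 120],       # tube lengths a-d, mm
    "second": [60, 90, 120, 150],
}
for name, lengths_mm in series.items():
    for tube, mm in zip("abcd", lengths_mm):
        z = mm / 10.0                  # sampling height above the bottom flange, cm
        T_z = T_bottom - grad * z      # estimated local temperature, C
        print(f"{name} series, tube {tube}: z = {z:4.1f} cm, T ~ {T_z:.1f} C")
```

The computed gradient, 10˚C over 21.8 cm ≈ 0.46˚C/cm, matches the value stated in the conclusions.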
Experimental Results of SFE-CO2 of Molybdenum Isotopes
To assess the accuracy of measuring the content of molybdenum isotopes in the experimental samples, test studies of the molybdenum isotope content in the initial solution of molybdenum in acetylacetone were carried out. Table 1 shows the data on the natural content of Mo isotopes: column 2, reference data; column 3, data obtained by the ICP-MS ELEMENT 2 mass spectrometer for the initial solution of molybdenum in acetylacetone.
A comparison of the data in Table 1 indicates that the reference data and the mass spectrometer data agree on the content of molybdenum isotopes in the initial solution.
The samples obtained as a result of SFE-CO2 extraction were analyzed for molybdenum isotope content using a high-resolution mass spectrometer with inductively coupled plasma ionization (ICP-MS ELEMENT 2) [20].
Study of the Content of Molybdenum Isotopes
As a result of SFE-CO2 of the initial solution, four sets of filter paper in the form of five circles 3 mm in diameter were obtained. Each set of filter paper contained trace amounts of extract. The content of molybdenum isotopes in each set corresponded to that established during the 0.5-hour extract holding at the reactor heights set by the lengths of the extract-sampling tubes a, b, c, d (see 6, Figure 2).
Four SFE-CO2 extractions of the initial solution were carried out in the first series of experiments. The lengths of the tubes of the extract-removal device were: a-30 mm, b-60 mm, c-90 mm, and d-120 mm.
Two SFE-CO2 extractions of the initial solution were carried out in the second series of experiments; one of them (Experiment 5) was discarded due to a large measurement error. In this series, the lengths of the tubes of the extract-removal device were: a-60 mm, b-90 mm, c-120 mm, and d-150 mm.
To analyze the content of molybdenum isotopes with the ICP-MS ELEMENT 2 mass spectrometer, the obtained filter paper samples were dissolved in 2% HNO3 and mixed for 15 hours on a vibrating mixer. Sixteen measurements were carried out for each sample, and the results obtained after statistical processing are presented in Tables 2-6. In the tables: numbers with strokes correspond to the number of the experiment; a single stroke (') marks the area of the filter paper where the molybdenum solution was applied; a double stroke ('') marks the area without applied solution. The data on the content of molybdenum isotopes in experiments 1, 2, 3, 4 are presented in Tables 3-6. RSD, % denotes the relative standard deviation of the measured values from their average.
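The RSD statistic used in Tables 2-6 can be computed as below; the sixteen readings are illustrative placeholders, not the paper's measured data:

```python
# RSD as used in Tables 3-6: relative standard deviation of the 16 repeated
# mass-spectrometer readings for one sample. The readings below are
# illustrative only, not the paper's data.
from statistics import mean, stdev

readings = [14.78, 14.81, 14.75, 14.84, 14.79, 14.77, 14.82, 14.80,
            14.76, 14.83, 14.79, 14.81, 14.78, 14.80, 14.77, 14.82]  # % abundance

avg = mean(readings)
rsd_percent = 100.0 * stdev(readings) / avg   # sample standard deviation / mean
print(f"mean = {avg:.3f} %, RSD = {rsd_percent:.2f} %")
```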
To determine the content of molybdenum isotopes near the bottom flange of the reactor in experiment 4, the molybdenum content on the filter paper placed in the reactor with the initial extract was analyzed after extraction. We analyzed the content of molybdenum isotopes both in the area where the molybdenum solution had not been applied to the filter paper (column 4' of Table 7) and in the area where the initial molybdenum solution had been applied (column 4'' of Table 7).
Diagrams of the deviation of the average molybdenum isotope content from the natural one as a function of the extract sampling height are presented in Figures 3(a)-(g). The measured points are connected using the cubic spline interpolation built into MathCAD [21].
When plotting the graphs, we averaged the values of the isotope distribution at the levels a-30 mm, b-60 mm, c-90 mm, d-120 mm, and added the data on the isotope distribution near the bottom flange of the reactor and at heights of 12 cm (experiment 4) and 15 cm (experiment 6) above it.
In the figures, the confidence interval of the isotope content measurement is indicated by two horizontal segments located above and below each measured point.
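The spline connection of the measured points can be sketched with a minimal natural cubic spline, standing in for the MathCAD cspline routine. The (height, deviation) pairs below are illustrative, shaped like the isotope-92 trend; they are not the paper's measured data:

```python
# Minimal natural cubic spline through measured points.

def natural_cubic_spline(xs, ys):
    """Return a callable interpolating (xs, ys) with a natural cubic spline."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for the second derivatives M_i, with natural
    # boundary conditions M_0 = M_n = 0.
    a = [0.0] * (n + 1)          # sub-diagonal
    b = [1.0] * (n + 1)          # main diagonal
    c = [0.0] * (n + 1)          # super-diagonal
    d = [0.0] * (n + 1)          # right-hand side
    for i in range(1, n):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n + 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * (n + 1)
    M[n] = d[n] / b[n]
    for i in range(n - 1, -1, -1):
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def spline(x):
        # Locate the interval [xs[i], xs[i+1]] containing x (clamped at the ends).
        i = 0
        while i < n - 1 and x > xs[i + 1]:
            i += 1
        hi = h[i]
        return (M[i] * (xs[i + 1] - x) ** 3 / (6 * hi)
                + M[i + 1] * (x - xs[i]) ** 3 / (6 * hi)
                + (ys[i] / hi - M[i] * hi / 6) * (xs[i + 1] - x)
                + (ys[i + 1] / hi - M[i + 1] * hi / 6) * (x - xs[i]))

    return spline

heights = [0.0, 3.0, 6.0, 9.0, 12.0, 15.0]             # sampling height z, cm
deviation = [-1.0, -0.95, -0.88, -0.80, -0.72, -0.66]  # illustrative deviation, %
f = natural_cubic_spline(heights, deviation)
print(f(7.5))   # smooth value between the 6 cm and 9 cm points
```

A natural spline reproduces the knots exactly and reduces to straight lines on linear data, which makes it a reasonable stand-in for the published curves.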
Verification of the spectral method for determining the content of Mo isotopes with the ICP-MS ELEMENT 2 mass spectrometer on the initial solution of molybdenum in acetylacetone showed its applicability.
The amount of molybdenum in the extract was sufficient to evaluate its isotope content.
From the analysis of Figure 3, it follows that, during the holding of the extract in the reactor for 0.5 hours at a temperature gradient of (T2 − T1)/L = (45 − 35)˚C/21.8 cm ≈ 0.46˚C/cm, the content of molybdenum isotopes is redistributed over the reactor height. This is confirmed by the data in the graphs presented in Figure 3.
The content of isotope 92 (Figure 3(a)) is depleted in comparison with the natural content over the reactor height 0 cm ≤ z ≤ 15 cm. With increasing sampling height, the deviation of the content of isotope 92 from the natural one decreases from 1% to −0.66%.
In Figure 3(b), the content of isotope 94 is enriched relative to the natural one, except at heights of 0 cm and 6 cm, where its content corresponds to the natural one. The enrichment increases with the sampling height from +0.17% at a height of 3 cm to +0.27% at a height of 15 cm. In Figure 3(c), the content of isotope 95 corresponds to the natural one, except at the 3 cm height, where its content is 0.24% lower than the natural one.
In Figure 3(d), the content of isotope 96 at the bottom of the reactor is enriched (+0.1%); at other heights it corresponds to the natural one.
In Figure 3(e), the content of isotope 97 at the bottom of the reactor is enriched (+0.21%); at other heights it corresponds to the natural one.
In Figure 3(f), the content of isotope 98 at the bottom of the reactor is enriched (+0.5%). At a height of 3 cm its enrichment is +0.29% relative to the natural one; at greater heights the content of isotope 98 corresponds to the natural one, and at a height of 15 cm it increases again to +0.66%.
In Figure 3(g), the content of isotope 100 over the reactor heights 0 cm ≤ z ≤ 15 cm is higher than the natural one by an average of +0.2%. It should be noted that such a distribution will not be preserved at a longer holding time; therefore, determining the holding time is important for achieving maximum enrichment when extracting a target isotope. Red and blue markers correspond, respectively, to increased or decreased content of the molybdenum isotope relative to the natural content, and a green marker indicates no deviation from the natural content. Thus, the conducted experimental studies of supercritical fluid extraction of the molybdenum complex with carbon dioxide in a vertical gradient temperature field showed a deviation of its isotopic content from the natural one depending on the sampling height for a given extract holding time.
Conclusions
The extraction of isotopes of molybdenum complexes by the SFE-CO2 method was studied in this paper. A series of experiments was carried out on the extraction of molybdenum isotope complexes in the updated SFE-U installation at a constant initial pressure of P = 20 MPa and constant temperatures of the bottom (T2 = 45˚C) and upper (T1 = 35˚C) flanges of the extraction chamber. In each experiment, the content of molybdenum isotopes varied depending on the height of eluent sampling from the extraction chamber.
Based on the experiments, the following conclusions can be drawn:
- A deviation of the content of Mo isotopes from the natural one was detected in the extract at a temperature gradient between the bottom and upper flanges of the reactor of 0.46˚C/cm, with the extract held for 0.5 hours and the pressure in the reactor relieved from 20 MPa to atmospheric within 1 min;
- The content of isotope 92 is depleted in comparison with the natural content over the reactor height 0 cm ≤ z ≤ 15 cm; with increasing sampling height, the deviation of the content of isotope 92 from the natural one decreases from 1% to −0.66%;
- The content of isotope 94 is enriched relative to the natural one, except at heights of 0 cm and 6 cm, where its content corresponds to the natural one; the enrichment increases with the sampling height from +0.17% at a height of 3 cm to +0.27% at a height of 15 cm.
The content of isotope 95 corresponds to the natural one, except at the 3 cm height of the reactor, where its content is 0.24% lower than the natural one.
The content of isotope 96 at the bottom of the reactor is +0.1% higher than the natural one; at other heights it corresponds to the natural one.
The content of isotope 97 at the bottom of the reactor is higher than the natural one by +0.21%; at other heights it corresponds to the natural one.
The content of isotope 98 at the bottom of the reactor is higher than the natural one by +0.5%. At a height of 3 cm its enrichment is +0.29% relative to the natural one; at greater sampling heights the content of isotope 98 corresponds to the natural one, and at a height of 15 cm it increases to +0.66%.
At reactor heights 0 cm ≤ z ≤ 15 cm, the content of isotope 100 is higher than the natural one by an average of +0.2%, oscillating around this level; it is +0.24% above the natural content at the bottom of the reactor and +0.27% at a height of 15 cm.
Analysis of the content of molybdenum isotopes over the entire height of the reactor has shown that the lightest isotope, 92, is in an enriched state near the upper flange (z = 21.8 cm), while the heaviest isotope, 100, and presumably 98, are enriched over the region 0 cm ≤ z ≤ 15 cm.
The isotope 94 in the enriched state is concentrated in the range 3 cm ≤ z ≤ 15 cm.
Isotope 95, like isotope 92, is located near the upper flange at z = 21.8 cm.
Isotopes 96 and 97 are enriched near the bottom flange of the reactor at z = 0 cm.
The Enhancement of the Thermal Conductivity of Epoxy Resin Reinforced by Bromo-Oxybismuth
With the gradual miniaturization of electronic devices, increasing demands are placed on the thermal conductivity of electronic components. Epoxy (EP) resins are easy to process, exhibit excellent electrical insulation properties, and are light in weight and low in cost, making them a preferred material for thermal management applications. In order to endow EPs with better dielectric and thermal conductivity properties, bismuth oxybromide (BiOBr) prepared using the hydrothermal method was used as a filler to obtain BiOBr/EP composites, and the effect of BiOBr addition on the properties of the BiOBr/EP composites was studied. The results showed that the addition of a small amount of BiOBr could greatly optimize the dielectric properties and thermal conductivity of EP resin: when the content of BiOBr was 0.75 wt% and 1.00 wt%, the dielectric properties and thermal conductivity of the composite reached their optima, respectively. The high dielectric constant and excellent thermal conductivity of the BiOBr/EP composites are mainly due to the well-defined layered structure of BiOBr, which provides good interfacial polarization and thermal conduction.
Introduction
With the rapid development of electronic technology, all kinds of electronic products are developing towards high integration, high frequency, and diversified functions; there is a growing demand for microelectronic capacitors, which not only need to be thin, but also face very strict requirements for low dielectric loss and high thermal conductivity [1,2]. Most traditional capacitors are made of inorganic materials. Although capacitors made of inorganic materials have the advantages of a very high dielectric constant and good thermal stability, their preparation process is complex, the materials are too brittle, and the dielectric loss is too large, which greatly limits their application in the field of high-tech electronics [3]. In addition, microelectronic capacitors put forward higher requirements for the miniaturization and integration of dielectric materials, and the rapid decline of their thickness even approaches the physical limit of the materials. Therefore, traditional inorganic materials cannot meet the requirements of capacitors in the field of high-tech electronics [4], and it is necessary to find new materials that offer good thermal conductivity while meeting a high dielectric constant and low dielectric loss.
Polymers have the advantages of easy processing, excellent electrical insulation performance, light weight, and low cost; thus, it is natural to use polymers as the matrix of dielectric materials [5]. Among them, epoxy resin (EP) has been widely used in the preparation of various electronic and electrical equipment because of its excellent electrical characteristics, good thermal stability, and low production cost [6,7]. However, the low dielectric properties of polymer materials limit their application in microelectronic capacitors [8], so it is necessary to add nano-fillers with a high dielectric constant or easy polarization into EP [9], so as to increase the dielectric constant without increasing the dielectric loss. In actual use, the heat resistance of epoxy resin is also very important for its use in electronic components. The heat resistance of epoxy resin is affected by its molecular skeleton structure, the curing agent used, the curing process, etc. [10]. At present, there are many methods to improve the heat resistance of epoxy resin at home and abroad, such as developing epoxy resins with new heat-resistant skeleton structures, synthesizing curing agents with new structures, and blending or copolymerizing with inorganic nanomaterials [11][12][13]. In addition, the high integration of electronic products causes heat inside the material to dissipate poorly, and heat accumulation in the circuit board will reduce its service life to a certain extent. Therefore, while ensuring the excellent dielectric properties of EP composites, it is also necessary to consider the optimization of their thermal conductivity [14]. There are many methods to improve the overall performance of EP, and the simplest option is to directly add high-performance fillers to the polymer matrix [15]. A large number of studies show that, for composites meeting the above requirements, it is necessary to consider the relevant properties of the fillers, among which the structure, morphology, and dielectric properties of the filler are very important [16]. Inorganic materials with excellent thermal conductivity, such as boron nitride, aluminium nitride, and alumina, have already been used in EP to enhance its thermal conductivity. Nano-ceramic fillers have a high thermal and electrical conductivity and a low cost, and can effectively increase the thermal conductivity of epoxy resin [17]. However, nano-ceramic fillers have limited compatibility with epoxy resin, agglomerate easily in the epoxy matrix and affect its mechanical properties, and their different crystal structures greatly affect their thermal conductivity [18]. Carbon materials are light in weight, have high strength and good corrosion resistance, and are widely used in the aerospace field [19]. Due to the large number of carbon isomers, different crystal structures will affect the thermal conductivity of carbon materials. Metals and their oxides have good electronic thermal conductivity, so they are often used as epoxy resin modifiers; however, when the amount added is too large, it affects the viscosity of EP to a certain extent. Therefore, it is necessary to find new inorganic fillers to modify EP [20,21]. Compared with spherical fillers, linear and flake fillers with a higher specific surface area can better improve the thermal conductivity of composites [22]. However, problems such as fragility, easy aggregation, and poor fluidity limit the application of this kind of filler [23]; therefore, how to apply linear and flaky thermally conductive fillers to polymer matrix composites has become one of the most pressing problems.
BiOBr has a PbFCl-type (matlockite) structure in which internal atoms are connected by strong covalent bonds and the atomic layers are held together by weak van der Waals forces [24], forming a layered structure that cleaves along the (001) direction [25]. This open crystal structure creates enough space to facilitate the polarization of atoms and related orbitals [26]. Because the dielectric property of a material refers to its polarization behavior under an applied electric field, the higher the polarization strength, the greater the corresponding dielectric constant [27]. Therefore, the high polarization effect of BiOBr is beneficial for improving the dielectric constant of the resin composite. At the same time, its well-defined lamellar structure can greatly reduce the percolation threshold of composite materials, ensuring that the thermal conductivity of the composites can be greatly improved at a low filler loading [28]. In addition, BiOBr has very stable chemical properties, enabling it to maintain its original electronic structure and physical properties during the preparation process [29,30]. Thus, it can be predicted that adding BiOBr to EP resin can effectively improve the thermal conductivity of the epoxy resin while improving its dielectric constant. Moreover, BiOBr with a closed structure can not only maintain the original layered structure and excellent properties of BiOBr, but also endow it with more stable physical properties, ensuring that BiOBr maintains its original morphology during the preparation of the EP composites. At the same time, the excellent intrinsic properties of BiOBr enable even a small addition of BiOBr to the EP matrix to greatly optimize the dielectric properties and thermal conductivity of the composites, without causing aggregation in the composites or affecting the properties of the EP itself. Therefore, this study chose bismuth oxybromide as a filler to optimize the thermal conductivity of EP.
In summary, this study first prepared BiOBr with a closed structure via the hydrothermal method and added it into EP resin as an inorganic modification component. BiOBr/EP composites with different contents of BiOBr were then obtained by curing. The effects of the BiOBr addition amount on the dielectric constant, dielectric loss, and thermal conductivity of EP resin were studied. The results showed that, when the content of BiOBr was 0.75 wt% and 1.00 wt%, the dielectric constant and thermal conductivity of the BiOBr/EP composites reached their maximum values, respectively, while the dielectric loss remained within an acceptable range. Meanwhile, the heat resistance of the BiOBr/EP composite was optimized to some extent by adding BiOBr. This is due to the good polarizability of BiOBr under an applied electric field and to its high thermal conductivity and layered structure. This study lays a theoretical foundation for the application of BiOBr in the preparation of EP composites with a high dielectric constant and high thermal conductivity.
Preparation of the BiOBr
In total, 0.9675 g of Bi(NO3)3·5H2O was dissolved in 100 mL of glycol and stirred with a magnetic stirrer for 30 min to obtain a 0.02 mol/L Bi(NO3)3 solution. A total of 0.0714 g of KBr was dissolved in 30 mL of the Bi(NO3)3 solution and stirred at room temperature for 20 min; the mixture was then poured into a Teflon-lined reactor, sealed, and placed in an oven at 160 °C for 12 h. After natural cooling to room temperature, the obtained precipitate was transferred into a 50 mL centrifuge tube and centrifuged with deionized water and anhydrous ethanol 3 times each, at 8000 r/min for 10 min per run. The centrifuged sample was dried in a vacuum drying oven at 60 °C for 12 h. The powder obtained was BiOBr.
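The stated masses can be checked against the stated 0.02 mol/L concentration and the expected 1:1 Bi:Br stoichiometry of BiOBr; this sketch only verifies the arithmetic of the recipe:

```python
# Stoichiometry check for the hydrothermal BiOBr synthesis (molar masses in g/mol).
mm_bi_nitrate_5h2o = 485.07   # Bi(NO3)3 . 5H2O
mm_kbr = 119.00               # KBr

n_bi_total = 0.9675 / mm_bi_nitrate_5h2o   # mol of Bi dissolved in 100 mL glycol
conc_bi = n_bi_total / 0.100               # mol/L, should be ~0.02
n_bi_used = conc_bi * 0.030                # mol of Bi in the 30 mL aliquot
n_br = 0.0714 / mm_kbr                     # mol of Br supplied by KBr

print(f"Bi(NO3)3 stock: {conc_bi:.4f} mol/L")
print(f"Bi : Br in the reactor = {n_bi_used / n_br:.2f} : 1")
```

The masses given in the paper indeed correspond to a 0.02 mol/L stock and an essentially equimolar Bi:Br charge.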
Preparation of the BiOBr/EP Composites
The BiOBr obtained above was used as a filler, added to the EP matrix in different amounts, to prepare the BiOBr/EP composites via a casting method. The specific preparation process was as follows: the EP was stirred in a glass beaker with isophorone diamine at 40 °C for 10 min to prepare the EP matrix, and the BiOBr (at mass ratios of BiOBr to epoxy resin of 0.00 wt%, 0.25 wt%, 0.50 wt%, 0.75 wt%, and 1.00 wt%, respectively) was added into the EP matrix. After 15 min of ultrasonic dispersion of these mixtures, a BiOBr/EP pre-polymer with well-dispersed BiOBr was obtained, which was then poured into a preheated mould. The mould was placed in a vacuum drying oven at 50 °C and degassed for about 30 min to ensure that there were no bubbles or defects inside the composites. Finally, the mould containing the BiOBr/EP pre-polymer was put into an air-blast oven, and the BiOBr/EP composites were obtained through a stepwise curing process: 80 °C/1 h + 100 °C/1 h + 120 °C/2 h. The preparation process is shown in Figure 1. The BiOBr/EP composites were about 52 ± 1 mm in diameter and about 4 ± 1 mm in thickness.
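Since the loadings are defined as mass ratios relative to the resin, the filler masses for a batch follow directly; the 100 g resin batch below is an illustrative choice, not a quantity from the paper:

```python
# Filler masses implied by the casting recipe: BiOBr loadings are specified
# as mass ratios relative to the epoxy resin. The 100 g batch size is an
# illustrative assumption.
resin_g = 100.0
loadings_wt = [0.00, 0.25, 0.50, 0.75, 1.00]        # wt% BiOBr relative to EP

masses = {wt: resin_g * wt / 100.0 for wt in loadings_wt}
for wt, g in masses.items():
    print(f"{wt:.2f} wt% -> weigh out {g:.2f} g BiOBr per {resin_g:.0f} g EP")
```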
Measurements
The X-ray Diffraction (XRD)
The crystal structure of BiOBr was researched using X-ray diffraction (XRD, Bruker D8, Karlsruhe, Germany) at room temperature.
Scanning Electron Microscopy (SEM)
The morphology of BiOBr and the fractured surface features of the BiOBr/EP composites were observed using a scanning electron microscope (SEM, HITACHI, Tokyo, Japan) at room temperature.
Dielectric Properties
The dielectric properties of the BiOBr/EP composites were measured with a dielectric constant and dielectric loss tester (ZJD-A type, China Aviation Times Company, Beijing, China) at room temperature. Five samples were tested for each BiOBr content, each sample was measured three times, and the average was taken.
Thermal Conductivity
The thermal conductivity coefficient of the BiOBr/EP composites was measured with a thermal conductivity tester (KDRX-II, Xiangtan Xiangyi Instrument Co., Ltd., Xiangtan, China) via the transient hot-wire method. Five samples were tested for each BiOBr content, and the average was taken.
Thermal Resistant Properties
The thermogravimetric analysis (TGA) of BiOBr/EP was carried out under a nitrogen atmosphere with a TGA Q50, at a heating rate of 20 °C min−1.
Morphology Structure of the BiOBr
The crystal structure of BiOBr was researched using X-ray diffraction (XRD), and the surface morphology of BiOBr was observed using scanning electron microscopy (SEM); the results are shown in Figures 2 and 3, respectively. It can be seen from Figure 2 that the crystallinity of the BiOBr obtained by the solvothermal treatment was strong. The characteristic peaks of BiOBr were located at 10.9, 25.1, 31.7, 32.2, 45.7, and 56.8°, corresponding to the (001), (101), (102), (110), (200), and (212) crystalline facets, in good agreement with the tetragonal phase of BiOBr (JCPDS 93-0393). Figure 3 shows the micrograph of the BiOBr microspheres with 3-5 µm diameter that were successfully obtained on a large scale after the solvothermal treatment at 160 °C for 12 h. The exterior surfaces of the microspheres are not clearly smooth, but contain an extensive growth of sheet-like structures with thickness around 100 nm.
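The reported 2θ positions can be converted to lattice d-spacings via Bragg's law, d = λ / (2 sin θ). Cu Kα radiation (λ = 1.5406 Å) is assumed here, since the paper does not state the X-ray wavelength:

```python
# d-spacings for the reported BiOBr reflections via Bragg's law.
# Assumption: Cu K-alpha radiation (lambda = 1.5406 A); not stated in the paper.
import math

wavelength = 1.5406  # Angstrom
peaks_2theta = {
    "(001)": 10.9, "(101)": 25.1, "(102)": 31.7,
    "(110)": 32.2, "(200)": 45.7, "(212)": 56.8,
}
for hkl, two_theta in peaks_2theta.items():
    theta = math.radians(two_theta / 2.0)
    d = wavelength / (2.0 * math.sin(theta))
    print(f"{hkl}: 2theta = {two_theta:5.1f} deg, d = {d:.3f} A")
```

Under this assumption, the (001) reflection corresponds to d ≈ 8.1 Å, consistent with a layer stacking period along the c-axis of a tetragonal BiOBr cell.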
Dielectric Properties of BiOBr/EP
The dielectric constant of the BiOBr/EP composites with different contents and structures of BiOBr was studied, and the results are shown in Figure 4. As can be seen from Figure 4, the dielectric constant of the BiOBr/EP composite increased to a greater extent than that of EP even with a small amount of BiOBr added to EP, and the dielectric constant of the BiOBr/EP composites increased gradually and steadily with BiOBr content in the range from 0.25 wt% to 0.75 wt%. This can be attributed to the semiconductivity of BiOBr: the dielectric constant of the interface layer is small and the volume of the interface region is large relative to the bulk material, so the interface layer plays a dominant role in reducing the dielectric constant of the material. However, when conductive particles were introduced into the EP matrix, polarization occurred at the two phases, resulting in a significant increase in the dielectric constant of the BiOBr/EP composites. Therefore, with increasing BiOBr amount, the number of interfaces between the fillers and the resin matrix increased, the interfacial polarization was further enhanced, and the dielectric constant of the BiOBr/EP composites rose accordingly. In addition, when the content of BiOBr reached 1.00 wt%, the dielectric constant of the BiOBr/EP composites decreased slightly compared to that of BiOBr/EP with 0.75 wt% BiOBr. The main reason may be that the leakage current of the BiOBr/EP composites at high filler loading was larger, so the charge storage capacity of the composite began to decline. At the same time, when the content of BiOBr in the BiOBr/EP composites was very high, part of the BiOBr agglomerated, which could not enhance the dielectric properties of the BiOBr/EP composites and instead increased steric hindrance in the polar-group matrix, resulting in a decrease in the
dielectric constant of the BiOBr/EP composites. It can also be observed from the figure that the dielectric constants of BiOBr/EP composites with different contents of BiOBr all showed a decreasing trend with increasing frequency, i.e., a higher dielectric constant at low frequency. This phenomenon can be attributed to the faster change of the electric field as the frequency increases: the polarization of polar groups in the EP composites lagged behind the change in the electric field, with the interfacial polarization established first, then the orientation polarization, and finally the displacement polarization. Thus, the contribution of polarization to the dielectric constant of the BiOBr/EP composites decreased significantly.
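The trend that a small filler fraction raises the composite permittivity can be illustrated with a generic two-phase mixing estimate. The Lichtenecker logarithmic rule below is not the analysis used in this work, and the permittivity values and volume fractions are placeholder assumptions.

```python
# Lichtenecker logarithmic mixing rule as a rough effective-medium
# illustration. All permittivity values and volume fractions are
# assumed placeholders, not measurements from this work.
import math

def lichtenecker(eps_matrix, eps_filler, vol_frac):
    """Effective permittivity as a log-linear mix of the two phases."""
    return math.exp((1 - vol_frac) * math.log(eps_matrix)
                    + vol_frac * math.log(eps_filler))

eps_ep = 3.6      # assumed permittivity of neat epoxy
eps_biobr = 40.0  # assumed permittivity of the BiOBr filler
for v in (0.0025, 0.005, 0.0075):  # assumed volume fractions (~0.25-0.75 wt%)
    print(v, round(lichtenecker(eps_ep, eps_biobr, v), 3))
```

Even at sub-percent loadings the estimate rises monotonically above the matrix value, mirroring the measured trend up to 0.75 wt%; the slight drop at 1.00 wt% reported above is outside the scope of such a dilute mixing rule, which ignores agglomeration and leakage.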
Figures 5 and 6 show how the dielectric constants of BiOBr/EP varied with the content of BiOBr at 500 kHz and 10 MHz, respectively. As can be seen from Figure 5, under the low-frequency electric field, the dielectric constant of the BiOBr/EP composite increased first and then decreased with increasing BiOBr content, reaching its maximum (4.333) at 0.75 wt% and then declining as further BiOBr was added. This is because, under a low-frequency electric field, the addition of
an appropriate content of BiOBr can effectively improve the dielectric properties of the material, but when the content is too high, the excess BiOBr begins to aggregate, and the contact between particles weakens the interfacial polarization effect, resulting in a decrease in the dielectric constant. Under the high-frequency electric field (10 MHz, as shown in Figure 6), although the dielectric constant of the BiOBr/EP composites also increased to a certain extent after the addition of BiOBr, there was no obvious dependence on the amount of BiOBr added. This is because relaxation polarization, such as the interfacial polarization of the BiOBr, could not be established in time under the high-frequency electric field, so the effect of the BiOBr amount on the dielectric constant of the BiOBr/EP composites was not obvious. The dielectric loss of the BiOBr/EP composites as a function of frequency is shown in Figure 7. As can be seen from the figure, compared to pure EP, the dielectric loss of the BiOBr/EP composites showed an increasing trend. This is because, after BiOBr was added to the resin, the interfacial polarization introduced into the material led to a certain increase in dielectric loss. When the frequency was low, the polarization of the dipoles inside the composite kept up with the changing applied field, and this process consumed little energy. However, when the frequency was too high, the dipole polarization could not keep up with the change in the electric field; part of the energy was then absorbed to overcome frictional resistance, so more loss was generated at high frequency. In general, however, the maximum dielectric loss remained within a controllable range (≤0.05), which can meet the requirements for high dielectric performance in microelectronic components.
Figures 8 and 9 show the dielectric losses of the BiOBr/EP composites with different contents of BiOBr at 500 kHz and 10 MHz, respectively. As can be seen from Figure 8, under the low-frequency electric field (500 kHz), with a gradual increase in BiOBr content, the dielectric loss of the BiOBr/EP composites first increased sharply and then leveled off, with a maximum dielectric loss of 0.02045 at 0.75 wt% BiOBr. Under the high-frequency electric field (10 MHz), the dielectric loss of the BiOBr/EP composite also increased after adding BiOBr, but the increase had little dependence on the content of the BiOBr, with the highest dielectric loss being 0.03082 at 0.75 wt% BiOBr. Although the dielectric loss increased, it remained within the usable range. This is because, in a low-frequency electric field, the addition of BiOBr enhances the interfacial polarization, so its content had a strong effect on the dielectric loss of the BiOBr/EP composite; when the BiOBr content was too high, part of the BiOBr aggregated, its polarization effect decreased, and the dielectric loss gradually stabilized. In the high-frequency electric field, by contrast, the polarization in the medium could not keep up with the change in the external electric field, so the dielectric loss increased but its dependence on the BiOBr content was weakened.
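The lag of dipolar polarization behind a fast-changing field, invoked above to explain the larger loss at high frequency, can be sketched with a single Debye relaxation. All parameter values below are illustrative assumptions, not values fitted to the measured data.

```python
# Qualitative single-Debye-relaxation sketch of loss versus frequency.
# Static/high-frequency permittivities and relaxation time are assumed.
import math

def debye_tan_delta(freq_hz, eps_static, eps_inf, tau):
    """Loss tangent (eps'' / eps') of a single Debye relaxation."""
    w = 2 * math.pi * freq_hz
    eps_real = eps_inf + (eps_static - eps_inf) / (1 + (w * tau) ** 2)
    eps_imag = (eps_static - eps_inf) * w * tau / (1 + (w * tau) ** 2)
    return eps_imag / eps_real

# The two fields discussed above: 500 kHz and 10 MHz
for f in (5e5, 1e7):
    print(f, round(debye_tan_delta(f, 4.3, 3.6, 2e-9), 5))
```

With these assumed parameters the loss tangent at 10 MHz is markedly larger than at 500 kHz while still staying below 0.05, qualitatively matching the behavior reported above.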
Thermal Conductivities of the Materials
Figure 10 shows the thermal conductivity of the BiOBr/EP composites with different contents of BiOBr. As can be seen from the figure, the thermal conductivity of the composites containing BiOBr was higher than that of pure EP (0.1705 W/mK) and increased with increasing BiOBr content. When the BiOBr content was small, the filler was isolated in the polymer, leaving large spacing between filler particles with no contact between them, so it was difficult to form a continuous thermally conductive channel. This is equivalent to the filler particles being coated and bridged by polymer; thus, the improvement in the thermal conductivity of the composites was limited. With increasing BiOBr amount, the interfacial thermal resistance between the BiOBr and the EP matrix was reduced, thermally conductive pathways were established to form a conductive network and improve the heat-transfer efficiency, and the thermal conductivity increased accordingly. When the content of BiOBr was 1.00 wt%, the thermal conductivity of BiOBr/EP reached 0.2190 W/mK, which was 28.45% higher than that of pure EP (0.1705 W/mK). Our team has previously prepared two kinds of spherical MoS2 with different structures to enhance the thermal conductivity of EP; when the amount of molybdenum disulfide was 3.0 wt%, the thermal conductivities of the two MoS2/EP composites reached maxima of 0.3061 W/mK and 0.3105 W/mK, respectively [31]. In this study, however, an addition of only 1.0 wt% BiOBr raised the thermal conductivity of EP to 0.2190 W/mK, indicating that the thermal conductivity of EP can be effectively improved even with a small amount of BiOBr, which can be attributed to the layered structure of BiOBr conducting heat well within the EP resin.
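The quoted enhancement can be verified directly from the two conductivities given in the text. For comparison, a dilute-sphere Maxwell-Eucken estimate (with an assumed filler conductivity and volume fraction) falls well short of the measured value, consistent with the conductive-network argument above.

```python
# Check of the reported enhancement, plus a Maxwell-Eucken dilute-sphere
# estimate. Filler conductivity and volume fraction are assumptions.
k_ep = 0.1705    # pure EP, W/(m K) (from the text)
k_comp = 0.2190  # BiOBr/EP at 1.00 wt%, W/(m K) (from the text)
enhancement = (k_comp - k_ep) / k_ep * 100  # percent

def maxwell_eucken(km, kf, v):
    """Effective conductivity of dilute spherical fillers in a matrix."""
    return km * (kf + 2 * km + 2 * v * (kf - km)) / (kf + 2 * km - v * (kf - km))

# Assumed: BiOBr conductivity ~2 W/(m K), volume fraction ~0.5 vol%
k_dilute = maxwell_eucken(k_ep, 2.0, 0.005)
print(round(enhancement, 2), round(k_dilute, 4))
```

The dilute model predicts only a percent-level gain, so the much larger measured enhancement supports the picture of sheet-like particles forming preferential heat-transfer paths rather than acting as isolated inclusions.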
In order to further study the thermal conduction mechanism in the composites, the fracture surfaces of EP and of the BiOBr/EP composites were observed; the results are shown in Figure 11. As can be seen from Figure 11A, the surface of the EP resin was smooth; the bumps that appeared were due to the interpenetrating structure generated by the molecular chains inside the EP resin, which provided a path for heat transfer. As for the BiOBr/EP composite, there were very obvious small particles on its surface. Most of these particles were isolated from each other, while a few were in contact with each other, which
provided a good thermal conduction path for heat transfer. Therefore, heat transfer inside the BiOBr/EP composite depended not only on the molecular chains of the EP resin but also on the BiOBr. BiOBr particles are small and have a high thermal conductivity, so a small addition could effectively improve the thermal conductivity of the composite. However, when excessive BiOBr was added to the EP resin, agglomeration occurred, forming cavities in the material and causing local heat accumulation; therefore, an appropriate content of BiOBr was beneficial for the thermal conductivity of the BiOBr/EP composites (as shown in Figure 12).
Thermal Resistance of the Materials
Figure 13 shows the TGA curves of pure EP and the BiOBr/EP (BiOBr: 0.75 wt%) composite. As can be seen from the figure, the initial decomposition temperature (the temperature at which the mass loss reaches 5%) of the BiOBr/EP composite was 302 °C, slightly lower than that of pure EP (310 °C). When the temperature continued to rise to 800 °C, the residual carbon rate of BiOBr/EP was 6.49%, not much different from that of the pure EP resin (6.41%). However, within the temperature range of 400-700 °C, the residual carbon rate of the BiOBr/EP composite was always much higher than that of the pure EP, indicating that BiOBr/EP had a better thermal stability. The improved thermal stability of the BiOBr/EP composite arose not only because the Bi and halogen Br atoms in the BiOBr/EP structure can greatly improve the heat resistance of materials, but also because of the synergistic effect between these atoms. In addition, the structure of BiOBr can provide part of the molecular cavity needed for heat transfer inside the resin. At the same time, the BiOBr and the EP could form a uniform and stable interpenetrating network structure after copolymerization, giving excellent resistance to external heat damage. Furthermore, isophorone diamine was selected as the curing agent in the preparation of the BiOBr/EP material, and there was also a heat-resistance synergy between the molecules, which gave BiOBr/EP a better heat resistance. Figure 14 shows the DTG curves of the pure EP and BiOBr/EP (BiOBr: 0.75 wt%) composites. It can be seen from Figure 14 that the curves of the pure EP and BiOBr/EP composites were roughly the same, which indicates that the addition of BiOBr did not change the decomposition mechanism.
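The initial decomposition temperature used above (the temperature at 5% mass loss) is read off the TGA trace. A minimal interpolation sketch, on synthetic data rather than the measured curves:

```python
# Read the temperature at a given mass-loss level off a TGA trace by
# linear interpolation. The temperature/mass arrays below are synthetic.
def t_at_mass_loss(temps, masses, loss_frac=0.05):
    """Temperature at which mass drops to (1 - loss_frac) of its initial value."""
    target = masses[0] * (1 - loss_frac)
    for (t0, m0), (t1, m1) in zip(zip(temps, masses),
                                  zip(temps[1:], masses[1:])):
        if m1 <= target <= m0:  # target crossed within this segment
            return t0 + (t1 - t0) * (m0 - target) / (m0 - m1)
    raise ValueError("loss level not reached")

temps = [200, 250, 300, 350, 400]         # degrees C (synthetic)
masses = [100.0, 99.0, 96.0, 80.0, 40.0]  # percent of initial mass (synthetic)
print(round(t_at_mass_loss(temps, masses), 1))
```

The same routine applied to both traces would reproduce the comparison of onset temperatures quoted above.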
Conclusions
In this study, spheroidal BiOBr with a layered structure was prepared via the solvothermal method and used as a filler to modify EP resin. The results showed that the dielectric constant and dielectric loss of the BiOBr/EP composites exhibited good stability and applicability under a low-frequency electric field when the BiOBr addition amount was 0.75 wt%, and the thermal conductivity of the composite reached 0.2190 W/mK at a BiOBr amount of 1.00 wt%, which was 28.45% higher than that of pure EP (0.1705 W/mK). The improvement in the dielectric properties and thermal conductivity of the BiOBr/EP composites can be attributed not only to the good size effect and dispersion of BiOBr, but also to the excellent thermal conductivity and interfacial polarization of BiOBr itself. Meanwhile, the heat resistance of the BiOBr/EP composite was also improved, both because the Br atoms themselves improve the heat resistance of the composites and because isophorone diamine was selected as the curing agent, providing a good heat-resistance synergy between molecules. This study provides a new route for the preparation of EP resin with a high thermal conductivity.
Figure 3. The SEM results of BiOBr ((A) surface structure of the BiOBr microspheres; (B) surface structure of the BiOBr microspheres at 2.5 times higher magnification than (A)).
Figure 4. The dielectric permittivity of BiOBr/EP with different contents of BiOBr as a function of frequency.
Figure 5. The dielectric constants of BiOBr/EP as a function of the content of S-BiOBr at 5 MHz.
Figure 7. The dielectric loss of BiOBr/EP with different contents of BiOBr as a function of frequency.
Figure 10. The thermal conductivity of BiOBr/EP with different contents of BiOBr.
Figure 13. The TGA curves of the EP and BiOBr/EP composites with 0.75 wt% BiOBr.
Figure 14. The DTG curves of the EP and BiOBr/EP composites with 0.75 wt% BiOBr.
The defensive strike of five species of lanceheads of the genus Bothrops (Viperidae)
We studied the defensive strike of one species of each of five recognized lineages within the genus Bothrops, namely, B. alternatus, B. jararaca, B. jararacussu, B. moojeni and B. pauloensis. The defensive strike of the studied species was in general similar to that of Crotalus viridis and C. atrox, but some important differences were observed. Bothrops alternatus and B. pauloensis struck preferentially from a tight body posture, whereas B. jararaca and B. moojeni from a loose body posture. Defensive strikes were either true or false (during the latter, the mouth remains closed or partially open). Almost all strikes were successful; only on a few occasions snakes missed their target (flawed strikes). Strike variables were very conservative among the five species, especially strike distance and height, and one possible explanation may be related to constraints imposed on strike variables as a way of increasing strike accuracy.
Introduction
Most knowledge of the strike of viperids is based on and limited to North-American rattlesnakes (e.g. Van Riper, 1955; Kardong, 1986; LaDuc, 2002). Kardong (1986) compared the predatory and defensive strikes of Crotalus viridis oreganus and observed some differences between them. In the defensive strike, the snake's jaws make contact with the aggressor at a wide angle (about 180°) and the arching of the neck typical of the predatory strike was not observed. Moreover, predatory strikes were sometimes flawed, which was not observed in the defensive strike (Kardong, 1986). In another species, C. atrox, defensive strikes were reported to be faster than predatory strikes (LaDuc, 2002).
In the case of the genus Bothrops, the only reports on strike behavior are restricted to a single species, B. jararaca (Sazima, 1988, 1992). According to Sazima (1992, p. 210), "the defensive strike of B. jararaca seems similar to that reported by Kardong (1986) for Crotalus viridis", although the author does not provide any details on those similarities. In spite of similarities, defensive strikes launched during head-hiding, commonly observed in C. viridis (Kardong, 1986), were rarely observed in B. jararaca (Sazima, 1992). In comparing the defensive strike of B. jararaca to those of other congeneric species, Sazima (1988) suggested that the strike of B. jararaca is slower than that of B. moojeni and B. neuwiedi urutu, but his suggestion remained untested at the time of writing.
Phylogenetic relationships within the Brazilian species of the genus Bothrops revealed the existence of six distinct lineages, namely, the alternatus, atrox, jararaca, jararacussu, neuwiedi and taeniatus species groups (Salomão et al., 1997; Vidal et al., 1997; Wüster et al., 2002). As part of a study on the evolution of defensive behavior in the genus Bothrops, we compared features of the defensive strike between one species of each of five of these lineages.
Material and Methods
Test subjects were species of Bothrops from several localities of southeastern Brazil (B. alternatus, B. jararaca, B. jararacussu, B. pauloensis, B. moojeni) brought to the Instituto Butantan from April 1998 through February 1999. Ten individuals of each species were tested as they arrived at the Instituto Butantan (Table 1). The time between the arrival of the snakes at the Instituto Butantan and the tests varied from zero (tests on the same day of arrival) to 16 days for all individuals tested, except for an individual of B. jararacussu that was kept for 33 days at the Instituto Butantan before tests were performed.
Each individual snake was tested only once.Snakes were held at the Instituto Butantan in wooden or plastic boxes until the actual tests.They were taken in wooden boxes to a temperature-controlled laboratory (25 °C ± 2) where the trials were conducted.The snakes were taken to the laboratory during daytime and each trial was carried out on the same day from 1800 hours to 0000 hours.
The trials were carried out in an arena set on the ground of the laboratory (Figure 1).The laboratory wall formed one of the sides of the arena; the other three sides were made of wood and glass (Figure 1).One of the sides adjacent to the wall was opaque and the other two sides were transparent.During trials, we stayed behind the opaque side of the arena to minimize possible disturbance.Two Panasonic NVRJ PR VHS cameras were used, one over the arena set on a tripod and facing the ground, and the other on the ground, lateral to the arena and facing the wall.The ground was covered with a black plastic sheet; both the plastic sheet and the wall had gridlines of 1 and 2 cm, respectively, for distance estimates.The light sources were two 60-Watt bulbs set on the main axis of the arena, one at each side.During the tests, we stayed behind the opaque side, which we believe further reduced the possibility of our being seen by the snake.
Defensive behavior was elicited with the use of a stimulation-object, a plastic bottle (height about 15 cm; diameter about 10 cm; volume 0.5 L) covered with a 0.5 cm-thick sheet of soft black rubber to which a 1.5-meter plastic pipe was attached at a 45° angle (Figure 2e, f). The purpose of the rubber was to minimize injuries to the snakes' fangs during strikes. The bottle was filled with warm water (60 °C) shortly before the tests to raise the temperature of the external surface of the rubber to about 37 °C (verified by a Miller and Weber Inc. quick-reading thermometer with accuracy of 0.1 °C), in an attempt to simulate the body temperature of a mammal, a putative predator of lanceheads (Sazima, 1992).
Before each test, the internal surfaces of the arena as well as the stimulation-object were cleaned with ethanol. The snake was then put in the center of the arena, and an acrylic box, with the open side facing down, was put over the snake with the use of a hook. The acrylic box was also cleaned with ethanol before the tests. The arena lights were on prior to introducing snakes into the arena. Snakes were left undisturbed for 10 minutes before the beginning of the tests. Cameras were turned on by remote control and recorded at 30 frames/s. Trials began when the acrylic cover was removed with a hook and the stimulation-object was introduced into the arena, about 0.7 m from the snake, and moved in the air (c. 1 cm above ground) towards the snake, at approximately 20 cm.s-1, always by the same person. The stimulation-object was moved along the main axis of the arena and approached the snake frontally, touched the snake's midbody and was withdrawn, repeatedly 30 times uninterruptedly for each snake. During trials, we never moved from behind the light bulb. Trials were later analyzed frame-by-frame with a Panasonic NVSD475 PR VHS player.
The following variables of strikes were analyzed: 1) distance, estimated as the distance between the snake's snout at the beginning of the strike and when it made contact with the stimulation-object; 2) height, estimated as the height of the snake's snout in relation to the ground when it made contact with the stimulation-object; 3) angle between the ground and a line crossing the position of the snake's snout at the beginning of the strike and when it made contact with the stimulation-object; 4) duration, estimated as the number of frames from the beginning of the strike until contact with the stimulation-object multiplied by 1/30 s (which corresponds to one frame); and 5) speed, estimated as the strike distance divided by its duration.
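The frame-based estimates of duration, speed, and angle described above amount to simple arithmetic; the sketch below (hypothetical function and variable names, and assuming for the angle estimate that the strike begins with the snout near ground level) shows how they combine.

```python
import math

FRAME_RATE = 30  # frames per second, as recorded by the VHS cameras

def strike_kinematics(distance_cm, height_cm, n_frames):
    """Compute strike duration (s), speed (cm/s), and angle (degrees)
    from frame-by-frame video measurements.

    distance_cm: snout displacement from strike start to contact
    height_cm:   snout height above ground at contact
    n_frames:    frames elapsed from strike start to contact
    """
    duration_s = n_frames / FRAME_RATE        # each frame is 1/30 s
    speed = distance_cm / duration_s          # distance divided by duration
    # angle between the ground and the snout's start-to-contact line,
    # treating the measured distance as the length of that line
    angle_deg = math.degrees(math.asin(min(height_cm / distance_cm, 1.0)))
    return duration_s, speed, angle_deg
```

For example, a 10 cm strike contacting 5 cm above ground over 6 frames yields a 0.2 s duration, a 50 cm/s speed, and a 30° angle.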
On certain occasions, the snake allowed the stimulation-object to touch its body before launching a strike; at this moment the snake opened its mouth and the jaws were brought to bear on the stimulation-object with no projection of the head. Although these attacks by the snake may be considered strikes, we did not include them in the analyses since they did not involve the typical projection of the head observed in the other strikes. Strikes in which jaws were wide open and contact was made with the stimulation-object were recorded as "true strikes", whereas those in which the mouth remained closed or only partially open during the strike were recorded as "false strikes" (cf. Greene, 1988; Sazima, 1992). On a few occasions, the snake completely missed the target, which was recorded as a "flawed strike" (cf. Kardong, 1986). At the moment of the strike, the snake's body was either in a tight posture, that is, with more acute body angles (Figure 2a), or in a loose posture, with anterior body angles more open and less acute (Figure 2b). The depicted postures (Figure 2a, b) actually represent extremes of a continuum in which snakes could change from one posture to the other during trials. Therefore, the classification of body posture at the moment of the strike as either tight or loose was totally arbitrary. Sometimes strikes were delivered from head-elevated postures (Figure 2c) and on other occasions from head-hiding postures (Figure 2d). The frequencies of occurrence of strike types, as well as body postures and concomitant behaviors during strike delivery, were compared among species with a G test (Zar, 1999).
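The G tests above compare count frequencies across species. As a minimal illustration (with hypothetical counts, not the study's data), the likelihood-ratio G statistic for a species-by-behavior contingency table can be computed as:

```python
import math

def g_statistic(table):
    """G = 2 * sum(obs * ln(obs/exp)) over the cells of a contingency table.

    Expected counts come from row and column totals, as in a test of
    independence; cells with zero observations contribute nothing.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    g = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand
            if obs > 0:
                g += obs * math.log(obs / exp)
    return 2 * g
```

When the observed counts match the expected counts exactly, G is zero; the more the counts deviate from independence, the larger G becomes.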
For comparisons among species, the distances and heights of the strikes were divided by the snout-vent length (SVL) of each individual, because the species studied differ greatly in size (Table 1). When an individual struck more than once during the trial, a mean value of all its strikes was calculated for each variable. These mean values were used in the comparisons among species, so that the independent units of the data were no longer the strikes but the mean values assigned to each individual. The variables were compared among species by Kruskal-Wallis ANOVA (Zar, 1999). Statistical analyses were performed with BIOESTAT 3.0 (G test; Ayres and Ayres, 2003) and STATISTICA 6.1 (Kruskal-Wallis ANOVA; StatSoft, 2003).
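The averaging step can be sketched as follows, assuming hypothetical input structures: strike measurements grouped per individual are first divided by that individual's SVL and then averaged, so that each snake, not each strike, contributes a single value to the among-species comparison.

```python
from statistics import mean

def individual_means_relative_to_svl(strikes_by_individual, svl_by_individual):
    """Return one value per individual: the mean of its strike distances
    (or heights), each divided by that individual's snout-vent length."""
    return {
        snake: mean(d / svl_by_individual[snake] for d in distances)
        for snake, distances in strikes_by_individual.items()
    }
```

The resulting per-individual means would then be the input to the Kruskal-Wallis comparison among species.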
Results
The typical defensive strike recorded for the five species was a rapid movement of the snake's head towards the stimulation-object, as the lateral curves of its anterior body straightened, with its jaws wide open and the posterior part of the body remaining stationary. During this phase, rapid acceleration towards the stimulation-object was observed. On contact with the stimulation-object, the jaws formed an angle of about 180° and no arching of the neck was observed (Figure 2e). We were not able to register the penetration of the snake's fangs into the rubber of the stimulation-object (bite) through the analysis of the films. However, it certainly occurred, because the rubber always presented marks of perforation from which venom drained following the tests.
Sometimes the snake launched what is here called a lateral strike, in which its head rotated 90° around the long axis of the anterior trunk during the strike and the stimulation-object was hit at a lateral position (Figure 2f), instead of being hit frontally, as occurred in the typical strike. This lateral strike was a marked difference between the strike behavior of B. pauloensis and that of the other species (G test, df = 4, G = 83.63, P < 0.001; Table 2). Species differed in the frequencies of false strikes (G test, df = 4, G = 27.01, P < 0.001; Table 2) and flawed strikes (G test, df = 4, G = 12.42, P = 0.01; Table 2), as well as tight/loose body postures during strikes (G test, df = 4, G = 20.82, P < 0.001; Table 3). Bothrops moojeni launched most of its strikes from head-elevated postures (G test, df = 4, G = 82.81, P < 0.001; Table 3), and all five species rarely struck while hiding the head (Table 3).
Strikes were on average short and low when considering either the raw values of distance and height or their values in relation to the snakes' SVL (Table 4). There were no significant differences among species in any of the variables, except in strike angle (H = 13.865; P = 0.008; N = 41; Table 4).
Discussion
The defensive strikes observed in our study had the same overall "stabbing" appearance of those described for C. viridis (Kardong, 1986) and C. atrox (LaDuc, 2002), as opposed to the "biting" appearance of the predatory strike. We agree with LaDuc (2002) that this may result from the large or awkwardly sized targets used in studies of defensive strikes (Kardong, 1986; LaDuc, 2002; this study). This fact may also be responsible for the lack of dorsal neck arching in the five species studied herein. Kardong (1986) suggested that the unarching of the neck in defensive strikes could lead to "dry" bites, with little or no envenomation. This seems not to be true for the five species of Bothrops studied, since no arching of the neck was observed, but still we could always detect the presence of venom draining from the surface of the stimulation-object following trials, sometimes in large amounts, indicating successful envenomation.
The studied Bothrops species very rarely launched strikes from head-hiding postures (Table 3), which seems to be an important difference between the striking behavior of lanceheads and that of C. viridis (Duvall et al., 1985) and C. atrox (M. Martins, pers. obs.). In fact, Sazima (1992) had already noticed the rarity of this behavior in B. jararaca and pointed to this as an important difference between B. jararaca and C. viridis.
The strikes of the five species of Bothrops were on average shorter than 0.20 of the snake's SVL (Table 4) and never longer than one third of it. This is another difference between the studied lanceheads and C. atrox, as the defensive strikes of the latter reached 37% of the snake's total length on average and a maximum of 46% (LaDuc, 2002). The five Bothrops species presented much slower strikes (Table 4) than those of C. viridis and C. atrox; the defensive strike of C. viridis had an average speed of 243 cm.s-1 (Van Riper, 1955), and that of C. atrox an average of 227 cm.s-1 (LaDuc, 2002). The duration of the strikes as defined herein can be compared to that of the extend stage of the defensive strikes of C. atrox as defined by LaDuc (2002); in the five species of Bothrops, the duration was longer (Table 4) than in C. atrox (42-70 msec; mean = 50 msec; LaDuc, 2002), which is surprising in view of the shorter strike-distances in Bothrops spp., and probably due to the lower speed in the five Bothrops species compared to that in C. atrox. LaDuc (2002) observed that the defensive strikes were longer and faster than the predatory strikes of C. atrox, but Young et al. (2001; cited in LaDuc, 2002) report exactly the opposite, which renders this question controversial. The study of the characteristics of the predatory strikes of the five species of Bothrops studied would be of great interest, and could help to clarify this question. The recorded strikes were very similar in general aspect to those described by Sazima (1988, 1992) for B. jararaca in the field. The exception was B. pauloensis, whose strikes were frequently launched laterally (Table 2). This is a remarkable difference between B. pauloensis and the other species studied here. Only B. moojeni and B. jararaca also presented this type of strike, but much less frequently than B. pauloensis. It is possible that this strike behavior is also present in other species of the neuwiedi group, which needs further investigation. Most defensive strikes in the five species of Bothrops were successful, and flawed strikes were a relatively rare event (Table 2). Shine et al.
(2002) report a much lower accuracy in the defensive strike of another viperid, Gloydius shedaoensis (46.7% of flawed strikes).Hence, there seems to be great variation in the accuracy of strikes between different lineages of vipers.
In spite of the difference between the defensive strike behavior of B. pauloensis and the other studied species, the studied variables were impressively conservative among the five species studied, especially strike distance and height (as related to SVL). Strike speed did not differ significantly among the five species of lanceheads studied, so the suggestion that the strikes of B. jararaca seem to be slower than those of B. moojeni (Sazima, 1992) was not supported by this study (Table 4). The only exception among variables seems to be strike angle, which was significantly different among species (Table 4). The extremely low angles in B. moojeni, in comparison with the other studied species, were certainly due to the fact that the former launched strikes preferentially from elevated body postures. If snakes somehow define the angle of strike as a way of hitting a certain height on the target, individuals of B. moojeni can hit the same height even with low strike-angles, because in a head-elevated posture the head is already high when strikes are delivered. In this sense, the angles recorded for B. moojeni may introduce some noise in the comparisons among species. If B. moojeni is removed from the comparisons, however, there is no difference in the strike angles of the remaining four species (H = 5.666; P = 0.13; N = 31).
According to Kardong and Smith (2002), the predatory success of rattlesnakes depends on an accurate strike that produces no significant errors in fang placement, penetration, and venom injection. In the case of defensive strikes, evolution must have also placed a premium on accuracy, which may have imposed constraints on some of their kinematic features, making strikes resistant to variation. This might be one explanation for the high similarity between the strikes of the five studied species, and further studies on a higher number of species of the different lineages of Bothrops could confirm this pattern. It is possible that strike variables are also very conservative among species of the genus Crotalus, which could also be investigated by a comparative study of the strikes of rattlesnakes.
Figure 2. a) Bothrops spp. in tight body posture; b) loose body posture; c) head-elevated posture; d) head-hiding posture; e) typical defensive strike, with stimulation-object being hit frontally; and f) lateral defensive strike, with stimulation-object being hit in a lateral position.
Table 3. Body postures or concomitant behaviors at the moment strikes were launched in five species of Bothrops, shown as percentages of the total number of strikes (number of strikes in parentheses). *** = difference significant (P < 0.001).
| 4,142.6 | 2007-05-01T00:00:00.000 | [
"Biology"
] |
High Frequency Charging Techniques — Grid Connected Power Generation Using Switched Reluctance Generator
Power generation is a pressing need for developed, developing and under-developed countries alike to meet their increasing power requirements: as affordability increases, per capita consumption rises and so does the demand for power. In the existing power scenario, the largest share of power is produced by coal-fired thermal plants. A high-efficiency Switched Reluctance Generator (SRG) based high-frequency switching scheme to enhance the output for grid connectivity is designed, fabricated and evaluated. The proposed method generates output even at low wind speeds, made possible by a multi-level DC-DC converter and storage system, and is an efficient solution for low-wind power generation. The real-time readings and results are discussed.
Introduction
The power scenario in India shows that 70.7% - 73.0% of net power is produced by thermal power plants and around 14% - 16% by hydro power plants. Approximately 1.9% - 2.8% of net power is contributed by atomic power plants, renewable energy contributes approximately 6% - 8%, and direct diesel-based generation about 1% - 2%. The contribution of micro power generation is approximately 1% - 2%.
This scenario needs to be reconfigured to conserve coal and other dwindling natural resources by increasing power generation from renewable energy sources such as wind and solar. Data given by the Ministry of Power, India, for the year 2016 clearly indicate the statistics above: about 70% of net energy production pollutes the environment, and coal may be exhausted within another 80 - 90 years if consumption continues at the existing rate. It is now essential to produce energy from renewable sources for the conservation of natural resources.
Installation of more wind farms may provide an efficient solution; in the existing scenario, however, the contribution of wind farms is very low because cost, efficiency, maintenance, grid connectivity and periodic analysis make such systems inefficient. These issues can be addressed by using generators such as the SRG as wind power generators with an appropriate control mechanism and grid-connecting circuits [1]. A grid-connected switched reluctance generator based wind energy harvesting system is designed, constructed and verified for real-time application.
The Switched Reluctance Machine (SRM) has played a major role in industry for over 30 years because of its unique features: a simple control methodology, reliability, and the economy that comes with wide usage. The SRM is distinctive in that it can work as both a generator and a motor.
The latest trends in power electronics have made it easy to use the SRG directly as required to meet design goals [2]. Theoretical comparisons show that the Switched Reluctance Generator (SRG) has many advantages with respect to PM machines and AC machines; these comparisons guide the controller design for the self-excitation mode of a SRG and the determination of the variable parameters in a SRG controller.
Analysis of Generators for Wind Energy Applications
Many kinds of generators are used in wind turbines: Permanent Magnet Alternators, Synchronous Generators, Induction Generators, Double-fed Induction Generators and Reluctance Generators. Induction generators are used for very high power (large-capacity) applications, while Permanent Magnet Alternators are widely used for small-capacity applications.
The Permanent Magnet Alternator benefits from a set of permanent magnets mounted on the rotor, with rare-earth magnets enabling efficient designs; however, it has a number of disadvantages, such as dominant cogging torque, heavy weight, magnetic circuit issues, and the need for high wind speed for full-load operation.
A Synchronous Generator is an AC rotating machine whose speed under steady-state conditions is proportional to the frequency of the current in its armature. It has certain disadvantages: it is costly, has a complicated mechanical design, requires DC excitation, and is not suitable for variable-speed applications.
Induction generators are used for high-capacity installations but have a few drawbacks: they require a strong wind field for full-load operation, respond poorly during fault conditions, and incur electrical losses in the rotor winding.
A DFIG (Double-fed Induction Generator) is widely used in wind power generation: the stator is directly connected to the grid and the rotor is fed by a voltage- or current-source inverter. It has some drawbacks, such as high maintenance, limits on the span of turbine speeds, and rotor-side converters operating at lower frequency.
A reliable wind turbine generator should operate at lower wind speeds with strong grid support and respond quickly to wind variations. The machine should operate without speed limitations [3], eliminate cogging torque, respond very rapidly during fault conditions, allow low-cost manufacturing of the generator and control system, and eliminate the rotor winding and its electrical losses while working at low wind speed. The Switched Reluctance Machine meets these requirements and solves the majority of issues in wind turbine applications.
Switched Reluctance Machine Configuration
The Switched Reluctance Machine has salient poles on both stator and rotor, with concentrated windings on the stator and no windings or magnets on the rotor. Due to the absence of rotor windings, the SRM can withstand high temperatures and rotate at very high speeds. The number of rotor poles, the number of stator poles and the number of phases decide the type of SRM, as shown in Figure 1 [4].
The torque ripple of an SRM can be reduced by increasing the number of phases, although the controllers needed to operate such machines are more expensive. Starting an SRM requires a minimum of two phases, and three phases are needed to guarantee the desired direction of rotation. An 8/6 pole, 4-phase SRM is proposed here to develop a real-time control system, and appropriate power electronic components were selected to match this motor.
The electromechanical behavior of the SRM can be represented by the following equations:

v = R i + dλ(θ, i)/dt
J (dω/dt) = T_e − T_L
dθ/dt = ω

where v is the stator voltage, R is the winding resistance, λ is the flux linkage, T_e is the electromechanical torque, T_L is the load torque, θ is the rotor position, ω is the speed, and J is the moment of inertia. The SRM can be completely understood through the torque expression: the exact operation and its features can be identified from the torque expression together with the relationship between flux linkage and rotor position.
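The mechanical equations above can be integrated numerically. The following forward-Euler sketch (with a placeholder torque function standing in for the machine's actual position- and current-dependent T_e) illustrates the relationship between torque, speed and position.

```python
def simulate_rotor(torque_fn, T_L, J, theta0=0.0, omega0=0.0,
                   dt=1e-4, steps=1000):
    """Forward-Euler integration of J*domega/dt = T_e - T_L and
    dtheta/dt = omega.

    torque_fn(theta, t) is a placeholder for the electromagnetic torque
    T_e, which in a real SRM depends on rotor position and phase current.
    """
    theta, omega = theta0, omega0
    for k in range(steps):
        t = k * dt
        T_e = torque_fn(theta, t)
        omega += dt * (T_e - T_L) / J   # mechanical equation
        theta += dt * omega             # kinematic equation
    return theta, omega
```

With a constant unit torque, no load, and unit inertia, the speed grows linearly with time, as the mechanical equation predicts.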
These machines can operate in all four quadrants [5]. Torque versus speed characteristics can be derived from the inductance versus rotor position profile of the SRM, and equivalent circuits can be formulated. These parameters are a good aid in designing high-performance controllers. Single-phase controllers find exceptional usage, with the disadvantage that their performance cannot match that of multiphase SRMs.
Torque is produced by the tendency of the movable part to shift to a position where the inductance of the exciting winding is maximized. Due to its simplicity, ruggedness, low cost and high energy-conversion efficiency, the SRM is used for various general-purpose, adjustable-speed and servo-type applications [6]. The speed can be controlled by angle position control, phase chopping control, fixed-angle pulse width modulation and variable-angle pulse width modulation control.
The working of the SRM is explained in terms of the current passed through one set of stator windings and the torque produced by the tendency of the rotor to align with the excited stator pole. The direction of the generated torque is a function of the rotor position with respect to the energized phase and is independent of the direction of current flow through the phase winding. Continuous torque can be produced by synchronizing each phase's excitation with the rotor position.
Knowing the phase current (i) and voltage (v) of an energized phase of the SRM, the flux linkage can be calculated using the following relationship:

λ = ∫ (v − R i) dt      (1)
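Relationship (1) can be evaluated from sampled waveforms by numerical integration; a minimal sketch (trapezoidal rule, hypothetical sample arrays) is:

```python
def flux_linkage(voltage, current, R, dt):
    """Trapezoidal integration of relationship (1): lambda = integral of
    (v - R*i) dt, from equally spaced samples of phase voltage and current.
    Returns the flux-linkage history, starting from zero."""
    emf = [v - R * i for v, i in zip(voltage, current)]
    lam = 0.0
    history = [0.0]
    for a, b in zip(emf, emf[1:]):
        lam += 0.5 * (a + b) * dt   # trapezoid between adjacent samples
        history.append(lam)
    return history
```

For example, a constant 1 V applied with zero current for 1 s accumulates a flux linkage of 1 Wb-turn.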
The Proposed System
Figure 3 shows the block diagram of the proposed system. The wind energy
Multi-Level Battery Charging
Multi-level battery charging is implemented in this proposed scheme: based on the voltage from the turbine, the appropriate storage device is automatically connected by the embedded microcontroller. The microcontroller receives inputs from voltage- and current-sensing circuits and activates the connection between generator and storage. The storage is mainly used here to improve power quality for grid connectivity, which is essential because uncertainties in wind generation constantly degrade power quality [8].
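The level-selection logic can be sketched as a threshold table scanned by the microcontroller firmware. The levels and minimum charging voltages below are illustrative assumptions, not the authors' calibrated values (the paper reports, for example, that a generator voltage above 6 V and below 25 V readies the 12 V battery).

```python
# (battery level in volts, minimum generator voltage to charge it)
# — illustrative thresholds, sorted ascending
BATTERY_LEVELS = [(3, 4.0), (6, 8.0), (12, 15.0), (24, 28.0), (32, 36.0)]

def select_battery(generator_voltage):
    """Return the highest battery level the measured generator voltage
    can charge, or None if the voltage is too low for any level."""
    chosen = None
    for level, v_min in BATTERY_LEVELS:
        if generator_voltage >= v_min:
            chosen = level
    return chosen
```

The firmware would then close the relay corresponding to the selected level, routing the generator output to that storage device.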
The relay used is of the electromagnetic attraction type with one potential-free NO/NC contact whose coil resistance is 400 ohms; the coil current is 30 mA, the operating voltage is 12 V, and the transition time from NO to NC is approximately 60 ms. Figure 5 illustrates the multi-charging battery switching relay circuit.
DC-DC Converter
As per the proposed scheme, the multilevel batteries, charged separately, are connected to multi-level DC/DC converters. The storage system must be selected according to the generator power. Storage devices of 3, 6, 9, 12, 15, 18, 21, 24 and 32 V are selected for the proposed scheme; all storage levels are then converted to 24 V to feed a common inverter, as shown in Figure 6. All the outputs of the DC/DC converters are connected together with proper isolation devices. The inverter provides a 110 V/230 V bus for auxiliary applications such as metering of the various parameters received from the generator [9].
The inverter output is fed to high-voltage conversion using appropriate step-up transformers to meet grid requirements. The proposed scheme gives consistent output to the power grid despite small uncertainties due to wind speed, because particular attention is paid to power quality through the multi-level storage system, often called a BESS.
Grid Synchronizer Circuit
Power grids are complex networks and central infrastructure for a nation. A highly efficient and reliable power grid is essential to consumers; although operators take extensive precautionary measures, most power outages are still caused by operators. This can be avoided by automatic control systems that ensure reliability across the grid network.
The synchronizer maintains grid stability and provides excellent load support [10].
Synchronization is essential to interconnect the power generated by the SRG to the electricity board's common grid. Uncertainty on either side cannot be removed absolutely, but nowadays much attention is focused on power quality by the electricity board, and the proposed system has a similar power-quality improvement system. Synchronizing the two will therefore not cause any problem to the power grid or to the generator. As power generation is privatized, each generating company (GENCO) maintains its own power-quality standards; otherwise it will not be entertained for grid connectivity. This type of interconnection between GENCOs is called distributed power generation.
The synchronization mechanism works by comparing two sets of voltage, frequency and phase sequence: one from the electricity board (EB) grid and the other from the local generator (SRG co-generation). In common practice the EB voltage, frequency and phase sequence are taken as the reference, and the co-generator is adjusted to match them. Current transformers are used to measure the individual current contributions of the EB and the GENCO; if either is found not to be supporting the grid, de-synchronization is performed to avoid overloading. Synchronization creates one common bus, which can then be stepped up or down according to transmission and distribution requirements, as shown in Figure 7.
Voltage is measured using a potential transformer (PT) and fed to an appropriate signal conditioning circuit; current is measured using a CT and likewise conditioned; and the phase sequence is determined using op-amp based Zero Crossing Detectors (ZCD) driven by the PT. Frequency is measured from the PT output via an appropriate Schmitt trigger and F/V converter and fed to the microcontroller. All these data are passed to the embedded controller for conversion and then to the computer for accurate comparison before energizing the switchgear, as shown in Figure 8.
Experimental Setup
Figure 10 shows the experimental setup of the proposed system with its hardware and associated software.
Results and Discussion
The real time data acquiring system using visual basic software is developed with data logger, the important parameters like generator output, inverter output, relay status, graphical representation of input/output characteristics and animated wind turbine movement is done in the software.Without input/output conditions, with software initialization the output is obtained.
Figure 12 shows the voltage output from the generator and the subsequent inverter output at no load; the graph shows the input/output characteristics together with the On Screen Data Logger (OSDL). The output relay status indicates that the 12 V battery is ready to charge, because the voltage is greater than 6 V and less than 25 V. Figure 13 shows the corresponding input/output characteristics and OSDL, with the relay status again indicating that the battery is ready to charge. Figure 14 shows the loaded case, in which the voltage exceeds 32 V; the OSDL displays the real-time values.
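The relay decision described above (charging permitted while the rectified voltage stays inside a window) can be sketched as a simple threshold check. This is an illustrative sketch only: the function name and status strings are assumptions, and only the 6 V / 25 V no-load window stated in the text is encoded.

```python
def battery_relay_status(voltage_v: float,
                         v_min: float = 6.0,
                         v_max: float = 25.0) -> str:
    """Return the charge-relay status for a 12 V battery bank.

    The 6 V lower and 25 V upper thresholds follow the no-load case
    described in the text; the loaded case (>32 V reading) would use
    different bounds.
    """
    if voltage_v < v_min:
        return "voltage too low - relay open"
    if voltage_v > v_max:
        return "overvoltage - relay open"
    return "ready to charge - relay closed"

print(battery_relay_status(18.0))  # within the 6-25 V window
```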
Conclusion
In this paper, a high-frequency-switching, grid-connected SRM generator is analyzed for real-time application with the necessary hardware and associated software. The concept is fully evaluated and found suitable for real-time applications. The real-time parameters are recorded for analysis in order to improve the efficiency of the whole system.
Figure 2 illustrates the flux linkage versus stator current for various rotor positions of the 8/6-pole SRM. The voltage of each phase is proportional to the angular velocity, and the rate of change of inductance with respect to rotor position can be understood from Equation (3). The electromagnetic torque (T_e) produced by an SRM phase is directly proportional to the rate of change of co-energy. The motor produces positive torque in the direction of increasing flux linkage and negative torque in the direction of decreasing flux linkage. Hence, it is essential to choose the proper rotor position to achieve proper control of the SRM, and a hybrid controller is needed to realize better control and speed response in all regions [7].
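For reference, the per-phase relations this paragraph summarizes can be written out. These are the standard textbook SRM equations; the paper's own Equation (3) is not reproduced in this excerpt, so the forms below are supplied from the standard model.

```latex
% Per-phase voltage equation of an SRM (resistance R, flux linkage \lambda):
v = R i + \frac{\mathrm{d}\lambda(\theta, i)}{\mathrm{d}t}
  = R i + L(\theta)\frac{\mathrm{d}i}{\mathrm{d}t}
  + i\,\omega\,\frac{\mathrm{d}L(\theta)}{\mathrm{d}\theta}

% Electromagnetic torque from the rate of change of co-energy W':
T_e = \left.\frac{\partial W'(\theta, i)}{\partial \theta}\right|_{i=\text{const}},
\qquad
W'(\theta, i) = \int_0^{i} \lambda(\theta, i')\,\mathrm{d}i'
```

The middle term of the voltage equation is the usual transformer EMF and the last term the back-EMF proportional to angular velocity, which is the proportionality the text refers to.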
Figure 3. Block diagram of the proposed system.
Figure 9 shows the flow chart for bus synchronization. The mechanism first compares the voltages V1 and V2; if they match, it compares the frequencies F1 and F2; if they match, it compares the outputs of zero-crossing detectors 1 and 2. If all three comparisons succeed, it reports that both sources are synchronized; otherwise they are not synchronized.
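The Figure 9 flow, comparing voltages, then frequencies, then zero-crossing instants before declaring the buses synchronized, can be sketched as below. The tolerance values are illustrative assumptions, not figures from the paper.

```python
def buses_synchronized(v1: float, v2: float,
                       f1: float, f2: float,
                       zcd1: float, zcd2: float,
                       v_tol: float = 2.0,      # volts (assumed tolerance)
                       f_tol: float = 0.1,      # hertz (assumed tolerance)
                       t_tol: float = 0.001) -> bool:  # seconds (assumed)
    """Follow the Figure 9 flow: compare voltages, then frequencies,
    then zero-crossing instants; all three must match (within tolerance)
    before the switchgear may be energized."""
    if abs(v1 - v2) > v_tol:
        return False
    if abs(f1 - f2) > f_tol:
        return False
    if abs(zcd1 - zcd2) > t_tol:
        return False
    return True

# Grid at 230 V / 50 Hz, SRG-COGEN output matched within tolerance:
print(buses_synchronized(230.0, 229.0, 50.0, 50.05, 0.0100, 0.0102))
```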
Figure 11. Experimental setup with the generator running, driven by an air blower.
Figure 12. Real-time analyzer without load.
Figure 13. Real-time analyzer with the output-voltage selection relay.
Figure 14. Real-time analyzer with load and output relay selection.
Soybean Oil Bleaching by Adsorption onto Bentonite/Iron Oxide Nanocomposites
The bleaching of soybean oil using commercial bentonite and bentonite/iron oxide composites has been studied. X-ray diffraction (XRD), Brunauer-Emmett-Teller (BET) surface area measurement and scanning electron microscopy (SEM) were used to characterise the composites. SEM results show that the porosity of bentonite after the alkaline ion exchange process is enhanced by the opening of the bentonite's flakes. BET analysis shows that the flake structure became more open, with the specific surface area increasing from 179.58 m² g⁻¹ for bentonite to 202 m² g⁻¹ for the 3 min ion-exchanged sample. Changes in the basal reflection of the XRD pattern confirmed the presence of iron oxide particles. The experimental results indicate that the composite prepared for 1 min bleached crude soybean oil as efficiently as the bentonite itself, while the greatest colour reduction was achieved using the composite prepared for 3 min. The highest transparency, 1.5-fold in red and 1.25-fold in yellow relative to neutralised oil, was likewise obtained with the 3 min alkaline ion exchange composite. Hence, this process yields an adsorbent with better bleaching properties than commercial bentonite.
INTRODUCTION
Among the criteria of edible oil quality, colour is the most important factor for commercial value. The colour is due to pigments present in the crude oil, such as chlorophyll-a and β-carotene. The bleaching of edible vegetable oils involves removal of a variety of impurities, including phosphatides, fatty acids, gums and trace metals, followed by decolourisation.1,2 In the refinery processing of vegetable oils, adsorbents are used to remove carotene, chlorophyll and other components formed during refining. Common adsorbents are hydrated aluminium silicates, commonly known as bleaching clays. They are purified and activated by mineral acid treatment, resulting in de-lamination of the structure and thus increasing the clay's specific surface and adsorption capacity.3,4 Among them, activated bentonite is by far the most common adsorbent for purification and colour improvement of fats and oils.1,3,5 Metal oxides have recently been applied to remove heavy metals and dyes from water and wastewater.6,7 The first and most common method for preparing these composites is the ion exchange process with heat treatment.9,11 The drawback of the ion exchange process is its lengthy, multistep preparation procedure. Recently, a number of studies have appeared in the literature on the application of alkaline ion-exchanged clays to prepare antibacterial composites.12,13 In the alkaline ion exchange method, an iron salt is mixed with bentonite and heated at around the melting point of the salt. In this study, for the first time, bentonite/iron oxide composites prepared by the alkaline ion exchange method were used for the decolourisation of soybean oil.
Materials
Bentonite clay (Ca²⁺-montmorillonite), used as a solid support for the iron oxide particles, was obtained from Kanisaz Jam Company (Rasht, Iran). Prior to the experiments, the bentonite was sieved to a particle size of roughly 38 μm.
All reagents were of analytical grade and were used as received without further refinement.
Clays/Iron Oxide Composites
Bentonite was immersed in molten salt, FeCl2·xH2O, at 100 °C for 1, 2, 3 or 5 min, using 5 g of bentonite and 5 g of FeCl2·xH2O. After ion exchange, the bentonite was washed thoroughly with distilled water and sonicated; this step removed any compounds that had not diffused into the bentonite structure. After filtration, the composites obtained were dried in an oven for 24 h at 25 °C.
A Micromeritics Brunauer-Emmett-Teller (BET) surface area and porosity analyser (Gemini 2375, Germany) was used to evaluate the products by N2 adsorption/desorption at a constant temperature of 77 K over the relative pressure range 0.05-1.00.
Bleaching of Edible Oil
The bleaching process was carried out at a constant temperature of 80 °C with a contact time of 30 min. Stirring and heating were provided by a magnetic stirrer and an electric heating band. The ratio of the mass of clay to the volume of acid solution was 1:10 (w/v). The hot oil-clay mixture was filtered under vacuum, and the colour of the bleached oil was measured spectrophotometrically.
The bleaching capacity percentage of the clays was determined from the following equation:

Bleaching capacity (%) = 100 × (A0 − A) / A0

where A0 and A are the absorbance of the neutral oil and the bleached oil, respectively, at the maximum absorbance wavelength of the neutral oil (410 nm for uptake of chlorophyll-a and 460 nm for uptake of β-carotene).
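The bleaching-capacity calculation can be checked numerically. The formula used in the sketch below, 100 × (A0 − A)/A0, is the conventional form consistent with the definitions of A0 and A in the text (an assumption, since the equation itself is not reproduced in this excerpt); the absorbance values in the usage line are invented for illustration.

```python
def bleaching_capacity(a0: float, a: float) -> float:
    """Bleaching capacity (%) from neutral-oil absorbance a0 and
    bleached-oil absorbance a at the same wavelength (410 nm for
    chlorophyll-a, 460 nm for beta-carotene).

    Assumes the conventional form 100 * (A0 - A) / A0.
    """
    if a0 <= 0:
        raise ValueError("neutral-oil absorbance must be positive")
    return 100.0 * (a0 - a) / a0

# Example (invented values): neutral oil A0 = 0.80, bleached oil A = 0.59
print(round(bleaching_capacity(0.80, 0.59), 2))  # 26.25
```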
Characterisation
The parent bentonite was white in appearance. Alkaline ion exchange with FeCl2 changed its colour, and increasing the ion exchange time at constant temperature changed it further: after 1 min the colour had altered to dark cream, after 2 min to creamy brown, and after 5 min to red. Consequently, the high temperature converted the exchanged iron ions into iron oxide particles.
It may be concluded that the colour variation is due to the oxidation state of the iron loaded onto the bentonite.
The morphology of the natural bentonite and of the composites prepared at different ion exchange times was studied. The SEM images are shown in Figure 1. In Figure 1(a), the bentonite displays a leafy-sheet surface texture with a loose and porous microstructure, a typical morphological characteristic of this material. After ion exchange, the structure of the parent bentonite shows some changes: the edges of the leafy sheets appear thicker and the flake structure is more open. In the XRD patterns (Figure 2), the characteristic montmorillonite peaks also appear in the patterns of the alkaline ion exchange composites.12,13 After the ion exchange, the original d-spacing of the montmorillonite decreased to 1.29 nm for the 3 min sample, owing to the loss of water initially present in the interlayers. Some reported data showed a higher d-spacing after ion exchange compared with bentonite.10,11 The reason may be the use of a salt solution for the ion exchange, which increases the number of ions between the layers and swells the bentonite. In this study, the d-spacing of the composite is smaller than that of the parent bentonite, for two main reasons: first, the reaction is accomplished in the solid phase, and second, the high process temperature evaporates the moisture between the layers. A comparison of the porosity of the parent bentonite and the alkaline ion exchange composites was carried out by BET analysis. Initially, the bentonite sample had a specific surface area of 179.58 m² g⁻¹; after 3 min of ion exchange this increased to 202 m² g⁻¹, owing to the diffusion of iron ions into the bentonite. Given the SEM results, the porosity of the composite can be enhanced by the opening of the bentonite's flakes.
Bleaching Efficiency
When bleaching refined soybean oil, chlorophyll reduction is the most important quality parameter. Results of the bleaching runs for soybean oil using the alkaline ion-exchanged bentonite, with commercial bentonite as a reference, are presented in Figure 3. The commercial bentonite reduced the colour of the neutralised oil by 26.2% in red and 27.1% in yellow units. The experimental results indicate that the composite prepared for 1 min bleached crude soybean oil as efficiently as the bentonite. The highest transparency, 1.5-fold in red and 1.25-fold in yellow relative to neutralised oil, was obtained with the alkaline ion exchange composite prepared for 3 min. Substantial removal of colouring substances makes the oil highly transparent. The composites clearly displayed higher bleaching efficiency than the commercial bentonite currently used by the oil industry. In brief, the iron oxide particles enhanced the removal capability towards various pigments in the crude oil; in addition, the alkaline ion exchange process increased the porosity, and hence the surface area and adsorption capacity.
CONCLUSION
The alkaline ion exchange process increased the surface area of the bentonites.
Based on the test data, the highest transparency, 1.5-fold in red and 1.25-fold in yellow relative to neutralised oil, was obtained with the alkaline ion exchange composite prepared for 3 min. Thus, bentonite/iron oxide can efficiently decolourise crude soybean oil by removing colouring agents such as β-carotene. This process yields an adsorbent with better bleaching properties than commercial bentonite.
Figure 1: Image of (a) the parent bentonite and (b) ion exchanged bentonite for 3 min.
Figure 2 illustrates the XRD patterns of pure bentonite and the composite at different times. A typical pattern was observed for the bentonite, with an intense reflection at 2θ = 6.27° corresponding to the basal spacing d001 (1.44 nm); the other reflections (2θ = 19.84°, 20.87°, 26.68°, 27.65°, 32.47°, 34.90°, 50.23° and 60.13°) correspond to montmorillonite's crystalline structure.12,13
Figure 2: The XRD pattern of (a) pure bentonite and (b) ion exchanged bentonite for 3 min.
Figure 3: Bleaching capacity using commercial bentonite and alkaline ion-exchanged bentonite prepared at different times.
Long non-coding RNA DLGAP1-AS1 facilitates tumorigenesis and epithelial–mesenchymal transition in hepatocellular carcinoma via the feedback loop of miR-26a/b-5p/IL-6/JAK2/STAT3 and Wnt/β-catenin pathway
Hepatocellular carcinoma (HCC) is one of the most common and lethal malignancies worldwide, and epithelial–mesenchymal transition (EMT) is a crucial factor affecting HCC progression and metastasis. Long noncoding RNAs (lncRNAs) have been validated to act as critical regulators of biological processes in various tumors. Herein, we attempted to elucidate the uncharacterized function and mechanism of lncRNA DLGAP1-AS1 in regulating tumorigenesis and EMT of HCC. In our study, DLGAP1-AS1 was shown to be upregulated in HCC cell lines and capable to promote HCC progression and EMT. Besides, DLGAP1-AS1 was proven to serve as a molecular sponge to sequester the HCC-inhibitory miRNAs, miR-26a-5p and miR-26b-5p, thus enhancing the level of an oncogenic cytokine IL-6, which could activate JAK2/STAT3 signaling pathway and reciprocally elevate the transcriptional activity of DLGAP1-AS1, thus forming a positive feedback loop. Moreover, we elaborated that the cancerogenic effects of DLGAP1-AS1 in HCC cells could be effectuated via activating Wnt/β-catenin pathway by positively regulating CDK8 and LRP6, downstream genes of miR-26a/b-5p. In conclusion, our results demonstrated the detailed molecular mechanism of DLGAP1-AS1 in facilitating HCC progression and EMT in vitro and in vivo, and suggested the potentiality of DLGAP1-AS1 as a therapeutic target for HCC.
Introduction
Hepatocellular carcinoma (HCC), the most prevalent (75-85%) type of liver cancer, is a severe malignancy afflicting patients all over the world 1 . HCC ranks as the sixth most common neoplasm and the third most frequent cause of cancer mortality worldwide 2 . Although much progress in surgical and medical techniques for HCC treatment has been made, the prognosis for HCC patients remains poor, with an overall 5-year survival rate of approximately 5%, largely owing to the lack of more effective therapeutic methods, delayed diagnosis, and high rates of postoperative recurrence and metastasis 3,4 . Therefore, it is of considerable importance to elucidate the molecular mechanisms underlying HCC progression in order to develop novel therapeutic strategies.
Epithelial-mesenchymal transition (EMT) is characterized as a crucial biological process by which cells lose their epithelial features and acquire properties for migration and invasion 5 . In HCC specifically, EMT has been proven crucial in determining tumor progression and metastasis, and can be accelerated by various biological factors, such as the inflammatory cytokine interleukin 6 (IL-6), JAK2/STAT3 signaling, and dysregulation of the Wnt/β-catenin pathway 6,7 . Therefore, our research principally focused on the mechanisms that trigger the EMT process of HCC cells in order to search for appropriate therapeutic approaches.
Long non-coding RNAs (lncRNAs) have been engaging great interest among scientific researchers. Basically, lncRNAs are classified as a sort of RNA transcripts containing more than 200 nucleotides in length with poor or no protein-coding ability 8,9 . Recently, it has been verified by accumulating evidence that lncRNAs play remarkable roles in regulating the multifarious processes of many diseases, including cancers such as HCC 10,11 . For instance, researchers have made numerous discoveries in recent years disclosing that various lncRNAs, such as TSLNC8, HNF1A-AS1, and PTTG3P, display aberrant expression in HCC and can act as tumor suppressors or oncogenes to regulate HCC progression and metastasis 12,13 .
In this study, we investigated the function and mechanism of the lncRNA named "discs, large (Drosophila) homolog-associated protein 1 antisense RNA 1", or DLGAP1-AS1 for short, whose involvement in HCC remains uncharacterized. The results of our study demonstrated the participation of DLGAP1-AS1 in regulating tumorigenesis and metastasis of HCC in vitro and in vivo, and suggested that DLGAP1-AS1 could be a potential target for the treatment of HCC.
Tissue specimens
A total of 60 primary HCC tissue samples and adjacent normal tissues were collected at Guangdong Provincial People's Hospital. This study was approved by the Research Ethics Committee of Guangdong Provincial People's Hospital. Written informed consent was obtained from all patients. Patients participating in this research did not receive chemotherapy or radiotherapy before surgery. The tumor samples were immediately frozen in liquid nitrogen and then kept at −80 °C.
Quantitative real-time PCR (qRT-PCR) analysis
For isolation of total RNA from cells, TRIzol reagent (Invitrogen) was employed in line with the supplier's protocol. Afterward, reverse transcription was carried out with total RNA using the Transcriptor First Strand cDNA Synthesis Kit (Roche, Mannheim, Germany). qRT-PCR was implemented with SYBR Green I Master (Roche) on the LightCycler® 480 System (Roche). Relative gene levels were normalized to GAPDH or U6, and expression fold change was calculated using the 2^−ΔΔCt method.
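The 2^−ΔΔCt fold-change computation mentioned above can be sketched as follows; the Ct values in the usage line are invented for illustration.

```python
def fold_change_ddct(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the 2^-ddCt method.

    dCt = Ct(target) - Ct(reference gene, e.g. GAPDH or U6), computed
    separately for the sample and the control condition;
    fold change = 2 ** -(dCt_sample - dCt_control).
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Invented example: the target amplifies 2 cycles earlier (relative to
# the reference gene) in the tumor sample than in the normal control,
# corresponding to a 4-fold upregulation:
print(fold_change_ddct(22.0, 18.0, 24.0, 18.0))  # 4.0
```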
Wound healing assay
Transfected HepG2 or SNU-387 cells were cultured until confluence exceeded 90%. Cell layers were then scratched using a plastic scriber, washed twice with phosphate-buffered saline (PBS; Sigma-Aldrich), and incubated for 36 h. The wound was visualized and images were taken at 0 and 36 h with an inverted microscope (Nikon, Tokyo, Japan).
RNA immunoprecipitation (RIP) assay
RIP was implemented employing a Magna RIP™ RNA-Binding Protein Immunoprecipitation Kit (Millipore). Ago2 antibody and IgG antibody were purchased from Abcam. Precipitated RNAs were subjected to qRT-PCR.
Chromatin immunoprecipitation (ChIP) assay
After sonication of chromatin into ~500 bp fragments, immunoprecipitation was carried out using anti-STAT3 antibody or anti-IgG antibody. Precipitated DNA fragments were then extracted and subjected to qRT-PCR.
TOP/FOP flash assay
HepG2 cells were treated with various transfection plasmids and TOP/FOP Flash plasmids (Upstate Biotechnology, Lake Placid, NY, USA). Relative luciferase activities were examined utilizing dual-luciferase reporter assay system (Promega).
Xenograft in vivo analysis
The animal experiments were approved by the Ethics Committee of Guangdong Provincial People's Hospital, in accordance with the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. Four-week-old BALB/c nude mice were purchased from Shanghai SLAC Laboratory Animal (Shanghai, China). Transfected HCC cells were injected subcutaneously into the mice. Each group comprised five mice. Tumor volume was measured every 4 days. After 4 weeks, the mice were sacrificed and the tumors were weighed.
In vivo metastasis assay
In vivo metastasis assay was conducted as previously described 17 . Four-week-old SCID-Beige female mice were provided by the medical college of Guangdong Provincial People's Hospital. All animal experiments were performed and finished in accordance with protocols provided by the Institutional Animal Care and Use Committee of Guangdong Provincial People's Hospital.
Enzyme-linked immunosorbent assay (ELISA)
Relative expression of IL-6 was detected with the ELISA kit (NeoBioscience, Shenzhen, China). Optical density was read with the microplate reader Victor X3 (PerkinElmer, Waltham, MA, USA) at 450 nm.
Statistical analysis
Results were acquired from assays performed in triplicate and are presented as mean ± SD. P < 0.05 was considered statistically significant. Analyses of variance were conducted via Student's t-test or one-way ANOVA. Statistical analyses were carried out using SPSS 22.0 (IBM, Armonk, NY, USA). The Pearson correlation test was applied for analyzing expression correlation.
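As an illustration of the Pearson correlation step, a minimal pure-Python stand-in for the SPSS computation might look like this (the sample expression values are invented):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length
    sequences, as used for expression-correlation analysis."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linearly related expression levels give r = 1:
print(pearson_r([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]))  # 1.0
```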
Results
Upregulation of DLGAP1-AS1 was correlated with tumorigenesis of HCC

First of all, according to the UCSC Genome Browser online database, the DLGAP1-AS1 expression level was relatively low in normal human liver tissues (Fig. 1a). Comparatively, significantly elevated DLGAP1-AS1 expression in the liver hepatocellular carcinoma (LIHC) dataset, in comparison with the normal dataset, was observed using the GEPIA online database (Fig. 1b). Subsequently, DLGAP1-AS1 expression was found to be upregulated in four individual HCC cell lines (Hep G2, SNU-182, Hep 3B, and SNU-387) in comparison with normal human liver epithelial cells THLE-3; Hep G2 showed the highest and SNU-387 the lowest level of DLGAP1-AS1 expression among these HCC cell lines, consistent with the bioinformatics analyses (Fig. 1c). Therefore, we established knockdown and overexpression models for DLGAP1-AS1 in HCC cell lines, by transfecting DLGAP1-AS1 siRNAs into Hep G2 cells and DLGAP1-AS1 overexpression plasmid into SNU-387 cells, respectively (Fig. 1d).
CCK-8 assay illustrated that the proliferation rate of Hep G2 cells transfected with DLGAP1-AS1 siRNAs was decreased compared with the si-NC group, while the proliferation rate of SNU-387 cells was increased in the pcDNA3.1/DLGAP1-AS1 group compared with the vector control (Fig. 1e). As for cell apoptosis, TUNEL assay showed that DLGAP1-AS1 knockdown enhanced the apoptotic level of Hep G2 cells, whereas DLGAP1-AS1 overexpression reduced apoptosis in SNU-387 cells (Fig. 1f). Besides, wound healing assay illustrated that DLGAP1-AS1 knockdown reduced, while DLGAP1-AS1 overexpression enhanced, the migration ability of HCC cells (Fig. 1g). In order to explore whether DLGAP1-AS1 could promote the EMT process, we measured the levels of several representative EMT markers and found that the mRNA and protein levels of the epithelial marker E-cadherin were raised by DLGAP1-AS1 knockdown and reduced by DLGAP1-AS1 overexpression, whereas the levels of the mesenchymal markers N-cadherin, Vimentin, and Twist showed the opposite tendency (Fig. 1h, i). These results suggested that DLGAP1-AS1 was closely correlated with tumorigenesis of HCC.
DLGAP1-AS1 acted as a molecular sponge for miR-26a-5p and miR-26b-5p

In order to examine the molecular mechanism of DLGAP1-AS1, we hypothesized that DLGAP1-AS1 might act as a ceRNA and resorted to the starBase v3.0 online database to search for candidate miRNAs sequestered by DLGAP1-AS1. We found 19 miRNAs that can bind with DLGAP1-AS1. RNA pull-down assay revealed that miR-26a/b-5p showed the highest enrichment on the DLGAP1-AS1-bound probe (Fig. S1a). Moreover, miR-26a-5p and miR-26b-5p, a pair of representative candidates that belong to the same miRNA family and have occasionally been investigated together, greatly attracted our research interest. More importantly, both have been frequently reported to exert significant anticancer functions on tumorigenesis and EMT in various cancers, including HCC 18 . In consequence, we chose miR-26a-5p and miR-26b-5p as our objects of investigation. The binding sites within the DLGAP1-AS1 sequence where they were predicted to be sponged are also illustrated (Fig. 2a).
(see figure on previous page) Fig. 1 Upregulation of DLGAP1-AS1 was correlated with tumorigenesis of HCC. a DLGAP1-AS1 expression in normal human tissues (n = 570) were displayed by UCSC Genome Browser. b DLGAP1-AS1 expression levels in LIHC (red; n = 369) and normal (gray; n = 160) datasets obtained from GEPIA boxplot analysis. c DLGAP1-AS1 expression levels were assessed using qRT-PCR in four HCC cell lines and normal human liver epithelial cells THLE-3. d DLGAP1-AS1 knockdown and overexpression efficiencies were evaluated using qRT-PCR. e, f CCK-8 assay and TUNEL assay assessed the influence of DLGAP1-AS1 knockdown or overexpression on proliferation or apoptosis of Hep G2 and SNU-387 cells. Scale bar = 200 μm. g Wound healing assay was performed to determine the effect of DLGAP1-AS1 on HCC cell migration. Scale bar = 200 μm. h, i EMT-related factors in Hep G2 or SNU-387 cells after DLGAP1-AS1 knockdown or overexpression were respectively detected using qRT-PCR and WB. All data are presented as the mean ± SD of three independent experiments. *p < 0.05, **p < 0.01.
IL-6 was targeted by miR-26a/b-5p and was under regulation of DLGAP1-AS1
Based on our preceding study on the interaction between DLGAP1-AS1 and miR-26a/b-5p, we proceeded to search for potential genes targeted by miR-26a/b-5p.
Using three online bioinformatics tools, we found 380 mRNAs that can be regulated by both miR-26a-5p and miR-26b-5p (Fig. S1b). Next, these candidate mRNAs were subjected to qRT-PCR analysis in response to the upregulation of miR-26a-5p or miR-26b-5p. The top five downregulated mRNAs are shown in Fig. S1c, among which IL6 was expressed lowest in cells transfected with miR-26a-5p mimics or miR-26b-5p mimics. With the aid of starBase, the mRNA of IL-6, a characteristic inflammatory cytokine closely involved in cancers, was predicted to be an appropriate target for them (Fig. 3a). IL-6 is noteworthy because it has been broadly characterized as a major cancerogenic factor contributing to malignancy, EMT and metastasis of multifarious cancers, including HCC 19 . Hence, IL-6 was selected as our subsequent study object. First, IL-6 mRNA expression was detected in HCC cell lines and normal cells, confirming its upregulation in HCC cells (Fig. 3b). Additionally, IL-6 protein level was quantified using ELISA, showing the same tendency (Fig. 3c). The influence of DLGAP1-AS1 knockdown or overexpression on IL-6 was assessed at the mRNA and protein levels, indicating that IL-6 was positively related to DLGAP1-AS1, which could sponge miR-26a/b-5p and thereby derepress IL-6 expression (Fig. 3d, e). As for the molecular mechanism, RIP assay illustrated that DLGAP1-AS1, miR-26a/b-5p, and IL-6 mRNA were enriched in anti-Ago2 groups (Fig. 3f). RNA pull-down assay verified the binding capacity of IL-6 mRNA with wild-type biotinylated probes for miR-26a/b-5p (Fig. 3g). To determine the molecular regulation between miR-26a/b-5p and IL-6 mRNA, the luciferase activity of wild-type IL-6-3′-UTR reporters was initially lowered by miR-26a/b-5p and then partially recovered by DLGAP1-AS1, whereas the luciferase activity of mutant reporters was barely affected (Fig. 3h).

Fig. 2 DLGAP1-AS1 acted as a molecular sponge for miR-26a-5p and miR-26b-5p. a The binding sites for miR-26a-5p and miR-26b-5p within the DLGAP1-AS1 sequence were predicted by starBase. b qRT-PCR evaluated miR-26a/b-5p levels in four HCC cell lines and in normal THLE-3 cells. c The effects of DLGAP1-AS1 knockdown or overexpression on miR-26a/b-5p expression were exhibited using qRT-PCR. d RIP assay was performed using the Ago2 antibody to demonstrate the enrichment of DLGAP1-AS1 and miR-26a/b-5p in HCC cells. e RNA pull-down assay was performed to detect the binding ability of DLGAP1-AS1 with miR-26a/b-5p. f Luciferase reporter assay was conducted in HEK-293T cells. All data are presented as the mean ± SD of three independent experiments. *p < 0.05, **p < 0.01, ***p < 0.001.
As for the influences on EMT-related factors, E-cadherin level enhanced by DLGAP1-AS1 knockdown was partially reduced, while N-cadherin, Vimentin and Twist levels suppressed by DLGAP1-AS1 knockdown were partially elevated (Fig. 4f, g). Similarly, we designed rescue assays in SNU-387 cells to demonstrate the DLGAP1-AS1/miR-26a/b-5p/IL6 axis. As expected, proliferation, apoptosis, migration and EMT process of SNU-387 cells that were regulated by DLGAP1-AS1 overexpression were recovered partly by overexpression of miR-26a/b-5p or silencing of IL6 (Fig. S2a-e). These results indicated that miR-26a/b-5p and IL-6 took part in the implementation of the regulatory functions of DLGAP1-AS1 in HCC cells.
IL-6 transcriptionally elevated DLGAP1-AS1 expression in HCC cells through JAK2/STAT3 signaling pathway
It is acknowledged that transcriptional activation mediated by transcription factors (TFs) or co-factors plays a critical role in the aberrant expression of cancer-related genes 20 . In order to explore the mechanism by which DLGAP1-AS1 was upregulated in HCC, we applied the online bioinformatics tools UCSC and JASPAR to examine the promoter region of the DLGAP1-AS1 gene. Consequently, 17 potential binding sites for an important human TF, signal transducer and activator of transcription 3 (STAT3), were predicted within the DLGAP1-AS1 promoter sequence (Fig. 5a). STAT3 is prominent as a typical transcriptional activator that plays a key role in many cancer types, such as HCC, by regulating the expression of important cancer-associated genes, thus arousing our interest 21 . In order to verify that DLGAP1-AS1 was transcriptionally under the regulation of STAT3, we constructed a variety of reporter plasmids containing several truncations of the potential DLGAP1-AS1 promoter region (2000 bp upstream) and performed luciferase reporter assay in Hep G2 cells. We observed that higher luciferase activity was associated with the region between −500~−1, while the whole sequence (−2000~−1) was used as a positive control. Moreover, the luciferase activity of the −250~−1 reporter, rather than the −500~−250 reporter, was elevated, suggesting that this region was most likely responsible for STAT3 interaction and transcriptional activation (Fig. 5b). Since the region had been shown in silico to contain one potential binding site (−73~−63) for STAT3, ChIP assay using STAT3 antibodies was subsequently conducted, illustrating that the fragment of the DLGAP1-AS1 promoter containing the STAT3 motif at the −73~−63 region was enriched in anti-STAT3 groups, further confirming the interaction between STAT3 and the DLGAP1-AS1 promoter at the predicted binding site (Fig. 5c).
The levels of phosphorylated STAT3 (p-STAT3, the activated form) were detected in HCC and normal cells, illustrating that STAT3 activation was promoted in HCC cells, to which the elevated DLGAP1-AS1 level could be attributed (Fig. 5d). Intriguingly, it has been elucidated that STAT3 can be activated by IL-6 through Janus kinase 2 (JAK2), and IL-6/JAK2/STAT3 has been implicated as a crucial accelerator for tumorigenesis and EMT in many cancer types, including HCC 22 . Since IL-6 had been proven to be positively regulated by DLGAP1-AS1, we wondered whether DLGAP1-AS1 could be reciprocally upregulated by IL-6 through activating JAK2 and STAT3. Therefore, we established a STAT3 overexpression model in SNU-387 cells, where DLGAP1-AS1 expression was relatively moderate (Fig. 5e).

Fig. 3 a The binding sites for miR-26a-5p and miR-26b-5p within the IL-6 3′-UTR sequence were predicted by starBase. The red nucleotides represent the mutant binding site designed for luciferase reporter assay. b qRT-PCR evaluated IL-6 mRNA levels in HCC cell lines and in normal cells. c ELISA evaluated IL-6 protein levels in HCC cell lines and in normal cells. d, e The effects of DLGAP1-AS1 knockdown in Hep G2 cells and DLGAP1-AS1 overexpression in SNU-387 cells on IL-6 mRNA and protein levels were respectively exhibited using qRT-PCR and ELISA. f RIP assay was performed using the Ago2 antibody to demonstrate the enrichment of DLGAP1-AS1, miR-26a/b-5p and IL-6 mRNA in HCC cells. g RNA pull-down assay was performed to detect the binding ability of IL-6 mRNA with miR-26a/b-5p. h Luciferase reporter assay of IL-6-3′-UTR-WT or IL-6-3′-UTR-Mut reporters elucidated the interaction between IL-6 mRNA and miR-26a/b-5p, and the competing effect of DLGAP1-AS1 to interact with miR-26a/b-5p. All data are presented as the mean ± SD of three independent experiments. **p < 0.01, ***p < 0.001.
We performed WB analysis in SNU-387 cells and found that STAT3 and p-STAT3 levels were enhanced by pcDNA3.1/STAT3, confirming the overexpression efficiency. Furthermore, phosphorylated JAK2 (p-JAK2, the activated form) and p-STAT3 were upregulated after IL-6 treatment. Nevertheless, a supplement of 0.5 μM Cucurbitacin I, a specific inhibitor of the JAK2/STAT3 pathway, notably attenuated the expression and activation of JAK2 and STAT3. These results verified that IL-6 could activate STAT3 via JAK2 (Fig. 5f).
Fig. 4 The inhibitors for miR-26a/b-5p and IL-6 treatment both rescued the anti-oncogenic effects of DLGAP1-AS1 knockdown. a qRT-PCR detected that miR-26a/b-5p levels in Hep G2 cells with DLGAP1-AS1 knockdown were downregulated by transfecting miR-26a/b-5p inhibitors. b ELISA evaluated the efficiency of IL-6 treatment in Hep G2 cells with DLGAP1-AS1 knockdown. c CCK-8 assay showed that both miR-26a/b-5p inhibitors and IL-6 rescued the inhibitory effect of DLGAP1-AS1 knockdown on cell proliferation. d Wound healing assay showed that both miR-26a/b-5p inhibitors and IL-6 rescued the inhibitory effect of DLGAP1-AS1 knockdown on cell migration. e TUNEL assay showed that both miR-26a/b-5p inhibitors and IL-6 rescued the promotional effect of DLGAP1-AS1 knockdown on cell apoptosis. f, g The influences of miR-26a/b-5p knockdown and IL-6 treatment on EMT-related factors in Hep G2 cells with DLGAP1-AS1 knockdown were respectively analyzed using qRT-PCR and WB. All data are presented as the mean ± SD of three independent experiments. *p < 0.05, **p < 0.01.
Subsequently, we performed luciferase reporter assays using DLGAP1-AS1-promoter reporters containing wild-type and mutant STAT3 motifs. STAT3 overexpression or IL-6 treatment significantly elevated the luciferase activity of wild-type reporters, and Cucurbitacin I reversed the promotional effect of IL-6 on luciferase activity. However, the luciferase activity of mutant reporters was scarcely affected. These results suggested that IL-6 could enhance the transcription of DLGAP1-AS1 by facilitating the interaction between STAT3 and the DLGAP1-AS1 promoter at the predicted motif (Fig. 5g).
f WB analysis evaluated the efficiency of STAT3 overexpression and the activating effect of IL-6 on the JAK2/STAT3 pathway in SNU-387 cells, while the JAK2/STAT3 pathway inhibitor Cucurbitacin I was applied to reverse the IL-6-induced activation of JAK2 and STAT3. g Luciferase reporter assay was performed in SNU-387 cells to verify that STAT3 interacted with the DLGAP1-AS1 promoter at the predicted binding motif. The interaction was enhanced by IL-6 treatment and repressed by Cucurbitacin I treatment. h DLGAP1-AS1 expression levels influenced by STAT3, IL-6, and Cucurbitacin I were assessed using qRT-PCR. All data are presented as the mean ± SD of three independent experiments. **p < 0.01.
CDK8 and LRP6 were targeted by miR-26a/b-5p and were under regulation of DLGAP1-AS1
Considering that IL-6 only partially rescued the effects of DLGAP1-AS1 knockdown in HCC cells, we further investigated whether other downstream targets exerted functions in DLGAP1-AS1-induced HCC cell activities. Our research continued to pursue potential downstream genes that could participate in hepatocarcinogenesis via activating the Wnt/β-catenin pathway. We first analyzed whether DLGAP1-AS1 and miR-26a/b-5p could regulate the activity of the Wnt/β-catenin pathway by directly regulating CTNNB1. Through luciferase reporter assays, we determined that DLGAP1-AS1 and miR-26a/b-5p could not directly regulate CTNNB1 (Fig. S3a-c) and thus could not activate the Wnt/β-catenin pathway through it. With the help of starBase, we discovered binding sequences between miR-26a/b-5p and the 3′-UTR regions of cyclin-dependent kinase 8 (CDK8) and low-density lipoprotein receptor-related protein 6 (LRP6) (Fig. 6a). CDK8 has been identified as a hallmark regulator that activates Wnt/β-catenin signaling through β-catenin stabilization 23 . LRP6 has been recognized as a co-receptor that facilitates Wnt/β-catenin signaling via promoting β-catenin nuclear translocation 24 . Besides, both CDK8 and LRP6 have been reported to act as oncogenes in HCC 25 . Consequently, these two genes were chosen as our study objects. The expression levels of CDK8 and LRP6 were likewise evaluated using qRT-PCR for mRNAs and WB for proteins, illustrating their increase in HCC cells compared with normal cells (Fig. 6b, c). Besides, CDK8 and LRP6 were positively regulated by DLGAP1-AS1 at the mRNA and protein levels (Fig. 6d, e). To demonstrate the molecular mechanism, the enrichment of DLGAP1-AS1, miR-26a/b-5p and the mRNAs of CDK8 and LRP6 in anti-Ago2 groups was exhibited by RIP assay, indicating the recruitment of these molecules into RISCs (Fig. 6f).
RNA pull-down assay showed that wild-type miR-26a/b-5p probes could significantly pull down the mRNAs of CDK8 or LRP6, illustrating their binding capacity (Fig. 6g). Moreover, the luciferase activity of wild-type CDK8 or LRP6 3′-UTR reporters, but not of mutant reporters, was reduced by miR-26a/b-5p and partially restored by the addition of pcDNA3.1/DLGAP1-AS1 (Fig. 6h). In conclusion, DLGAP1-AS1 could also act as a ceRNA to sponge miR-26a/b-5p and regulate CDK8 and LRP6.
DLGAP1-AS1 promotes HCC development and EMT via Wnt/β-catenin pathway activation through CDK8 and LRP6
We subsequently examined the involvement of CDK8 and LRP6 in facilitating HCC progression and EMT via activating the Wnt/β-catenin pathway. First, CDK8 or LRP6 overexpression plasmids were transfected into Hep G2 cells that had previously been transfected with DLGAP1-AS1 siRNAs. The transfection efficiency was evaluated by detecting the variation of CDK8 or LRP6 expression (Fig. 7a). Next, TOP/FOP flash assay was conducted to detect the degree of β-catenin-mediated T-cell factor/lymphoid enhancer factor (TCF/LEF) transcriptional activation. The result illustrated the inhibitory effect of DLGAP1-AS1 knockdown, and the promotional effect of CDK8 or LRP6 overexpression, on Wnt/β-catenin pathway activity (Fig. 7b). It was also shown that DLGAP1-AS1 knockdown reduced the total protein level of β-catenin, facilitated β-catenin phosphorylation (a marker for degradation), and inhibited β-catenin nuclear translocation, while CDK8 or LRP6 overexpression could partially rescue these effects (Fig. 7c). Furthermore, several typical downstream genes of the Wnt/β-catenin pathway, namely Cyclin D1, c-Myc and MMP9, were downregulated by DLGAP1-AS1 knockdown and partially upregulated by CDK8 or LRP6 overexpression (Fig. 7d). These results demonstrated that DLGAP1-AS1 could positively regulate Wnt/β-catenin pathway activity via CDK8 and LRP6. We then evaluated how the properties of DLGAP1-AS1-silenced Hep G2 cells were affected by CDK8 or LRP6 overexpression, or by treatment with 6 μM CHIR99021, a typical Wnt/β-catenin pathway activator. As a result, the cell proliferation and migration suppressed by DLGAP1-AS1 knockdown were partially restored (Fig. 7e, f), the cell apoptosis enhanced by DLGAP1-AS1 knockdown was partially attenuated (Fig. 7g), the enhanced level of E-cadherin was downregulated, and the suppressed levels of N-cadherin, Vimentin, and Twist were upregulated (Fig. 7h, i).
In summary, our results demonstrated that DLGAP1-AS1 could upregulate CDK8 and LRP6 to activate the Wnt/β-catenin pathway in HCC cells, thus promoting tumorigenesis and EMT.
DLGAP1-AS1 contributed to HCC growth and metastasis in vivo
We further investigated the contribution of DLGAP1-AS1 to HCC growth and metastasis by adopting an in vivo tumor model. After the xenograft tumor model had been established, tumors of significantly smaller size and lighter weight developed in the sh-DLGAP1-AS1 group compared with the sh-NC group, while miR-26a/b-5p suppression or IL6 overexpression rescued the inhibitory effect of DLGAP1-AS1 knockdown on tumorigenicity in vivo (Fig. S4a and Fig. 8a, b). Furthermore, the expression levels of the genes involved in our study were measured in xenograft tumor tissues using qRT-PCR, ELISA and WB, showing that the expression tendencies of these genes were consistent with those in vitro (Fig. 8c-e). Eventually, we evaluated the capacity for tumor metastasis by observing and measuring the metastatic nodules in lung tissues, illustrating that HCC lung metastasis was prominently inhibited by DLGAP1-AS1 knockdown, and that the inhibitory effect could be reversed by knockdown of miR-26a/b-5p or upregulation of IL6 (Fig. 8f). Moreover, SNU-387 cells transfected with pcDNA3.1, pcDNA3.1/DLGAP1-AS1, pcDNA3.1/DLGAP1-AS1+miR-26a/b-5p antagomir or pcDNA3.1/DLGAP1-AS1+sh-IL6 were injected into nude mice. Afterward, we observed that tumor growth and metastasis were promoted by the upregulation of DLGAP1-AS1 and inhibited after overexpression of miR-26a/b-5p or silencing of IL6 (Fig. S4b-d). In conclusion, our results validated the cancerogenic function of DLGAP1-AS1 in vivo.
Next, the clinical significance of the DLGAP1-AS1/miR-26a/b-5p/IL6 axis was analyzed in HCC patients. First, the expression of DLGAP1-AS1 was elevated in HCC samples compared to adjacent normal samples (Fig. S5a). In addition, high expression of DLGAP1-AS1 was observed in tissues collected from patients with metastasis and recurrence (Fig. S5a). After grouping the patients into high- and low-expression groups according to the mean value of DLGAP1-AS1 expression, Kaplan-Meier analysis was performed. The results showed that patients in the high-expression group had a poorer prognosis than those in the low-expression group (Fig. S5b). Furthermore, low expression of miR-26a/b-5p and high expression of IL6 were observed in HCC tissues compared with adjacent normal tissues (Fig. S5c, d). Accordingly, Pearson correlation tests showed that miR-26a/b-5p was negatively correlated with DLGAP1-AS1 or IL6 (Fig. S5e, f), whereas DLGAP1-AS1 and IL6 were positively correlated with each other (Fig. S5g).
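The correlation analyses above are standard sample Pearson tests on paired expression values. The following minimal sketch uses made-up relative-expression values for illustration only, not the patients' data; the function itself is the textbook formula:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between paired measurements."""
    mx, my = mean(x), mean(y)
    # Sample covariance (n - 1 denominator), matching statistics.stdev
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical relative-expression values for five samples (illustrative only)
dlgap1_as1 = [1.2, 2.5, 3.1, 4.0, 5.2]
mir26a_5p = [3.9, 3.0, 2.6, 1.8, 1.1]  # expected negative correlation

r = pearson_r(dlgap1_as1, mir26a_5p)
print(round(r, 3))  # → -0.998
```

A negative r, as in this toy case, corresponds to the reported inverse relationship between miR-26a/b-5p and DLGAP1-AS1 or IL6; in practice a library routine such as scipy.stats.pearsonr would also supply the p-value.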
Discussion
HCC remains a major health issue worldwide, with increasing occurrence and poor prognosis. The majority of HCC cases occur in developing countries, with China being one of the highest-risk areas worldwide 26 . EMT is a complicated biological process involving many regulatory elements and signaling pathways. Our research herein discussed a potential mechanism that could facilitate the EMT capacity of HCC cells.
Since numerous studies have reported that lncRNAs can exert their regulatory functions in HCC progression in a miRNA-dependent manner to protect target mRNAs from degradation 27,28 , we hypothesized that DLGAP1-AS1 acts as a ceRNA, and found using starBase that the DLGAP1-AS1 sequence contains binding sites for miR-26a-5p and miR-26b-5p, whose lowered expression in HCC cells and downregulation by DLGAP1-AS1 were demonstrated. RIP, RNA pull-down and luciferase reporter assays were subsequently conducted, verifying the prediction that DLGAP1-AS1 directly interacted with miR-26a/b-5p. Both miR-26a-5p and miR-26b-5p have been reported as HCC suppressors 18,29 . Moreover, miR-26a-5p and miR-26b-5p together have been reported to inhibit carcinogenesis and metastasis in many cancers, such as oral squamous cell carcinoma 30 , prostate cancer 31 and bladder cancer 32 . Therefore, the involvement of miR-26a/b-5p in DLGAP1-AS1-induced biological effects was explored.
IL-6 is an inflammatory cytokine with multiple essential physiological and pathological functions. Autocrine, paracrine or circulating IL-6 acting on cancer cells has long been regarded as a major oncogenic factor 33 . In the present study, IL-6 was proven to be targeted by miR-26a/b-5p through bioinformatics analysis and detection of molecular interaction. Besides, IL-6 was upregulated in HCC cells and under the regulation of DLGAP1-AS1. Furthermore, the influences of DLGAP1-AS1 knockdown on proliferation, migration, apoptosis and EMT-related factors were rescued by transfection of miR-26a/b-5p inhibitors or by IL-6 treatment, indicating a ceRNA network involving DLGAP1-AS1, miR-26a/b-5p and IL-6 in HCC cells.
Subsequently, we investigated the potential transcription activator responsible for DLGAP1-AS1 upregulation. With the help of bioinformatics tools, we found that a motif of STAT3, which is noteworthy as a key regulator of the transcription of various cancer-related genes 21 , existed within the DLGAP1-AS1 promoter sequence, indicating that STAT3 could be a potential TF for DLGAP1-AS1. Subsequently, luciferase reporter and ChIP assays illustrated the molecular interaction of STAT3 and the DLGAP1-AS1 promoter at the predicted binding site. The level of activated STAT3 was enhanced in HCC cell lines, consistent with that of DLGAP1-AS1. IL-6, as a cytokine, can act on cancer cells to activate the JAK-STAT pathway, thus inducing carcinogenic effects such as proliferation, apoptosis inhibition, metastasis, and angiogenesis 34 . Here, we hypothesized that IL-6 could reciprocally promote DLGAP1-AS1 transcription via activating JAK2 and STAT3. Our results elucidated that IL-6 enhanced the levels of activated JAK2 and STAT3, and both STAT3 overexpression and IL-6 treatment elevated the transcriptional activity and the expression level of DLGAP1-AS1. Additionally, these effects were reversed by addition of the JAK2/STAT3 pathway inhibitor Cucurbitacin I. A feedback loop by which DLGAP1-AS1 expression was enhanced in return was therefore identified.
(see figure on previous page) Fig. 6 CDK8 and LRP6 were targeted by miR-26a/b-5p and were under regulation of DLGAP1-AS1. a The binding sites for miR-26a-5p (left) and miR-26b-5p (right) within the 3′-UTR sequences of CDK8 (top) and LRP6 (bottom) were exhibited through starBase prediction. b qRT-PCR evaluated CDK8 and LRP6 mRNA levels in HCC cell lines and in normal cells. c WB analysis evaluated CDK8 and LRP6 protein levels in HCC cell lines and in normal cells. d, e The effects of DLGAP1-AS1 knockdown in Hep G2 cells and DLGAP1-AS1 overexpression in SNU-387 cells on CDK8 or LRP6 expression were exhibited using qRT-PCR and WB. f RIP assay was performed using the Ago2 antibody to demonstrate the enrichment of DLGAP1-AS1, miR-26a/b-5p and mRNAs of CDK8 and LRP6 in HCC cells. g RNA pull-down assay was performed to detect the binding ability of CDK8 or LRP6 mRNA with miR-26a/b-5p. h Luciferase reporter assay elucidated the interaction between CDK8 or LRP6 mRNA and miR-26a/b-5p, and the competing effect of DLGAP1-AS1 to interact with miR-26a/b-5p. All data are presented as the mean ± SD of three independent experiments. *p < 0.05, **p < 0.01, ***p < 0.001.
(see figure on previous page) Fig. 7 DLGAP1-AS1 promotes HCC development and EMT via Wnt/β-catenin pathway activation through CDK8 and LRP6. a qRT-PCR detected that CDK8 or LRP6 level in Hep G2 cells with DLGAP1-AS1 knockdown was upregulated by transfection of pcDNA3.1/CDK8 or pcDNA3.1/LRP6. b TOP/FOP flash assay was performed to verify the deactivating effect of DLGAP1-AS1 knockdown on TCF/LEF transcription, which was reactivated by CDK8 or LRP6 overexpression. c The influences of DLGAP1-AS1 knockdown and CDK8 or LRP6 overexpression on β-catenin expression, phosphorylation, and nuclear translocation were evaluated using WB analysis. d WB analysis displayed the protein levels of several typical downstream genes of the Wnt/β-catenin pathway, which were downregulated by DLGAP1-AS1 knockdown and then upregulated by transfection of pcDNA3.1/CDK8 or pcDNA3.1/LRP6. e, f CCK-8 assay and wound healing assay showed that the inhibitory effects of DLGAP1-AS1 knockdown on cell proliferation and migration were attenuated by overexpression of CDK8 and LRP6, as well as by treatment with CHIR99021. g TUNEL assay showed that the promotional effect of DLGAP1-AS1 knockdown on cell apoptosis was attenuated by CDK8, LRP6 or CHIR99021. h, i The influences of CDK8 and LRP6 overexpression and CHIR99021 treatment on EMT-related factors in Hep G2 cells with DLGAP1-AS1 knockdown were respectively analyzed using qRT-PCR and WB. All data are presented as the mean ± SD of three independent experiments. *p < 0.05, **p < 0.01.
Fig. 8 DLGAP1-AS1 contributed to HCC growth and metastasis in vivo. a Tumor volume from different treatment groups was measured after injection. b Tumor weight from different treatment groups was measured after the mice were euthanized. c-e The expression levels of several genes related to the present study from each group of xenografts were evaluated using qRT-PCR, ELISA and WB. f Representative images of HE-stained mouse lung tissues were taken to demonstrate lung metastasis of HCC xenografts. The number of metastatic nodules from each group was counted and analyzed accordingly. All data are presented as the mean ± SD of three independent experiments. *p < 0.05, **p < 0.01.
The Wnt/β-catenin pathway, also known as the canonical Wnt pathway, is a highly conserved signaling pathway whose activation is associated with multiple cancer types, including HCC 7 . Besides, many crucial genes related to cancer progression, such as Cyclin D1, c-Myc, and MMP9, are modulated by the Wnt/β-catenin pathway. Herein, our research continued to pursue potential downstream genes participating in hepatocarcinogenesis via activating the Wnt/β-catenin pathway. We found through bioinformatics analysis, RIP, RNA pull-down and luciferase reporter assays that CDK8 and LRP6, both of which have been proven to be oncogenes in HCC and able to activate the Wnt/β-catenin pathway 25 , were targeted and regulated by miR-26a/b-5p. CDK8 and LRP6 were also upregulated in HCC cells and under the regulation of DLGAP1-AS1. Furthermore, the biological effects of DLGAP1-AS1 knockdown were partially reversed by CDK8 or LRP6 overexpression, or by addition of the Wnt/β-catenin pathway activator CHIR99021, indicating a ceRNA network in which DLGAP1-AS1 sponges miR-26a/b-5p to regulate CDK8/LRP6 and activate the Wnt/β-catenin pathway. Finally, in vivo experiments on xenograft models further verified the cancerogenic effect of DLGAP1-AS1 on tumor growth and metastasis of HCC, suggesting the potential clinical value of DLGAP1-AS1.
In conclusion, the present study demonstrated that DLGAP1-AS1 facilitated HCC tumorigenesis and EMT by sponging miR-26a-5p and miR-26b-5p in vitro and in vivo. The IL-6/JAK2/STAT3 and Wnt/β-catenin pathways were also demonstrated to play important roles in mediating the oncogenic function of DLGAP1-AS1. Our results suggest the potential of DLGAP1-AS1 as a biomarker for HCC treatment and provide new insight into the molecular mechanisms associated with HCC.
Epidemic Trend and Effect of COVID-19 Transmission in India during Lockdown Phase
To evaluate the present situation concerning the epidemic trend of COVID-19 in Indian demography, the dynamics of the rise in cases have been analyzed from the perspective of different indices. The indices chosen for the analysis are the Case Recovery Rate (CRR), Case Fatality Rate (CFR), and Mortality Rate (MR). The study covers the rise of pandemic-related cases in the different demographic regions of India, along with an in-depth analysis and calculation of the indices considered. The rise in cases has also been analyzed with a view to relaxing the imposed restrictions so that the economy of the country is not adversely affected. Several preventive and control initiatives have been taken by the central and state governments in collaboration. The results of this paper can be taken as an input for deciding further policy in the fight against COVID-19.
INTRODUCTION
COVID-19 started in China; the first case was reported in the city of Wuhan. Initially, some 40 cases were found in Wuhan, in Hubei province. All of the patients were suffering from pneumonia, and some were vendors or otherwise engaged in the seafood market in Wuhan. The Chinese authorities, along with the World Health Organization (WHO), took action, and extensive investigations to determine the cause of the disease began. Etiological lab testing soon reached its outcome, and a new virus, named the novel coronavirus, was declared [1]. Fig. 1 shows the spread of the disease in Indian states and union territories. During this period, the first death due to COVID-19 was recorded in China: the Chinese authorities made a formal announcement that a man in his early sixties, whose occupation was related to the seafood market, had died of COVID-19. After that, within a short time, the disease spread across the globe at a rapid pace. Looking at the increasing number of countries with COVID-19 cases, the WHO declared COVID-19 a public health emergency on 30 January 2020 as the outbreak continued to grow. The first death outside China (of a Chinese male from Wuhan) was recorded in the Philippines on 2 February amid the growing deaths in China. The WHO coined a name for the new coronavirus disease on 11 February: COVID-19. On 11 March, the WHO declared COVID-19 a pandemic, which had by then infected about 114 countries [2].
Researchers identified the coronavirus as a class of pathogens with a great tendency to attack the human respiratory tract. Earlier coronavirus outbreaks occurred in the form of Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS). In the current context it appears as COVID-19, which is caused by the SARS-CoV-2 coronavirus and presents a major hazard to individuals. A number of patients suffering from pneumonia-like syndrome were registered in Wuhan, Hubei province, China in December 2019, and their symptoms were later attributed to the coronavirus as cases multiplied. As per the lab tests, these patients were at first supposed to have been infected by a pathogen related to animals and the seafood market at Wuhan. Ultimately, the city of Wuhan in China was identified as the epicenter of the disease called COVID-19, and fatalities increased as this deadly disease spread all over the world. By 2 January 2020, labs in China had confirmed the presence of the coronavirus in 42 patients with such symptoms. The symptoms of COVID-19 include chest pain, indigestion, coughing, breathing problems, sneezing, and respiratory sickness. It has also been found that many of these patients suffer from other conditions such as hypertension and cardiovascular diseases. As per China's National Health records, 18 deaths from COVID-19 were registered in China until 22 January 2020; within the next four days the death toll had roughly tripled, with 5502 confirmed cases. By the end of January 2020, there were 7734 cases in China and 90 in other countries including Thailand, Malaysia, Japan, India, Italy, Iran, the USA, Taiwan, Vietnam, Canada, Nepal, France, Cambodia, Germany, Korea, Singapore, Sri Lanka, the United Arab Emirates, the Philippines, Australia, and Finland.
In addition, the WHO declared an international public health emergency due to the seriousness of the disease [5].
The classification of the virus was decided by the International Committee on Taxonomy of Viruses: the virus was named Severe Acute Respiratory Syndrome coronavirus 2 (SARS-CoV-2), and the disease was designated coronavirus disease 2019 (COVID-19) by the WHO. With 118,326 active cases and 4,292 casualties across more than 114 countries, COVID-19 was declared a pandemic by the WHO on 11 March 2020. Less than a week after the pandemic declaration, the situation worsened and Italy became the second most affected country after China. Currently, about 204 economies across the globe are affected by COVID-19, which has disrupted economic growth in both advanced and underdeveloped countries. The WHO report of 14 April 2020 recorded 553,823 cases in the USA, 159,526 in Italy, 159,495 in Spain, 97,049 in France, and 125,097 in Germany. These data clearly indicate that the numbers of cases in these countries are much higher than in China (83,697), the first epicenter of the disease. The chronology of incidents that occurred during the COVID-19 outbreak across the globe is shown in Fig. 1. Susceptible groups such as the elderly, children under 5, and people with multiple chronic illnesses are at greater risk of COVID-19. China succeeded in stabilizing the situation, but at present the outbreak is worst in America, Europe, and other Asian countries [6].
According to WHO guidelines, elderly people above 60 years of age and children below 5 years are highly vulnerable to this disease. Pregnant women also belong to the population that can easily be affected. China was quick to take action against the disease and stabilize the situation, but as the situation stabilized in China, the disease spread rapidly in the rest of the world. The worst-affected parts of the world are America, Europe, and South Asian countries. Among the South Asian countries, India has been affected most by COVID-19. This might be because India is the most populous country in the world after China, and population density is also very high in Indian demography. Although the rate of spread of COVID-19 in India was not too high at the start, as the number of cases confirmed by lab testing was low, the rate of spread grew as time passed. In comparison to the rest of the world, however, the pace of the spread of the disease has been slow in India [7]. In March, the pace of the disease was very slow in India, but unfortunately the transmission rate went up with each passing day. Encouragingly, the spread of COVID-19 was strongly curbed by the measures of the Indian government, as the country went into complete lockdown. The main focus of this paper is to analyze the political, economic, and social changes and the defensive measures taken by the administration to keep people safe. This study analyzes India's fatality rate and contrasts it with the international scenario, discussing various factors that account for the dissemination of the disease and the prevention and treatment steps in use to manage the spread of COVID-19 [8].
Design
The paper aims to research the outcomes of the COVID-19 spread in terms of the number of cases reported, the changes adopted by people in their living and daily activity, and the defensive measures in use by the people.
Sample
For the collection of the sample, 5000 people belonging to different cities of Indian states and union territories were considered. Besides this, government-authorized data from different sources were also considered. The government's Aarogya Setu app was also used to validate the data on the number of confirmed cases, active cases, and confirmed deaths reported.
Instrument
A simple cumulative analysis has been used to study the data. For cross-linking, various sources were taken into consideration: district administration data were cross-checked against central government data to validate the data taken into account for the study. The outcomes were also cross-checked with the help of various government and non-government organizations for the cumulative study [9].
Analysis
The region-wise data on COVID-19 and related characteristics of the disease were compiled and analyzed with descriptive statistics. Fig. 4 shows the exponential growth of the number of COVID-19 cases in India.
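The indices introduced in the abstract reduce to simple ratios of the cumulative counts. A minimal sketch, using hypothetical counts rather than the paper's data, and assuming the usual definitions (CRR and CFR as percentages of confirmed cases, MR as deaths per 100,000 population):

```python
def case_recovery_rate(recovered, confirmed):
    """CRR: percentage of confirmed cases that have recovered."""
    return 100.0 * recovered / confirmed

def case_fatality_rate(deaths, confirmed):
    """CFR: percentage of confirmed cases that have died."""
    return 100.0 * deaths / confirmed

def mortality_rate(deaths, population, per=100_000):
    """MR: deaths per `per` people of the whole population."""
    return per * deaths / population

# Hypothetical cumulative counts for one region (illustrative only)
confirmed, recovered, deaths, population = 10_000, 6_500, 300, 20_000_000
print(round(case_recovery_rate(recovered, confirmed), 1))  # → 65.0
print(round(case_fatality_rate(deaths, confirmed), 1))     # → 3.0
print(round(mortality_rate(deaths, population), 2))        # → 1.5
```

Note that CFR divides by confirmed cases while MR divides by the whole population, which is why the two can differ by orders of magnitude for the same death count.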
Demographic Studies
The demographic profile of the people infected with COVID-19 differed from that of other countries. A detailed analysis is shown in Fig. 5, based on data gathered from 5000 people of different age groups. The figure illustrates that 59% of infected people belong to the 20-49 age group, followed by 25% in the 50-69 age group. These data reveal that most infected people belong to the working class, who have a greater tendency to leave their houses for work and interact with many other people. The data for India differ slightly from countries like China and Italy, where older persons are more prone to COVID-19; in India, only 5% of people in the older age group had been infected by this deadly disease. In the below-20 range, only 11% of people were infected. However, people in the older age group were badly affected by the disease and were frequently counted among the reported deaths [10].
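The age-group shares above are simple proportions of the sampled counts. A minimal sketch with hypothetical per-group counts out of a 5000-person sample, chosen only to reproduce the reported percentages (these are not the survey's raw counts):

```python
# Hypothetical counts per age group (illustrative only)
counts = {"<20": 550, "20-49": 2950, "50-69": 1250, "70+": 250}
total = sum(counts.values())  # 5000

# Percentage share of each group, rounded to whole percent
shares = {group: round(100 * n / total) for group, n in counts.items()}
print(shares)  # → {'<20': 11, '20-49': 59, '50-69': 25, '70+': 5}
```

Rounded shares computed this way need not sum to exactly 100 in general; here the hypothetical counts were picked so that they do.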
The average age of infected people in India was calculated as 39, which is well below that of countries like China and Italy, where the average ages were found to be 49.5 and 64, respectively. These data clearly show the difference between the infected age groups in India and other countries. Experts suggest that this difference reflects differences in median age: the median age in India is 28.4 years, while it is higher in the other countries, at 38.4 years in China and 41.9 years in Italy, as per the 2020 census report (United Nations population report). It is a well-known fact that India is currently among the youngest countries according to the available median-age data [11].
In terms of gender differences, the male population was more prone to infection than the female population: as per the available data, 76% of infected people were male and 24% female. These data diverge from those of other countries. In the early stage of the COVID-19 outbreak, the ratio of infected males to females was almost equal in China, while in South Korea 60% of the infected were female. In India, the large gender gap among COVID-19 patients may stem from the social bias of the community towards women, as the mindset of the masses creates an ample discrepancy towards their counterparts. It is possible that this difference will reduce as the number of tests conducted increases. Another reason might be that women in India travel internationally less than men [11].
Geo-Temporal Studies
The government of India issued a guideline imposing a complete lockdown in the country. The lockdown was extended through four phases, after which the unlocking of the country started in a phased manner. The government issued complete guidelines for people to follow during the lockdown phases. Social distancing emerged as the main weapon in the fight against COVID-19; a holistic approach to social distancing during the lockdown phases was the only way to win the battle against COVID-19 in India, given its huge population and population density. Some Indian states did praiseworthy work in fighting the disease, but the number of cases kept increasing tremendously in others. Maharashtra, a western state, was the worst affected, followed by Andhra Pradesh, Tamil Nadu, Karnataka, and the national capital territory of Delhi. Maharashtra ranked first in both the total number of cases and the total number of deaths [12].
Preventive with Control Measures
The WHO released guidelines on the infection prevention and control (IPC) of suspected COVID-19 patients. Health care facilities should provide clinical triage according to the IPC recommendations to identify and separate suspected patients, setting up triage stations, training health staff, and using screening questionnaires. Measures ought to include hand sanitation as well as respiratory hygiene, decontamination of patient care facilities, use of safety apparatus, environmental sanitation, and safe disposal of medical waste, to ensure the implementation of preventive measures for all persons. Family members and visitors must be strictly screened before meeting a suspected COVID-19 patient. Ideally, the patient ought to be held in a single, enclosed room ventilated at 60 L/s. In the absence of sufficient single rooms, suspected patients can be cohorted with beds placed apart. Fig. 7 shows the unlock phases in India to boost social and economic activity [13]. The main lockdown measures were [15]:
- closure of all services and shops except essential ones;
- closure of commercial and private establishments (only work from home allowed);
- suspension of all educational institutions;
- closure of all places of worship;
- suspension of all transport mechanisms;
- prohibition of all social, political, sports, entertainment, academic, cultural, and religious activities.
CONCLUSION
The current study summarizes the latest review of COVID-19 transmission in India. In the second phase of the outbreak, the disease was expected to keep spreading, though not yet at the community level. However, owing to low testing rates in India, data on the virus's transmissibility are still incompletely recorded. As the number of tests rose during the lockdown period, more cases came to light and regular infection reporting emerged. The disease began to spread across the dense population of the Dharavi area of Mumbai (India's financial capital). Although the Indian administration's extensive and effective measures reduced the rate of the COVID outbreak in India, the epidemic had not yet reached its peak, and India may face several difficulties ahead.
The COVID epidemic has become a global clinical danger and a public health emergency for the world's working population and health workers. Active treatments, cures, and vaccines are under investigation. Virus transmission keeps growing, and the number of infected patients and the death toll continue to rise every day. For now, only protective actions can be adopted to stop the spread of COVID through human-to-human transmission. From the analysis, it is clear that quarantine alone is not sufficient to control the transmission of the COVID-19 virus.
Extensive study is needed to determine the exact transmission rate of this virus, and heavy investment is required to develop specific therapies or a vaccine. The COVID-19 pandemic also demands ongoing surveillance, identification of hospitalized individuals, and forecasting of the virus's future adaptation, modes of transmission, and pathogenicity; such factors will surely influence mortality rates. The Indian Government, along with the MoHFW, has taken effective containment steps such as the Janta Curfew, timely travel advisories for foreign and domestic travelers, country-wide lockdowns, and helpful preventive guidance. Healthy and optimistic mental wellbeing, together with all these steps, will play an important role in reducing this threat worldwide. Each individual must stay attentive to the recognized symptoms of the disease and seek care in good time when they appear.
CONSENT
In accordance with international or university standards, patients' written consent has been collected and preserved by the authors.
ETHICAL APPROVAL
In accordance with international or university standards, written ethical approval has been collected and preserved by the authors.
Optical Solitons with Tilted Wavefronts
The propagation of nonresonant solitons whose phase and group wavefronts are tilted with respect to each other is studied. It is shown that the tilt of the fronts leads to a redefinition of the group velocity dispersion, introducing an additional anomalous contribution to it. As a result, temporal and spatiotemporal light solitons can be formed under normal group velocity dispersion and focusing nonlinearity, including the case where this dispersion is absent. The spatiotemporal soliton is a structure that is extended along the group fronts normally to the plane of polarization and is localized in all directions perpendicular to the direction of its extension.
INTRODUCTION
Optical solitons can be spatial or temporal. Spatial solitons are continuous light beams that are infinitely extended in the direction of propagation and limited in the transverse directions. They are formed as a result of the mutual compensation of nonlinear transverse self-focusing and diffraction divergence. Temporal solitons are short pulses that are localized in the direction of propagation and infinitely extended in the transverse directions. They are due to the mutual compensation of nonlinear self-compression and dispersion spreading. It is important that a temporal soliton in the presence of focusing nonlinearity is formed only if the group velocity dispersion (GVD) is anomalous; if the nonlinearity is defocusing, the GVD should be normal. Thus, focusing nonlinearity and diffraction are involved in the formation of spatial solitons, whereas nonlinearity and dispersion are responsible for the formation of temporal solitons.
A spatiotemporal soliton (or light bullet) is a stable energy bunch localized in all directions propagating in space. Considering the light bullet as a combination of the spatial and temporal solitons, one can conclude that its formation requires the presence of focusing nonlinearity, anomalous GVD, and diffraction in a homogeneous medium.
Laser pulses with tilted wavefronts are currently used in laboratories for various purposes [1-8]. The phase and group wavefronts of these pulses are noncollinear, tilted with respect to each other by an angle θ. The angle between the phase, v_ph, and group, v_g, velocities of the pulse is obviously the same.
The effect of diffraction is reduced to the bending of phase wavefronts. This in turn results in the transverse broadening of the pulse. Because of the tilt of phase wavefronts, the projection of diffraction broadening on the direction of the group velocity of the pulse leads to its spreading in the direction of propagation. This spreading is similar to the effect of dispersion. Thus, varying the angle between phase and group wavefronts, one can control the effective dispersion, including its sign, owing to diffraction. Can in this case diffraction replace dispersion and, thereby, serve as one of the mechanisms of formation of temporal and spatiotemporal solitons? The aim of this work is to answer this question.
EQUATION FOR THE ENVELOPE OF A NONRESONANT PULSE
Let the phase wavefronts of a pulse with the carrier frequency ω incident on an isotropic nonresonant medium propagate along the z' axis. The group velocity is directed along the z axis, which lies in the (z', x') plane of the Cartesian coordinate system and makes the angle θ with the z' axis (Fig. 1). Let the plane of polarization of the pulse be parallel to the y axis. Correspondingly, the electric field E of the pulse can be represented in the form (1), where ψ is the slowly varying envelope of the pulse field and k is the wavenumber.
Since the imaginary exponentials of the z' coordinate in Eq. (1) are rapidly oscillating, we separate the second derivative with respect to this coordinate in the wave equation (2). Here, c is the speed of light in vacuum, Δ⊥ is the Laplacian transverse with respect to the z' axis, and P is the polarization response of the medium induced by the electric field of the pulse.
Further, we conventionally represent the polarization response P as the sum of its linear and nonlinear parts, take into account perturbatively the time dispersion of its linear part, and neglect the dispersion of the nonlinear part. As a result, substituting Eq. (1) into Eq. (2) and neglecting derivatives higher than the second, we obtain Eq. (3) (see, e.g., [9]). Here, n is the refractive index at the frequency ω and β is the second-order GVD coefficient; the last term is the function describing the optical nonlinearity.
We pass to the system of z and x coordinates rotated with respect to the system of z' and x' coordinates by the angle θ using the standard transformation formulas, arriving at Eq. (4), where Δ⊥ is the Laplacian transverse to the z axis.
The right-hand side of Eq. (4) contains relatively small terms including derivatives with respect to the coordinates perpendicular to the group velocity, as well as terms small in the parameter (ωτ_p)⁻¹, where τ_p is the time duration of the pulse. Under these conditions, it is possible to use the unidirectional propagation approximation along the z axis [10], which, in particular, corresponds to treating the transverse dynamics of the pulse in the paraxial approximation. According to the left-hand side of Eq. (4), the velocity of this propagation is that given below by Eq. (6); consequently, this value can be substituted on the right-hand side of Eq. (4). As a result, we arrive at Eq. (5), with the propagation velocity defined by Eq. (6) and the effective GVD coefficient β_eff defined by Eq. (7). The second term in Eq. (7), which appears because of diffraction, makes a contribution to the second-order GVD. Since this contribution is negative, the diffraction of a pulse with tilted wavefronts promotes the formation of anomalous GVD. Below, the second term in Eq. (7) will be called the diffraction GVD. A similar situation was considered in [1,11] in application to the optical method of terahertz radiation generation.
TEMPORAL SOLITON
In the presence of Kerr nonlinearity, when the nonlinear term is proportional to the local intensity with a coefficient α determined by the third-order nonlinear optical susceptibility χ⁽³⁾, Eq. (5) takes the form of Eq. (8). Setting the transverse Laplacian to zero, we obtain the solution of Eq. (8) in the form of the one-dimensional (temporal) soliton (9), with the parameters defined by Eq. (10) and the velocity of propagation given by Eq. (6).
It is seen that the light temporal soliton (9) exists if β_eff/α < 0. The Kerr nonlinearity in most solid dielectrics is focusing (α > 0). Consequently, β_eff < 0 should hold. In particular, let β = 0; for fused silica, this equality is satisfied at ω ≈ 1.0 × 10¹⁵ s⁻¹ [12]. Then, using Eqs. (7) and (10) and the expression for α, we obtain Eq. (11). The maximum intensity of the soliton is given by Eq. (12). Here, n₂ is the nonlinear refractive index determining the additive to the linear index n; it is related to χ⁽³⁾ as cn²n₂ = 12π²χ⁽³⁾ [13].
Thus, the diffraction GVD in real experiments can promote the formation of the temporal soliton with tilted wavefronts.
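The balance underlying the temporal soliton (9), anomalous effective GVD against focusing Kerr nonlinearity, can be checked numerically. The sketch below propagates a sech pulse with a symmetric split-step Fourier scheme for the dimensionless nonlinear Schrödinger equation i u_z = −(1/2) u_tt − |u|² u; the units and discretization are illustrative choices, not the parameters of this paper.

```python
import numpy as np

# Split-step Fourier propagation of the dimensionless NLS
#   i u_z = -(1/2) u_tt - |u|^2 u,   u(0, t) = sech(t)  (fundamental soliton)
N, L, dz, steps = 1024, 40.0, 0.01, 500
t = np.linspace(-L / 2, L / 2, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

u = 1.0 / np.cosh(t)                           # initial sech envelope
half_lin = np.exp(-0.5j * w**2 * (dz / 2))     # half of the dispersion step
for _ in range(steps):
    u = np.fft.ifft(half_lin * np.fft.fft(u))  # half linear (GVD) step
    u *= np.exp(1j * np.abs(u) ** 2 * dz)      # full nonlinear (Kerr) step
    u = np.fft.ifft(half_lin * np.fft.fft(u))  # half linear (GVD) step
# After propagation to z = 5 the envelope |u| should coincide with sech(t).
```

If the sign of the linear step is flipped (normal rather than anomalous effective GVD), the same pulse spreads instead of holding its shape, which is the qualitative content of the condition β_eff/α < 0.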
The quantity ρ is obviously proportional to the local intensity of the pulse. Substituting Eq. (13) into Eq. (14) and separating the real and imaginary parts, we arrive at the system of Eqs. (15) and (16). Here, ∇⊥ is the nabla operator in the variables y and ξ, and Δ⊥ is the Laplacian in these variables.
Let the optical nonlinearity function have the form (17). Here, the coefficient α, defined above Eq. (8), is related to the Kerr nonlinearity, and the coefficient σ is determined by the fifth-order nonlinear optical susceptibility χ⁽⁵⁾.
If, as often occurs [14], the optical nonlinearity is saturated, the quintic term in Eq. (17) counteracts the Kerr self-focusing. Taking into account Eq. (17), we consider the axisymmetric solution of the system of Eqs. (15) and (16) in the cylindrical coordinates r and ζ. Assuming the form (18), where g is a positive constant, it follows from Eq. (15) that the variable ρ depends only on y and ξ.
In the case under consideration, Eq. (16) takes the form (19), with the notation defined in Eq. (20). Here, the small correction is neglected because q is small (see Eqs. (13) and (20)) in the paraxial and slowly varying envelope approximations. Equation (19) has, under appropriate conditions, localized solutions vanishing at infinity [14]. In this case, R₀ is the characteristic size of the localization region in the (y, ξ) plane. The light energy is not localized along the transverse x axis because of the tilt of wavefronts (see the second term on the left-hand side of Eq. (5)).
∂ρ/∂ζ + ∇⊥·(ρ∇⊥ϕ) = 0
Thus, the spatiotemporal soliton of Eq. (5) with tilted wavefronts has the shape of a cigar extended in the direction orthogonal to the velocity and to the plane of polarization of the soliton (Fig. 1). We now analyze the stability of this soliton. To this end, we consider the axisymmetric self-similar solution of Eq. (15) in the form of Eq. (21) [21-23]. Here, the amplitude in Eq. (21) is proportional to the intensity of the light energy on the central axis of the spatiotemporal soliton (Fig. 1) at its equilibrium radius R₀, R(ζ) is the radius of this soliton, and f(ζ) and G(r/R) are arbitrary smooth functions.
Since the solution is localized in the (y, ξ) plane, the function G(r/R) can be approximated by the Gaussian (22) [14]. Taking into account Eq. (17) and using the axial approximation [21], in which only the first two terms of the Taylor expansion in the parameter r²/R² are significant in the expression for G, substituting Eqs. (21) and (22) into Eq. (16), and equating the coefficients of r⁰ and r² on the left- and right-hand sides of Eq. (16), we obtain Eqs. (23) and (24). Equation (24) is formally similar to the equation of motion of a Newtonian particle with unit mass in an external force field with the potential energy given by Eq. (25).
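The mechanical analogy can be made concrete: treating R(ζ) as the coordinate of a unit-mass particle, the soliton radius oscillates about the equilibrium radius whenever U(R) has a local minimum. The potential below is a hypothetical stand-in with a single minimum at R₀ = 1, not the actual Eq. (25); only the qualitative bounded-oscillation behavior is the point.

```python
# Hypothetical effective potential with a single local minimum at R0 = 1
# (NOT Eq. (25)):  U(R) = 1/(2 R^2) - 1/R,  so dU/dR = -1/R^3 + 1/R^2.
def dU_dR(R):
    return -1.0 / R**3 + 1.0 / R**2

R, V, dz = 1.2, 0.0, 1e-3        # start away from equilibrium, at rest
Rs = []
for _ in range(20000):           # integrate the "time" zeta over [0, 20]
    V += 0.5 * dz * (-dU_dR(R))  # velocity-Verlet step for R'' = -dU/dR
    R += dz * V
    V += 0.5 * dz * (-dU_dR(R))
    Rs.append(R)
# R(zeta) stays bounded, oscillating about the minimum of U at R0 = 1.
```

With only a focusing Kerr term (no minimum in U), the same integration would run away toward R → 0 or R → ∞, which is the collapse-or-spreading alternative described in the text.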
The stable spatiotemporal soliton should correspond to a local minimum of the function U(R). According to Eq. (25), this minimum is absent in the presence of only the focusing Kerr nonlinearity (σ = 0). Consequently, in this case, as in the absence of tilted wavefronts [14], the spatiotemporal soliton cannot be formed: for σ = 0, the pulse is either self-focused or defocused, depending on its initial parameters. Thus, the conditions presented above are in satisfactory agreement with the approximations used.
CONCLUSIONS
To summarize, this study has shown that the tilt of the wavefronts of an optical pulse can significantly affect the formation and propagation of temporal and spatiotemporal solitons. The diffraction broadening of the pulse along the phase wavefronts, projected on the direction of the group velocity, looks like dispersion spreading. This formally leads to the redefinition of the second-order GVD coefficient in the form of Eq. (7). Thus, the group velocity dispersion can be controlled by varying the tilt angle of the wavefronts. It is important that the additive caused by the diffraction GVD is negative; as a result, the effective GVD can become anomalous. This in turn promotes the formation of solitons in media with focusing Kerr nonlinearity at β > 0, where temporal and spatiotemporal solitons with untilted wavefronts cannot be formed. Such a situation occurs, e.g., for laser pulses of the visible spectral range propagating in fused silica. In these cases, selecting the tilt angle such that β_eff < 0, one can create favorable conditions for soliton propagation regimes. The possibility of the soliton regime at β = 0 can be considered as a particular case. Here, we have discussed dissipationless solitons with tilted wavefronts.
Owing to the tilt of wavefronts, spatiotemporal solitons have the shape of a cigar strongly extended along group fronts perpendicular to the plane of polarization. At the same time, they are localized in all directions perpendicular to the direction of extension of the soliton.
Here, the stability of spatiotemporal solitons with respect to azimuthal perturbations [24] and to bending perturbations corresponding to bending of group wavefronts [25][26][27][28] is not studied and will be considered elsewhere.
In this work, the possibility of formation of nonresonant quasimonochromatic solitons with tilted wavefronts has been analyzed. A similar study of resonant solitons in the self-induced transparency regime, in particular, the study of the possibility of formation of resonant light bullets with tilted wavefronts, is also of interest.
For optical solitons with a short time duration, it is necessary to take into account higher-order GVD. In these cases, the tilt of wavefronts should be of significant importance, in particular, for few-cycle light pulses.
Extreme ultraviolet vector beams driven by infrared lasers
CARLOS HERNÁNDEZ-GARCÍA,* ALEX TURPIN, JULIO SAN ROMÁN, ANTONIO PICÓN, ROKAS DREVINSKAS, AUSRA CERKAUSKAITE, PETER G. KAZANSKY, CHARLES G. DURFEE, AND ÍÑIGO J. SOLA Grupo de Investigación en Aplicaciones del Láser y Fotónica, Departamento de Física Aplicada, University of Salamanca, E-37008 Salamanca, Spain Universitat Autònoma de Barcelona, Cerdanyola del Vallès, E-08193 Barcelona, Spain Center of Advanced European Studies and Research, 53175 Bonn, Germany Optoelectronics Research Centre, University of Southampton, UK Department of Physics, Colorado School of Mines, Golden, Colorado 80401, USA *Corresponding author<EMAIL_ADDRESS>
INTRODUCTION
The state of polarization of light is often considered as a property independent of the spatio-temporal beam distribution. Using this approach, a light field can be described as Ẽ(r, t) = E(r, t) ẽ₀, where ẽ₀ is the light's state of polarization. In this case, ẽ₀ is uniform along the whole light beam, which is the common case for beams with a linear, elliptical, or circular state of polarization. However, there are scenarios where the polarization state varies from point to point of the light beam, i.e., ẽ₀ = ẽ₀(r, t). Light beams with spatially variant polarization, ẽ₀ = ẽ₀(r), are known as vector beams. In recent years, there has been an increased interest in the generation of vector beams due to the novel effects they present and their particular interaction with matter, making them essential tools in different areas of science and technology [1]. Light beams with radial and azimuthal polarizations are the paradigm of vector beams. On one hand, radial vector beams are especially interesting due to the non-vanishing longitudinal electric field component present in tightly focusing systems, which allows one to sharply focus light below the diffraction limit [2,3]. This property has been greatly significant in fields such as laser machining [4-7], optimal plasmonic focusing [8], particle acceleration [9,10], and molecular orientation determination [2,11]. Radially polarized vector beams have also been shown to be relevant for the nanolocalization of dielectric particles [12], and the control of the radiation of relativistic electrons [13]. On the other hand, azimuthal vector beams can induce longitudinal magnetic fields with potential applications in spectroscopy and microscopy [14]. In addition, we note that vector beams present other interesting applications in the generation of quantum memories with multiple degrees of freedom [15], enhanced optical trapping [16], and polarization-dependent measurements in atomic systems [17].
They have also been used in fundamental science to demonstrate an optical analog to the spin Hall effect [18], to extend the concept of Pancharatnam-Berry phase [19], to observe an optical Möbius strip [20], and in the entanglement of complex modes both in the quantum [21] and classical [22,23] regimes.
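The two paradigm polarization textures can be written down explicitly: at each point of the beam cross section the unit polarization vector is ê_r = (cos φ, sin φ) for a radial beam and ê_φ = (−sin φ, cos φ) for an azimuthal one, with φ the azimuthal angle. A minimal sketch:

```python
import numpy as np

def polarization(x, y, kind="radial"):
    # Local unit polarization vector of a cylindrical vector beam at (x, y)
    phi = np.arctan2(y, x)
    if kind == "radial":
        return np.array([np.cos(phi), np.sin(phi)])
    return np.array([-np.sin(phi), np.cos(phi)])  # azimuthal
```

At every point the two textures are mutually orthogonal, and both are singular on the beam axis, where φ is undefined; this polarization singularity is the origin of the on-axis intensity null of cylindrical vector beams.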
Over the last two decades, high-order harmonic generation (HHG) has been demonstrated as a unique mechanism for the generation of coherent EUV and soft x-ray radiation, in the form of attosecond bursts [24,25]. The underlying physics at the microscopic level can be simply understood by the so-called threestep model [26,27]: An electron is tunnel ionized from an atom or molecule by an intense linearly polarized laser field, then accelerated, and finally driven back to its parent ion, releasing all the energy acquired during the process in the form of high-order harmonics upon recombination, extending from the EUV to the soft x-ray regimes [28]. From the macroscopic point of view of the HHG process, an infrared laser beam is focused into a gas target, and, if efficient phase-matching conditions are met [29], an EUV/ x-ray beam is emitted.
There are different techniques to produce vector beams in the infrared-visible regime. Vector beams presenting cylindrical symmetry, i.e., cylindrical vector beams, have been demonstrated by the coherent addition of two orthogonally polarized Hermite-Gauss beams [1,30], by means of both uniaxial [5,31] and biaxial [32] crystals, using circular multilayer polarizing grating end mirrors [33], with azimuthally dependent half-waveplates (s-waveplates) [34], by combining two spatial light modulators [35], with optical fibers [36], by means of electrically tuned q-plates [37], and with a glass cone [38]. Non-cylindrically symmetric vector beams have been reported using c-cut uniaxial crystals [39], conical refraction in biaxial crystals [40], with q-plates [41], and by transforming a Laguerre-Gauss beam with a half-waveplate and a π cylindrical lens mode converter [19]. However, the spectral limitations of these generation techniques based on linear optics prevent the efficient generation of vector beams in the extreme-ultraviolet (EUV) and x-ray regimes, which would further extend the applications mentioned in the previous paragraph down to the nanometric scale. One of the most valuable aspects of HHG is that the properties of the EUV/x-ray harmonics can be controlled through proper modifications of the driving beam, thus avoiding the use of inefficient optical devices in the EUV/x-ray regime to control the beam properties. For instance, not only the spatio-temporal properties of the driving field are imprinted on the harmonic beam, but also the mechanical properties involving orbital angular momentum (OAM) and/or spin (polarization). In particular, EUV beams with a spatial phase twist have recently been generated through OAM conservation [42-44], enabling the synthesis of attosecond helical beams [42,45,46].
Regarding the polarization state of the harmonics, although the HHG conversion efficiency drops quickly with the increase of the ellipticity of the driving field [47], different techniques have been recently developed to generate elliptically and circularly polarized harmonics through spin angular momentum conservation [48][49][50][51][52]. As a result, attosecond pulses from elliptical [53,54] to purely circular polarization [55] are predicted to be produced.
In this work, we overcome the existing limitations for the generation of EUV/x-ray vector beams by transferring the complex structure of the infrared vector beam through high-order harmonic generation. We use an s-waveplate to generate infrared driving vector beams (from radial to azimuthal) that are upconverted to shorter wavelength radiation. Our numerical simulations are in excellent agreement with the experimental results, which allows us to predict that harmonic vector beams can be synthesized into attosecond vector beams.
The driving beam, a linearly polarized femtosecond infrared pulse, is converted into a radial/azimuthal vector beam using an s-waveplate [34]. The s-waveplate is a super-structured space-variant waveplate that converts linear to radial or azimuthal polarization, depending on the polarization angle of the incident beam (see Figs. 1(a) and 1(b), respectively). To characterize the resulting vector beam, we place a half-waveplate before the s-waveplate to control the input beam polarization direction, and a vertical linear polarizer after the s-waveplate to analyze the generated beam. In Fig. 1(c) we plot the measured spatial intensity distribution observed after the analyzer for different angles (α) of the half-waveplate. We observe that for α = 0° we obtain a radial vector beam distribution, while for α = 45° we obtain an azimuthal vector beam. Once the IR vector beam is properly selected, we generate high-order harmonics by focusing the beam into an argon gas jet, as sketched in Figs. 1(a) and 1(b). The resulting harmonic vector beam is detected in the far field.

Fig. 1. Scheme for the generation of (a) EUV radial and (b) azimuthal vector beams. A vertically, or horizontally, linearly polarized IR beam is converted into a radial, or azimuthal, IR beam by an s-waveplate, respectively. The resulting vector beam is focused into a gas jet, where each atom interacts with the local IR field, emitting linearly polarized harmonics in the EUV/x-ray regime. Upon propagation, the far-field high-order harmonics are emitted in the form of (a) radial or (b) azimuthal vector beams. (c) Spatial intensity distribution of the IR vector beam generated in the lab with the s-waveplate, after passing through a vertical linear polarizer (the analyzer). With the help of a half-waveplate (axis angle α) placed before the s-waveplate, the input IR linear polarization is varied from vertical (α = 0°) to horizontal (α = 45°).
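The half-waveplate/s-waveplate/analyzer sequence can be sketched with Jones matrices. The s-waveplate is modeled here as a space-variant half-waveplate; the local fast-axis law φ/2 + π/4 is an assumed convention chosen so that a vertical input maps to a radial beam, as in Fig. 1(a) — real devices may differ by a fixed axis offset.

```python
import numpy as np

def hwp(theta):
    # Jones matrix of a half-waveplate with its fast axis at angle theta
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

def analyzer_intensity(phi, alpha):
    # Vertical input -> half-waveplate at alpha -> s-waveplate -> vertical analyzer
    e_in = hwp(alpha) @ np.array([0.0, 1.0])
    e_out = hwp(phi / 2 + np.pi / 4) @ e_in   # s-waveplate (assumed axis law)
    return float(abs(e_out[1]) ** 2)          # intensity after vertical analyzer
```

The transmitted intensity behaves as sin²(φ − 2α), reproducing the two-lobe patterns of Fig. 1(c) and the π/2 periodicity in α exploited in the measurements.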
A. Theoretical and Experimental Generation of Harmonic Vector Beams
To study the generation of EUV/x-ray vector beams through HHG, we first perform numerical simulations, including propagation through the electromagnetic field propagator [56] (see Section 1 of Supplement 1). In our simulations, we consider the infrared beam as a Laguerre-Gaussian mode (LG_{1,0}) without the azimuthal phase (see Section 1 of Supplement 1) with varying spatial polarization. In particular, we have considered radial and azimuthal vector beams with a beam waist w₀ = 30 μm. The argon gas jet is modeled by a Gaussian distribution along the y and z dimensions, whose full width at half-maximum (FWHM) is 500 μm, and possesses a constant profile along its axial dimension, x, with a peak density of 10¹⁷ atoms/cm³. For the simulations presented below, the laser pulse envelope is assumed to be a sine-squared function of 5.8 cycles (15.2 fs) FWHM, whose amplitude (E₀) is chosen to give a maximum peak intensity at focus of 1.6 × 10¹⁴ W/cm² at a wavelength of λ = 790 nm. Longer pulses that are closer to our experimental driver were not implemented due to the high computational time required, and they would not have modified the main results presented in this work.
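As a consistency check on the numbers quoted for the simulations, the qth harmonic of a 790 nm driver has wavelength 790/q nm and photon energy hc/λ. A quick computation using only fundamental constants and the stated 790 nm central wavelength:

```python
H = 6.62607015e-34       # Planck constant, J s
C = 2.99792458e8         # speed of light, m/s
EV = 1.602176634e-19     # joules per electronvolt
LAMBDA_IR = 790e-9       # driver central wavelength, m

def harmonic_nm(q):
    # Wavelength of the qth harmonic in nanometers
    return LAMBDA_IR / q * 1e9

def harmonic_eV(q):
    # Photon energy of the qth harmonic in electronvolts
    return H * C * q / LAMBDA_IR / EV
```

This reproduces the 17th harmonic at ≈46.5 nm (26.7 eV) and the 23rd at ≈34.3 nm (36.1 eV), as used in Fig. 2.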
In Fig. 2 we present the simulated angular intensity profiles of the 17th harmonic (46.5 nm, 26.7 eV; first and third rows) and the 23rd harmonic (34.3 nm, 36.1 eV; second and fourth rows), driven by an IR radial, (a) and (b), and azimuthal, (c) and (d), vector beam. For each harmonic, we show the intensity distribution projected onto the vertical and horizontal polarizations, and the sum of both components. As can be appreciated, the state of polarization of each harmonic is that of the driving beam, i.e., both HHG and harmonic phase-matching preserve the generation of radial and azimuthal vector EUV beams. With respect to the beam profile, one can clearly appreciate that the on-axis nodal point of the fundamental beam is preserved in the far-field emission of the harmonics. Recently, it was shown that the far-field emission of harmonics generated with an annular, linearly polarized beam carrying OAM also leads to an on-axis dark point, due to the phase singularity present in light beams carrying integer OAM [42-46,57]. In this work the annular beam does not carry OAM but is radially or azimuthally polarized, i.e., it possesses a polarization singularity that manifests as the on-axis nodal point both in the fundamental beam and in the high-order harmonics. The output of the s-waveplate may be considered to be a superposition of radial modes, all of which share the polarization state that forces the on-axis singularity.

Fig. 3. (a) A Gaussian linearly polarized IR beam is converted into a vector beam after passing through a half-waveplate and the s-waveplate. High-order harmonics are generated after focusing the beam into an Ar gas jet. The spectrometer input slit is placed at three different spatial positions of the harmonic beam, as indicated in the inset (note that the inset background is taken from Fig. 2). An aluminum filter is used to remove the IR beam, and the harmonics are separated by means of a diffraction grating. The HHG spectra recorded at (b) slit position 1 and (c) slit position 2 are shown as a function of the half-waveplate axis angle (α). The π/2 periodicity allows us to identify the cases where the vector beam is radially or azimuthally polarized. In plot (d) we show the 19th harmonic signal as a function of α for the three slit positions selected. The yield at each position is normalized separately.
In Fig. 3 we present the experimental results. The setup, which is detailed in Section 2 of Supplement 1, is shown in Fig. 3(a). The laser system (Femtopower HE PRO CEP) delivers linearly polarized 25 fs pulses at a central wavelength of 790 nm, operating at a 1 kHz repetition rate with 0.85 mJ/pulse. A half-waveplate placed before the s-waveplate allows us to select the input polarization direction, and thus the polarization distribution of the IR vector beam that is focused (focal length 30 cm) into an argon gas jet to drive harmonics. An iris was placed before the half-waveplate to optimize the harmonic phase-matching conditions [58]. The harmonic radiation enters a Rowland-circle-type spectrometer through a thin slit. In this work, we have horizontally displaced the EUV beam through the entrance slit to characterize different parts of the EUV beam, as depicted in the inset of Fig. 3(a). The diffraction grating in the spectrometer acts as an EUV polarizer, allowing us to characterize the harmonic polarization. To this end, we first characterized the spectrometer response to different linear polarization orientations by using the half-waveplate before the lens (without the s-waveplate), rotated at different angles, obtaining a maximum signal for vertical polarization (s-polarization) and a minimum for horizontal (p-polarization), as depicted in Fig. S1 in Supplement 1. Although other more sophisticated EUV polarizers could be used [59], the different efficiency of the two polarization directions already allows us to characterize the EUV vector beams.
In Figs. 3(b) and 3(c) we present the harmonic intensity measured as a function of the half-waveplate axis angle (α), for the radiation entering through the slit position 1 (at the left edge of the beam) and slit position 2 (at the center of the beam), respectively. By rotating the half-waveplate axis, we generate all kinds of vector beams from radial to azimuthal. It can be observed that when α is 0, 0.5π, π, and 1.5π, the HHG signal presents a maximum at slit position 1 (b), while a minimum at slit position 2 (c), indicating that the EUV presents azimuthal polarization. On the other hand, the behavior is reversed at α equal to −0.25π, 0.25π, 0.75π, and 1.25π, indicating the generation of an EUV radial beam.
The 19th harmonic signal as a function of the half-waveplate axis angle (α) is plotted in panel (d) at the three slit positions indicated in panel (a): pink plus signs for the left part of the beam (slit position 1), blue points for the central part (slit position 2), and purple crosses for the right part (slit position 3). We observe that the angle dependence for slit positions 1 and 3 is very similar, and completely out of phase with slit position 2, thus demonstrating that the 19th harmonic beam exhibits all polarization distributions from radial to azimuthal. The experimental results are in excellent agreement with the theoretical simulations, showing that not only radial and azimuthal EUV beams are generated, but also all the intermediate polarization states between these two extreme cases.
One of the most interesting properties of radially polarized beams is that they present a strong longitudinal electric field when tightly focused. In our HHG experiment, we have selected a loose focusing geometry (focal length of 30 cm, giving a maximum numerical aperture of NA = 0.04) for which the longitudinal electric field is negligible [1,60]. Note that harmonic phase-matching conditions would be modified in tighter focusing geometries where the driving field presents a longitudinal component, leading to modifications in the far-field harmonic profile [61].
B. Synthesis of Attosecond Vector Beams
High-order harmonics are naturally emitted in the form of attosecond pulses. Our simulations show that harmonic vector beams are also emitted as attosecond vector beams (Fig. 4). For this to happen, it is essential that several harmonics overlap spatially. In Fig. 4(a) we show the far-field divergence of the several x-polarized harmonics (from the 17th to the 23rd) when driven by an IR radial vector beam. We observe that there is a range of divergence angles in which these harmonics overlap with comparable intensity. Hence, our numerical simulations show that: Research Article (i) The polarization structure of different harmonics is the same for all the harmonic orders, and (ii) There is a wide range of observation angles where different high-order harmonics overlap. These two results allow us to coherently sum several harmonics to synthesize attosecond pulses. In Fig. 4(b) we show the isosurface attosecond radial vector beam obtained from the Fourier transform of the coherent integration of the high-order harmonics emitted in an argon gas jet (same parameters as in Fig. 2). The higher-order harmonics above the 11th have been selected by considering the transmission through an aluminum filter. The color scale shows the direction of the polarization at each spatial position of the beam. As it can be observed, each attosecond pulse within the train exhibits a radially polarized spatial distribution, mimicking that of the IR fundamental beam. In Fig. 4(c) we show the x-polarized attosecond pulse train (top) and time-frequency analysis (bottom) detected at y 0; x 0.5 mrad. The timefrequency analysis shows the temporal intervals when the highorder harmonics are emitted. The positive slope indicates that the so-called short trajectories are phase-matched, imprinting a positive chirp in the attosecond pulses. The positive chirp, together with the different divergence of the harmonics [shown in Fig. 
4(a)], explains the conical shape of the attosecond radial vector beam presented in Fig. 4(b). Although we have only shown the attosecond vector beam with radial polarization here, attosecond beams with polarization ranging from radial to azimuthal can be obtained through adequate synthesis of the harmonic vector beams presented in Section 2.A.
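As a back-of-the-envelope illustration (not the paper's actual quantum simulation), coherently summing the odd-harmonic comb transmitted by the aluminum filter, with equal amplitudes and flat spectral phase assumed, already reproduces the attosecond-train structure described above:

```python
import numpy as np

# Sketch under stated assumptions: odd harmonics 11-23 of an 800 nm driver
# (the window transmitted by an Al filter), equal amplitudes, flat phase.
T = 800e-9 / 2.998e8                      # IR optical period, ~2.67 fs
omega = 2 * np.pi / T
t = np.linspace(0.0, T, 20_000, endpoint=False)

orders = np.arange(11, 24, 2)             # odd harmonic orders 11, 13, ..., 23
A = sum(np.exp(-1j * q * omega * t) for q in orders)
I = np.abs(A) ** 2                        # intensity envelope of the train

dt = t[1] - t[0]
fwhm = (I > I.max() / 2).sum() * dt / 2   # two identical bursts per IR period
print(f"burst FWHM ~ {fwhm * 1e18:.0f} as")  # ≈ 170 as
```

Because odd-harmonic combs repeat every half driver cycle, two transform-limited bursts appear per IR period; the phase-matched positive chirp discussed in the text would stretch these bursts and, together with the order-dependent divergence, tilt the train into the conical shape described above.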
DISCUSSION
We have demonstrated, both experimentally and theoretically, the generation of coherent vector beams in the EUV regime via HHG. To do so, a fs radially polarized vector beam obtained with an s-waveplate in the infrared domain has been focused into an argon gas target at standard experimental conditions. The atoms within the target emit EUV radiation coherently provided that efficient phase-matching conditions are met. In this physical scenario, we have shown that the non-homogeneous polarization of the fundamental beam is transferred to the HHG beam, i.e., vector beams of both radial and azimuthal polarization in the EUV domain are obtained. We recall that HHG offers a unique opportunity for the generation of vector beams at very short wavelengths, unreachable with other frequency up-conversion techniques, such as second- and third-harmonic generation in crystals, since in those techniques the state of polarization of the fundamental beam is destroyed during the nonlinear process. Note that higher photon energies, up to the soft x-ray regime, could be obtained if longer (mid-infrared [28]) or shorter (ultraviolet [62]) driving wavelengths were used. We do not expect the fundamental physics presented here to be different in those two scenarios.
We have demonstrated theoretically that EUV attosecond vector beams can be produced by means of HHG. Note that although we have reported attosecond vector beams where the spatial polarization distribution is maintained from pulse to pulse, gating schemes [63] could be applied to harness the time-dependent polarization profile of these beams, and thus to produce vector beams that vary from radial to azimuthal polarization in the time domain. Similar schemes could be used to isolate a single attosecond pulse with the desired polarization distribution.
We also note that radially and azimuthally polarized beams are the natural (albeit nearly degenerate) modes of cylindrical waveguides [64]. By coupling the vector beams into capillary waveguides, phase matching and an increased interaction length should allow for the generation of more harmonic flux than obtained in the present experiment. Moreover, still higher yield is anticipated considering that the peak intensity lies away from the optical axis. The harmonic amplitude is known to scale with approximately the 4th power of the fundamental amplitude for the high-order harmonics generated in argon (see Supplemental Material at [44]). A simple integration of the LG0,0 and LG1,0 mode profiles with the same peak intensity, weighted by the 4th-power harmonic intensity yield, shows that the effective source volume is more than 5× greater for the higher mode, in agreement with our quantum HHG simulations (see Section 3 of Supplement 1). Thus, we anticipate that gas-filled capillary waveguides, which were used to efficiently phase-match soft x-ray harmonics [28], are the perfect candidates for generating soft x-ray vector beams through HHG.
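The mode-integration argument can be checked numerically. This sketch assumes the standard Laguerre-Gauss intensity profiles with a common waist w, normalizes both to the same peak intensity, and applies the ~I⁴ yield scaling quoted above:

```python
import numpy as np

# Radial grid in units of the beam waist w (cylindrical symmetry assumed).
r = np.linspace(0.0, 6.0, 200_000)
dr = r[1] - r[0]

I_gauss = np.exp(-2.0 * r**2)                       # LG_00 (Gaussian), peak 1 at r = 0
I_donut = np.e * 2.0 * r**2 * np.exp(-2.0 * r**2)   # LG_10 (donut), peak 1 at r = 1/sqrt(2)

def effective_area(I):
    # Area integral of I^4 (the ~4th-power yield scaling) with the 2*pi*r measure.
    return np.sum(I**4 * 2.0 * np.pi * r) * dr

ratio = effective_area(I_donut) / effective_area(I_gauss)
print(f"effective source ratio (LG_10 / LG_00) ~ {ratio:.2f}")  # ~5.1
```

The donut mode's larger effective source area (a factor of about 5.1 here) is consistent with the "more than 5×" figure quoted from the quantum simulations.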
To our knowledge, this is the first time that vector beams in the EUV regime have been produced in a tabletop setup and, most importantly, in the form of attosecond-to-femtosecond pulses that are perfectly synchronized with the driving laser. Vector beams already offer many applications in the optical domain, where, in particular, radial vector beams exhibit the sharpest possible focus. This, combined with state-of-the-art EUV/x-ray focusing techniques already allowing for focal spots as small as 15 nm [65], will bring the application of vector beams to the nanometric scale, especially in areas such as ultrafast diffraction imaging [25,66] and EUV lithography [67,68].
On the other hand, the longitudinal magnetic field created at the center of tightly focused azimuthal vector beams presents promising applications in magnetic spectroscopy and microscopy [14]. The generation of ultrashort EUV/x-ray azimuthal vector beams provides a revolutionary tool for nanomagnetics, due to their potential to generate ultrafast electronic currents at the nanoscale. Ultrafast charge currents can induce magnetic fields that steer the properties of magnetic nanoparticles [69]. We envision a unique opportunity to tailor magnetic domains on femtosecond-to-attosecond timescales using EUV/x-ray vector beams.
HSF1 transcriptional activity mediates alcohol induction of Vamp2 expression and GABA release
Many central synapses are highly sensitive to alcohol, and it is now accepted that short-term alterations in synaptic function may lead to longer-term changes in circuit function. The regulation of postsynaptic receptors by alcohol has been well studied, but the mechanisms underlying the effects of alcohol on the presynaptic terminal are relatively unexplored. To identify a pathway by which alcohol regulates neurotransmitter release, we recently investigated the mechanism by which ethanol induces Vamp2, but not Vamp1, in mouse primary cortical cultures. These two genes encode isoforms of synaptobrevin, a vesicular soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) protein required for synaptic vesicle fusion. We found that alcohol activates the transcription factor heat shock factor 1 (HSF1) to induce Vamp2 expression, while Vamp1 mRNA levels remain unaffected. As the Vamp2 gene encodes a SNARE protein, we then investigated whether ethanol exposure and HSF1 transcriptional activity alter neurotransmitter release using electrophysiology. We found that alcohol increased the frequency of γ-aminobutyric acid (GABA)-mediated miniature IPSCs via HSF1, but had no effect on mEPSCs. Overall, these data indicate that alcohol induces HSF1 transcriptional activity to trigger a specific coordinated adaptation in GABAergic presynaptic terminals. This mechanism could explain some of the changes in synaptic function that occur soon after alcohol exposure, and may underlie some of the more enduring effects of chronic alcohol intake on local circuit function.
INTRODUCTION
Alcohol abuse and dependence constitute a major global health problem, but little is understood about the neuroadaptations that underlie the development of this disease. Considerable evidence suggests that transient molecular changes can occur during a single alcohol exposure, and that these can persist over time, as individual neurons respond to each and every alcohol exposure in a systematic and coordinated manner (Nestler, 2001;Koob, 2006). In particular, many central synapses are highly responsive to alcohol, and alterations in synaptic function may lead to long-lasting changes in local circuitry.
Only recently have researchers begun to investigate the effects of acute and chronic ethanol treatment on neurotransmitter release (Criswell and Breese, 2005;Siggins et al., 2005;Weiner and Valenzuela, 2006). Acute application of ethanol increases γ-aminobutyric acid (GABA) release in the central amygdala (CeA; Roberto et al., 2003), cerebellum (Carta et al., 2004) and ventral tegmental area (VTA; Theile et al., 2008), as revealed by increased miniature inhibitory postsynaptic current (mIPSC) frequency and paired-pulse depression. In addition, mIPSC frequency is increased in the VTA of mice administered a single ethanol dose one day prior to recording (Melis et al., 2002) and in the CeA of chronically ethanol-treated rats (Roberto et al., 2004). Despite these findings that alcohol increases GABA release, the effects of alcohol on the synaptic vesicle fusion machinery are not well understood.
Soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) proteins play a critical role in neurotransmitter release. During synaptic vesicle fusion, synaptotagmin 1 binds to the vesicular SNARE (v-SNARE) synaptobrevin/vesicle-associated membrane protein (VAMP) and plasma membrane phospholipids (Martens et al., 2007). This pulls the two membranes into closer proximity and promotes zippering of synaptobrevin and plasma membrane target SNAREs (t-SNAREs: SNAP-25, syntaxin-1), triggering vesicle fusion and neurotransmitter release. We have found that a subset of genes encoding SNAREs and SNARE-associated proteins are induced by acute alcohol exposure, including synaptotagmin 1 (Syt1), Vamp2, and Snap25 (Varodayan et al., 2011).
In particular, our laboratory showed that alcohol exposure rapidly induced Vamp2 gene expression, but not Vamp1 (Varodayan et al., 2011). These two genes encode distinct isoforms of synaptobrevin, but are not strictly redundant, as VAMP2-deficient mice die shortly after birth (Schoch et al., 2001) and mice with a VAMP1 null mutation develop a neuromuscular wasting disease and die within 2 weeks (Nystuen et al., 2007). It is possible that these outcomes are linked to differential patterns of Vamp gene expression throughout the body and in particular, the central nervous system. Vamp2 gene expression is high throughout the rodent forebrain, including across the entire cortex (Gene Expression Nervous System Atlas [GENSAT; Gong et al., 2007] Project. NINDS Contracts N01NS02331 & HHSN271200723701C to The Rockefeller University, New York, NY), whereas Vamp1 mRNA levels predominate in the diencephalon, midbrain, brainstem, and spinal cord (Trimble et al., 1990;Nystuen et al., 2007). Closer analysis of synaptobrevin expression in the cerebral cortex, however, found that VAMP1 and VAMP2 are co-expressed at different rates in GABAergic and glutamatergic axon terminals, suggesting that there are underlying cell type-specific differences in their patterns of expression (Morgenthaler et al., 2003;Bragina et al., 2010).
As synaptobrevin is intimately involved in synaptic vesicle fusion, changes in its expression levels may alter neurotransmitter release. We reasoned that a careful study of the effects of alcohol on Vamp2 gene expression might reveal a molecular mechanism by which alcohol can alter neurotransmitter release.
MATERIALS AND METHODS
The Columbia University Institutional Animal Care and Use Committee approved all protocols involving the use of experimental animals in this study.
Cortical neurons were cultured for 14-21 days in vitro (DIV) and then exposed to ethanol (final concentrations 10-150 mM; Sigma-Aldrich, St. Louis, MO) or vehicle control (Dulbecco's phosphate-buffered saline; Invitrogen, Carlsbad, CA) for specific time periods (15 min-24 h), by addition directly to the culture medium. All transfection protocols and electrophysiology recordings were performed after 16 DIV.
QUANTITATIVE REAL-TIME POLYMERASE CHAIN REACTION (qPCR) ANALYSES OF mRNA LEVELS
qPCR was carried out as previously described (Ma et al., 2004;Pignataro et al., 2007;Varodayan et al., 2011). Briefly, total RNA was isolated from the neurons using TRIzol (Invitrogen) and cDNA was prepared with the iScript cDNA synthesis kit (Bio-Rad, Hercules, CA). The first-strand reverse-transcribed cDNA was then used as a template for PCR amplification with the appropriate specific primer pairs listed below. qPCR reactions were carried out with iQ SYBR Green Supermix (Bio-Rad) using a Chromo4 Real-Time PCR machine (Bio-Rad).
In preliminary experiments, the Vamp2 cDNA concentration was normalized against Actb, Gapdh and 18S ribosomal RNA (QuantumRNA Internal Standards, Ambion, Austin, TX) cDNA within the same sample. As the results were not significantly different among the three internal standards, for all subsequent experiments the cDNA concentration for the gene of interest was normalized against the concentration of Actb cDNA within the same sample. The final results were expressed as the percentage increase vs. the control.
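The normalization and percent-increase reporting described above can be sketched with the common ΔΔCt (Livak) relative-quantification formula. This is an assumed stand-in for the authors' exact quantification pipeline, and it presumes ~100% amplification efficiency:

```python
def percent_increase(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """2^-ddCt relative quantification (Livak method) -- an assumed
    stand-in for the normalization described in the text. Ct values are
    qPCR threshold cycles; the reference gene here would be Actb."""
    ddct = ((ct_target_treated - ct_ref_treated)
            - (ct_target_control - ct_ref_control))
    fold_change = 2.0 ** -ddct
    return 100.0 * (fold_change - 1.0)

# One cycle earlier for the target in treated cells, with identical
# reference Ct values, corresponds to a 2-fold change, i.e. +100%.
print(percent_increase(24.0, 20.0, 25.0, 20.0))  # 100.0
```

Lower Ct means more template, so a ΔΔCt of -1 doubles the normalized expression relative to control.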
RNA INTERFERENCE EXPERIMENTS
RNA interference experiments were performed with 20-25-nucleotide small interfering RNA (siRNA), as previously described (Pignataro et al., 2007;Varodayan et al., 2011). Briefly, cultured cortical neurons were transfected with Hsf1 or control scrambled siRNAs (Santa Cruz Biotechnology, Santa Cruz, CA) for 1 h at 37 °C. Cells were washed once and the transfection medium was replaced with conditioned medium for another 24 h prior to ethanol or vehicle treatment.
Data were acquired with pClamp 10.3 software (Molecular Devices), filtered at 2 kHz and digitized at 20 kHz. Each recording was a minimum of 6 min long, with the final minute of data analyzed to identify mPSCs. The mPSCs were detected using the Mini Analysis Program 6.0.7 (Synaptosoft, Fort Lee, NJ) with a threshold criterion of 5 pA. To assess mPSC frequency and kinetics, the recording trace was visually inspected and only the automatically detected events with a stable baseline, sharp rising phase, and single peak were used.
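A toy version of amplitude-threshold event detection conveys the idea behind the 5 pA criterion. Mini Analysis' actual detector is more sophisticated (baseline and kinetics checks); everything here except the 5 pA threshold is assumed:

```python
import numpy as np

def detect_events(trace, threshold=5.0, min_gap=50):
    """Toy threshold detector: sample indices where |current| first
    exceeds the threshold (pA), merging onsets closer than min_gap
    samples. A sketch of the idea only, not the published algorithm."""
    above = np.abs(trace) >= threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising crossings
    keep = []
    for i in onsets:
        if not keep or i - keep[-1] >= min_gap:
            keep.append(int(i))
    return keep

# Synthetic trace: two 8 pA inward (negative-going) events on a flat baseline.
trace = np.zeros(600)
trace[100:110] = -8.0
trace[400:410] = -8.0
print(detect_events(trace))  # [100, 400]
```

In practice each candidate onset would then be screened for a stable baseline, sharp rise, and single peak, as described above.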
STATISTICAL ANALYSES
The qPCR data were analyzed by one-way ANOVA followed by Dunnett's multiple-comparison post-hoc tests. In these experiments, n represents the total number of triplicate sample values averaged into each data point, and each data point contains at least three biological replicates. Electrophysiology numerical data were analyzed using a two-tailed unpaired t-test or by one-way ANOVA followed by Dunnett's multiple-comparison post-hoc tests. In these experiments, n represents the number of cells tested from at least three biological replicates. All data are presented as mean ± s.e.m. and the details of the statistical analyses are included in the appropriate figure legends.
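The first step of such an analysis, the one-way ANOVA F statistic, can be sketched in pure NumPy on synthetic data. The data below are hypothetical, not the study's; in practice scipy.stats.f_oneway supplies the p-value and scipy.stats.dunnett (SciPy ≥ 1.11) handles the many-to-one post-hoc comparisons against control:

```python
import numpy as np

def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA (minimal pure-NumPy sketch)."""
    x = np.concatenate(groups)
    grand = x.mean()
    k, N = len(groups), x.size
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, 30)   # hypothetical vehicle-treated values
treated = rng.normal(1.5, 1.0, 30)   # hypothetical ethanol-treated values
print(f"F = {one_way_anova_F(control, treated):.1f}")
```

A large F indicates between-group variance exceeding within-group variance; the post-hoc step then localizes which treatment groups differ from control.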
ALCOHOL INCREASES Vamp2 GENE EXPRESSION
Our initial experiments confirmed our previous finding that Vamp2 is an alcohol-responsive gene (Varodayan et al., 2011). We found that ethanol induction of Vamp2 mRNA levels was concentration-dependent (Figure 1A), with the Vamp2 gene responding modestly to ethanol concentrations more relevant to social intoxication (10-30 mM) and strongly to the high ethanol concentrations similar to those measured in blood samples of chronic alcoholics (80-100 mM) (Urso et al., 1981). The ethanol effect on Vamp2 gene expression showed half-maximal activation at 40 ± 6 mM (33 ± 4% increase compared with ethanol-naïve control) and saturated at 80 mM (57 ± 5% increase). These brief exposures to high ethanol concentrations were not toxic to the neurons, as treatment with 100 mM ethanol caused little, if any, apoptosis, as previously reported (Pignataro et al., 2007). The time course of the activation of Vamp2 transcription by 60 mM ethanol was rapid, with Vamp2 gene expression significantly increased at 30 min of exposure (22 ± 4% increase; Figure 1B). Vamp2 mRNA levels continued to rise during 8 h of 60 mM ethanol exposure (87 ± 10% increase) and were further increased at 24 h of continuous exposure (103 ± 9% increase).
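The reported concentration dependence can be summarized with a saturating Hill-type model. This is a hypothetical parameterization using the reported maximum (57% increase) and half-maximal concentration (40 mM), not a fit performed in the study:

```python
def hill(conc_mM, emax=57.0, ec50_mM=40.0, n=1.0):
    """Saturating concentration-response model (% increase vs. control).
    emax and ec50 are taken from the reported values; the Hill slope n
    is an assumption."""
    return emax * conc_mM**n / (conc_mM**n + ec50_mM**n)

print(hill(40.0))  # half-maximal response: 28.5 (% increase)
```

With the default slope n = 1 the curve approaches saturation slowly; reproducing the reported near-saturation already at 80 mM would require a steeper assumed slope (roughly n ≈ 3-4).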
HSF1 TRANSCRIPTIONAL ACTIVATION MEDIATES ALCOHOL INDUCTION OF Vamp2 GENE EXPRESSION
A subset of alcohol-responsive genes is known to be upregulated via activation of the transcription factor heat shock factor 1 (HSF1; Pignataro et al., 2007, 2013;Varodayan et al., 2011). To investigate whether HSF1 mediates Vamp2 gene induction by ethanol, we altered HSF1 protein expression and assessed changes in Vamp2 mRNA levels after ethanol treatment. We found that knock-down of HSF1 protein, using neuronal transfection with Hsf1 siRNA, decreased Vamp2 gene induction after ethanol exposure (from a 61 ± 10% increase to 20 ± 7%; Figure 2A). Transfection with control siRNA had no effect on basal Vamp2 mRNA levels (Figure 2A).
Previous work from our laboratory demonstrated that the Vamp1 gene was not induced when primary cortical culture was exposed to 60 mM ethanol for 1 h (Varodayan et al., 2011). Here we found that the knock-down of HSF1 protein, using neuronal transfection of Hsf1 siRNA, had no effect on Vamp1 mRNA levels.
To confirm the role of HSF1 in mediating Vamp2 gene induction, we used a constitutively active Hsf1 construct (Hsf1-act).
This construct encodes a transcriptionally active HSF1 protein that can directly induce heat shock protein (Hsp) gene transcription in the absence of heat stress (Zuo et al., 1995;Xia et al., 1999). Neuronal transfection of this construct increased Vamp2 gene expression to a level similar to that seen after 1 h of 60 mM ethanol exposure (42 ± 6% increase; Figure 2B). Conversely, a dominant-negative Hsf1 construct (Hsf1-inact), which encodes a transcriptionally inactive HSF1 protein that suppresses stress-induced Hsp gene expression (Zuo et al., 1995;Xia et al., 1999), abolished the effect of ethanol exposure on Vamp2 mRNA levels (from a 62 ± 7% increase to 11 ± 4%; Figure 2B). Hsf1-inact transfection alone had no effect on basal Vamp2 gene expression (Figure 2B). These experiments reveal that HSF1 transcriptional activity stimulates Vamp2 mRNA levels and mediates ethanol induction of the Vamp2 gene. In the case of the Vamp1 gene, altering HSF1 transcriptional activity by neuronal transfection with either Hsf1-act or Hsf1-inact, with or without ethanol treatment, had no effect on mRNA levels.
ALCOHOL INCREASES mIPSC FREQUENCY
As Vamp2 is one of several alcohol-responsive genes that encode proteins intimately involved in synaptic vesicle fusion (Varodayan et al., 2011), we explored whether ethanol alters neurotransmitter release. To investigate this potential mechanism, we used whole-cell voltage-clamp electrophysiology to record mPSCs in ethanol-exposed cultured cortical neurons treated with 100 nM TTX to block action potential-dependent neurotransmitter release. In these experiments, increased mPSC frequency indicates alterations in the presynaptic terminal leading to an increased probability of synaptic vesicle fusion and neurotransmitter release, while increased mPSC amplitude reflects an increase in postsynaptic receptor sensitivity to the released neurotransmitter, possibly due to changes in receptor subunit composition or the number of receptors present (Otis et al., 1994).
We first evaluated the effects of 60 mM ethanol exposure for 4-8 h on inhibitory currents (mIPSCs) by recording in the presence of 30 μM D-APV and 10 μM NBQX to block glutamatergic events. Notably, we found that ethanol increased the frequency of mIPSCs compared to control neurons, as seen in the representative traces and bar graph (f C = 0.42 ± 0.08 Hz, f E = 1.11 ± 0.23 Hz; Figure 3A upper panel, B). Ethanol had no effect on mIPSC amplitude (A C = 10.68 ± 0.93 pA, A E = 10.98 ± 0.74 pA; Figure 3A lower panel, C) or the rise time constant (t rC = 3.21 ± 0.22 ms, t rE = 3.24 ± 0.16 ms), but shortened the decay time constant (t dC = 12.59 ± 2.05 ms, t dE = 8.19 ± 0.78 ms; Table 1). The mIPSCs were totally blocked by the perfusion of 20 μM gabazine and partially recovered upon washout in all 5 cells tested, indicating that these events are GABAergic. Similar experiments conducted after 5-15 min of 60 mM ethanol exposure revealed no change in mIPSC frequency (f C = 0.47 ± 0.08 Hz, f E = 0.55 ± 0.13 Hz; n C = 13, n E = 17) or amplitude (A C = 9.40 ± 0.95 pA, A E = 8.03 ± 0.78 pA; n C = 13, n E = 17), suggesting that this mechanism of ethanol-induced GABA release may require the prolonged processes of transcription and translation.
HSF1 TRANSCRIPTIONAL ACTIVITY MEDIATES ALCOHOL INDUCTION OF mIPSC FREQUENCY
To investigate whether HSF1 transcriptional activity mediates the increased mIPSC frequency observed after ethanol exposure, we altered HSF1 protein expression and assessed mIPSC kinetics. Neuronal transfection of Hsf1-act increased mIPSC frequency to a level similar to that observed after ethanol exposure (f C = 0.18 ± 0.01 Hz, f E = 0.61 ± 0.19 Hz, f Hsf1act = 0.63 ± 0.11 Hz; Figure 4A). Conversely, the dominant-negative Hsf1-inact construct abolished the effect of ethanol exposure on mIPSC frequency (f C = 0.34 ± 0.05 Hz, f E = 0.88 ± 0.25 Hz, f Hsf1inact = 0.37 ± 0.04 Hz, f Hsf1inact+E = 0.51 ± 0.19 Hz), while Hsf1-inact transfection alone had no effect on mIPSC frequency (Figure 4C). No changes were observed in amplitudes (Figures 4B,D), rise times or decay times after transfection with either the Hsf1-act or Hsf1-inact constructs. These experiments reveal that HSF1 transcriptional activity increases GABA release and mediates ethanol induction of mIPSC frequency. In summary, in this study we have shown that ethanol acts via HSF1 to increase the gene expression of a specific subset of proteins involved in synaptic vesicle fusion and stimulate GABA release.
DISCUSSION
Ethanol alters GABA release throughout the central nervous system (Criswell and Breese, 2005;Siggins et al., 2005;Weiner and Valenzuela, 2006), but the underlying mechanisms are largely unknown. We recently showed that a subset of genes encoding SNARE complex proteins is induced by alcohol exposure. In particular, we found that alcohol differentially regulates two genes encoding synaptobrevin isoforms, rapidly inducing the Vamp2 gene, but not Vamp1, and were therefore interested in the mechanism underlying this difference (Varodayan et al., 2011). Here, we show that HSF1 transcriptional activity mediates ethanol induction of Vamp2 gene expression in cortical neurons. Since VAMP2 is intimately involved in synaptic vesicle fusion, we then investigated whether alcohol acts via HSF1 to alter neurotransmitter release. We found that HSF1 transcriptional activity mediates ethanol-induced GABA release, but has no effect on glutamatergic synaptic vesicle fusion.

Displaced Figure 4 legend: HSF1 activity does not alter the mean mIPSC amplitude in neurons transfected with an Hsf1-act construct, exposed to ethanol (E) or control sham transfected [C; n C = 15, n E = 17, n Hsf1act = 19; F(2, 48) = 0.32; p = 0.73]. (C) Ethanol stimulation of mIPSC frequency is mediated by activated HSF1. Hsf1-inact transfection reduced the effects of ethanol (E) on mIPSC frequency. Hsf1-inact transfection alone had no effect on mIPSC frequency compared to control cultures sham transfected with the empty pcDNA3.1+ construct [C; n C = 16, n E = 10, n Hsf1inact = 12, n Hsf1inact+E = 14; F(3, 48) = 2.56; p = 0.07]. (D) HSF1 activity does not alter the mean amplitude of mIPSCs in neurons transfected with an Hsf1-inact construct, exposed to ethanol (E) or vehicle control [C; n C = 16, n E = 10, n Hsf1inact = 12, n Hsf1inact+E = 14; F(3, 48) = 0.0639; p = 0.60; *P < 0.05, **P < 0.01, ***P < 0.001, n.s. denotes no significance].
A SINGLE ALCOHOL EXPOSURE INDUCES SNARE GENE EXPRESSION
We have previously shown that acute alcohol exposure rapidly induces transcription of some SNARE complex proteins, including the Vamp2, Syt1 and Snap25 genes, but not the Vamp1, Stx1a, and Syp genes (Varodayan et al., 2011). In this study we investigated the mechanism underlying Vamp2 gene induction by alcohol. There are few, if any, comparable studies on the effects of alcohol on Vamp2 gene expression. Interestingly, a recent transcriptome profiling study used tissue from alcoholic human brain cortices to identify Vamp2 as a hub gene that is likely to have high functional significance in biological processes associated with alcohol dependence (Ponomarev et al., 2012).
A MOLECULAR MECHANISM UNDERLYING THE EFFECTS OF A SINGLE ALCOHOL EXPOSURE ON SNARE GENE EXPRESSION
We found that ethanol induction of the Vamp2 gene is mediated by HSF1 activity. Transcriptional activation of HSF1 is a multistep process that involves: HSF1 translocation from the cytoplasm, where it is sequestered by chaperone proteins, to the nucleus; HSF1 trimerization and inducible hyperphosphorylation; and HSF1 binding to a DNA element to stimulate transcription (Cotto et al., 1997). We have previously shown that 60 mM ethanol exposure of primary cortical culture induces HSF1 translocation into the nucleus (Pignataro et al., 2007), phosphorylates HSF1 (Varodayan et al., 2011) and stimulates Hsp gene expression (Pignataro et al., 2007), indicating that ethanol promotes HSF1 transcriptional activity. Several other laboratories have also reported an association between alcohol exposure and HSF1-dependent gene induction, including microarray studies where alcohol treatment increased Hsp gene expression (Lewohl et al., 2000;Gutala et al., 2004;Worst et al., 2005). In addition, we have previously reported that ethanol acts via HSF1 to induce the Syt1 gene and the gene encoding the α4 subunit of the GABA A receptor (Pignataro et al., 2007;Varodayan et al., 2011). As a whole, our current studies strongly suggest that HSF1 transcriptional activity mediates the effects of alcohol on a subset of alcohol-responsive genes, including some SNARE proteins. As the SNARE proteins are intimately involved in synaptic vesicle fusion, this raises the interesting question of whether the neuronal response to alcohol includes alterations in neurotransmitter release.
A SINGLE ALCOHOL EXPOSURE CAUSES A WAVE OF TRANSIENT PRESYNAPTIC ADAPTATIONS LEADING TO CHANGES IN GABA RELEASE
Changes in GABA release after ethanol exposure have been reported in the last decade (Criswell and Breese, 2005;Siggins et al., 2005;Weiner and Valenzuela, 2006). We found that mIPSC frequency increased in cortical neurons exposed to 60 mM ethanol for 4-8 h, but not 5-15 min, suggesting that this mechanism of ethanol-induced GABA release may require the prolonged processes of transcription and translation. Similar experiments by the Morrow laboratory found an unchanged mIPSC frequency in cultured cortical rat neurons exposed to 50 mM ethanol for either 4 h or 1-7 days (Fleming et al., 2009;Werner et al., 2011). As a whole, these results suggest that the increase in mIPSC frequency after a single ethanol exposure may be a transient neuronal adaptation. Studies conducted in vivo also showed changes in mIPSC frequency across the rodent brain, with Melis et al. (2002) observing an increase in mIPSC frequency in the VTA of mice injected intraperitoneally with ethanol one day prior to recording. Chronic ethanol-treated rats showed a similar increase in mIPSC frequency in the CeA, and this frequency was further increased by the bath application of ethanol, indicating that the acute and chronic effects of ethanol on GABA release are differentially mediated (Roberto et al., 2004). Overall, these data define a model of transient presynaptic adaptation, where ethanol promotes HSF1 transcriptional activity to induce a temporary increase in GABA release. This transient change in neurotransmitter release may lead to more permanent synaptic modifications, especially as the cycle is repeated with multiple exposures to alcohol.
A MOLECULAR MECHANISM UNDERLYING SOME OF THE EFFECTS OF A SINGLE ALCOHOL EXPOSURE ON GABA RELEASE
The mechanisms underlying the effects of ethanol exposure on GABA release have been largely unstudied. Our detailed analysis revealed that ethanol treatment of cultured cortical neurons increases GABA release via HSF1 transcriptional activity, although it is likely that a variety of alternate and overlapping mechanisms underlie the similar changes observed after different ethanol exposure models and across brain regions. For example, ethanol application in the cerebellum rapidly increases the number of mIPSC events in interneurons via activation of both AC/PKA and PLC/PKC pathways and internal calcium store release (Kelm et al., 2007, 2008, 2010). The effects of alcohol administration on these kinase pathways provide for a relatively fast GABAergic neuronal response, while the enhanced GABA release that occurs after chronic ethanol exposure is likely to be regulated by longer-lasting changes in gene expression that are triggered by HSF1 and other transcription factors.
A SINGLE ALCOHOL EXPOSURE CAUSES A WAVE OF TRANSIENT POSTSYNAPTIC ADAPTATIONS LEADING TO CHANGES IN GABA RECEPTOR SENSITIVITY
The synapse is a highly responsive structure, and perturbations in presynaptic activity are typically met with an adaptive postsynaptic response, and vice versa. We found that treatment of cortical neurons with ethanol for 4-8 h shortened mIPSC decay time, an indication of changes in postsynaptic GABA A receptor subunit composition or number. mIPSC decay time also decreased in cultured rat cortical neurons exposed to ethanol for 4 h and 1 day, and recovered after 2-7 days (Fleming et al., 2009;Werner et al., 2011). A similar decrease in mIPSC decay time was observed in hippocampal neurons of rats administered a single dose of ethanol and withdrawn for 12 h to 7 days, with recovery by day 14 (Liang et al., 2007). Liang et al. (2007) found that these changes in mIPSC kinetics coincided with changes in the surface expression of GABA A receptor subunits. In particular, an increase in α4 expression could cause α4βγ2 GABA A receptors to "crowd" α1βγ2 GABA A receptors out of the synapse, leading to changes in GABA A receptor sensitivity to ethanol. We previously found increased α4 expression in cultured cortical neurons exposed to 60 mM ethanol for 4-8 h (Pignataro et al., 2007), indicating that similar changes in GABA A receptor subunit composition and sensitivity may be occurring in our current study. Overall, these data define a model of postsynaptic adaptation to a single dose of ethanol in which there may be a temporary increase in the expression of α4-containing GABA A receptors. This transient change in subunit composition could lead to more permanent synaptic modifications, especially as the cycle is repeated with multiple exposures to alcohol.
(Frontiers in Integrative Neuroscience, www.frontiersin.org, December 2013, Volume 7, Article 89, page 6)
MULTIPLE ETHANOL EXPOSURES COULD LEAD TO PERSISTENT ADAPTATION AT THE GABA SYNAPSE
The data presented here show that a single ethanol exposure induces Vamp2 gene expression and stimulates GABA release via HSF1 transcriptional activity. Repeated ethanol exposure could result in a persistent adaptation at the GABAergic synapse and lead to enduring changes in the local circuitry that may play a role in the development of alcohol abuse and dependence. It is interesting to note that ethanol's effects on HSF1 appear to alter neurotransmitter release in GABAergic, and not glutamatergic, neurons, and the apparent specificity of this effect among a variety of synapses merits further study.
AUTHOR CONTRIBUTIONS
Participated in research design: Florence P. Varodayan and Neil L. Harrison. Conducted experiments: Florence P. Varodayan. Performed data analysis: Florence P. Varodayan. Wrote or contributed to the writing of the manuscript: Florence P. Varodayan and Neil L. Harrison.
EMD AND GNN-ADABOOST FAULT DIAGNOSIS FOR URBAN RAIL TRAIN ROLLING BEARINGS
Rolling bearings are the components most prone to failure in urban rail trains, presenting a potential danger to cities and their residents. This paper puts forward a rolling bearing fault diagnosis method integrating empirical mode decomposition (EMD) and genetic neural network adaptive boosting (GNN-AdaBoost). EMD is an effective tool for feature extraction, during which some intrinsic mode functions (IMFs) are obtained. A GNN-AdaBoost fault identification algorithm, which uses a genetic neural network (GNN) as the sub-classifier of the boosting algorithm, is proposed in order to address the shortcomings in classification when using a GNN alone. To demonstrate the performance of the approach, experiments are performed to simulate different operating conditions of the rolling bearing, including high speed, low speed, heavy load and light load. EMD is applied to the de-noised signal to obtain the IMFs, from which the IMF energy feature parameters are extracted. The combination of the IMF energy feature parameters and some time-domain feature parameters is selected as the input vector of the classifiers. Finally, GNN-AdaBoost and GNN are applied to experimental examples and the identification results are compared. The results show that GNN-AdaBoost offers a significant improvement in rolling bearing fault diagnosis for urban rail trains when compared to GNN alone.
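The feature-construction step described above can be sketched as follows. The EMD itself (e.g., via the third-party PyEMD package) is assumed to have been run already, and the particular time-domain parameters are assumed choices, since they are not listed here:

```python
import numpy as np

def imf_energy_features(imfs):
    """Normalized IMF energy vector: E_i = sum(imf_i^2), divided by the
    total energy norm so features are comparable across records."""
    energies = np.array([np.sum(imf ** 2) for imf in imfs])
    return energies / np.sqrt(np.sum(energies ** 2))

def time_domain_features(x):
    """Three common time-domain parameters (assumed choices): RMS,
    kurtosis, and crest factor."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    kurt = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
    crest = np.max(np.abs(x)) / rms
    return np.array([rms, kurt, crest])

def feature_vector(imfs, signal):
    """Classifier input: IMF energy features + time-domain parameters."""
    return np.concatenate([imf_energy_features(imfs),
                           time_domain_features(signal)])
```

The concatenated vector is what would be fed to the GNN (or GNN-AdaBoost) classifier; for a healthy bearing the energy is concentrated in low-order IMFs, while localized defects shift energy toward the high-frequency IMFs.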
1. Introduction. The rotating machine is a significant piece of industrial equipment with extensive applications in a variety of fields. It is apparent from urban rail train fault data for the Guangzhou Metro in 2015 and 2016 that locomotive running gear faults account for nearly 36% of all faults, with faults in rotating mechanical components accounting for nearly 80% of running gear faults. As an important part of a rotating machine, rolling bearings have a significant effect on the working status of the whole machine [6]. Research on rolling bearing fault diagnosis is, therefore, of high importance for urban rail trains.
Fault diagnosis mainly consists of two parts, the first of which is fault identification. Several machine learning approaches, such as support vector machines (SVM) [1] and neural networks (NN) [14], are used to identify fault types and have obtained some positive results. NNs have strong robustness and the ability for nonlinear approximation, adaptation, generalization, and associative memory [25]. However, some problems remain, the most important of which is associated with the use of a single fault diagnosis technique. The accuracy of fault identification is low due to the imperfections of these classifiers, such as the over-fitting of SVM [15] and the requirement of NNs for a very large sample size. Sample selection and pretreatment also have a significant impact on the effectiveness of NN techniques. Consequently, it is important that a combination of learning methods is applied to fault detection and identification. The genetic algorithm (GA) is an evolutionary computation model that originated from natural evolution and genetic laws and is extensively applied to function optimization, automatic control, and other practical problems [19]. GA and NN can, to a certain extent, make up for each other's shortcomings. The genetic neural network (GNN) is the combination of GA and NN and is commonly applied for fault identification [9]. A boosting algorithm is a kind of ensemble learning algorithm, which originates from the probably approximately correct (PAC) model. Adaptive boosting (AdaBoost) was developed from the boosting algorithm by Yoav Freund and Robert Schapire [8,13,22]. AdaBoost can weaken the over-learning phenomenon of GNN and improve fault identification, to a certain extent.
The other component of diagnosis is feature parameter extraction. Traditional time-domain or frequency-domain analytical techniques, e.g. the fast Fourier transform (FFT), play critical roles in the analysis and processing of stationary signals. However, the working conditions of rolling bearings are complex, and many nonlinear factors have a significant effect on the vibration signals [26]. FFT cannot provide combined time-frequency information, so it is not suitable for non-stationary signals. Various time-frequency techniques have been applied to extract fault features in non-stationary signal processing, for instance, the Wigner-Ville distribution (WVD) [5], the Choi-Williams distribution (CWD) [7], and the short-time Fourier transform (STFT) [12]. However, all of these methods have their own limitations when dealing with non-stationary signals. For example, there is cross-term interference in the WVD and CWD methods, and the window size of STFT does not change with frequency. Empirical mode decomposition (EMD), an established technique in digital signal processing, can divide a complex signal into a few components, the intrinsic mode functions (IMFs). EMD is mainly used in machinery fault pattern recognition, seismic signal detection, and other practical areas, since it applies a stationary processing approach to non-stationary signals, giving improved adaptability over time-frequency decomposition and other methods [20,23,25]. Before feature parameters are extracted, signal de-noising is needed to remove the interference of unnecessary noise. The wavelet transform (WT), one of the best-known noise reduction techniques, is often employed to separate noise from the original signal and ensure the availability of the feature parameters [4].
Following the above discussion, this paper proposes the GNN-AdaBoost fault identification algorithm, which uses GNN as a sub-classifier of the AdaBoost algorithm to identify the rolling bearing fault state more accurately. To prove the effectiveness and availability of the GNN-AdaBoost algorithm, this paper simulates an urban rail train under different operating conditions and extracts rolling bearing vibration signal features based on EMD and time-domain methods. For different working conditions of load and speed, the GNN-AdaBoost and GNN algorithms are used for fault pattern identification. The effectiveness of the EMD and GNN-AdaBoost fault diagnosis approach is confirmed by the experimental results.
2.1.
EMD. EMD is a procedure for sifting signal data and extracting IMFs from a given signal, put forward by Norden Huang [2,10] as the core of the Hilbert-Huang transform (HHT). IMFs make the instantaneous frequency meaningful and can be either linear or nonlinear functions. The procedure of EMD is as follows.
For a data series x(t), EMD first identifies all the local maxima and minima. The upper and lower envelope curves of x(t), e_u(t) and e_l(t), are then fitted by cubic spline interpolation, since the theoretical envelopes are difficult to obtain directly. A new sequence, h_1(t), is obtained as

h_1(t) = x(t) - m_1(t), (1)

where m_1(t) = (e_u(t) + e_l(t))/2 is the mean of the two envelopes. If h_1(t) satisfies the two conditions of an IMF (the numbers of extrema and of zero crossings are equal or differ by at most one, and the mean of the envelopes is zero at every point), it is accepted as an IMF. This procedure is known as the sifting process, which eliminates riding waves and makes the waveform symmetrical. If h_1(t) is not an IMF, the operation in Eq. (1) is repeated up to k times until h_1k(t) satisfies the IMF conditions:

h_1k(t) = h_1(k-1)(t) - m_1k(t). (2)

At this point, c_1(t) = h_1k(t) is taken as the first IMF.
To ensure that the IMFs adequately reflect the actual amplitude and frequency of x(t), the sifting process stops when a stopping condition is reached. The standard deviation (SD) between consecutive sifting results is used as the stopping condition and is typically set between 0.2 and 0.3:

SD = Σ_t [ |h_1(k-1)(t) - h_1k(t)|² / h_1(k-1)(t)² ]. (3)

The residue of x(t), r_1(t), is then obtained as

r_1(t) = x(t) - c_1(t). (4)

Treating r_1(t) as the original data, the sifting process is repeated n times until the residue r_n(t) becomes a monotonic function:

r_2(t) = r_1(t) - c_2(t), ..., r_n(t) = r_(n-1)(t) - c_n(t). (5)

From Eq. (4) and Eq. (5), no more IMFs can be extracted and the original data x(t) can finally be expressed as

x(t) = Σ_{i=1..n} c_i(t) + r_n(t). (6)

The IMFs, c_i(t), are decomposed from the original signal during the sifting processes. Each IMF has physical meaning and contains the information of a certain frequency band, which serves as a reflection of rolling bearing health conditions.
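The sifting loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the extrema detection, the cubic-spline envelopes, and the SD stopping rule follow the description in this section, while the helper name `sift` and the tolerance values are our own assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x, t, sd_tol=0.25, max_iter=50):
    """Extract one candidate IMF by repeatedly subtracting the envelope mean."""
    h = x.copy()
    for _ in range(max_iter):
        d = np.diff(h)
        # local maxima: rising then falling; minima: falling then rising
        maxima = np.where((np.hstack([0.0, d]) > 0) & (np.hstack([d, 0.0]) < 0))[0]
        minima = np.where((np.hstack([0.0, d]) < 0) & (np.hstack([d, 0.0]) > 0))[0]
        if len(maxima) < 4 or len(minima) < 4:
            break  # too few extrema to fit cubic-spline envelopes
        e_u = CubicSpline(t[maxima], h[maxima])(t)  # upper envelope
        e_l = CubicSpline(t[minima], h[minima])(t)  # lower envelope
        m = (e_u + e_l) / 2.0                       # envelope mean m_1(t)
        h_new = h - m                               # Eq. (1)
        # SD stopping condition between consecutive sifting results, Eq. (3)
        sd = np.sum((h - h_new) ** 2 / (h ** 2 + 1e-12))
        h = h_new
        if sd < sd_tol:
            break
    return h  # candidate IMF c_1(t)

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
c1 = sift(x, t)
```

Subtracting the extracted component from the signal and sifting the residue again yields the remaining IMFs, as in Eq. (4)-(6).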
Rather than comparing the calculated IMFs directly, the vibration characteristics of the rolling bearing can be more accurately reflected by the IMF energies, whose feature parameters are obtained as

E_i = ∫ |c_i(t)|² dt. (7)

For a discrete signal,

E_i = Σ_{k=1..n} |c_i(k)|² ∆t, (8)

where ∆t is the sampling period, n is the sample number, and k is the sampling point. Next, normalization is carried out as

T = [E_1/E, E_2/E, ..., E_m/E], with E = Σ_i E_i, (9)

where T is the feature parameter vector, which acts as an input for fault pattern identification. The data resulting from Eq. (9) reflect the size and the variation of the IMFs with time [3,17].
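A sketch of Eq. (8)-(9) for a list of already-computed IMFs; the function name and the toy input are illustrative assumptions, not part of the paper.

```python
import numpy as np

def imf_energy_features(imfs, dt):
    """IMF energies E_i = sum_k |c_i(k)|^2 * dt, normalized into the vector T."""
    energies = np.array([np.sum(np.abs(c) ** 2) * dt for c in imfs])  # Eq. (8)
    return energies / energies.sum()                                  # Eq. (9)

# two toy "IMFs" with a 1:4 energy ratio
T = imf_energy_features([np.ones(10), 2.0 * np.ones(10)], dt=0.1)
```

The normalization makes the features comparable across recordings of different length and amplitude.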
2.2. GNN-AdaBoost algorithm.
2.2.1. GNN. NN imitates the structure of the brain and has been widely used in the field of intelligent computation. An NN can optimize weights and thresholds through its self-adaptation ability. The most widely employed NN is the back-propagation neural network (BP-NN), proposed by Rumelhart and McClelland in 1986. However, BP-NN has some disadvantages, most notably that the determination of the network structure relies solely on expert experience, and its lack of global search capability can easily lead to convergence to local minima and a slow convergence speed [11,24]. The GA follows Darwin's theory of natural selection and survival of the fittest: a process that consists of coding, selection, heredity, and variation. The combination of GA and BP-NN has enabled BP-NN to be applied more extensively, since GA brings significant advantages through its global search ability, parallelism, robustness, adaptability, and convergence speed [9].
The basic principle of the combination of GA and NN is to train the network on samples while using GA to optimize the connection weights and thresholds, thereby achieving better results. The steps involved in optimizing the weights and thresholds of an NN by GA are as follows.
(1) Determine the network structure, learning rules, and termination condition; generate a set of weights and thresholds randomly and encode them. Each code chain corresponds to an NN with particular weights and thresholds. (2) Calculate the error and determine the fitness function value. The larger the error, the smaller the fitness.
(3) Choose the individuals with higher fitness as the parents of the next generation and eliminate the worst individuals. (4) Evolve the current population by crossover and mutation to generate a new population. (5) Repeat steps (2)-(4) until the termination condition is reached.
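The steps above can be illustrated with a toy run: a small GA evolving the nine weights and thresholds of a 2-2-1 network on the XOR problem. This is a hedged sketch of the general idea, not the paper's GNN; the population size, mutation scale, and elitist selection scheme are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a small nonlinear problem for the 2-2-1 network below.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, X):
    # w packs all 9 weights/thresholds of the network (step 1's encoding)
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w):
    # step (2): the larger the error, the smaller the fitness
    return -np.mean((forward(w, X) - y) ** 2)

pop = rng.normal(size=(40, 9))             # step (1): random initial codes
initial_best = max(fitness(w) for w in pop)
for _ in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]     # step (3): selection
    kids = parents[rng.integers(0, 20, 20)].copy()
    mates = parents[rng.integers(0, 20, 20)]
    mask = rng.random(kids.shape) < 0.5              # step (4): crossover...
    kids[mask] = mates[mask]
    kids += rng.normal(scale=0.1, size=kids.shape)   # ...and mutation
    pop = np.vstack([parents, kids])                 # step (5): iterate
final_best = max(fitness(w) for w in pop)
```

Because the best individuals are carried over unchanged (elitism), the best fitness in the population never decreases across generations.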
The AdaBoost algorithm is an iterative algorithm that constructs a strong classifier through repeated training. Its basic idea is to learn a number of weak classifiers on the same problem and then to combine these weak classifiers into a strong classifier through a weighting relationship. At each iteration, the AdaBoost algorithm changes the sample weights according to the performance of the current basic classifier and produces the next basic classifier [16,22]. After T cycles, the T basic classifiers are combined by a weighted voting method to obtain the final, strong classifier.
In this section, the GNN-AdaBoost classification algorithm is proposed. The process flow of the GNN-AdaBoost algorithm is illustrated in Figure 1 and process steps of the GNN-AdaBoost algorithm are as follows.
(1) Select data and initialize. From the sample space, m groups of training samples are selected at random, and their weights are initialized as

D_1(i) = 1/m, i = 1, ..., m, (10)

where D_1(i) is the initial weight. (2) Use GNN to train a weak classifier h_t(x_i). After training, the classification performance is measured by the training error

ε_t = Σ_i D_t(i), summed over the samples with h_t(x_i) ≠ y_i, (11)

where ε_t is the training error, h_t(x_i) is the actual prediction of the weak classifier, and y_i is the expected result. (3) Determine the weights of the weak classifiers. According to the prediction results, the weight of weak classifier t, α_t, is obtained as

α_t = (1/2) ln((1 - ε_t)/ε_t). (12)

(4) Update the weights of the training samples for the next round by increasing the weights of wrongly classified samples:

D_(t+1)(i) = D_t(i) exp(-α_t y_i h_t(x_i)) / Z_t, (13)

where Z_t is a normalization constant chosen so that Σ_{i=1..m} D_(t+1)(i) = 1. (5) Generate the strong classifier. After T cycles, the strong classifier is obtained as

H(x) = sign( Σ_{t=1..T} α_t h_t(x) ), (14)
where H(x) is the strong classifier. The GNN-AdaBoost algorithm is shown in Figure 1. Since GNN-AdaBoost is a new integrated algorithm with adaptive capability, it provides a better solution for identifying the fault types of rolling bearings.
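Steps (1)-(5) above can be sketched for the binary case. The weak learner here is a weighted decision stump standing in for the paper's GNN sub-classifier; the function names and the tiny one-dimensional dataset are illustrative assumptions.

```python
import numpy as np

def stump_trainer(X, y, D):
    """Train a weighted decision stump (a stand-in for the GNN weak classifier)."""
    best_err, best_cfg = np.inf, None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                err = np.sum(D[pred != y])
                if err < best_err:
                    best_err, best_cfg = err, (j, thr, sign)
    j, thr, sign = best_cfg
    return lambda Xq: sign * np.where(Xq[:, j] <= thr, 1, -1)

def adaboost(train_weak, X, y, T=25):
    """AdaBoost for labels in {-1, +1}, following steps (1)-(5)."""
    N = len(y)
    D = np.full(N, 1.0 / N)                 # step (1): D_1(i) = 1/N
    weak, alphas = [], []
    for _ in range(T):
        h = train_weak(X, y, D)             # step (2): weighted weak learner
        pred = h(X)
        eps = min(max(np.sum(D[pred != y]), 1e-10), 1 - 1e-10)
        a = 0.5 * np.log((1 - eps) / eps)   # step (3): classifier weight
        D = D * np.exp(-a * y * pred)       # step (4): re-weight samples
        D /= D.sum()                        # divide by the normalization Z_t
        weak.append(h)
        alphas.append(a)
    # step (5): strong classifier H(x) = sign(sum_t alpha_t h_t(x))
    return lambda Xq: np.sign(sum(a * h(Xq) for a, h in zip(alphas, weak)))

# a 1-D "interval" problem that no single stump can solve
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1.0, 1.0, 1.0, -1.0])
H = adaboost(stump_trainer, X, y, T=3)
```

Three boosting rounds already classify all four points correctly, although any single stump misclassifies at least one of them.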
Based on the application of EMD and GNN-AdaBoost, the general procedure for rolling bearing health state diagnosis is presented in Figure 2.

3. Experiments. To validate the effectiveness and applicability of the methods presented, this section applies experimental techniques for rolling bearing fault identification. After feature parameter extraction, GNN-AdaBoost and GNN are used to identify rolling bearing fault types.
3.1. Signal acquisition and pretreatment. Experiments were designed and performed using the rolling bearing fault simulator stand of the State Key Laboratory of Rail Traffic Control & Safety at Beijing Jiaotong University (an independent research project of the laboratory), shown in Figure 3. The simulator stand consists of a drive motor, a revolution speed transducer, a rolling bearing, and other components.
Speed and load parameters are set to values close to those used in real-world operation. The speed is calculated as

n = v / l = v / (π d), (15)

where n is the rotational speed of the rolling bearing, v is the velocity of the urban rail train, l is the circumference of the wheel, and d is the diameter of the wheel.
In the Guangzhou Metro, the wheel diameter is 840 mm (a circumference of roughly 2.64 m) and the top speed is 80 km/h. According to Eq. (15), when v = 60 km/h, n = 6.31 r/s, and when v = 80 km/h, n = 8.42 r/s. The speed is therefore set to 6 r/s and 8 r/s. A pressure regulating valve is used to regulate pressure on the simulator stand. Various rolling bearing health conditions are prepared, including normal, inner-race fault, outer-race fault, and rolling ball fault, with only a single crack in each faulty bearing. Figure 4 shows that the vibration signal is quite different for each rolling bearing state; however, the fault type cannot be distinguished correctly from visual inspection alone. To obtain more meaningful information from the vibration signals, this paper calculates several time-domain characteristic parameters. Skewness and kurtosis are highly sensitive but poorly stable when a fault occurs, whereas the variance is relatively stable. Therefore, variance, skewness, and kurtosis are selected as the time-domain parameters.
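The conversion in Eq. (15) can be checked numerically. Note that the quoted 840 mm figure reproduces the stated speeds only when read as the wheel diameter (circumference of about 2.64 m), which is what this sketch assumes; the function name is our own.

```python
import math

def bearing_speed_rps(v_kmh, wheel_diameter_m=0.84):
    """n = v / (pi * d): wheel rotations per second for a train speed in km/h."""
    v_ms = v_kmh / 3.6                       # km/h -> m/s
    return v_ms / (math.pi * wheel_diameter_m)

n60 = bearing_speed_rps(60.0)                # ~6.3 r/s
n80 = bearing_speed_rps(80.0)                # ~8.4 r/s
```

Both values agree with the 6.31 r/s and 8.42 r/s quoted in the text to within rounding.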
Variance, skewness, and kurtosis are calculated as

σ_x² = (1/n) Σ_{k=1..n} (x_k - x̄)², (16)

α = (1/n) Σ_{k=1..n} ((x_k - x̄)/σ_x)³, (17)

β = (1/n) Σ_{k=1..n} ((x_k - x̄)/σ_x)⁴, (18)

where σ_x² is the variance, α is the skewness, and β is the kurtosis [18,21]. Using Eq. (16)-(18), the signal data under the different conditions are processed and the time-domain feature parameters are obtained, as Figure 5 shows.
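Eq. (16)-(18) amount to the second, third, and fourth standardized moments of a signal segment. A direct sketch (the function name is our own):

```python
import numpy as np

def time_domain_features(x):
    """Variance, skewness, and kurtosis of a vibration-signal segment."""
    mu = x.mean()
    sigma2 = np.mean((x - mu) ** 2)            # Eq. (16): variance
    sigma = np.sqrt(sigma2)
    skew = np.mean(((x - mu) / sigma) ** 3)    # Eq. (17): skewness
    kurt = np.mean(((x - mu) / sigma) ** 4)    # Eq. (18): kurtosis
    return sigma2, skew, kurt

var, skew, kurt = time_domain_features(np.array([1.0, 2.0, 3.0, 4.0]))
```

For the symmetric toy segment above the skewness is zero, as expected.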
3.2.2.
IMF energy moments extraction. The de-noised signal is processed using the EMD-based Hilbert transformation and 13 IMFs are obtained, as Figure 6 shows. Each IMF contains information on a frequency band, which reflects the energy change of the rolling bearing vibration signal in different states. It can be seen clearly that the first five IMFs are the most significant of the 13. With Eq. (8), the IMF energy feature parameters are calculated and selected as part of the input vectors for final rolling bearing fault identification. A group of eight feature parameters, consisting of the five IMF energy feature parameters together with the variance, skewness, and kurtosis, forms the input feature vector for fault identification.
3.3. Fault identification. Two methods are applied for rolling bearing fault identification based on the feature vectors above: GNN and GNN-AdaBoost. Their identification performance is compared to see whether either method holds an advantage. The case with a speed of 6 r/s and a light load is used as an example. For this case, 80 groups of vibration signals are collected for each of the four conditions, giving a dataset of 320 samples in total.
3.3.1. Fault identification based on GNN. The GNN has three layers: an input layer with eight nodes, a hidden layer with ten nodes, and an output layer with four nodes. The output corresponds to the four rolling bearing health states: normal, inner-race fault, outer-race fault, and rolling ball fault. The 320 feature vectors are divided randomly into two groups: 240 groups are taken as training samples and the remaining 80 as testing samples. The desired output codes are shown in Table 1. Next, the trained GNN is used to predict the rolling bearing fault type. If the error between the GNN output and the testing samples is less than 0.05, the result is considered credible.
3.3.2.
Fault identification based on GNN-AdaBoost. The initial weight of each testing sample is 1/80. The sample selection, GNN structure, expected output codes, and training process in GNN-AdaBoost are the same as for the GNN in Section 3.3.1.
In this approach, the GNN is iterated 25 times to obtain 25 weak classifiers; the error and weight changes for the four rolling bearing conditions are shown in Figure 7. As the error of each condition decreases in a fluctuating way, the weight value increases with each iteration: the bigger the error, the greater the weight value. The classification accuracy of each iteration is thereby better than that of the preceding iteration; accuracy is improved by increasing the weights of the erroneous samples. Figure 8 shows the relationship between the error, ε, and the iterations of GNN-AdaBoost. As the number of iterations increases, the error shows a downward trend. After 23 iterations, the error remains approximately constant, showing that 25 iterations are, in this case, sufficient for the effective performance of the GNN-AdaBoost algorithm. The best feature is selected as a weak classifier in each iterative learning step and all 25 weak classifiers are used to construct the strong classifier H(x). To define a suitable strong classifier, a normalization process is carried out on the 25 weak classifiers with appropriate weights applied. Finally, the resulting strong classifier is used to identify the fault type of the testing samples.

4. Results and discussion. The main contribution of this paper is a fault diagnosis approach for the rolling bearings of urban rail trains based on EMD and GNN-AdaBoost. The experiments simulate the working conditions of rolling bearings on urban rail trains based on the example of the Guangzhou Metro. To process the data and reach fault type identification, the following work is completed: a soft-threshold de-noising method is employed to pre-process the original signals, and, as described in Section 3.2, the input vectors for GNN and GNN-AdaBoost consist of IMF energy feature parameters and several typical time-domain feature parameters, utilizing the signal characteristics as far as possible.
The experimental results are illustrated in Figure 9 and Table 2. Figure 9 shows the test results for GNN-AdaBoost and GNN under different working conditions. GNN is effective when the rolling bearing is normal, but makes incorrect decisions in the fault states. There is a significant improvement in fault diagnosis when using GNN-AdaBoost when compared to GNN. Table 2 shows the accuracy rates of different classifiers under different rolling bearing working conditions. Irrespective of speed, the diagnosis results for GNN-AdaBoost exceed a 95% accuracy rate for both light and heavy load, which is better than that found from GNN alone, and meets the requirements of field application. GNN-AdaBoost has, therefore, better performance and greater accuracy.
5.
Conclusion. In this paper, an automatic diagnosis system for urban rail train rolling bearings based on EMD and GNN-AdaBoost is presented. To reduce noise disturbance, wavelet de-noising is first carried out to process the collected signal. EMD can adaptively and efficiently decompose a signal and performs well when analyzing signals with nonlinear and non-stationary properties. A number of IMFs are obtained by EMD, and the IMF energy feature parameters reflect the vibration characteristics of the signal well. Time-domain feature parameters such as variance, skewness, and kurtosis are strongly associated with the different rolling bearing health states. Both sets of parameters are employed to form the input feature vectors of GNN-AdaBoost, an algorithm with strong classifying ability that combines GNN and AdaBoost. To further validate the effectiveness of GNN-AdaBoost, GNN is used in parallel with GNN-AdaBoost to classify the fault types in the same experiment. The results prove that GNN-AdaBoost performs better at rolling bearing fault identification than GNN alone. Irrespective of the working condition, in terms of load and speed, the accuracy of GNN-AdaBoost is significantly improved when compared with GNN alone. Future research should investigate the potential application of GNN-AdaBoost to more complex problems.
ON THE (IN)DEFINITENESS OF IMPERSONAL PRONOUNS
This paper addresses the question whether impersonal pronouns should be analyzed as indefinite or definite expressions based on their discourse anaphoric potential. I present new data that support the claim that impersonal pronouns should be analyzed as neither (see Koenig & Mauner 1999). I sketch a formal analysis that captures this behaviour. Furthermore, I show that the availability of quantificational variability effects for impersonal pronouns is not foolproof evidence for their indefiniteness as is usually assumed in the literature (see Malamud 2013).
INTRODUCTION
Although the cross-linguistic variation found for possible uses of impersonal pronouns is quite well-studied, open questions on their semantic analysis remain. One persistent point of contention is whether impersonal pronouns, based on their semantic/pragmatic behaviour, should be analyzed as definite or indefinite expressions. Practically all possible answers to this question have been argued for. They have been analyzed as definite expressions (e.g., Kratzer 1997; Alonso-Ovalle 2002), as indefinite(-like) expressions (e.g., Condoravdi 1989; Moltmann 2012; Malamud 2013), and as "a-definites" (Koenig & Mauner 1999).
The main aim of this paper is to add new empirical facts to the discussion, which, to my mind, tip the scales in favour of Koenig & Mauner's claim that impersonal pronouns cannot be grouped with either definite or indefinite NPs.The empirical investigation is conducted using the German impersonal pronoun man, specifically, its existential use ("existential man").
Like all impersonal pronouns cross-linguistically, German man has a generic use, as in (1).
(German) man has-to his parents respect 'One has to respect one's parents.' This use occurs exclusively in generic sentences, i.e., sentences stating a rule or non-accidental regularity. The existential use of man is given in (2).
yesterday has man the uni set-on-fire ≈ 'Yesterday, someone set the university on fire.' This use occurs only in episodic sentences, i.e., sentences describing a specific situation/eventuality, including accidental generalizations. Unlike the generic use, the existential use is not uniformly available. English one, for instance, lacks this use. This investigation focuses exclusively on existential man, since the generic use is inseparably tied to the intensional, quantificational generic operator Gen. Since definite and indefinite singular NPs interact with Gen in different ways (see Krifka et al. 1995), similarities in the semantic/pragmatic behaviour of man and these NPs are always masked by Gen (Zobel 2014).
The paper is structured as follows. In Section 2, I present new data on the discourse anaphoric potential (DAP) of impersonal pronouns as compared to (in)definite NPs. In Section 3, I show that the DAP of existential man is comparable to that of implicit agents of short passives, as Koenig & Mauner (1999) argued for French on. Section 4 discusses quantificational variability effects (QVE) with man. QVE is seen as the most robust argument for classifying man as an indefinite (see Malamud 2013). Section 5 sketches the core idea for a semantic analysis of man based on Onea (2013, 2015). Section 6 concludes the paper.
THE DAP OF (IN)DEFINITE NPS AND EXISTENTIAL MAN
The question whether impersonal pronouns are definite or indefinite expressions is not discussed in the literature with respect to a single, specific theory of (in)definiteness (see Heim 2011 for a recent overview). The central question, also pursued in this paper, is whether the semantic/pragmatic behaviour of impersonal pronouns is comparable to that of definite or indefinite NPs, or whether they are distinct from either.
The aspect of the semantic/pragmatic behaviour of man that I focus on is the discourse anaphoric potential (DAP) of existential man: that is, which kinds of anaphoric nominal elements existential man can serve as an antecedent for, and, conversely, the referents of which nominal elements can be taken up by existential man. The DAP of (in)definite NPs is very well studied (see Heim 2011); the data on the DAP of existential man are still incomplete (see Cabredo-Hofherr 2010; Malamud 2013 for previous results).
As English one lacks an existential use, existential man is usually translated as someone (see (2)). This translation is inadequate. The scope behaviour of existential man is not comparable to that of indefinite pronouns: existential man, unlike indefinite pronouns, always takes narrow scope with respect to other quantifiers (Zifonun 2000). Based on the data presented below, I argue that it denotes an indeterminate "group" of individuals (possibly a single person), which I label "X". Which individuals the speaker means by "X" can, in the right contexts, be inferred.
The DAP of Existential Man across Sentence Boundaries
The DAP of (in)definite NPs across sentence boundaries is summarized in (3).
(3) a) Indefinite NPs can occur discourse initially and can serve as antecedents for strictly anaphoric expressions (i.e., definite NPs and personal pronouns). They cannot take up discourse referents (DRs) that have been previously introduced. b) Definite NPs are marked discourse initially. They can serve as antecedents for other strictly anaphoric expressions and can take up DRs that have been previously introduced.
Like indefinite but unlike definite NPs, existential man can occur discourse initially. That is, the group X does not have to be previously introduced. In this case, existential man is intuitively interpreted similarly to an indefinite pronoun.
have you that heard yesterday has man the uni set-on-fire 'Did you hear? Yesterday, X set the university on fire.' (X ≈ someone) Unlike definite and indefinite NPs, though, X cannot be taken up by 3rd sg. pronouns or arbitrary singular definite descriptions (see Cabredo-Hofherr 2008; Zifonun 2000). None of the expressions in subject position in (5) can refer back to the X denoted by man in (4).
the man / he / she has a match in a garbage-can thrown 'The man / he / she threw a match in a garbage can.' The group X in (4) can, however, be taken up by (i) 3rd pl. personal and demonstrative pronouns with a corporate/bridging reading, (ii) bridging definite NPs (see Schwarz 2009), and (iii) existential man. This is illustrated in (6) (= i & ii) and (7) (= iii), which can both continue (4). man has a match in a garbage-can thrown 'X' threw a match in a garbage can.' (X' → X of (4)) The expressions in (6), which intuitively refer to X in (4), are not strictly anaphoric to it. The perceived "reference sharing", I argue, is the result of inference. This is supported by the different number specifications of man (sg.) and sie/die/die Brandstifter (pl.); strictly anaphoric expressions (e.g., co-referring personal pronouns and definite NPs) always agree with their antecedents in person and number. Subsequent occurrences of existential man may but do not have to refer back to the group of individuals given by a preceding occurrence of existential man. Example (9) can continue (4), just like (8).
man searches still for the arsonists 'X'' is still looking for the arsonists.' (X'' → the police) Since the group understood for existential man in (4), the arsonists, is explicitly referred to in (8), another group of individuals has to be inferred for man in (8). World knowledge suggests that the people looking for the arsonists are most plausibly the police.
Lastly, existential man can be interpreted as denoting previously introduced DRs, as in (9). This possibility is shared only by definite NPs.
(9) Eine Gruppe von Studenten ist für ihren Vandalismus bekannt. Gestern hat man zum Beispiel die Uni angezündet. a group of students is for their vandalism known yesterday has man for example the uni set-on-fire 'A group of students is known for their vandalism. For example, yesterday X set the university on fire.' (X → the group of students) In (9), zum Beispiel (Engl. 'for example') signals that the second sentence takes up the subject matter of the preceding sentence. Hence, the group of students introduced in the first sentence is a plausible candidate for the agents of the second sentence (= X). Crucially, the speaker in (9) does not explicitly claim that the group of students is responsible for setting the university on fire, which would be the case if she had used the strictly anaphoric 3rd pl. personal pronoun sie (Engl. 'they'). This, I argue, is a result of determining the specification of X via inference.
In general, highly topical or salient DRs can be inferred as "referents" of man -provided that the discourse relations that link the utterances, as in (9), do not discourage this inference (see Asher & Lascarides 2003 on discourse relations).
man has self decided pro to stay 'X decided to stay.' (= X decided that X stays) The possibility of reflexivization and control for existential man is not a counterargument against the claim that man cannot co-refer with strictly anaphoric expressions. Here I follow Chierchia (1995) and Landau (2010) in assuming that reflexivization and control do not involve co-reference.
Multiple occurrences of existential man in multi-clausal sentences (possibility (iii) above) can again be read either as referring to the same group or a larger group of individuals, or as referring to two (not necessarily overlapping) groups, as in (11).The former reading is preferred.
(11) Man hat hoffnungsvoll gefragt, ob man sich morgen trifft.man has hopeful asked whether man self tomorrow meets 'X asked hopeful whether X' are meeting up tomorrow.'
Interim Summary
From Sections 2.1 and 2.2, I conclude that existential man and (in)definite NPs differ as follows:
• Indefinite NPs always introduce new DRs and, hence, cannot refer to DRs that were previously introduced.
• Definite NPs (almost) always refer to DRs that were previously introduced.
• Existential man never introduces new DRs that could be referred to by anaphoric expressions, and cannot refer to DRs that were previously introduced.
As Condoravdi (1989), Moltmann (2012), and Malamud (2013), among others, argue, man contributes a free variable that is, in the generic use of man, unselectively bound by Gen. If this idea is to be extended to existential man, one has to find a way to distinguish variables, which are needed to model quantification, from DRs, which are needed to model anaphoric relations, and to connect the two appropriately. The core idea for an analysis with these features is sketched in Section 5.
EXISTENTIAL MAN AND IMPLICIT AGENTS
The data on the DAP of existential man given in Section 2 mirror the DAP of German implicit agents of short passives (IAPs), which are "strong implicit arguments" (Landau 2010). IAPs in German can occur discourse initially, as in (12a), but cannot be taken up in a subsequent sentence by strictly anaphoric expressions, as in (12b).
the uni was iap set-on-fire 'Someone set the university on fire.' b) #Der Mann / er / sie hat Benzin verwendet.
the man / he / she has gas used 'The man / he / she used gas.' The IAP in (12a) can be picked up by bridging definites and corporate/bridging pronouns, as in (13), as well as by another IAP, as in (14). it was iap decided pro gas to use (≈ Someone/X decided to use gas.) b) Hier wurde iap sich nicht geprügelt.
here was iap self not hit (≈ 'No one hit each other here.') In sum, this strong parallel in the DAP of existential man and IAPs suggests that they should indeed be analyzed similarly (pace Malamud 2013). Koenig & Mauner (1999) observe the same characteristics for French IAPs and the impersonal pronoun on. They introduce the notion of "a-definites" in (16) to refer to expressions with this DAP (compare to Section 2.3).
(16) A-definites are expressions that are "inert in discourse": they cannot serve as the anchor of an anaphoric element, unless the perceived anaphoricity is the result of lexical or inferential processes. (Koenig & Mauner 1999: 213, 220ff.)
IMPERSONAL PRONOUNS AND QVE
As the main argument for classifying impersonal pronouns as indefinite-like expressions, it is usually observed that impersonal pronouns show "classical" quantificational variability effects (QVE) with adverbs of quantification like often, usually, or seldom (see Malamud 2013). In addition, Malamud (2013: 26) observes that English IAPs show only QVE-like effects with for the most part, as in (18). Such QVE-like effects have been reported to occur only with plural definite expressions (Nakanishi & Romero 2004).

(18) In Spain, Michael Jackson is for the most part admired. (Malamud 2013: 21)
(≈QVE Most Spaniards admire Michael Jackson.)

Hence, the availability of QVE vs. QVE-like effects apparently differentiates impersonal pronouns from implicit agents. That is, the result of Section 3, that existential man and IAPs show parallel behaviour, seems to be incorrect.
However, this conclusion is premature. Firstly, there are cases of classical QVE with German IAPs: (19) can be interpreted as stating that the majority of implicit agents (i.e., doctors/researchers) assume the given list of reasons. That is, üblicherweise (Engl. 'usually') quantifies over implicit agents. Secondly, man also shows QVE-like effects with größtenteils (Engl. 'for the most part'), as in (20):

(20) Man war größtenteils in legerer Sommerkleidung gekommen.
man was for-the-most-part in casual summer-dress come
'For the most part, X had appeared in casual summer dress.' (X → the audience)
(≈QVE Most people in the audience had appeared in casual summer dress.)

Together, (17) and (20) would imply that man has to be classified and analyzed as both indefinite and definite, which is an undesirable result. I believe that the possibilities regarding QVE vs. QVE-like effects vary with the uses of man. For reasons of space, further details have to be left for another occasion. The upshot is that the possibility of QVE with man is not air-tight evidence that man is an indefinite(-like) expression.
SKETCHING A FORMAL ANALYSIS FOR MAN
To capture the DAP of existential man, we need a formal system that can distinguish between variables and DRs (see Section 2.3). While the formal system proposed by Onea (2013, 2015) is not explicitly designed to do this, it can be extended to capture this distinction.
In Onea's (2013, 2015) system, all lexical entries take assignment functions as arguments. "Referential expressions" (i.e., (in)definite NPs, proper names, pronouns) place constraints on these assignments. A proper name like Peter, as in (21), contributes the value h(i) (= an individual) returned by the assignment argument h for its index i, provided that the restriction on the assignment, h(i) = Peter, is met.
A sentence like Peter laughs is assigned the denotation in (22), which is true for an assignment h iff h(i) = Peter and h(i) is laughing in w. The restriction on compatible assignments contributed by Peter in (21) is inherited by the full sentence. In this system, quantification and binding both utilize the assignment arguments. Quantifiers quantify over sets of assignments; pronouns denote the output that their assignment argument provides for the index that they bear, as in (23).

(23) [[pron_i]]^w = λh.h(i)

Onea's system is only designed to handle intra-sentential binding and anaphora. To model cross-sentential anaphora, I extend it by a parameter G, which records the active DRs.
G is a set of assignment functions. At the start of the conversation, G equals the set of all assignment functions A. Each subsequent sentence reduces this set. For instance, the denotation of Peter laughs in (22) removes all assignment functions in G that do not output Peter for the index i, or for which the individual returned for i does not laugh in w.
For man, I assume that it has the same denotation as anaphoric pronouns, as in (23), which is equivalent to assuming that man contributes a free variable in more familiar static systems.
To ensure that existential man does not access or restrict the set G (i.e., does not access or contribute a DR), we need to assume that it is bound by a selective variant of existential closure at the VP level (Onea 2015). The denotation of Man hat gelacht (Engl. ≈ 'Someone laughed') is as in (25).
The selective existential closure operator ∃_i in (25) introduces existential quantification over assignments g that are identical to the assignment argument h except for the output for the index i (g =_i h). Since only the restrictions placed on h constrain G, any content that is predicated of g(i) will neither access nor add restrictions to G. Conceptually, this ensures that existential man cannot refer to existing DRs or introduce new DRs, as desired.
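The pieces of the proposal can be summarized in a single display. The following LaTeX fragment is my reconstruction of (21), (23), and (25) from the prose above; the exact type-theoretic bookkeeping in Onea's original system may differ.

```latex
% Reconstructed denotations (notation follows the description in the text;
% g =_i h abbreviates: g agrees with h except possibly at index i).
\begin{align*}
&\text{(21)}\quad [\![\,\mathit{Peter}_i\,]\!]^{w}
  = \lambda h \colon h(i) = \mathrm{Peter}.\; h(i)\\[3pt]
&\text{(23)}\quad [\![\,\mathit{pron}_i\,]\!]^{w} = \lambda h.\; h(i)\\[3pt]
&\text{(25)}\quad [\![\,\exists_i\,[\,\mathit{man}_i\ \text{hat gelacht}\,]\,]\!]^{w}
  = \lambda h.\; \exists g\,\bigl(g =_i h \wedge \mathrm{laugh}_{w}(g(i))\bigr)
\end{align*}
```

Because the quantification in (25) ranges over the locally shifted assignments g rather than the argument h itself, nothing predicated of g(i) survives into the restrictions that update G.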
The generic use of man is captured by assuming that man is bound by Gen at the sentence level (see Condoravdi 1989; Moltmann 2012; Malamud 2013), and QVE with man can be modeled by assuming that it is bound by an adverb of quantification (see Malamud 2013). For reasons of space, I cannot present this proposal and its implications in any more detail.
CONCLUSION
I have shown that the DAP of existential man differs from that of indefinite and definite NPs, but is parallel to that of German IAPs, which can be classified as "a-definites" following Koenig & Mauner (1999). Furthermore, I showed that using the availability of QVE as an argument for the claim that impersonal pronouns are indefinite is not as straightforward as has been previously claimed. Lastly, I sketched a formal analysis that can capture the DAP of existential man outlined in this paper.
Flattening axial intensity oscillations of a diffracted Bessel beam through a cardioid-like hole
We present a new, feasible way to flatten the axial intensity oscillations for diffraction of a finite-sized Bessel beam by designing a cardioid-like hole. The boundary formula of the cardioid-like hole is given analytically. Numerical results obtained with the complete Rayleigh-Sommerfeld method reveal that the Bessel beam propagates stably over a considerably long axial range after passing through the cardioid-like hole. Compared with the gradually absorbing apodization technique of previous papers, in this paper a hard truncation of the incident Bessel beam is employed at the cardioid-like hole edges. The proposed hard apodization technique offers two advantages in suppressing the axial intensity oscillations, namely easier implementation and higher accuracy. It is expected to have practical applications in laser machining, light sectioning, and optical trapping.
Introduction
The nondiffracting beam exhibits a strict propagation-invariance property in free space, with good axial intensity uniformity as well as stable transverse resolution [1]. Therefore, it attracts intensive research interest due to its wide applications in laser machining [2], light sectioning [3], interferometry [4], optical trapping [5,6], microscopy [7], and so on. Durnin et al. demonstrated mathematically that three-dimensional infinite-sized Bessel beams are exact solutions of the scalar wave equation [8,9]. However, it is physically impossible to generate an infinite-sized Bessel beam, since it is not square integrable and contains infinite energy. Approximate Bessel beams were created in experiments by using an annular slit in the back focal plane of a lens [10][11][12], holographic techniques [12][13][14][15], axicons [16,17], or other means [18]. In many previous papers, it was reported that a Bessel beam diffracted from a finite-sized circular hole presents prominent axial intensity oscillations [19][20][21]. A sudden cut-off of the incident field leads to a strong diffraction effect at the hole edges. This problem was generally solved by covering the hole with a gradually absorbing mask. Almost complete suppression of the axial intensity oscillations was achieved through apodization of the incident Bessel beam with flattened Gaussian or trigonometric apodization functions [20,22,23].
Although the gradually absorbing apodization technology yields a flat axial intensity profile theoretically, it faces two problems in practical applications. Firstly, the amplitude modulation is usually realized with a spatial light modulator (SLM) [24][25][26][27][28], but it is difficult to adjust the transmittance to exactly the expected value. If each pixel of the SLM introduces a transmittance error, the obtained intensity distribution will deviate considerably from the ideal one. Secondly, it is difficult for both the total size and the spatial resolution of the SLM to meet the requirements simultaneously, such that the apodized field drops very smoothly around the aperture edges. Based on these two considerations, a question is raised: can we flatten the axial intensity oscillations of a finite-sized Bessel beam by a hard apodization (HA) technique? Here, the HA technique means that there is only a transmitting hole in the input plane, with all other parts being opaque. Gray-level transmittance is no longer used, which relieves the experimental difficulty and increases the achieved intensity accuracy. For comparison, we refer to the gradually absorbing apodization technology as the soft apodization (SA) technique.
This paper is organized as follows. In Section 2, the design principles are described in detail, and the boundary formula of the transmitted hole is deduced analytically. To demonstrate the validity of our proposed strategy, in Section 3 the parameters are given and the wave propagation behavior is simulated by the complete Rayleigh-Sommerfeld method [29]. Physical explanations are also delivered. Section 3 consists of two subsections. In Subsection 3.1, axial intensity distributions are calculated, and the intensity uniformity is characterized. In Subsection 3.2, intensities on several cross-sectional planes within the beam propagation range are evaluated. A brief conclusion is drawn in Section 4, with some discussions.
Boundary formula of the transmitted hole for hard apodization
Let us first consider the diffraction of a Bessel beam from a finite-sized circular hole. The input plane is the xy plane, situated at z₀ = 0 m. The incident Bessel beam propagates along the z-axis, and its amplitude distribution follows the function J0(βr), where J0 is the zeroth-order Bessel function; β is the propagation constant; and r = √(x² + y²) denotes the radial distance from an arbitrary source point (x, y) on the input plane to the origin.
In the transmitted region (z > 0), the field at any point (x′, y′, z) is calculated by the complete Rayleigh-Sommerfeld method as [29]

E(x′, y′, z) = (kz / i2π) ∬_Ω E₀(x, y, 0) (e^{ikρ} / ρ²) dx dy,    (1)

where k = 2π/λ represents the wave number and λ the incident wavelength; Ω denotes the integral region of the circular hole; and ρ = √((x′ − x)² + (y′ − y)² + z²) is the distance from the source point (x, y, 0) on the input plane to the observation point (x′, y′, z). E₀(x, y, 0) = J0(βr) = J0(β√(x² + y²)) stands for the incident field. The intensity at the observation point is given by I(x′, y′, z) = |E(x′, y′, z)|², where |E| denotes the magnitude of the complex number E. On setting x′ = y′ = 0, we obtain the axial intensity distribution of the Bessel beam diffracted from the circular hole. In this case, the axial field can be simplified in polar coordinates as

E(0, 0, z) = (kz / i) ∫₀^R E₀(r, 0) (e^{ikρ} / ρ²) r dr,    (2)

where R is the radius of the circular hole; E₀(r, 0) = J0(βr) represents the incident Bessel beam on the input plane z = 0; r = √(x² + y²) denotes the radial distance on the input plane; and now ρ = √(r² + z²). It is well known that strong axial intensity oscillations are encountered due to a hard truncation of the incident field. For suppressing the axial intensity oscillations, Cox et al. introduced in Ref. [20] a trigonometric apodization function, Eq. (3), where T(r) represents the amplitude transmittance function on the input plane and ε is a smoothing parameter ranging from 0 to 1: T(r) equals 1 for 0 ≤ r ≤ εR and falls trigonometrically from 1 to 0 as r increases from εR to R. Under this SA condition, the transmittance gradually decreases from 1 to 0 within the annular region r ∈ [εR, R], as displayed in Fig. 1(a). The green and pink dashed curves depict two circles with radii εR and R, respectively. It is seen from Fig. 1(a) that the color gradually changes from red to blue as the radius increases from εR to R; correspondingly, the transmittance monotonically decreases from 1 to 0. For calculating the transmitted axial intensity, Eq. (2) is used with the input field E₀(r, 0) = J0(βr)T(r).
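To make the on-axis integral in Eq. (2) concrete, here is a minimal numerical sketch (my reconstruction, not the authors' code). The parameters are deliberately downscaled (R = 0.5 mm instead of the paper's 50 mm) so the quadrature stays cheap. As a sanity check, the plane-wave limit β = 0 should reproduce the well-known on-axis Fresnel oscillation behind a circular aperture: intensity near 4 at z = R²/λ and near 0 at z = R²/(2λ).

```python
import numpy as np
from scipy.special import j0  # zeroth-order Bessel function J0

def axial_field(z, R, lam, beta, n=20000):
    """On-axis field behind a hole of radius R, following Eq. (2):
    E(0,0,z) = (k z / i) * int_0^R J0(beta r) exp(i k rho) / rho^2 * r dr,
    with rho = sqrt(r^2 + z^2). Simple trapezoidal quadrature."""
    k = 2.0 * np.pi / lam
    r = np.linspace(0.0, R, n)
    rho = np.sqrt(r**2 + z**2)
    f = j0(beta * r) * np.exp(1j * k * rho) / rho**2 * r
    integral = np.sum((f[:-1] + f[1:]) * np.diff(r)) / 2.0  # trapezoid rule
    return (k * z / 1j) * integral

# Downscaled demo parameters (NOT the paper's values).
lam, R = 0.5e-6, 0.5e-3                       # wavelength 0.5 um, hole radius 0.5 mm
I_peak = abs(axial_field(R**2 / lam, R, lam, beta=0.0))**2        # expect ~4
I_null = abs(axial_field(R**2 / (2 * lam), R, lam, beta=0.0))**2  # expect ~0
```

With β > 0 the same routine yields the oscillating axial profile of a truncated Bessel beam discussed in Section 3; a much finer radial grid is needed at the paper's full-scale parameters because the phase oscillates far more rapidly.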
By using this SA technique, they achieved a flat axial intensity distribution of the diffracted Bessel beam, as seen in Figs. 4(a) and 4(b) of Ref. [20]. In the following, we discuss our strategy. From Eq. (2), the field at any axial point (0, 0, z) is a diffraction superposition of all source points on the input plane. Since there is only one integral variable r in Eq. (2), the optical system is rotationally symmetric. Namely, on the input plane, every source point on a circle of the same radius r has the same incident field E₀(r, 0) = J0(βr) and contributes equally to a given axial observation point. Therefore, we can make the following equivalent change. For instance, in Fig. 1(a), if the transmittance T(r₀) equals 0.5 for some radius r₀, we may let the incident field pass through one half of the circle of radius r₀ while the other half is blocked. It is easy to see that we obtain the same axial intensity distribution in both cases. For other transmittance values, the only difference lies in the filling ratio of the transmitted part. We now turn to the general shape that the transmitted hole should take. In Fig. 1(b), the shaded region represents the equivalent transmitted part for a given radius r ∈ [εR, R]. The filling ratio F(r), i.e., the transmitted fraction of the circle of radius r, must fulfill

F(r) = T(r).    (4)

After mathematical transformation, Eq. (4) can be written explicitly as the hole boundary in polar coordinates (r, φ),

φ(r) = ±π T(r), r ∈ [εR, R],    (5)

so that the transmitted angular sector at radius r has width 2πT(r). All points satisfying Eq. (5) form a closed curve, as drawn by the cardioid-like cyan solid curve in Fig. 1(c). From the above analysis, if the incident Bessel beam passes through a cardioid-like hole while being blocked outside, we can expect the same flat axial intensity distribution as in Ref. [20]. Figure 1(c) shows the transmittance distribution on the input plane under the proposed HA condition, the transmittance being 1 inside and 0 outside the region encircled by the cardioid-like curve.
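The equivalence argument above can be checked numerically at the level of transmitted area. The snippet below is a sketch under an assumed cosine-squared window for T(r) (the exact trigonometric form of Eq. (3) in Ref. [20] may differ): it compares the SA "effective transmitted area" ∫ T(r)·2πr dr with the geometric area of the cardioid-like hole whose boundary is φ(r) = ±πT(r), the latter computed independently by a membership test on a Cartesian grid.

```python
import numpy as np

def T(r, R, eps):
    """Assumed trigonometric transmittance window (illustrative stand-in for
    Eq. (3) of Ref. [20]): 1 for r <= eps*R, cosine-squared fall-off on
    [eps*R, R], 0 for r > R."""
    r = np.asarray(r, dtype=float)
    fall = np.cos(np.pi * (r - eps * R) / (2.0 * R * (1.0 - eps)))**2
    return np.where(r <= eps * R, 1.0, np.where(r >= R, 0.0, fall))

R, eps = 50e-3, 0.8

# SA: effective transmitted area = integral of T(r) over the full disk.
r = np.linspace(0.0, R, 100001)
y = T(r, R, eps) * 2.0 * np.pi * r
area_sa = np.sum((y[:-1] + y[1:]) * np.diff(r)) / 2.0  # trapezoid rule

# HA: area of the hole bounded by phi(r) = +/- pi*T(r), via a grid
# membership test (a point (x, y) is transmitted iff |phi| <= pi*T(r)).
n = 1501
xs = np.linspace(-R, R, n)
X, Y = np.meshgrid(xs, xs)
rr = np.hypot(X, Y)
phi = np.abs(np.arctan2(Y, X))
inside = (rr <= R) & (phi <= np.pi * T(rr, R, eps))
area_ha = inside.mean() * (2.0 * R)**2
```

The two areas agree to within the grid resolution, which is exactly the point of the construction: the hard hole transmits, at every radius, the same total power as the soft mask.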
Propagating behaviors of the apodized Bessel beams
In this section, we perform numerical simulations to prove the validity of the proposed HA technique. The parameters are chosen as follows: for the incident Bessel beam, the propagation constant is β = 10⁴ m⁻¹ and the wavelength is λ = 0.5 µm; the circular hole has a radius of R = 50 mm. In this subsection, the axial intensity distribution of the diffracted Bessel beam is investigated. Firstly, by setting E₀(r, 0) = J0(βr) in Eq. (2), we obtain the axial intensity distribution of the Bessel beam diffracted from a finite-sized circular hole without apodization, as shown by the black curve in Fig. 2(a). It is seen from Fig. 2(a) that the axial intensity oscillates with a relative error of about ±7%. Secondly, the axial intensity profiles flattened by the SA and HA techniques are compared to prove their equivalence, as shown by the blue solid and red dashed curves in Fig. 2(b), respectively. In both cases, we choose the same smoothing parameter, ε = 0.5. Under the SA condition, the incident field on the input plane in Eq. (2) is E₀(r, 0) = J0(βr)T(r), where T(r) is given by Eq. (3), and the integral region Ω₁ is the circular hole. Under the HA condition, since the cardioid-like hole is not rotationally symmetric, the axial intensity must be calculated from Eq. (1), where the incident field on the input plane is E₀(x, y, 0) = J0(β√(x² + y²)) and the integral region Ω₂ is the cardioid-like hole determined by Eq. (5). Figure 2(b) demonstrates that both curves overlap; that is, we achieve the same flat axial intensity distribution of the diffracted Bessel beam with the previous SA and the proposed HA techniques. Thirdly, by varying the smoothing parameter ε in Eq. (5), we computed the axial intensity distributions of the Bessel beam diffracted by the cardioid-like hole, as drawn in Fig. 2(c). The smoothing parameter ε ranges from 0.7 to 0.9 in steps of 0.05.
The blue, green, red, cyan, and black curves correspond to smoothing parameters of 0.7, 0.75, 0.8, 0.85, and 0.9, respectively. With increasing smoothing parameter, the propagation distance is lengthened while the axial intensity uniformity is weakened, as observed in Fig. 2(c). This can be interpreted as follows: when the smoothing parameter becomes larger, the transmittance drops more rapidly from 1 to 0 near the hole edges; therefore, the diffraction effect is stronger and the axial intensity oscillations become more prominent. When the smoothing parameter ε reaches 1, the situation degenerates to the case of Fig. 2(a). Taking both the axial intensity uniformity and the propagation distance into account, the smoothing parameter is chosen as a compromise value of 0.8 in the following.
The axial intensity distribution of the apodized Bessel beam
Next, the axial intensity uniformity is quantitatively characterized for a smoothing parameter of 0.8. The flat axial intensity region is defined as the axial domain with an intensity relative error of less than ±0.01%. Numerical results reveal that the flat axial intensity region extends out to z_f = 32.01 m. The axial range expands to 39.15 m and 46.42 m when the restriction on the intensity relative error is relaxed to ±0.1% and ±1%, respectively. Beyond that, the axial intensity oscillations gradually strengthen. After the axial intensity reaches its peak value of 1.037 at z = 48.44 m, it drops rapidly towards 0. For propagation distances longer than 61.46 m, the axial intensity drops below 1%. The numerical simulations agree well with the result obtained from geometrical optics: since the Bessel beam can be considered a superposition of plane waves whose wave vectors lie on a cone at an angle θ with respect to the z-axis [10,20], with β = k sin(θ) = 2π sin(θ)/λ, the effective axial beam range in geometrical optics is z_max = R/tan(θ) = 62.83 m.
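The geometric-optics estimate quoted above follows directly from the cone-angle relation and the paper's parameters; a quick arithmetic check:

```python
import math

# Paper's parameters: beta = 1e4 1/m, lambda = 0.5 um, R = 50 mm.
lam, beta, R = 0.5e-6, 1.0e4, 50e-3
k = 2.0 * math.pi / lam          # wave number
theta = math.asin(beta / k)      # cone half-angle of the plane-wave spectrum
z_max = R / math.tan(theta)      # effective axial beam range
print(round(z_max, 2))           # -> 62.83, as quoted in the text
```

Because θ ≈ 0.8 mrad is tiny, sin(θ) ≈ tan(θ), so z_max ≈ kR/β to excellent accuracy.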
Intensity distributions on several cross-sectional planes for the hard apodized Bessel beam
Since the cardioid-like curve in Fig. 1(c) is not rotationally symmetric, the field on cross-sectional planes should also be asymmetric. In this case, the transmitted intensity distributions are calculated from Eq. (1), with the integral region Ω being the cardioid-like hole. To see how asymmetric the field is, Figure 3 displays the regional intensity deviation on three cross-sectional planes. Figure 3(a) presents the intensity deviation ∆I₁ = I₁(x₁, y₁) − I₀(x, y), where I₁(x₁, y₁) represents the intensity distribution on the lateral plane z₁ = 10 m and I₀(x, y) = |J0(β√(x² + y²))|² is the intensity of the incident Bessel beam on the input plane z₀ = 0 m. Figures 3(b) and 3(c) are the same as Fig. 3(a), except for z₂ = 20 m and z₃ = 30 m, respectively. A cardioid-like shape of the intensity deviation is observed in Fig. 3, which suggests that the asymmetry of the cardioid-like hole leads to the intensity asymmetry. As the Bessel beam propagates, the size of the cardioid-like shape gradually decreases. The intensity deviation is on the order of 10⁻³ according to the color bars, indicating that the intensity relative error is less than 1%. Accordingly, the diffracted Bessel beam preserves a relatively stable propagation property over a considerably long axial range, although it is somewhat asymmetric. To see the intensity asymmetry more clearly, we extract the intensity distributions along the x-axis and the y-axis on the above three cross-sectional planes, as shown in Figs. 4(a), 4(b), and 4(c). The blue solid and red dashed curves correspond to intensities along the x-axis and the y-axis, respectively. It is seen from Figs. 4(a) to 4(c) that the intensity profiles on both axes almost overlap.
In order to magnify the intensity difference along the x-axis and the y-axis, we calculate the intensity deviations ∆I_x = I_i(x_i, 0) − |J0(β|x|)|² and ∆I_y = I_i(0, y_i) − |J0(β|y|)|² on the three cross-sectional planes, as shown by the blue and red curves in Fig. 4(d) from top to bottom. Here, I_i(x_i, 0) and I_i(0, y_i) (i = 1, 2, 3) represent the intensities along the x-axis and the y-axis, respectively. The intensity deviation ∆I_x or ∆I_y gives the difference between the intensity at the observation point and that at the corresponding point on the input plane, indicating the degree of propagation invariance. It is seen from Fig. 4(d) that the red curves (∆I_y) are symmetric about the origin, while the blue curves (∆I_x) are asymmetric about the origin. The reason is that the cardioid-like hole is symmetric about the x-axis but asymmetric about the y-axis. Since the maximum intensity deviation is less than ±10⁻², the diffracted Bessel beam propagates relatively stably within the beam propagation range. If we calculate the asymmetric intensity difference ∆I_xy = I_x − I_y = ∆I_x − ∆I_y, the intensity asymmetry can be quantified. The intensity asymmetry does exist, since the two curves in Fig. 4(d) are clearly separated. However, the maximum asymmetric intensity difference ∆I_xy is only 0.89%. In particular, for applications of Bessel beams, we are generally interested in the few lobes near the center; the asymmetric intensity difference ∆I_xy is below 0.12% for the central five intensity lobes. Consequently, the intensity asymmetry can almost be neglected in the paraxial region.
Conclusion and discussions
In this paper, we have presented a new, feasible way to flatten the axial intensity distribution of a diffracted finite-sized Bessel beam. Numerical results obtained with the complete Rayleigh-Sommerfeld method have validated our strategy. It is found that the Bessel beam diffracted from a cardioid-like hole maintains a very good propagation-invariance property along the axial direction. The boundary formula of the cardioid-like hole is given analytically. Compared with the previous SA technique, the proposed HA technique provides a more feasible choice for practical applications, with higher accuracy, since the error only arises from the hole boundary.
In fact, the proposed HA technique is a universal method for suppressing axial intensity oscillations, transforming a transmittance distribution into spatial transmitted filling factors. Therefore, for other amplitude transmittance functions, such as the flattened Gaussian shape, we can obtain by analogy another closed transmitted hole on the input plane. Moreover, the proposed method may be extended to suppress axial intensity oscillations for other kinds of incident waves, such as plane waves or Gaussian waves. It is expected to have practical applications in many optical processing systems.
Particle-Resolved Computational Fluid Dynamics as the Basis for Thermal Process Intensification of Fixed-Bed Reactors on Multiple Scales
Process intensification of catalytic fixed-bed reactors is of vital interest and can be conducted on different length scales, ranging from the molecular scale to the pellet scale to the plant scale. Particle-resolved computational fluid dynamics (CFD) is used to characterize different reactor designs with regard to optimized heat transport characteristics on the pellet scale. Packings of cylinders, Raschig rings, four-hole cylinders, and spheres were investigated regarding their impact on bed morphology, fluid dynamics, and heat transport; for the latter particle shape, the influence of macroscopic wall structures on the radial heat transport was also studied. Key performance indicators, such as the global heat transfer coefficient and the specific pressure drop, were evaluated to compare the thermal performance of the different designs. For plant-scale intensification, effective transport parameters needed for simplified pseudo-homogeneous two-dimensional plug flow models were determined from the CFD results, and the accuracy of the simplified modeling approach was judged.
Introduction
Fixed-bed reactors are heavily used in the chemical and process industry, especially in the field of heterogeneous catalysis, where thousands of individual catalytic fixed-bed tubes with a low tube-to-particle diameter ratio (N ≤ 10) are interconnected into tube-bundle reactors. This design decision is the result of optimizing multiple objectives, such as low pressure drop, good radial heat transport, and high active catalytic surface area [1]. Nevertheless, advancing climate change and the shortage of raw materials make more resource- and energy-efficient processes necessary. Here, numerical methods can play a paramount role in developing better designs faster and more cost-efficiently. In the last few years, particle-resolved computational fluid dynamics (CFD) has been heavily used by numerous authors to develop process intensification strategies focusing on effects on the mesoscopic pellet scale. The range of works extends from investigations of the influence of particle shape on bed morphology and fluid dynamics [2][3][4], heat transport [5][6][7][8], and mass transfer processes [9][10][11][12] to the development of novel reactor concepts, such as packed foams [13][14][15], periodic open-cell structures [16][17][18], finned reactors [19], or the use of random macroscopic wall structures [20,21]. However, particle-resolved CFD is a numerically very demanding method, and its applicability is currently limited to systems with a few thousand particles [22].
Process intensification, however, can take place on different spatial scales [23], ranging from the molecular scale to the pellet scale to the plant scale. On the plant-scale level, process intensification options are process optimization [24][25][26], the development of process integration concepts [1,27,28], and the application of dynamic operating conditions [29,30]. Due to the restrictions discussed above, particle-resolved CFD cannot be applied directly to simulations on the largest scale. For this, process simulation software, e.g., Aspen Plus, gPROMS, or the open-source solution DWSIM, is the more efficient choice. Most often, these software packages use two-dimensional pseudo-homogeneous models to describe the fluid dynamics and the heat and mass transfer of fixed-beds. For these kinds of models, knowledge of effective transport parameters, e.g., the effective viscosity and thermal conductivity, the wall heat transfer coefficient, and the axial dispersion coefficient, is necessary to obtain accurate results. Since those parameters need to lump a series of effects that have their fundamentals in micro- and meso-scale fluid dynamic effects, published data vary greatly [31] and can often only be found for certain reactor designs. This is where particle-resolved CFD comes into play, since it has the potential to act as a data source from which the effective transport parameters needed for a reliable process simulation can be derived. Recent publications by Dixon [32] and Moghaddam [33] show encouraging results.
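The parameter-extraction idea can be sketched in a few lines. The snippet below is a hypothetical illustration, not the workflow of Refs. [32,33]: it builds a synthetic radially resolved temperature profile of the kind a CFD post-processing step would deliver (all numbers assumed) and then recovers the wall heat transfer coefficient α_w from the flux-matching condition −Λ_r dT/dr|_R = α_w (T(R) − T_wall), wait — with heat flowing outward, q_w = −Λ_r dT/dr|_R = α_w (T(R) − T_wall).

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (hypothetical) values standing in for CFD post-processing output:
R          = 0.0275   # tube radius in m (D = 55 mm as in this study)
lam_r      = 2.0      # effective radial thermal conductivity, W/(m K), assumed known
T_wall     = 400.0    # wall temperature, K
alpha_true = 120.0    # wall heat transfer coefficient, W/(m^2 K), used to build data

# Synthetic radial profile T(r) = T0 - b r^2, consistent with alpha_true:
# wall flux q_w = alpha_true * (T(R) - T_wall) must equal 2 * lam_r * b * R.
dT_wall = 10.0                                  # chosen film temperature drop, K
b  = alpha_true * dT_wall / (2.0 * lam_r * R)
T0 = T_wall + dT_wall + b * R**2
r  = np.linspace(0.0, R, 41)
T  = T0 - b * r**2 + rng.normal(0.0, 0.05, r.size)   # "CFD" profile + noise

# Fit T = c0 + c1 * r^2, then back out alpha_w from the flux-matching condition.
c1, c0 = np.polyfit(r**2, T, 1)                 # slope c1 ~ -b, intercept c0 ~ T0
b_fit = -c1
q_w = 2.0 * lam_r * b_fit * R                   # conductive flux arriving at the wall
alpha_fit = q_w / (c0 - b_fit * R**2 - T_wall)  # = q_w / (T_fit(R) - T_wall)
```

In a real workflow, the radial profile comes from azimuthally and axially averaging the particle-resolved temperature field, and Λ_r itself is fitted rather than prescribed; the point here is only the shape of the inverse problem.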
In the scope of this work, we investigated different fixed-bed reactor concepts numerically, using particle-resolved CFD. Besides reactors filled with different particle shapes, namely spheres, cylinders, Raschig rings, and four-hole cylinders, the impact of macroscopic wall structures was additionally studied for packings of spherical particles. The research focus lay on the quantification and qualitative characterization of their heat transport characteristics. For the sake of reduced complexity, and to narrow the investigations down to the impact of fluid dynamic effects only, no chemical reactions were considered in the investigated cases. The aims of this study were to:
1. understand the effect of particle shape and macroscopic wall structures on the packing morphology and, with this, the fluid dynamics and heat transport in fixed-beds;
2. quantify the improvements in fixed-bed reactor design that can be achieved from a fluid dynamics point of view only;
3. increase the phenomenological understanding of fluid dynamics and heat transfer in fixed-bed reactors;
4. show how effective transport parameters, such as the effective thermal conductivity and the wall heat transfer coefficient, can be extracted from particle-resolved CFD results. These parameters can then be used in simplified process simulation models for process intensification on the plant scale.
Materials and Methods
In this section, the fundamentals of the numerical models are briefly discussed. For a more detailed description, the reader is referred to literature that explains the fundamentals of CFD [34], particle-resolved CFD [22] and simplified fixed-bed reactor modeling [24,35] in more detail.
Particle-Resolved CFD
Particle-resolved CFD is an established numerical method for the simulation of fixed-bed reactors. This CFD-based modeling approach is characterized by a full three-dimensional spatial resolution of all particles and their interstices. The general procedure consists of four fundamental steps: packing generation, CAD generation, meshing, and the CFD simulation itself. For a more detailed discussion of particle-resolved CFD, the interested reader is referred to the comprehensive review articles by Dixon et al. [36,37] and Jurtz et al. [22]. A description of all numerical methods used in the scope of this work can be found in the Supplementary Material, Sections S1 and S2, attached to the article. The most important material properties and boundary conditions are summarized in Table 1. All numerical simulations were conducted with the commercial CFD tool Simcenter STAR-CCM+, provided by Siemens PLM Software.
Table 1. Most important material properties of the CFD simulation (excerpt):
Fluid density (kg/m³): ideal gas law
Fluid specific heat (J/(kg K)): 1006.82
Fluid thermal conductivity (W/(m K)): 0.02414

In this study, the discrete element method (DEM) was used to numerically generate random packings of spherical and various cylinder-like particle shapes. For the nonspherical particles, the contact detection algorithm of Feng et al. [39] for cylindrical particles was used. The particle beds of Raschig rings and four-hole cylinders are identical to those of the cylinders in terms of particle position and orientation, since they are based on the same DEM simulation results. This means that for particles with inner voids, such as Raschig rings and multi-hole cylinders, the effect of the inner voids on the particle dynamics during the filling process was neglected. In our previous studies, it was shown that this is a valid assumption [4]. For all filling simulations, the linear spring contact model was used. An overview of all generated packings is given in Figure 1.
The aim of this study was to analyze the impact of particle shape and packing mode on the fluid dynamics and heat transfer. Therefore, fixed-beds filled with different particle shapes were generated, whereby the tube-to-particle diameter ratio was held fixed at N = 5 by setting a constant volumetric sphere-equivalent particle diameter of d_p,v = 11 mm and an inner tube diameter of D = 55 mm. It is known from previous studies [40,41] that even- or odd-numbered tube-to-particle diameter ratios can lead to additional heterogeneities in the bed morphology. It was expected that additional morphological heterogeneities would lead to an increase of thermal heterogeneities as well. In order to identify the limitations of the pseudo-homogeneous two-dimensional plug flow model, we decided to benchmark the model for such an extreme case against particle-resolved CFD results. Increasing the particle thermal conductivity to extreme values, as recently done by Moghaddam et al. [33], is also an option to increase thermal heterogeneities. However, since most applications use ceramic-type catalyst supports, which are often porous and characterized by a low thermal conductivity (λ_s ≈ 0.2 W/(m K)), we decided against that option. A sketch of each investigated particle shape, including its dimensions, is given in Figure 2. The bed height was set to h = 100 d_p,v. For each particle shape, two different packing modes were created: a rather loose and a dense bed configuration. To achieve different packing densities, the static friction coefficient was used as a tuning parameter during the DEM simulation. A more detailed description of this method can be found in our previous publications [4,42].
The macroscopic wall structure was generated with a Java macro. Along the reactor length of 1.1 m, fifteen spheres were homogeneously distributed on each of 96 axial stages, whereby the center of mass of each sphere was placed on the tube radius. Subsequently, each sphere was moved in the outward direction by a distance of rand(0, 1) · d p,v /2. Thereafter, all spheres of each stage were rotated around the reactor axis with a random angle of rand(0, 1) · 2π. In a final step, the macroscopic wall structure was generated by subtracting all spheres from the reactor tube. The structured tube was afterwards filled with spherical particles, whereby the same simulation parameters and boundary conditions were used as for the smooth wall setup.
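The sphere-placement step of this procedure can be sketched as follows (Python stand-in for the original Java macro; the RNG seed and the half-stage axial offset are assumptions made for illustration):

```python
import math
import random

def wall_structure_spheres(tube_radius, d_pv, n_stages=96, n_per_stage=15,
                           bed_height=1.1, seed=42):
    """Sketch of the macro-structuring step: on each axial stage,
    n_per_stage sphere centers start on the tube radius, are shifted
    outward by rand(0,1)*d_pv/2, and the whole stage is rotated by a
    random angle around the reactor axis."""
    rng = random.Random(seed)
    centers = []
    for stage in range(n_stages):
        z = bed_height * (stage + 0.5) / n_stages       # stage height
        phase = rng.random() * 2.0 * math.pi            # random stage rotation
        for i in range(n_per_stage):
            angle = phase + 2.0 * math.pi * i / n_per_stage
            radius = tube_radius + rng.random() * d_pv / 2.0  # outward shift
            centers.append((radius * math.cos(angle),
                            radius * math.sin(angle), z))
    return centers

centers = wall_structure_spheres(tube_radius=0.0275, d_pv=0.011)
```

The resulting centers would then be subtracted from the tube solid in the CAD step to yield the structured wall.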
Meshing
After the bed generation was completed, the position and orientation of all particles were extracted, and based on these data, a CAD model of the fixed-bed was generated by placing CAD parts of the respective particle shape. Subsequently, the geometry was meshed, whereby the improved local "caps" approach, developed by Eppinger et al. [8], was used to avoid bad cell quality near particle-particle and particle-wall contacts. This enhanced meshing strategy is based on earlier work of the authors [41]. In one of our previous studies, it was shown that this meshing approach works well not only for spherical particles, but also for more complex particle shapes such as cylinders, Raschig rings, and multi-holed cylinders [4,43]. It was found that the mesh settings used led to mesh-independent results regarding fluid dynamics and heat transfer [3,10,44]. Recently, Eppinger and Wehinger [8] investigated the impact of the gaps that are introduced between the particles during the meshing process. They found that the gap size had only a marginal effect on bed voidage and pressure drop. However, the fluid in the gaps was no longer stagnant if the gap size was increased above a value of 0.01 d p,v , which is the value that was used in the present work. Therefore, a bigger gap size than the one used in this study could negatively affect the accuracy of heat transfer simulations, since an additional thermal resistance would be introduced for the inter-particle heat transfer.
To avoid unwanted inlet and outlet effects, the inlet and outlet faces were extruded a distance of 1 D and 3 D away from the bed, respectively. Two prism layers with a target thickness of 0.025 d p,v were used to capture the fluid dynamic and thermal boundary layer at the particles and the tube wall.
CFD Simulation
The momentum, energy, and turbulence transport equations were solved in a segregated manner (see Section S2). To reduce complexity and to study convective and conductive heat transfer without a superposed heat transfer mechanism, the simulations were conducted under conditions where radiative heat transfer is negligible; it was therefore not accounted for. Accordingly, an inlet temperature of T 0 = 20 °C and a constant wall temperature of T w = 200 °C were used along the fixed-bed region.
For all simulations, the SIMPLE-algorithm was used for pressure-velocity coupling, and turbulence was considered through Reynolds-averaged Navier-Stokes (RANS) equations in conjunction with realizable k-ε model-based closures. This turbulence model was successfully used in our previous works [4,8,10,41,43].
Simplified Heat Transfer Modeling
The class of pseudo-homogeneous models is widely used for the simulation of fixed-bed reactors. Here, the particle scale is not resolved. Instead, all effects are lumped into effective transport parameters. In terms of heat transport, the effective transport parameters needed are either a radially invariant effective thermal conductivity λ eff,r and a wall heat transfer coefficient α w , or only a radially varying effective radial thermal conductivity λ eff,r (r). The two concepts are described in the following sections.
Pseudo-Homogeneous λ eff,r -α w Model
The pseudo-homogeneous two-dimensional plug flow heat transfer model under steady-state conditions is described by:

u z ρ f c p,f ∂T/∂z = λ eff,r (1/r) ∂/∂r (r ∂T/∂r) + λ eff,z ∂²T/∂z², (1)

where the following boundary conditions are used:

T = T 0 at z = 0, ∂T/∂r = 0 at r = 0, λ eff,r ∂T/∂r = α w (T w − T) at r = R. (2)

Here, u z = u 0 /ε is the constant interstitial velocity, ρ f the fluid density, and c p,f the specific heat of the fluid. The λ eff,r -α w model lumps all radial heat transfer mechanisms into a constant effective radial thermal conductivity λ eff,r . The steep temperature drop at the tube wall is modeled by introducing an artificial wall heat transfer coefficient α w via Equation (2). The thermal conductivity in the axial direction can be assumed to be equal to the stagnant effective thermal conductivity, λ eff,z = λ 0 eff,r , or it can be neglected if the system is dominated by convection. The model itself has been critically discussed by many authors [31,45,46] but is nevertheless widely used due to its simplicity.
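As a minimal numerical illustration, the model of Equations (1) and (2) can be marched axially with an explicit finite-difference scheme, neglecting axial conduction. All property values and grid sizes below are illustrative assumptions; this is not the segregated solver used in the study:

```python
import numpy as np

def solve_plug_flow(R=0.0275, L=1.1, lam=0.5, alpha_w=100.0,
                    rho=1.2, cp=1006.82, u=0.2, T0=20.0, Tw=200.0,
                    nr=25, nz=8000):
    """Explicit finite-difference sketch of the lam_eff,r-alpha_w model:
    radial conduction marched in z, with a Robin condition at the wall.
    nz is chosen large enough for stability (a*dz/dr^2 < 0.5)."""
    r = np.linspace(0.0, R, nr)
    dr, dz = r[1] - r[0], L / nz
    a = lam / (rho * cp * u)                     # lumped coefficient, m
    T = np.full(nr, T0)
    for _ in range(nz):
        dTdr = np.gradient(T, dr)
        lap = np.gradient(r * dTdr, dr) / np.where(r > 0.0, r, dr)
        lap[0] = 2.0 * (T[1] - T[0]) / dr**2     # symmetry at the axis
        T = T + a * dz * lap
        # Robin wall condition: lam * dT/dr = alpha_w * (Tw - T) at r = R
        T[-1] = (T[-2] + dr * alpha_w / lam * Tw) / (1.0 + dr * alpha_w / lam)
    return r, T

r, T = solve_plug_flow()
```

With these assumed values the outlet profile is already close to the wall temperature, which mirrors the fast radial equilibration seen at low Re p.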
It was found by Yagi and Kunii [47] that the radial effective thermal conductivity can be expressed as:

λ eff,r /λ f = λ 0 eff,r /λ f + K Pe 0 , with Pe 0 = Re p Pr.

The first term on the right-hand side is the effective radial thermal conductivity of the stagnant bed. A huge number of correlations exists to determine λ 0 eff,r , which have been reviewed by van Antwerpen et al. [48]. Based on a unit cell approach, Zehner and Schlünder [49] derived the following correlation that is widely used:

λ 0 eff,r /λ f = 1 − √(1 − ε) + (2√(1 − ε))/(1 − B/κ) · [ B(1 − 1/κ)/(1 − B/κ)² · ln(κ/B) − (B + 1)/2 − (B − 1)/(1 − B/κ) ].

Here, κ is the ratio of solid to fluid thermal conductivity and B is the deformation parameter, which is related to the void fraction by B = 1.25((1 − ε)/ε)^(10/9). The correlation can be further extended by incorporating secondary effects like radiative heat transfer or the effect of particle-particle contacts on the heat transfer. For a more detailed description, the interested reader is referred to the work of Tsotsas [50] and van Antwerpen et al. [48].
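The Zehner-Schlünder correlation can be evaluated directly; the sketch below covers only the primary (stagnant) mechanism, without the radiative or contact extensions mentioned above, and the input values in the usage line are illustrative:

```python
import math

def zehner_schluender(eps, lam_s, lam_f):
    """Stagnant-bed effective conductivity ratio lam0_eff_r/lam_f after
    Zehner and Schluender (primary heat transfer only)."""
    kappa = lam_s / lam_f                               # solid/fluid ratio
    B = 1.25 * ((1.0 - eps) / eps) ** (10.0 / 9.0)      # deformation parameter
    s = math.sqrt(1.0 - eps)
    core = (B * (1.0 - 1.0 / kappa) / (1.0 - B / kappa) ** 2
            * math.log(kappa / B)
            - (B + 1.0) / 2.0 - (B - 1.0) / (1.0 - B / kappa))
    return 1.0 - s + s * 2.0 / (1.0 - B / kappa) * core

# e.g. a porous ceramic support (lam_s ~ 0.2 W/(m K)) in air
print(zehner_schluender(0.45, 0.2, 0.02414))
```

The result is bounded between 1 (fluid only) and κ (solid only), which is a quick sanity check for any implementation.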
For the heat transfer coefficient at the wall, Yagi and Kunii [51] proposed a correlation for Nu w = α w d p,v /λ f as a function of Re p and Pr; Nilles and Martin [52,53] later developed a correlation that is also widely used. According to Dixon [31], two methods are commonly used to determine λ eff,r and α w . The first option is parameter estimation: an optimization study based on the λ eff,r -α w model, in which the objective is to minimize the sum of squared errors of the radial temperature profile at one or more axial positions. Alternatively, the method described by Wakao and Kaguei [54] can be used. This method is based on the approximate solution of the pseudo-homogeneous λ eff,r -α w model and allows λ eff,r and α w to be determined from the axial temperature profile in the core of the bed and the average outlet temperature. Both can easily be extracted from the particle-resolved simulation results.
The latter method, presented in great detail by Wakao and Kaguei [54], was used in this study. By neglecting the axial thermal conductivity, the analytical solution of Equation (1) is:

(T w − T)/(T w − T 0 ) = 2 Σ n J 0 (a n r/R) exp(−a n ² y) / [ a n (1 + (a n /Bi)²) J 1 (a n ) ]. (13)

Here, r is the radial position and Bi = α w R/λ eff,r the Biot number. a n is the n-th root of the following equation, which involves the Bessel functions of the first kind and zeroth order, J 0 , and of the first kind and first order, J 1 :

Bi J 0 (a n ) = a n J 1 (a n ).
The parameter y is expressed by:

y = λ eff,r z / (u z ρ f c p,f R²),

where z is the axial position, ρ f the fluid density, and c p,f its specific heat. Deep in the bed, when y ≥ 0.2, the first term of the series in Equation (13) becomes predominant, leading to:

(T w − T)/(T w − T 0 ) = 2 J 0 (a 1 r/R) exp(−a 1 ² y) / [ a 1 (1 + (a 1 /Bi)²) J 1 (a 1 ) ], (16)

with:

Bi J 0 (a 1 ) = a 1 J 1 (a 1 ). (17)

In the center of the bed (r = 0 and T = T core ), Equation (16) reduces to:

(T w − T core )/(T w − T 0 ) = 2 exp(−a 1 ² y) / [ a 1 (1 + (a 1 /Bi)²) J 1 (a 1 ) ]. (18)

Taking the logarithm of Equation (18) gives:

ln[(T w − T core )/(T w − T 0 )] = ln{ 2 / [ a 1 (1 + (a 1 /Bi)²) J 1 (a 1 ) ] } − a 1 ² y. (19)

It was shown by Wakao and Kaguei [54] that the following relationship for the average outlet temperature T m is valid for a reasonably large axial position:

(T w − T m )/(T w − T core ) = 2 J 1 (a 1 )/a 1 . (20)

From Equation (20), a 1 can be solved iteratively, and λ eff,r can subsequently be calculated from the slope of Equation (19). The wall heat transfer coefficient can then be determined either from the intercept of Equation (19) or from Equation (17). The latter method was promoted by Wakao and Kaguei [54], who argued that α w is very sensitive to slight changes of the intercept.
Both methods were tested during this study. A sensitivity test was conducted based on the particle-resolved CFD results for a packing of spherical particles at Re p = 100, in which the accounted temperature range Θ core = (T core − T 0 )/(T w − T 0 ) was varied. It was found that λ eff,r had a relative standard deviation (RSD) of ±6%. The values of α w calculated from the intercept of Equation (19) had a very low RSD of ±3%, while the method suggested by Wakao and Kaguei increased the RSD to ±15%, which contradicts the authors' argumentation. Nevertheless, a huge discrepancy in α w was found: the values of the intercept method were up to three times lower than the values determined from Equation (17). A comparison of particle-resolved CFD results against the results of the two-dimensional plug flow model in terms of axial and radial temperature profiles revealed that the temperature profiles were mispredicted if the intercept method was used, while the method promoted by Wakao and Kaguei led to reasonable results. Therefore, as Wakao and Kaguei did, we also highly recommend calculating α w from Equation (17) instead of from the intercept of Equation (19). To evaluate λ eff,r and α w , the axial temperature profile was limited to 0.2 ≤ Θ ≤ 0.8 in this work.
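The recommended extraction route (a 1 from Equation (20), λ eff,r from the slope of Equation (19), α w from Equation (17)) can be sketched as follows; the input format, the dimensionless core temperatures, and all property values are assumptions of this illustration:

```python
import numpy as np
from scipy.special import j0, j1
from scipy.optimize import brentq

def fit_wall_model(z, theta_core, theta_m_out, R, rho_f, cp_f, u_z):
    """Sketch of the Wakao-Kaguei parameter extraction. theta_core are
    assumed to be (Tw - Tcore)/(Tw - T0) values at axial positions z,
    theta_m_out the analogous average outlet value. Returns
    (lam_eff_r, alpha_w)."""
    # Eq. (20): theta_m/theta_core = 2 J1(a1)/a1  ->  solve for a1
    # (bracket assumes a1 below the first zero of J0, i.e. Bi > 0)
    ratio = theta_m_out / theta_core[-1]
    a1 = brentq(lambda a: 2.0 * j1(a) / a - ratio, 1e-6, 2.404)
    # Eq. (19): ln(theta_core) is linear in z with
    # slope = -a1^2 * lam_eff_r / (u_z * rho_f * cp_f * R^2)
    slope = np.polyfit(z, np.log(theta_core), 1)[0]
    lam_eff_r = -slope * u_z * rho_f * cp_f * R**2 / a1**2
    # Eq. (17): Bi J0(a1) = a1 J1(a1), with Bi = alpha_w R / lam_eff_r
    Bi = a1 * j1(a1) / j0(a1)
    alpha_w = Bi * lam_eff_r / R
    return lam_eff_r, alpha_w
```

Applied to synthetic profiles generated from known parameters, the routine recovers them exactly, which is a useful self-test before feeding in CFD data.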
Pseudo-Homogeneous λ eff,r (r) Model
Instead of describing the additional thermal resistance close to the wall with a heat transfer coefficient, a radially varying effective radial thermal conductivity can be introduced. Furthermore, the radial variations of the interstitial velocity and effective axial thermal conductivity can also be considered. With this, Equation (1) is modified as follows:

u z (r) ρ f c p,f ∂T/∂z = (1/r) ∂/∂r (r λ eff,r (r) ∂T/∂r) + ∂/∂z (λ eff,z (r) ∂T/∂z). (21)

In this case, the artificial boundary condition described in Equation (2) vanishes and is replaced by the following Dirichlet boundary condition:

T = T w at r = R. (22)

As reviewed by Dixon [31], multiple models exist to determine λ eff,r (r). Most often, the reactor is split into two regions to characterize the heat transfer in the near-wall and the bulk region separately. The models reported in the literature vary in their definition of the extent of each region and in their description of λ eff,r (r) = f(r). Ahmed and Fahien [55] defined the wall region to be 2 d p,v thick and used a cubic dependency for λ eff,r (r) in the bulk and a linear decrease in the wall region. They used the correlations of Argo and Smith [56] in combination with correlations for the radial void fraction distribution to obtain the necessary values of λ eff,r in the center of the bed, at the tube wall, and at the interface of both regions. In contrast, Gunn et al. [57][58][59] used a constant value for λ eff,r in the bulk region and assumed a quadratic dependency of T(r) in the wall region. They defined the wall region to be 0.3 d p,v thick. Smirnov et al. [60] defined a wall thickness that depended on bed voidage and particle specific surface area. They used a constant effective thermal conductivity in the bulk region and a linear dependency close to the wall. Winterberg et al. [61] proposed a Reynolds number-dependent thickness of the wall region. In the core region, a constant λ eff,r was assumed, which decreases in the wall region following a power-law approach that depends on the Reynolds number, the Péclet number, and three more parameters.
Recently, Pietschak et al. [62] reviewed several heat transfer correlations and found the correlation of Winterberg et al. [61] to be superior, especially if axial and radial variations of the fluid properties were considered. Pietschak et al. [63] proposed a new correlation that accounts for the drop of λ eff,r close to the wall, but without the need to introduce a discontinuity at the interface between the near-wall and the bulk region. The authors correlated λ eff,r (r) = f(ρ f , c p,f , d p,v , ε 0 , ε w , ε(r), u 0 (r)) and added the cross-mixing factor and an exponent as additional parameters. The radial velocity and void fraction profiles were taken from additional correlations, but the needed data could potentially also be derived from particle-resolved simulations, as recently shown by Dixon [32].
In this work, the correlation of Winterberg et al. [61]:

λ eff,r (r) = λ 0 eff,r + K w Pe 0 (u c /u 0 ) f(R − r) λ f , (23)

using:

f(R − r) = ((R − r)/(k f,w d p,v ))^(n f,s) for 0 ≤ R − r ≤ k f,w d p,v , and f(R − r) = 1 otherwise, (24)

was used as the basis to determine λ eff,r (r). In Equation (23), K w is the cross-mixing factor that describes the relationship between the effective thermal conductivity, the particle shape, and the flow velocity u c deep in the bed. The cut-off parameter k f,w in Equation (24) sets the dimensionless wall distance after which the constant thermal conductivity assumed in the core region drops towards the wall. The exponent n f,s describes the curvature of the damping function close to the wall.
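A minimal sketch of such a damped conductivity profile in the spirit of Equations (23) and (24); treating the velocity ratio as a constant factor (default 1) is an assumption of this illustration:

```python
def lam_eff_r_profile(r, R, d_pv, lam0_eff, lam_f, Pe0,
                      K_w, k_fw, n_fs, uc_u0=1.0):
    """Radially varying effective conductivity: constant in the core,
    power-law damping within a wall distance of k_fw * d_pv. Parameter
    names follow the text; numerical values are user-supplied."""
    wall_dist = R - r
    if wall_dist <= k_fw * d_pv:
        f = (wall_dist / (k_fw * d_pv)) ** n_fs   # damping towards the wall
    else:
        f = 1.0                                    # constant core region
    return lam0_eff + K_w * Pe0 * uc_u0 * f * lam_f
```

At the wall the profile collapses to the stagnant value λ 0 eff,r , and deep in the bed it reaches the convection-enhanced plateau, reproducing the two limits discussed above.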
To determine the above parameters, the circumferentially averaged radial temperature profiles in the interval z = [0.1 : 0.1 : 1.1] were extracted from the particle-resolved CFD simulations for the cases with Re p ≥ 500. For the lowest investigated Reynolds number of Re p = 100, the radial temperature gradients flattened out quickly. Therefore, in this case, only the temperature profiles in the range of z = [0.1 : 0.1 : 0.3] were considered. Based on the model described by Equation (21), a parameter optimization study was conducted using the Nelder-Mead algorithm, with the objective of minimizing the sum of squared differences between the radial temperature profiles of the simplified model and the particle-resolved results. In total, 1100 (Re p ≥ 500) and 300 (Re p = 100) data points were available for the optimization task for each operating condition.
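The fitting step can be illustrated with a Nelder-Mead least-squares recovery of three damping-style parameters; the synthetic target profile below is a hypothetical stand-in for the particle-resolved temperature data, not the study's actual objective function:

```python
import numpy as np
from scipy.optimize import minimize

def profile(r, R, d_pv, params):
    """Simple damped radial profile: plateau value 1 + K_w in the core,
    power-law decay within a wall distance of k_fw * d_pv."""
    K_w, k_fw, n_fs = params
    wd = np.minimum((R - r) / (k_fw * d_pv), 1.0)
    return 1.0 + K_w * wd ** n_fs

R, d_pv = 0.0275, 0.011
r = np.linspace(0.0, R, 50)
target = profile(r, R, d_pv, (8.0, 0.44, 2.0))   # synthetic "reference" data

def sse(p):
    K_w, k_fw, n_fs = p
    if k_fw <= 0.0 or n_fs <= 0.0:               # keep the simplex valid
        return 1e12
    return float(np.sum((profile(r, R, d_pv, p) - target) ** 2))

res = minimize(sse, x0=(5.0, 0.3, 1.5), method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-10, "fatol": 1e-14})
print(res.x)
```

The simplex method needs no gradients, which is convenient here because each real objective evaluation involves solving Equation (21).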
Heat Transfer Validation
Experimental validation data for axial or radial temperature profiles are scarce and hard to find. Nevertheless, Wehinger et al. [3] and Dong et al. [6] were able to prove the accuracy of the particle-resolved CFD approach, especially in combination with the local "caps" meshing strategy, in terms of axial and radial temperature profiles.
Based on experimental data that were provided by Clariant International Ltd., a validation study was conducted in this work to confirm the reliability of particle-resolved CFD also under industrially relevant conditions (T > 1000 °C). The experimental setup consisted of a hot box, fired with an electrical furnace, and a single reformer tube with an inner diameter of 0.1016 m and a bed height of 1 m. With thermocouples placed on the outside of the reformer tube, the axial profile of the outer wall temperature was measured. The temperature in the center of one of the packed reformer tubes was measured with a 0.25 standard 316 SS axial thermowell up to an axial distance of approximately 0.5 m. The thermocouples used were of type K, with an accuracy of ±1.5 °C or ±0.4%, whichever was greater. In the scope of this study, two different particle shapes, a tablet-like cylinder with six holes (33 × 18 mm) and an almost equilateral cylinder with ten holes (19 × 16 mm), were investigated.
The experimental setup was replicated numerically, whereby special emphasis was placed on matching the particle count that was determined in the experiments. To achieve this, the particle static friction coefficients were calibrated, as described by Jurtz et al. [4,42]. Since the tube wall was relatively thick, not only the fluid and the particles, but also the reformer tube itself was spatially discretized. While a fully conformal contact interface was used for the particle-fluid interface, the tube-fluid interface was modeled as a non-conformal mapped contact interface for the sake of a reduced cell count. An overview of the investigated setups, including snippets of the resulting meshes, can be seen in Figure 3. Preliminary studies showed that the thermowell not only affected the flow field significantly, as recently discussed by Dixon and Wu [64], but that heat conduction through the thermowell also could not be neglected, since it significantly affected the temperature distribution in the vicinity of the measuring device. To account for heat conduction in the thermowell, a three-dimensional shell model was used that solved for the lateral conductive heat transport and modeled the heat transport in the face-normal direction via the assumption of a constant temperature gradient. The radiative heat transport was considered using a surface-to-surface radiation model, as done by Wehinger et al. [7] recently. The experimentally measured temperature distribution on the outer side of the reformer tubes was applied as a spatially varying fixed temperature boundary condition at the outer tube surface. The inlet temperature, according to experimental data, was set to 560 °C and the operating pressure to 1.5 barg. Nitrogen was used as the working fluid, whereas the ideal gas equation of state was used. The fluid viscosity and thermal conductivity were calculated using the Chapman-Enskog model. Inlet flow rates were varied between 15 and 50 Nm³/h.
The particles' thermal conductivity was set to 0.25 W/(m K), whereas for the tube and thermowell, the following function, derived from the spec sheet, was used: λ s [W/(m K)] = 8.195 · exp(1.188 · 10⁻³ · T). The emissivity was set to 0.75 for the particles' surface and to 0.6 for the inner reformer tube and the thermowell.
The simulation results are given in Figure 4 in terms of axial profiles of the dimensionless temperature. The numerical data are presented as a scattered cloud of small symbols to also visualize the temperature variation in the circumferential direction. It can be seen that, due to the conductive heat transport within the solid of the thermowell, temperature variations in the circumferential direction were low. Without considering this heat transfer mechanism, temperature differences of over 50 K were found (see Section S3), which indicates that the measuring device not only affected the fluid dynamics, but also the measured temperature field significantly. This strengthens the argument that the use of high-fidelity numerical methods can significantly improve the accuracy of determining effective heat transfer parameters. Deep in the bed, an excellent agreement was found between the predicted and measured temperatures for the six-hole tablets. Only for z/h ≈ 0.1 were some deviations found. However, considering the obvious impact of the heterogeneous bed morphology on the axial temperature profile, the accuracy was still acceptable. For the 10-hole cylinders, the experimental temperature profile was matched almost perfectly for z/h ≤ 0.35. For a flow rate of 15 Nm³/h, a good agreement with the experimental data was also found deep in the bed. However, at higher flow rates, for z/h ≈ 0.5, the simulation results were far off. The reason for this could not be identified, but the fact that the experimental data showed an increase of the axial temperature gradient at higher bed depths for high flow rates was suspicious and may indicate that the temperature sensors were damaged under the harsh operating conditions. Similar problems have been noted by other authors [31] and illustrate the challenges associated with experimental temperature measurements in fixed beds.
A possible re-ordering of particles at the tip of the thermowell during operation might also be a possible reason for the deviations observed.
Results and Discussion
The heat transport in fixed-bed reactors is strongly coupled to fluid dynamic effects that are induced by the heterogeneous bed morphology. Therefore, in the first part, the bed morphology and fluid dynamics of all generated packings were investigated. Afterwards, the different configurations were compared with regard to their thermal performance. The global heat transfer coefficient U = Q̇ w /(A w ΔT log ), using ΔT log = (T out − T in )/ln((T w − T in )/(T w − T out )), was used to compare the different designs, whereby U was evaluated for the axial part of the reactor that fulfilled the criterion 0.0 ≤ Θ core ≤ 0.8. In the last part, the simulation results were used to determine effective thermal transport parameters that are commonly needed for the pseudo-homogeneous two-dimensional plug flow model. The results were compared against particle-resolved CFD results to assess the reliability of simplified models. A summary of the most important results and simulation parameters is given in Table 2. A schematic drawing of the setup is presented in Figure 5.
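The definition of U translates directly into code; the numbers in the usage example are illustrative and not taken from Table 2:

```python
import math

def global_heat_transfer_coefficient(Q_w, A_w, T_in, T_out, T_w):
    """U = Q_w / (A_w * dT_log), with the logarithmic temperature
    difference dT_log = (T_out - T_in) / ln((T_w - T_in)/(T_w - T_out))."""
    dT_log = (T_out - T_in) / math.log((T_w - T_in) / (T_w - T_out))
    return Q_w / (A_w * dT_log)

# e.g. a gas heated from 20 C to 150 C against a 200 C wall (assumed values)
U = global_heat_transfer_coefficient(Q_w=250.0, A_w=0.19,
                                     T_in=20.0, T_out=150.0, T_w=200.0)
print(U)
```

Because ΔT log accounts for the shrinking driving force along the tube, U stays comparable across designs with very different outlet temperatures.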
Bed Morphology and Fluid Dynamics
The first visual impression of the generated packings given in Figure 1 already shows the strong impact that the packing mode has on the particle arrangement. This can best be seen for spherical and cylindrical particles. For the loose packing configuration, although the confining walls exerted a certain ordering effect, the particles close to the wall still showed a somewhat random arrangement, whereas the compacted beds were characterized by a high degree of order. Especially the spherical particles tended to build band-like structures at the wall, whereas cylindrical particles built stacked structures and were mostly oriented parallel or perpendicular to the wall. From a fluid dynamics and reaction engineering point of view, the most important effect was the significant reduction in bed voidage caused by bed densification. As a result, the pressure drop, local flow phenomena, hydraulic residence time, and active catalytic surface area per reactor volume were significantly affected. The evaluated bed voidage, listed in Table 2, shows that for spherical particles, the bed voidage was reduced by 10%. An extreme reduction of 20% was found for cylindrical particles. For particles with inner voids, like rings and four-hole cylinders, the effect was less pronounced, giving a drop of 10% and 6%, respectively. However, this reduced impact was only a result of the overall higher bed voidage for these particles. For the configuration of spherical particles in the reactor with macroscopic random wall structures, the densification-induced reduction of bed voidage was 11%, which was similar to the reactor with plain walls.
The axial and radial void fraction profiles are good resources for understanding the packing morphology of the different designs. Strong and regular oscillations are indicators of ordered particle arrangements and a loss of randomness in the system, whereas low, non-regular fluctuations in bed voidage point towards an increasing randomness of the particle arrangement. For an ideally random packing arrangement, the void fraction profile should approach a constant value. Distinct peaks in the void fraction profile are indicators of additional voids that result from a non-appropriate filling strategy, which leads to jamming of particles. The axial void fraction profiles of all investigated packings are given in Figure 6. Since the bed rested on a bottom plate, the lowest layers of particles experienced a certain ordering effect, which was induced by the adjacent wall. For spherical particles, only a point contact was possible between the particles and the bottom plate, leading to a value of ε = 1 at z/d p = 0. Particles of the cylindrical shape type may have a point, line, or face contact with the wall. If face contacts are present, it is possible that ε < 1 at z/d p = 0. However, for most of the investigated packings, it can be seen that the ordering effect of the bottom plate led to regular oscillations in the void fraction that flattened out after a distance of 3-5 d p and ended up in random oscillations of lower magnitude, indicating a stochastic axial distribution of the particles. The only exceptions were the compacted packing of spherical particles and the loose packing of spheres in the reactor with macroscopic wall structures. For the dense packing of spheres, regular oscillations were observed between 0 ≤ z/d p ≤ 22, indicating a pronounced layer formation in the bottom part of the reactor. In the remaining part of the reactor, regular oscillations were also observed, albeit to a lesser extent.
In the wall structured reactor, high fluctuations were observed that suddenly appeared and flattened out. A probable reason for this was jamming of particles during the filling process that led to additional voids. This hypothesis was strengthened by the fact that this effect vanished for the densified packing. Of fundamental interest for the understanding of the fluid dynamics are the radial void fraction profiles and the radial profiles of the circumferentially averaged axial velocity, given in Figure 7. Here, the axial velocity was normalized to the local interstitial velocity u 0 /ε(r). With the exception of the structured wall reactor, for all particle shapes, directly at the wall, a void fraction of ε = 1 was found due to the presence of point and/or line contacts, only. For spherical particles, a first minimum in the void fraction was reached after the distance of one particle radius away from the wall, indicating that the majority of spheres were in direct contact with the wall, forming a closed particle layer. Furthermore, local minima and maxima occurred at positions corresponding to multiples of the particle radius, whereas the oscillations slightly decreased. The global minimum of the void fraction was located in the center of the bed, indicating that an almost stacked arrangement of spheres was present. This was the result of odd tube-to-particle diameter ratios [40,41]. A strong correlation could be found between the void fraction and the velocity profile. Close to the wall, the velocity reached its maximum, known as the wall channeling effect. The position of further minima and maxima corresponded directly to the position where high/low void fractions were found. While the minima of axial velocity did not change with varying Re p , the maxima increased slightly in the center of the bed if Re p was lowered. This effect could be attributed to the gas expansion due to heating and to the decreasing wall effect if Re p was lowered. 
The center of the bed was almost completely blocked for the flow. The above findings were also valid for the densified packing of spheres; however, the effects were even more pronounced, resulting in a complete blockage of flow paths at r * = (R − r)/d p = [0.5, 1.5, 2.5], and strong channeling was observed at r * = [0.1, 1.0, 2.0], whereas for Re p ≤ 500, the strongest channeling was not found at the wall, but at r * = 2.0, which is very uncommon.
For cylindrical particles, the trend was similar to that for spheres; however, the minima/maxima in the void fraction and velocity were slightly shifted towards the bed center, which indicated that some particles were diagonally aligned. For the densified packing, the minima/maxima were found at multiples of the particle radius, which was a result of the particles' preferred parallel/orthogonal alignment. In contrast to the packings of spheres, where the wall channeling was almost independent of Re p , for cylindrical particles, the wall channeling effect increased significantly if Re p was raised. This effect became very dominant for the compacted packing. The void fraction profiles for Raschig rings and four-hole cylinders appeared rather complex; nevertheless, especially for r* < 1, the inner voids of the particles were clearly reflected by corresponding additional maxima in the void fraction. However, no maxima in velocity could be found at void fraction maxima that corresponded to inner voids. This indicated that the flow through the inner particle voids was partially blocked, which might be caused by an orthogonal particle alignment. Overall, the void fraction and velocity oscillations were less pronounced for these particle shapes, but heterogeneities increased if the beds were compacted. Similar to cylindrical particles, the wall channeling effect increased with Re p and became more pronounced for densified packings.
The use of macroscopic random wall structures for packings of spherical particles changed the void fraction and velocity profiles significantly. Due to the presence of the wall structure, the void fraction at the wall fell to a value of ε ≈ 0.56. As a result, the wall channeling effect was hindered, and fluctuations in the void fraction and velocity were qualitatively more comparable to the ones of Raschig rings than spheres. The densification of the bed led to slightly more pronounced minima and maxima; however, this effect was not as distinct as for spherical particles in a smooth walled reactor.
Heat Transfer Characteristic
A fair comparison of the thermal performance of different reactor concepts always depends on the process boundary conditions that are set. Figure 8 shows the global heat transfer coefficient U as a function of different parameters. Re-fitting an existing unit that is integrated into a complex production process can lead to the necessity of keeping the throughput constant, which is equivalent to keeping Re p invariant. In this case, especially at low Re p , cylindrical particles showed the most beneficial heat transfer characteristic, followed by the wall-structured reactor, Raschig rings, and four-hole cylinders. Spherical particles performed worst over the complete range of investigated Re p . At high Re p , cylindrical particles still performed best; however, rings, four-hole cylinders, and the reactor with wall structures were close. For spheres, cylinders, and four-hole cylinders, U decreased if the packings were compacted. This is of special interest, since in industrial applications, densified packings are most often used to ensure the same pressure drop in the different tubes of a tube bundle reactor. Interestingly, the effect was less pronounced for Raschig rings and the wall-structured reactor. At high Re p , even a slight increase in thermal performance could be seen for those reactor types. In general, the performance gain induced by macroscopic random wall structures was significant.
Another valid process boundary condition can be the necessity of keeping the hydraulic residence time invariant. In this case, Re p /ε needs to be kept constant. Under this constraint, Raschig rings and four-hole cylinders performed best for moderate to high Re p , followed by cylinders and the wall-structured reactor. For the lowest investigated Re p , again, cylindrical particles seemed to perform slightly better than rings.
If a new plant is built and process-driven constraints are few, the most energy-efficient particle shape might be an appropriate choice. The specific pressure drop ∆p/∆z is then one parameter that should be kept constant when comparing different designs. Under this constraint, Raschig rings, the wall-structured reactor, and cylinders performed best. The comparison of the designs from this energetic point of view showed that bed densification led to a less energy-efficient thermal performance, whereas this effect was less pronounced for Raschig rings and the reactor with macroscopic wall structures.
Effective Thermal Transport Properties
As discussed, particle-resolved CFD is a valuable tool to support process intensification on the meso-scale level, e.g., by finding optimized particle shapes [4,[10][11][12] or new reactor concepts, e.g., by applying macroscopic wall structures [20,21] or using internals [19]. However, for process intensification on a macroscopic scale, e.g., by running plants under dynamic operating conditions, developing process integration strategies, or performing plant optimization, different numerical tools are necessary. Process simulation platforms often use pseudo-homogeneous two-dimensional plug flow models. Depending on the class of model used, certain effective transport parameters are needed, which are often not known. In this section, methods are presented for extracting those parameters from particle-resolved CFD results.
λ eff,r -α w Model
Although its limitations are well known [31], the λ eff,r -α w model is still widely used, due to its efficiency and simple implementation. Here, the radial heat transport is characterized by the wall heat transfer coefficient α w and the effective radial thermal conductivity λ eff,r , which is assumed to be uniform everywhere in the reactor. By extracting the axial core temperature profile and average inlet/outlet temperatures from the CFD simulations, both parameters were determined by using Equations (17), (19), and (20). The results are summarized in Table 2. The parameters were then used to calculate the temperature fields by using the pseudo-homogeneous model described by Equation (1) in conjunction with the boundary condition in Equation (2).
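To make the role of the two parameters concrete, the following minimal solver marches an assumed standard form of the pseudo-homogeneous 2D plug-flow energy balance. Since Equations (1), (2), (17), (19), and (20) are not reproduced in this excerpt, the equation form, the boundary treatment, and all numerical values are assumptions for illustration only:

```python
import numpy as np

def pseudo_homogeneous_2d(T_in, T_wall, lam_eff_r, alpha_w,
                          rho_cp_u=1.0e3, R=0.025, L=0.5, nr=21, nz=8000):
    """Explicit finite-difference march of an assumed standard form of the
    pseudo-homogeneous 2D plug-flow energy balance:

        rho*cp*u * dT/dz = lam_eff_r * (d2T/dr2 + (1/r) dT/dr),

    with symmetry at r = 0 and the wall heat transfer coefficient condition
    -lam_eff_r * dT/dr = alpha_w * (T - T_wall) at r = R. All numerical
    values are illustrative assumptions, not parameters from the study."""
    dr, dz = R / (nr - 1), L / nz
    r = np.linspace(0.0, R, nr)
    a = lam_eff_r / rho_cp_u              # controls radial diffusion per unit length
    T = np.full(nr, float(T_in))
    for _ in range(nz):
        Tn = T.copy()
        # axisymmetric Laplacian on interior nodes
        Tn[1:-1] = T[1:-1] + a * dz * (
            (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dr**2
            + (T[2:] - T[:-2]) / (2.0 * dr * r[1:-1]))
        # centerline: limit of the Laplacian at r = 0 with symmetry
        Tn[0] = T[0] + a * dz * 4.0 * (T[1] - T[0]) / dr**2
        # Robin wall condition, discretized with a one-sided difference
        Tn[-1] = (lam_eff_r / dr * Tn[-2] + alpha_w * T_wall) / (lam_eff_r / dr + alpha_w)
        T = Tn
    return r, T

# hot gas entering a tube whose wall is held at 300 K (illustrative values)
r, T_out = pseudo_homogeneous_2d(T_in=600.0, T_wall=300.0, lam_eff_r=5.0, alpha_w=200.0)
```

The exit profile is hottest on the axis and drops towards the wall; the jump between the last grid node and T_wall is the artificial near-wall temperature drop that α w represents in this model class.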
A one-to-one comparison of all investigated cases in terms of radial temperature profiles at different axial positions is provided in Supplementary Material, Section S4. A condensed visualization of the results is given in Figure 9. Here, the deviations of the circumferentially averaged temperature fields predicted by the pseudo-homogeneous model are given in relation to the particle-resolved CFD results. Deep red and deep blue colors indicate that the deviation exceeded +10 K or fell below −10 K, respectively. This critical cut-off temperature was chosen motivated by van't Hoff's rule, which states that the rate of a chemical reaction roughly doubles to triples when the temperature is raised by 10 K [65]. The characteristic temperature drop at the wall that results from α w is hard to discern in Figure 9; the reader is referred to the radial temperature profiles given in Section S4. It can be seen that the temperatures close to the wall (r * ≤ 0.2-0.4) were systematically underpredicted by the simplified model. This drawback is well known and has been discussed in depth by many authors [31]. Furthermore, the model was not able to capture morphological and fluid dynamic heterogeneities, which led to step-like temperature profiles, as can be seen best for the radial temperature profiles of the dense spherical packing. Recently, this was also found by Moghaddam et al. [33], who introduced heterogeneities by increasing the solid thermal conductivity. Besides those systematic errors, the deviation in relation to the CFD results was relatively low for the majority of cases. For all investigated designs, the deviation was less than 5 K for the rear part of the reactor (z/d p ≥ 40). The threshold of 10 K was mostly exceeded in the entry zone (z/d p ≤ 20). Overall, deviations tended to increase as Re p was raised. The method seemed to work equally well for loose and densified packings, with slightly smaller deviations for dense beds.
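The classification behind Figure 9 can be stated compactly: given a deviation field ∆T = T model − T CFD on a (z, r) grid, flag cells whose magnitude exceeds the van't Hoff-motivated 10 K threshold. A minimal sketch with an invented deviation field that mimics the qualitative pattern described above (entry-zone errors, near-wall underprediction):

```python
import numpy as np

# Invented deviation field Delta_T = T_model - T_CFD on a (z, r) grid [K].
delta_T = np.zeros((60, 20))          # 60 axial x 20 radial cells
delta_T[:10, :] = 12.0                # entry-zone overprediction
delta_T[:, -3:] -= 6.0                # systematic near-wall underprediction

threshold = 10.0                      # van't Hoff-motivated cut-off
exceeded = np.abs(delta_T) > threshold
frac_exceeded = exceeded.mean()       # share of cells outside the +/-10 K band
# share of axial positions whose worst radial deviation stays within 10 K
rows_within = (np.abs(delta_T).max(axis=1) <= threshold).mean()
```

With the invented field, only entry-zone cells breach the threshold, reproducing the qualitative finding that exceedances concentrate at z/d p ≤ 20.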
Considering the computational effort of the simplified model (≈10 s) compared to that of the particle-resolved CFD simulation (≈24 h), the accuracy is remarkable.
λ eff,r (r)-Model
The obvious drawback of the λ eff,r -α w model, namely that the additional near-wall thermal resistance is captured only by an artificial temperature drop directly at the reactor wall, can be circumvented by using a radially varying effective radial thermal conductivity. In this work, the correlation of Winterberg [61] (see Equation (23)) was used as the basis to determine the effective radial thermal conductivity. The three necessary parameters of the Winterberg correlation were determined by conducting a parameter optimization study. The basis of this study was the transport equation described by Equation (21). A summary of the optimized model parameters, including the mean squared error MSE, is given in Table 2.
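As an illustration of such a parameter optimization study, the sketch below assumes a commonly cited Winterberg-type structure for λ eff,r (r), with a slope parameter K1, a damping-zone width parameter K2 and an exponent n (the exact form of Equation (23) is not reproduced here), and recovers the three parameters of a synthetic target profile by a brute-force MSE search; all numerical values are invented:

```python
import numpy as np
from itertools import product

R, d_p = 0.025, 0.005                    # tube radius, particle diameter [m]
lam_bed, lam_f, Pe0 = 0.8, 0.03, 400.0   # illustrative bed/fluid properties

def lam_eff_r(r, K1, K2, n):
    """Winterberg-type radially varying conductivity (assumed form):
    lam(r) = lam_bed + K1*Pe0*lam_f*f(R - r), where the damping function f
    rises from 0 at the wall to 1 in the core over a zone of width K2*d_p."""
    y = R - np.asarray(r, dtype=float)   # distance from the wall
    f = np.minimum(y / (K2 * d_p), 1.0) ** n
    return lam_bed + K1 * Pe0 * lam_f * f

r = np.linspace(0.0, R, 200)
target = lam_eff_r(r, 0.16, 0.44, 2.0)   # stand-in for the CFD-derived profile

# Brute-force parameter study over a small grid (the true values were
# deliberately placed on the grid so the optimum is recoverable exactly).
grids = (np.arange(0.10, 0.22, 0.02),    # K1
         np.arange(0.30, 0.58, 0.07),    # K2
         np.arange(1.0, 3.5, 0.5))       # n
best = min(((np.mean((lam_eff_r(r, *p) - target) ** 2), p)
            for p in product(*grids)), key=lambda t: t[0])
mse, (K1_fit, K2_fit, n_fit) = best
```

In practice, a gradient-free optimizer replaces the grid, and the objective is the mismatch between the pseudo-homogeneous temperature field from Equation (21) and the CFD temperatures, rather than the conductivity profile itself.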
The comparison of the radial temperature profiles for different axial positions can be found in Supplementary Material, Section S5. The spatially resolved deviations between the simplified model and the CFD results are given in Figure 10. In comparison to the results of the λ eff,r -α w model, the accuracy was significantly improved. The temperature close to the reactor wall was predicted with a high degree of accuracy by the model. Only sporadically were the temperatures overestimated by more than 10 K in the vicinity of the wall, and those locations were mostly limited to the entry zone (z/d p ≤ 20). A direct comparison of the λ eff,r -α w model and the λ eff,r (r) model in relation to the CFD results is given in Figure 11 for the loose packing of Raschig rings at Re p = 1000, showing the superior accuracy of the λ eff,r (r) model, especially close to the wall. Deep in the bed, deviations outside of the 10 K threshold were mostly found for packings characterized by a higher degree of morphological heterogeneity, like the packings of cylindrical particles, the dense bed of spheres, and the loose packing of spheres in the reactor with a random wall structure. While the former configurations were characterized by strong variations in the radial void fraction distribution, the latter showed large fluctuations in the axial void fraction profile. Larger deviations were mostly limited to the entry zone, indicating that thermal entrance effects, which are not resolved by an axially invariant λ eff,r (r), might be the reason. In contrast to the λ eff,r -α w model, the deviations did not seem to increase if Re p was raised. Since the Winterberg correlation does not explicitly consider local variations in the void fraction or axial velocity, it was, similar to the λ eff,r -α w model, not able to capture the step-like features of the temperature profiles.
Conclusions
In this work, it was shown how the particle-resolved simulation of fixed-bed reactors can play a central role in the process intensification of this reactor type. After a brief validation study, showing that particle-resolved CFD was able to predict the temperature field accurately even under harsh industrial conditions, the heat transfer characteristics of different particle designs were investigated. The studied designs differed in particle shape and bed density. The results showed that heterogeneities in the radial void fraction distribution and the axial velocity increased if packings were compacted. As a result, the overall heat transfer coefficient U decreased for most particle shapes. Although the wall channeling effect was most pronounced for the fixed beds of cylindrical particles, this particle shape was found to be among the most efficient with respect to U. Furthermore, a novel reactor tube design that used random macroscopic wall structures was investigated. For packings of spherical particles, it was found that macroscopic random wall structures can significantly decrease morphological heterogeneities, leading to a significantly better heat transfer characteristic. Taking into account various process-related boundary conditions, cylindrical particles, Raschig rings, and wall-structured reactors were identified as the most promising concepts to intensify radial heat transport.
Methods were presented to determine, from the particle-resolved CFD results, the effective thermal transport parameters that are needed for simplified pseudo-homogeneous models. Depending on the degree of morphologically induced heterogeneity, excellent to fair agreement was found between the λ eff,r -α w model and the CFD results, with deviations growing larger as the morphology became more heterogeneous. The known problem of underestimated temperatures close to the reactor wall, one of the biggest drawbacks of this model, was confirmed. To circumvent this problem, parameter optimization studies based on the Winterberg correlation were performed to predict the radially varying effective radial thermal conductivity needed for the λ eff,r (r) model. Very good agreement regarding the radial temperature profiles was found between the λ eff,r (r) model and the particle-resolved CFD results. In comparison to the λ eff,r -α w model, the λ eff,r (r) model showed superior accuracy close to the reactor wall. Nevertheless, it was found that both pseudo-homogeneous models became less accurate if step-like temperature profiles, which were either a result of morphological heterogeneities or of a high solid thermal conductivity, were present.
In terms of process intensification, this work showed that particle-resolved CFD can be used directly to study improvements on the meso-scale through:
• studying the impact of particle shape, internals, or reactor tube design on the performance;
• investigating the effect of operating conditions and physical properties;
• testing novel reactor tube concepts, e.g., reactors with random macroscopic wall structures or heat fins;
• identifying local phenomena such as hot/cold spot formation or catalyst poisoning;
or serve as a reliable source for the parameters, and correlations of those, that are needed for process simulation. This allows a more reliable analysis of process intensification through:
• dynamic operating conditions;
• process integration concepts;
• process design optimization.

Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to large file sizes and, in part, restrictions that might apply.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
Analysis of Continuing Airworthiness Occurrences under the Prism of a Learning Framework
In this research paper, fifteen mandatory occurrence reports are analysed. The purpose is to highlight the learning potential that incidents such as these may possess for organisations involved in aircraft maintenance and continuing airworthiness management activities. The outputs from the mandatory occurrence reports are aligned in tabular form for ease of inclusion in human factors' continuation training material. A new incident learning archetype is also introduced, which intends to represent how reported incidents can be managed and translated into lessons in support of preventing event recurrence. This 'learning product' centric model visually articulates activities such as capturing the reported information, establishing causation and the iterative nature of developing
Introduction
Structured and continuous safety management actions, such as the collection of data, analysis and intervention, can be enabled with the support of the necessary safety intelligence. High-quality maintenance and management tasks are some of the essential inputs for safe operations. Continuous information 'harvested' from the incident reporting arising from these tasks is another major part of learning and preserving acceptable levels of safety [1]. Thankfully, serious incidents are becoming less frequent [2], but reportable and unreportable events do still occur, often because of environmental, cognitive and human-centric demands. The main underpinning aviation regulation in Europe, European Union (EU) regulation 2018/1139 [3], refers to a 'management system' and mandates an operator to implement and maintain a management system to ensure compliance with the essential requirements for safe operations; it also aims for continuous improvement of the safety system through learning from incidents.
In the area of continuing airworthiness, the fundamentals of management systems are also extended to incident and occurrence reporting through the implementing conduit of EU regulation 1321/2014 [4]. It is common for incidents to be discovered within organisations and reported with the assistance of such 'systems of systems' [5]. On an operational level, initial human factors training and company procedures are intended to specify and re-affirm the class and type of occurrence and incident that should be reported. Recent developments in Europe, in the guise of EU regulation 376/2014 [6], empower voluntary and confidential reporting and are independent of all other individual obligations. The paper recounts an analysis of 15 occurrences drawn from a repository of reportable incidents. Each incident was assessed, and the report data interpreted, to identify potential primary and secondary causation factors. To translate these learning points into tangible lessons, causation factors are harmonised with a taxonomy for learning. This taxonomy is based upon the Transport Canada 'Dirty Dozen' [7] human factors terms, which feature common aviation human error preconditions. Additionally, a framework is presented in the paper to demonstrate how learning from incidents can be leveraged to best effect in the industry.
Learning from Incidents: Underpinning Theory
According to Leveson [13], a holistic view of an organisation's capability in terms of learning from incidents can be enhanced by shifting the focus from the individual to what is happening across the system. In the world of 'operational aviation' the concept of Safety Management Systems (SMS) has for the most part been successfully embraced and applied where mandated. Deming [14], the respected purveyor of quality assurance methodologies, asks the question, 'what is a system?' He answers: 'a system is a network of interdependent components that work together to try to accomplish the aim of the system'. This description suggests that the process (in safety management parlance) is 'a network of interdependent components'. Safety management philosophy requires specific points to be formally addressed so that the safety management process of operational risk can be explicitly expressed and therefore effectively managed. One of these points is preventing the recurrence of incidents and occurrences through learning from past events to achieve an acceptable level of safety.
Today, in many jurisdictions it is a requirement for aircraft maintenance and continuing airworthiness management organisations to maintain an occurrence-reporting system. European regulatory requirements [6] and organisation procedures [4] normally require the event to be investigated, documented and the causal factors considered. Additionally, corrective and/or immediate actions are often necessary to prevent re-occurrence. Learning from these incidents can often provide potential solutions for preventing safety crises in the future, by looking back at what has happened, deriving lessons learned, and predicting probable future challenges [15].
'Learning from incidents' (LFI) is a valuable tool in many domains. Much research has been devoted to understanding how this process can be expressed and measured, and how worthwhile lessons can be learned through more efficient and effective learning, as proffered by Drupsteen and Guldenmund [16], Hovden et al. [17], and Jacobsson et al. [18]. A main tenet of this reporting system is the ability to report any error or potential error in a 'free and frank' way. This philosophy is intended to be supported by what is termed a just culture, where the outcome for the individual is not based on punitive measures or being inappropriately punished for reporting or co-operating with occurrence investigations. The occurrence reporting system is also intended to be a 'closed-loop' system, where feedback is given to the originator and effective actions are implemented within the organisation to address embryonic or evident safety hazards. The concept is progressive in terms of its potential contribution to identifying and addressing less than optimal performance of human, organisational and technical systems. Understanding that adverse and unwelcome events can be minimised through diligent reporting, event analysis, learning and subsequent necessary intervention is a positive trait with respect to improving acceptable levels of safety.
Argyris and Schön [19] (pp. 20-21) highlight the importance of learning to detect errors and develop effective responses to them. Their 'theory in action' concept is the focal point for this determination. The first of its two components, 'theory in use', is one that guides a person's behaviour. This is often only expressed in tacit form and is how people behave routinely; very often these observed habits are unknown to the individual. The second element is known as 'espoused theory', namely what people say or think they do. Drupsteen and Guldenmund [16] mention that espoused theory comprises 'the words we use to convey what we do, or what we like others to think we do'.
Enabling this learning channel, ICAO Doc 9859 [19] defines a template for aviation operators and regulators to support the application of a variety of proactive, predictive and reactive oversight methodologies. In addition to routine monitoring schemes, voluntary and mandatory reporting, and post-incident follow-up, there are regular safety oversight audits. These audits and inspections often set out to establish whether there is a difference between espoused theory and theory in use, e.g., is the task being performed correctly in accordance with the documented procedure/work instruction, or is there a deviation from approved data and practice? However, Drupsteen and Guldenmund [16] caution auditors not to 'focus too much on the documentation of procedures' alone. In such cases the audit oversight may be ineffective because of its sole focus on the espoused theories of the organisation and not the theory in use. They go on to translate this idea of poor focus on theory in action and recommend a solution by suggesting a valid learning component arising from the incidents. They also highlight the 'espoused' aspect, where those attempting to learn from incidents often fail to experience the desired learning because outcomes are not fully aligned with the practical objectives of an LFI initiative. For learning to be most effective, espoused theory and theory in use should be reasonably well aligned.
Aircraft maintenance and continuing airworthiness management activities that are performed in European member states are moderated by rules that mandate the reporting of defined incidents and occurrences. Repositories of reported data tend to be populated only from sources predominantly aligned with mandatory incident/occurrence reporting requirements. Conventional safety oversight models only verify the presence of reporting media and repositories in this segment of the industry. Traditionally, there has been a focus amongst organisations on ensuring details of reports are submitted in line with the state's mandatory reporting obligations. However, it is possible that such a narrow focus on a single element (i.e., reporting alone) of an incident in its lifecycle could negate the potential learning benefits that might accrue from considering other likely related sources. As a result, the absence of clear regulatory requirements capable of augmenting learning from incidents could be considered an impediment to effective learning in the domains affected by EU regulation 1321/2014 [4]. The featured industry sector is regulated by the application and upkeep of numerous requirements in each jurisdiction of operation. In general, oversight duties tend to be carried out by regulating states and operators in support of safe and profitable activity. However, a growing tendency simply to increase some regulatory requirements across the segments may not always offer the safety returns necessary for states in the future.
Up until some years ago, basic risk mitigation methods had remained unchanged. The previously reactive initiatives had largely been based on post-event analysis of accidents and incidents. At present, learning from past incidents, occurrences and accidents must be credited with playing a major part in helping evolve the paradigm to the more proactive means of risk management we know today in many aviation segments. Accident models (Heinrich and Reason) can sometimes inadvertently contribute to an over-simplification of how accident and incident contributing factors are perceived. This can result in striving to establish a singular root cause. Understandably, the propensity for those tasked with accident and incident investigation is sometimes to establish a linear view based only on apparent causal factors. Proactively identifying precursors to events or potential conditions can greatly assist in averting latent or undiscovered conditions. Since the early 1990s, the potential for organisations to learn from incident precursors and conditions has been worthy of attention. Cooke [20] endorses the suggestion that increased reporting of incidents enhances continuous improvement in high-reliability industries. In the continuing airworthiness segment of the industry, there is often a regulatory-driven focus on establishing a single root cause. The importance of adequate resources and efforts to determine accurate incident causation, and the measures to prevent reoccurrence, should be a primary concern. Until ED 2020/002/R [21] is fully implemented, it is possible that the custodians of current regulatory requirements are satisfied once a root cause is established. Could it be that the current popular practice of pursuing a (singular) root cause is a lost opportunity when additional related sources exist?
The harvesting of information from incident reporting systems is a necessary input to continuously develop appropriate and effective recurrent training material. The inclusion of basic qualification criteria for human factor trainers in the regulatory requirements should also be addressed. However, it is questionable if the perpetuation of these measures alone would support more effective delivery and application of lessons learned throughout the segment. One means of addressing this impending issue is to remodel regulatory, operational and training requirements to consider a new approach in the segment. Reflecting a combination of actions, events and conditions in a new basic model supporting human factor continuation training, may lay the foundations to better elucidate event causation and yield improved and sustainable safety recommendations in the featured segment.
Model Design and Description
Currently, European measured levels of aviation safety are generally considered acceptable. As domain activity is expected to increase in the coming decades, further steps to improve, or at least preserve, contemporaneous levels of safety will have to continue to be developed. One of the main facets of safety management is the reporting, collection, analysis and follow-up of incidents according to Annex 19 [22]. This is also highlighted in an EU communication, COM/2011/0670 [23], and in (EU) 376/2014 [6]. A primary reason for the emphasis on reporting and subsequent learning from incidents (LFI) is to enable and support a shift from prescribed safety oversight to a risk-based programme. This is seen as the best fit to enable and effect improvements in the areas that will present the most risk [24]. Figure 1 presents one view of a generic incident lifecycle [25] integrated with an interactive framework arising from the researchers' work. This 'proposed enhancement' could augment a learning dimension in the cycle of an incident.
Figure 1 also illustrates a view of the overall process employed to acquire, process and store incident data. The 'broken line' arrows signify an iterative action at each stage of processing the incident; their purpose is to ask, and record, what can be learned at each point. The motif of how a learning product originates from the regulatory perspective is also featured. The effectiveness of the learning from the event is considered in terms of how it can be gauged. This is evident from feedback originating from the actions in the cycle when the learning product is being developed. Closing the learning loop is also necessary and is reflected in graphic form. In addition to this, assessing actions at each incident stage is intended to support an analysis of how effective the resulting actions are in terms of preventing recurrence of the incident.
Actions to prevent the recurrence of the same or similar events can be embodied as a result of how effective the learning was. As such, the novelty of this framework lies in its clear visual representation rather than in the actual arrangement of the specific stages recorded. Traditionally, the industry focus on incidents and occurrences has pivoted solely around the reporting requirements. These obligations are the backdrop against which mandatory reporting activity takes place. The establishment of causation is required by regulatory process, but little or no assessment of its suitability is mandated in support of any potential for learning.
The featured framework serves to present the main elements of an incident during its lifecycle and highlight the aspects to be considered when incidents are being used in support of developing effective safety lesson delivery.
Model Implementation
The area of focus for this paper is aircraft maintenance and continuing airworthiness management [4] activities. It was decided to establish contact with an Irish Aviation Authority (IAA) focal point for the European central repository for aviation accident and incident reports (ECCAIRS). Following a briefing, specific permission was granted to review a data set of deidentified mandatory occurrence reports (MORs) for the purpose of academic analysis. The operational theatre of activity involved licensed air carriers operating large aircraft on the Irish civil aircraft register. The permission allowed an initial physical database search to be performed from June 2019 to November 2019, using 'Part 145 (maintenance) and Part M (continuing airworthiness management)' as the search terms for de-identified report content. Approximately 200 results were returned. The narrative and content of each report was reviewed by the researchers for applicability to the analysis. This exercise refined the reports under review to a data set of 85. Figure 2 presents an overview of the analysis framework, described in the sequel.
Model Validation: Report Causal Elements
A third round of full-read screening of the set yielded 15 deidentified reports applicable to the exercise topic. Each featured event was considered under the following elements: the actual event, the maintenance phase in which it was detected and the likely potential causation factors. Table 1 contains an overview of this analysis output. Each of the 15 analysed occurrence reports provided a description of the featured event, and some were helpfully contextualised with a chronological timeline included in the report body. This in turn assisted with appreciating all the potential causation elements for each event. However, the reported verbiage tended to focus mostly on consequential impact rather than causal information. For the sake of consistency across the analysis, the authors decided to apply a systematic approach to elicit and validate causal factors from the data. The process was based on a clear definition of root cause as proffered by Paradies and Busch [27]: 'the most basic cause that can be reasonably identified and the management has control to fix'. Many analysis tools [e.g., fault tree analysis (FTA), the functional resonance analysis method (FRAM), the systems-theoretic accident model and processes (STAMP), sequentially timed events plotting (STEP)] are available and can be applied in support of a systematic review aimed at establishing causal factors. However, each of the aforementioned is generally applied to more voluminous operational applications, and a degree of familiarity and adequate resources are usually required to ensure an efficacious outcome. As the incident reports (n = 15) under review already had causal factors ascribed, the authors deemed a simple analysis tool to be appropriate. According to Card [28], the '5 Whys' technique is widely used in support of root cause analysis and is applied by many statutory organisations globally. Ohno [29] (p. 123) highlights that by repeating 'why' five times, the nature of the problem as well as its solution becomes clear. The authors were aware that sole reliance on a tool like the 5 Whys has limitations; in particular, exclusive reliance on it could lead its users to oversimplify an event and pursue an inappropriate singular cause. As a result, the tool was applied solely as a mechanism to validate the operator-ascribed event categorisations and causal factors.
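To illustrate how the 5 Whys technique can be applied as a validation mechanism, the following sketch walks a chain of 'why' answers from an event description down to a candidate root cause. The event and answers are entirely hypothetical (they are not drawn from the reviewed reports), and the function name is our own; this is only a minimal demonstration of the technique, not the authors' analysis procedure.

```python
# Illustrative 5 Whys walk-through on a hypothetical maintenance occurrence.
# All text below is invented for demonstration purposes only.

def five_whys(event, answers, max_depth=5):
    """Walk up to five 'why' steps and return the chain plus the final answer,
    treated here as the candidate root cause ('the most basic cause that can
    be reasonably identified and the management has control to fix')."""
    chain = [event]
    for answer in answers[:max_depth]:
        chain.append(answer)
    return chain, chain[-1]

example_event = "Access panel found unsecured after scheduled check"
example_answers = [
    "Fasteners were not torqued at task completion",
    "Task was handed over mid-shift without a written status note",
    "Shift handover procedure was not followed",
    "Staff were unaware the handover procedure had been revised",
    "Revised procedure was never covered in continuation training",
]

chain, root_cause = five_whys(example_event, example_answers)
for depth, step in enumerate(chain):
    prefix = "Why? " if depth > 0 else "Event: "
    print(prefix + step)
print("Candidate root cause:", root_cause)
```

In practice, as noted above, the chain should be cross-checked against the operator-ascribed factors rather than trusted as a singular cause.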
Results
Each mandatory occurrence report (MOR) was thoroughly reviewed, and the content of the event and related actions carefully assessed. However, without an intimate knowledge of, for example, the operational environment, the history of the aircraft's reliability and the related operational, dynamic and contextual influences, it was not possible to definitively establish whether the recorded causation and related factors were indisputably accurate for each event. Notwithstanding the foregoing, based on the authors' experience and judgement, the recorded causation factors were harmonised with a taxonomy derived from the elements of the Transport Canada [7] 'dirty dozen' terms associated with common error preconditions. The elements are generally identified as: Lack of communication, Distraction, Lack of resources, Stress, Complacency, Lack of teamwork, Pressure, Lack of awareness, Lack of knowledge, Fatigue, Lack of assertiveness and Norms.
The purpose of aligning the 'potential incident causation factors' with a known taxonomy is to assist with developing clear learning product content and learning objectives. Regulatory codes and guidelines for the continuing airworthiness domain do not require a formal approach to learning such as those defined by Bloom [30] and Anderson and Sosniak [31]. Although the featured reports display similar activity profiles, recognising the need to consider learning taxonomies and the importance of the domains of learning (cognitive, affective and psychomotor) when designing continuation training programmes is considered essential. In addition, organisations are not required to have a formal mechanism for assessing efficacy; instead, many take comfort in national, European and international holistic safety reports as a means of gauging their performance as part of the collective. Assuming the purpose of learning objectives is to assist with the delivery and measurement of the effectiveness of learning actions, developing an overview of a harmonised taxonomy is helpful in this regard.
In Table 1 above, potential causation factors for each of the 15 selected incidents were matched with the twelve elements of the 'Dirty Dozen' human factors taxonomy. In order to prevent an over-simplification of each event's contributing factors, the authors were careful not to be seduced into seeking a singular root cause. Therefore, it was decided to include both primary and secondary human factor elements so that causation could be considered in a holistic manner. The following paragraphs (a-h) and Figure 3 give a breakdown of the issues emerging from the assessment of the mandatory occurrence reports (MORs) as seen through the lens of association with a taxonomy.
a. Lack of knowledge features as a primary element in 13 (87%) of 15 occurrences. This can be closely related to the competence required to perform the task as it relates to aircraft maintenance and continuing airworthiness management activities, which is defined as comprising 'knowledge, skills and attitude/ability' [4]. As a secondary potential contributing element, it relates to only 1 (7%) of the 15 occurrences.
b. Lack of awareness is highlighted as a primary potential causation factor in 9 (60%) of the 15 reviewed occurrences. This element can be closely related to competence, communication and teamwork. As a secondary contributing factor, lack of awareness was noted during the review in 5 (33%) of the 15 reviewed occurrences.
c. Lack of resources was recorded in 3 (20%) of 15 events. Adequate resources are required in order for an operator to staff an organisation adequately so that an aircraft can be maintained to the correct standard and when required. EU 1321/2014 [4] mandates that a manpower plan is maintained in support of ensuring adequate levels of staff are consistently available. As a secondary issue, lack of resources appeared in 5 (33%) of 15 cases. Ultimately, accountable managers are the key to ensuring sufficient resources are made available so that the organisational elements continue to remain compliant and effective in this respect.
d. Norms accounted for 3 (20%) of the 15 reports examined. Norms are often viewed as behaviours that are developed and accepted within a group. However, when the resulting behaviour requires a deviation from approved procedural function, the consequences are often unknown. Although such actions may offer short-term productivity gains, they may also introduce active and latent safety hazards. In the case of secondary causation, norms are associated with 8 (53%) of the 15 assessed occurrences.
e. Lack of communication was found to be evident in 3 (20%) of the 15 occurrences in the study. Communication in aircraft maintenance and management activities is a vital element in the release of a safe product. Poor communication can amplify many other human factors elements, leading to a deterioration in human performance (Chatzi [32], Chatzi et al. [33]). In 2 (13%) of the 15 reviewed occurrences, communication was recorded as contributing to secondary event causation.
f. Complacency was revealed as a primary factor in the causation of 1 (7%) of the 15 events studied. However, as a secondary contributing factor it accounted for 5 (33%) of the 15 reports. Stress levels associated with a task can diminish performance if one becomes complacent. Its presence can contribute in concert with other elements capable of setting the scene for an unwelcome event.
g. Stress as a primary factor appeared in 1 (7%) of the 15 reviewed events. However, it was associated with 2 (13%) of the 15 reports as a secondary issue. Stress can be both a by-product and an enabler of other Dirty Dozen elements. Fatigue, for example, can be closely coupled to stress and displayed a similar pattern in the study, with 7% and 13% prevalence respectively in the reports reviewed.
h. Lack of assertiveness was evident as both a primary and a secondary causation factor, in each case occurring at a rate of 1 (7%) of the 15 events under review. Distraction and lack of teamwork appeared in similar proportions in the review results.
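The primary-factor percentages reported in paragraphs (a)-(h) follow from simple tallies over the 15 reports. The sketch below reproduces that arithmetic; the counts are taken directly from the figures quoted above, while the tallying function is our own illustrative construction (the underlying per-report assignments are not reproduced here).

```python
# Reproduce the primary-factor percentages quoted in the text, e.g.
# lack of knowledge as a primary factor in 13 of 15 occurrences -> 87%.

N_REPORTS = 15

primary_counts = {
    "Lack of knowledge": 13,
    "Lack of awareness": 9,
    "Lack of resources": 3,
    "Norms": 3,
    "Lack of communication": 3,
    "Complacency": 1,
    "Stress": 1,
    "Fatigue": 1,
    "Lack of assertiveness": 1,
}

def as_percentage(count, total=N_REPORTS):
    # Round to the nearest whole percent, matching the reporting style above.
    return round(100 * count / total)

for factor, count in primary_counts.items():
    print(f"{factor}: {count} ({as_percentage(count)}%)")
```

Note that the counts sum to more than 15 because, as discussed above, several events were assigned more than one contributing human factor element.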
Discussion
Recalling the causal factors attributed to the featured occurrence reports in the paragraphs above, it is easy to appreciate their relationships with the 'Dirty Dozen' example of human factor elements. For example, lack of resources can be a major constraint when it comes to providing adequate levels of appropriately qualified competent staff. Pressures exerted upon staff in a dynamic industry sector to absorb additional workload can of course have a potentially detrimental effect on safe operations. Competent and available supervision of maintenance and inspection staff is a core requirement of a quality mission in aircraft maintenance and continuing airworthiness management operations. In many regions the maintenance requirements (e.g., EU regulation 1321/2014 [4]) stipulate a process whereby all staff must meet the qualification criteria and be deemed competent before unaccompanied work can take place. For the purpose of the discussion, key elements of the incident cycle components are examined through pertinent elements identified during the analysis. The iterative approach suggested during the management of the incident information is supported by the context outlined below. Understanding the relevance of each of the sections is intended to support more effective learning outcomes. The following paragraphs discuss the incident cycle from the perspective of developing a sound learning product.
Acquiring, Processing and Storing Incident Data
According to Garvin [34], a clear definition of learning has proven elusive over the years. Garvin suggests 'a learning organization is an organization skilled at creating, acquiring and transferring knowledge and at modifying its behaviour to reflect new knowledge and insights'. Figure 1 illustrates the evolution of an incident as it is managed through its cycle. The incident/occurrence will need to be detected if it is to possess any potential for learning. Acquiring information in support of learning is one of the key actions. Such learning material originates from compliance audits, amended regulatory requirements, best practice, and incident and occurrence reports. Within the greater area of aircraft maintenance and continuing airworthiness management, details of incidents and occurrences tend to be reported soon after an event. Reporting requirements are normally timebound (i.e., 72 h). Most organisations endeavour to notify the necessary stakeholders as soon as possible, often by telephone in the first instance. As many airline staff are employed on a shift-work basis, the 72 h window is useful in support of administering the reporting function. It is not unusual to have numerous points of contact for reporting within organisations. However, reporting generally follows a consistent route regardless of who the initial point of contact is. Some organisations appear to empower and encourage the submission of reports by any individual. Other organisations appear to endorse reporting through a 'chain of command'. Regardless of the chosen initial reporting route, all reports are progressed to a 'gate-keeper' within an organisation. The people initially responsible for examining the validity and completeness of submitted reports often hold a key position in either the quality assurance, technical services or maintenance departments. Generally, there is a strong awareness of the need to report incidents and occurrences classified as mandatory.
There may be numerous motivational reasons to report, such as ethical, safety, compliance with regulatory requirements and best practice for example. Those submitting reports embrace mandatory reporting as an obligation underpinned by the cultural norms of aircraft maintenance and continuing airworthiness management. When an issue is discovered, it is progressed through the reporting system regardless of its status. Many organisations welcome all reports including non-mandatory events that are highlighted through voluntary reporting streams. They evidently see value in including them in their analysis of events and the subsequent learning opportunities the reports may offer.
Single, Double and Triple-Loop Learning
From an organisational point of view, single-loop learning can be experienced when an error is detected and corrected but little else changes, Argyris and Schön [19] (p. 18). In aircraft line maintenance environments where a 'find and fix' ethos prevails, single-loop learning is often evident. It is not unusual for technical issues to threaten an aircraft's departure time. Such pressure points, often associated with fulfilling contractual obligations, may have a negative impact on the potential for learning from a related event. In such cases, if an issue arises, the matter may be resolved without any further recorded action. Because of the terse nature of the experience for the individual concerned, the opportunity for further learning may not extend beyond the single loop. Argyris and Schön [19] (p. 21) and Lukic et al. [35] proffer double-loop learning as learning that takes place and results in organisational norms and theory-in-use being altered. Presently, aircraft certifying and support staff are obliged to continuously preserve an adequate understanding of the aircraft being maintained and managed, along with associated regulations and procedures. A desired outcome of double-loop learning is often witnessed, for example, through the adjustment of environmental, behavioural and procedural norms. Instances of double-loop learning can be evident following unsuccessful attempts through single-loop learning. In-service continuation training is an effective enabler that is capable of supporting double-loop learning. Organisations are also required by EU 1321/2014 [4] to establish and maintain a continuation training programme for staff. A primary pillar of continuation training syllabi is the use of incidents and occurrences as lesson content for influencing organisational norms and behaviour in support of preventing recurrence of incidents and occurrences.
Deutero-learning (triple-loop learning) occurs when members of an organisation reflect upon previous learning and set about improving how the organisation can refine and improve the process of learning from events, Argyris and Schön [19] (p. 29), Bateson [36]. This could also be stated as learning how to learn, by seeking to improve single- and double-loop learning. Although deutero-learning may be considered a natural extension of the other levels of learning, the concept does not feature as a requirement in aircraft maintenance and continuing airworthiness management regulatory codes.
Learning Product
Aircraft maintenance and management regulatory codes require the reporting of 'any identified condition of the aircraft or component that has resulted or may result in an unsafe condition that hazards seriously the flight safety' [4]. Generally, a learning product can originate from numerous information sources within the aircraft maintenance and continuing airworthiness management arena. Specifically, GM1 145.A.30(e) [4] requires the use of accident and incident reports in support of the mandatory human factors training content. The intent of this material is to ensure information is imparted to the organisation's staff in support of preventing recurrence of the subject event. Such continuation training is mandated by European requirements for all aircraft maintenance and continuing airworthiness management organisations. Continuation training is both a product and a medium for imparting learning from incidents. Inputs to continuation training syllabi often feature learning from incidents and experience, augmented by safety notices and toolbox talks, and are recognised as a means of presenting the learning product to operational staff. Drupsteen and Guldenmund [16] cite Lampel et al. [37], who use the term 'learning about events', further explained as 'information about events is shared and diffused to help create new ideas', in this case in support of safe operations.
Effectiveness of Learning
The evaluation of any initiative's success is much more straightforward when clear objective indicators (learning outcomes) are employed. In the case of learning in an aircraft maintenance and management environment, organisations can generally employ indicators such as inspection non-compliance, audit findings and rates of incident reoccurrence in support of gauging the effectiveness of learning. Probing salient aspects such as timely investigation of incidents, assessing the learning content and feedback is a starting point for assessing effectiveness. Cooke [20] concludes that absent or poor information can compromise the effectiveness of feedback. He also suggests that if the feedback cycle is ailing, the climate may deteriorate and have a negative impact upon organisational safety. From a commercial viewpoint, it is perhaps understandable that aircraft tend to generate revenue only when flying. However, airline operators need to maintain a balance between safe operations and productivity. It is essential that incident causal factors are fully identified and that adequate time and resources are available to support this important aspect of learning. Cooke [20] endorses a suggestion that increased reporting of incidents enhances continuous improvement in high-reliability industries. However, establishing adequate causation is also an attribute capable of supporting effective learning from an event in dynamic environments.
The importance of just culture as an enabler for incident reporting and subsequent effective learning also cannot be ignored. Under-reporting of events resulting from single-loop learning experiences amongst operational maintenance staff, together with production pressures, can impact negatively upon efforts to propagate a learning environment. McDonald [38] suggests from their analysis 'that there is a strong professional sub-culture, which is relatively independent of the organization. One implication of this finding is that this professional subculture mediates the effect of the organizational safety system on normal operational practice'. Von Thaden and Gibbons [39] conclude that safety culture 'refers to the extent to which individuals and groups will commit to personal responsibility for safety; act to preserve, enhance and communicate safety information; strive to actively learn, adapt and modify (both individual and organizational) behaviour based on lessons learned from mistakes…'. A just culture is defined in the applicable regulation, EU 376/2014 [6], as 'a culture in which front line operators or other persons are not punished for actions, omissions, or decisions taken by them, that are commensurate with their experience and training, but in which gross negligence, wilful violations and destructive acts are not tolerated'. Accordingly, a just culture is a fair culture. The effectiveness of the learning system can also be compromised by its efficiency as well as its inadequacies. The volume of information that staff must process and assimilate is continually increasing. Guardians of learning outcomes should be mindful that staff risk becoming information-weary as a result of the ever-increasing demands on their cognitive abilities.
Types of Knowledge
This relates to conceptual, dispositional, procedural and locative forms of knowledge [40]. One of the key objectives of learning from incidents is to identify the type of knowledge needed to prevent an issue recurring. When a reportable issue is discovered, the submitted report will identify 'what' happened. Subsequent follow-up will set out to determine 'why' the issue occurred. The guiding principles of 'how' to perform the task or operation are often contained in procedures or data particular to the task. The information contained in procedures will enable a person to utilise other forms of knowledge. The prevailing culture within an organisation will have an impact on learning from incidents. If a strong commercial culture exists, this may affect, for example, the depth and breadth of learning from incidents within the company. Induction and initial training for new staff is an important element for demonstrating where organisational sources of information can be accessed. Accident data repositories contain many well-documented examples of human factors related precursors to incidents, many of which may have originated in poor access to approved data and culminated in serious and possibly preventable incidents. Acknowledging and addressing the limitations related to the types of knowledge when developing continuation training programmes would have a positive impact on participants. The enabling industry requirements do not specify any discernible differences in how the types of knowledge are differentiated. A review of the human factors syllabus requirements did not highlight a need to appreciate or account for these human-centred limitations when designing and delivering training lessons.
Conclusions
It has been highlighted during this research that the opportunity to learn from incidents is not being fully embraced in the aircraft maintenance and continuing airworthiness management segment of the industry. While the idea of eliminating all incidents is a fallacy, reducing their numbers and potential for harm is a reality. Air travel is on the increase, and it is envisaged that the number of sectors flown will have doubled within the next two decades. If current levels of safety were to remain stagnant while activity doubles, twice the current number of fatalities would surely not be acceptable. Many people relate safety to freedom from risk and danger [41]. Unfortunately, risk and danger are often ubiquitous in aircraft maintenance and continuing airworthiness management activities. Managing sources of risk and danger is a tall order for some organisations. Document 9859 [42] recognises that 'aviation systems cannot be completely free of hazards and associated risks'. However, the guidance does acknowledge that as long as the appropriate measures are in place to control these risks, a satisfactory balance between 'production and protection' can be achieved. Perrow [43] (p. 356) acknowledges that 'we load our complex systems with safety devices in the form of buffers, redundancies, circuit breakers, alarms, bells, and whistles' because no system is perfect.
Detecting and identifying hazards highlighted through incident reporting systems is recommended by ICAO standards and recommended practices as an effective means of achieving practicable levels of safe operations. Therefore, objective data mined from a reporting system offers the potential to enlighten aviation stakeholders and to illuminate weakness that may be present. Such information can assist with a better understanding of events and augment mitigating measures against the potential effects of these hazards. When incidents occur, this can be an indication of a failure in an organisation's process and/or practice. Because of continuous challenges faced by organisations in the aviation industry, there is still potential to learn from resulting incidents and pre-cursors. The learning is based on the potential new knowledge available from the associated collection, analysis and interventions for these events. Effective learning can be considered as a successful translation of safety information into knowledge that actively improves the operating environment and helps prevent recurrence of unwelcome events.
The paper features a brief exercise to demonstrate how safety information can be translated into lessons capable of augmenting knowledge within an aircraft maintenance and management organisation. To support this, fifteen occurrences drawn from an EC-CAIRS incident database portal were analysed. The results of the analysis, along with potential causation factors, are presented. Additionally, a simple mechanism in support of the delivery of associated safety lessons was developed and is presented in Table 1 above. Integrating the known causal factors with the 'Dirty Dozen' taxonomy, which is already associated with this aviation segment, provides a useful template for continuation training in the segment. The emerging incident/occurrence themes related to the featured events are briefly discussed and presented within the document. The publication also introduces a framework that assembles and explains the main elements of an incident within its lifecycle. The purpose of this is to illustrate tacit aspects of an incident that have the potential to augment learning within the process. In order to leverage the maximum benefit from the details of an incident, learning processes must recognise the existence of these event components. There can then be a formal approach to gauging the effectiveness of learning and a means of identifying underperforming elements of the learning process.
This publication could assist subject organisations with a review of their management of incident information when developing continuation training material and learning outcomes.
Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
The conductance of porphyrin-based molecular nanowires increases with length
High electrical conductance molecular nanowires are highly desirable components for future molecular-scale circuitry, but typically molecular wires act as tunnel barriers and their conductance decays exponentially with length. Here we demonstrate that the conductance of fused-oligo-porphyrin nanowires can be either length independent or increase with length at room temperature. We show that this negative attenuation is an intrinsic property of fused-oligo-porphyrin nanowires, but its manifestation depends on the electrode material or anchor groups. This highly-desirable, non-classical behaviour signals the quantum nature of transport through such wires. It arises, because with increasing length, the tendency for electrical conductance to decay is compensated by a decrease in their HOMO-LUMO gap. Our study reveals the potential of these molecular wires as interconnects in future molecular-scale circuitry.
Fig. 1: A schematic of a generic molecular junction and fused-oligo-porphyrin (FOP) monomer, dimer and trimer molecular wires. (a) A schematic of a generic molecular junction containing a fused porphyrin trimer. (b) A porphyrin monomer connected to electrodes from m and m' connection points. (c) A fused porphyrin dimer, comprising two monomers connected to each other through three single bonds (red bonds) and connected to electrodes from d and d' connection points. (d) A fused porphyrin trimer connected to electrodes from t and t' connection points.
Figure 1 shows the molecular structure of a porphyrin monomer, a fused dimer and a fused trimer, in which two or three porphyrins are connected to each other through three single bonds (shown by red lines in fig. 1c, 1d). We first consider molecular junctions in which the carbon atoms labelled (m,m'), (d,d') and (t,t') respectively are connected to electrodes via acetylene linkers (see SI for the molecular structure of the junctions). Figure 2a shows an example of a junction with graphene electrodes (see fig. S1a-c in the SI for the detailed molecular structure), where the porphyrin wires are connected to the edges of rectangular-shaped graphene electrodes with periodic boundary conditions in the transverse direction. To calculate the room-temperature electrical conductance G, we calculate the electron transmission coefficient T(E) using the Gollum transport code 27 combined with the material-specific mean-field Hamiltonian obtained from the SIESTA implementation of density functional theory (DFT) 28 and then evaluate G using the Landauer formula (see methods). Results for the monomer, dimer and trimer attached to graphene electrodes (see figure 2a) are shown in figure 2b.
Fig. 2: Transport through monomer, dimer and trimer molecular wires attached to two graphene electrodes. (a) A fused porphyrin molecular wire connected to graphene electrodes via acetylene linkers.
(b) The room-temperature electrical conductance for the porphyrin monomer (blue curve), porphyrin dimer (red curve) and porphyrin trimer (green curve) as a function of the electrode Fermi energy E_F, in units of the conductance quantum G_0 = 77 microsiemens.
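The conductance evaluation described above (transmission coefficient plus Landauer formula) can be sketched numerically. The transmission function below is a purely illustrative Lorentzian resonance, not the DFT-computed T(E) of the paper, and all parameter values are assumptions chosen for demonstration; only the integration structure, G = G_0 ∫ T(E) (−∂f/∂E) dE, follows the stated method.

```python
import math

G0 = 77e-6  # conductance quantum, 2e^2/h, in siemens

def fermi_derivative(E, EF, kT=0.025):
    # -df/dE of the Fermi-Dirac distribution; kT ~ 25 meV at room temperature.
    # Equals (1/4kT) * sech^2((E-EF)/2kT) and integrates to 1 over all E.
    x = (E - EF) / kT
    return (1.0 / (4.0 * kT)) * (1.0 / math.cosh(x / 2.0) ** 2)

def landauer_conductance(T, EF, kT=0.025, width=0.5, steps=2000):
    # Midpoint-rule integral of T(E) * (-df/dE) over a window around EF.
    dE = 2 * width / steps
    total = 0.0
    for i in range(steps):
        E = EF - width + (i + 0.5) * dE
        total += T(E) * fermi_derivative(E, EF, kT) * dE
    return G0 * total

def T_example(E, E_res=0.3, gamma=0.05):
    # Illustrative Lorentzian transmission resonance 0.3 eV above EF (assumed).
    return gamma**2 / ((E - E_res) ** 2 + gamma**2)

G = landauer_conductance(T_example, EF=0.0)
print(f"G = {G:.3e} S  ({G / G0:.4f} G0)")
```

Because the thermal window is only a few kT wide, the result is dominated by the off-resonant tail of T(E) near E_F, mirroring why the alignment of E_F within the HOMO-LUMO gap matters in the discussion that follows.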
For these highly conjugated wires, the energy-level spacing decreases as their size increases. Therefore, the energy gap between the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) of the dimer is smaller than that of the monomer and, in turn, the HOMO-LUMO (HL) gap of the trimer is smaller than that of the dimer. This behaviour is reflected in the conductance resonances of figure 2b, which are furthest apart for the monomer (blue curve) and closest together for the trimer (green curve). This can be understood by starting from a chain of N isolated monomers. Since each monomer has a HOMO energy $\varepsilon_H$ and a LUMO energy $\varepsilon_L$, the isolated chain has an N-fold degenerate HOMO and an N-fold degenerate LUMO. When the monomers are coupled together to form a fused wire, the degeneracies are lifted, to yield a HOMO N-tuplet with molecular orbital energies $\varepsilon_{H_1} < \varepsilon_{H_2} < \cdots < \varepsilon_{H_N}$ and a LUMO N-tuplet $\varepsilon_{L_1} < \varepsilon_{L_2} < \cdots < \varepsilon_{L_N}$. Consequently, the new HL gap $E_g(N) = \varepsilon_{L_1} - \varepsilon_{H_N}$ is lower in energy than that of the monomer. Figure 3b shows that for the thiol-anchored wires, if the Fermi energy is lower than the mid-gap (0.18 eV) of the trimer, β is zero or slightly positive; otherwise β is negative. The tight-binding results show that for a value α = -0.65γ the curves overlap, and for more negative values of α the transmission coefficient increases with length for energies within the HL gap of the trimer (fig. 4a), in agreement with the above DFT results. To demonstrate that the decrease in the HL gap is due to a splitting of the HOMO and LUMO degeneracies, figure 4b shows the transmission curves of the trimer over a larger range of energy, for a series of values of the coupling α. For small α, the HOMO and LUMO are each almost triply degenerate and, as the magnitude of α increases, the degeneracy is increasingly lifted, leading to a reduction in the HL gap. However, fused porphyrin ribbons are narrow-gap semiconductors, meaning that eventually the conductance will begin to decrease with length.
In practice, this decrease is likely to be slower than exponential, because at room temperature and large enough length scales, inelastic scattering will become significant and a cross-over from phase-coherent tunnelling to incoherent hopping will occur 10 . For comparison, figure S6 of the SI shows the transmission curves for butadiyne-linked porphyrin monomer, dimer and trimer molecular wires, for which the attenuation factor β is clearly positive for a wide range of energies within the HL gap of the trimer in agreement with the reported measured values 21 . The fact that fused porphyrin ribbons are narrow-gap semiconductors means that for a finite oligomer, when electrons tunnel through the gap there will be contributions to the transmission coefficient from both the HOMO and the LUMO bands. Figure S9 of the SI shows that the qualitative features of figure 4a and figure 2 can be obtained by summing these two contributions.
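The degeneracy-splitting picture discussed above can be reproduced with a minimal tight-binding sketch: a chain of N monomers, each contributing one HOMO and one LUMO level, coupled to its neighbours by α. For a nearest-neighbour chain the eigenvalues are known analytically, ε + 2α cos(kπ/(N+1)), so the oligomer HL gap E_g(N) = ε_L1 − ε_HN can be evaluated directly. The level energies and coupling strength below are illustrative assumptions, not the fitted parameters of the paper.

```python
import math

def band_levels(eps, alpha, N):
    # Eigenvalues of an N-site nearest-neighbour tight-binding chain with
    # on-site energy eps and hopping alpha: eps + 2*alpha*cos(k*pi/(N+1)).
    return [eps + 2 * alpha * math.cos(k * math.pi / (N + 1))
            for k in range(1, N + 1)]

def hl_gap(eps_H, eps_L, alpha, N):
    # Oligomer HOMO-LUMO gap: lowest LUMO-derived level minus highest
    # HOMO-derived level, i.e. E_g(N) = eps_L1 - eps_HN.
    homo = band_levels(eps_H, alpha, N)
    lumo = band_levels(eps_L, alpha, N)
    return min(lumo) - max(homo)

# Illustrative parameters (eV): monomer gap of 2.0 and coupling alpha = -0.65*gamma,
# echoing the ratio quoted in the text (gamma value assumed here).
gamma = 0.5
alpha = -0.65 * gamma
gaps = {N: hl_gap(-1.0, 1.0, alpha, N) for N in (1, 2, 3)}
print(gaps)
```

With these numbers the gap shrinks monotonically from monomer to trimer, which is the mechanism behind the negative attenuation factor: lengthening the wire narrows the gap faster than tunnelling suppresses the transmission.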
The tight-binding results of figure 4 and the DFT results with a non-specific anchor (figure 3) suggest that a negative β factor is a generic feature of the fused porphyrin core, provided the centres of the HOMO-LUMO gaps of the monomer, dimer and trimer coincide. However, whether or not it is measured experimentally depends on shifts of the molecular-orbital levels after attachment to the electrodes. This is illustrated by the calculations shown in figure S10 of the SI using direct C-Au covalent anchoring to gold electrodes, where the HOMOs of the monomer, dimer and trimer coincide and therefore the centres of their HOMO-LUMO gaps do not. This spoils the generic trend and leads to a positive β factor.
In summary, we have demonstrated that the electrical conductance of fused oligo-porphyrin molecular wires can either increase with increasing length or be length-independent in junctions formed with graphene electrodes. This is due to alignment of the middle of the HOMO-LUMO gap of the molecules with the Fermi energy of the graphene electrodes. In addition, we show that in junctions formed with gold electrodes this generic feature is anchor-group dependent. The negative attenuation factor is due to the quantum nature of electron transport through such wires and arises from the narrowing of the HOMO-LUMO gap as the length of the oligomers increases.
Computational Methods
The Hamiltonian of the structures described in this paper was obtained using DFT (as described below) or constructed
Supporting Information
The conductance of porphyrin-based molecular nanowires increases with length
Then if we assume no interference between the HOMO and LUMO, the total transmission coefficient is the sum of the two contributions, T(E) ≈ T_HOMO(E) + T_LUMO(E).
MOOC and SPOC, Which One is Better?
The research process was established according to the norms of MOOC and SPOC respectively: learning support platforms were set up, and two classes with no significant differences in knowledge level or learning style were arranged to study on the two types of platform. Data analysis methods were then used to explore the key factors that affect learning effectiveness. The results are: 1. MOOC and SPOC are not alternatives but parallel modes. MOOC fits off-campus, large-scale sharing of educational resources, with educational fairness as its guiding concept, while SPOC is a specialized on-campus education mode. 2. MOOC is well suited to basic theory education, while SPOC applies to professional skills education. 3. MOOC is more suitable for people with strong self-learning ability, while SPOC suits students with weaker self-control.
INTRODUCTION
The "Internet+" age allows information users to study and communicate on the internet at any time, and MOOCs (massive open online courses) and a diversity of new teaching methods have developed vigorously along with it (Rolf, 2015). Under the continual reform of the information environment, the teaching of information retrieval courses is also continuously exploring its own development (Alraimi, Zo, & Ciganek, 2015). There is no doubt that the early development of MOOCs brought clear advantages and greatly promoted the reform and development of course teaching in Chinese colleges and universities, but it also inevitably exposed some problems. The SPOC (small private online course) mode, proposed by Professor Armando Fox of the University of California, Berkeley, has started to attract attention, and its adoption has brought profound changes to course teaching. MOOC or SPOC, which is better? This has become a concern of the educational sector.
A MOOC (Massive Open Online Course) refers to curriculum resources shared on an Internet platform, free from time and place restrictions, and widely disseminated at no cost. Based on its large-scale, open, online features, the MOOC concept is to share high-quality teaching resources worldwide (Goldberg et al., 2015) and, combined with educational needs, to provide quality services to learners around the world. MOOCs have attracted wide attention since their rise (Jordan, 2015). MOOCs developed explosively in 2012, so 2012 is also known as the "Year of the MOOC". Three platforms, Coursera, edX and Udacity, came into being, becoming the most influential MOOC platforms. Many American Ivy League universities cooperated with these three platforms to export high-quality teaching resources to the world.
In 2013, Professor Fox at the University of California, Berkeley first proposed the SPOC (small private online course) concept, i.e., small-scale private online courses, sometimes rendered as "private broadcast class" (Armando & David, 2014). SPOC is considered a blended teaching mode that developed as universities applied MOOC materials in classroom teaching after the rise of the MOOC (massive open online course) wave. There is also the view that SPOC = Classroom + MOOC (Armando & Berkeley, 2013). Based on the research results of many scholars, SPOC can be defined as a curriculum education model that applies MOOC teaching resources to the physical campus ("Definition of Small Private Online Course SPOC", 2013). It aims at the organic integration of high-quality MOOC curriculum resources with traditional campus classroom teaching, in order to give full play to the advantages of MOOCs, make up for the shortcomings of both MOOCs and traditional teaching, reverse the teaching process, change the teaching structure, and improve teaching quality (Tim, 2014). SPOC platforms, as "school-based" learning platforms built on MOOCs, have been emerging constantly over the past three years (Sean, 2013), such as the Tsinghua University ZhiXue Academy teaching service platform, the XuetangX "school cloud" platform, the Zhejiang University CNSPOC cloud curriculum platform, the iCourse "school cloud services" platform, and the Chaoxing PanYa network teaching platform. In many colleges and universities these play an important role in SPOC curriculum construction.
METHODOLOGY
The research process was established by the study as follows: first, learning support platforms were set up according to the norms of MOOC and SPOC respectively, and two classes with no significant differences in knowledge level or learning style were arranged to study on the two types of platform. Both classes had a dedicated class teacher responsible for the students' daily management. Data analysis methods were then used to explore the key factors affecting learning effectiveness. Finally, based on the conclusions of the research, the paper summarized the remaining problems and doubts and carried out a second round of teaching practice, in order to verify the conclusions and guarantee the rigor and scientific character of the study.
The Completion Rate in Different Types of Learning
In the 2016 MOOC course, the teaching platform was set to accept a maximum of 200 student registrations, and all applications were approved by the teachers. By the sixth week, 176 students had withdrawn, leaving 24; at the end of the semester, 19 students took the final exam and 19 finished the class. The final completion rate was therefore only 9.5%.
State of the literature
• MOOCs have achieved great success all over the world and have had a great impact on traditional college teaching.
• But in university teaching practice, the MOOC model does not necessarily produce good results. The SPOC model is a revised form of the MOOC model in higher education.
• In theoretical research MOOCs have begun to cool down, and in universities SPOC is now replacing MOOC.
Contribution of this paper to the literature
• Through MOOC and SPOC teaching practice, it was found that in higher education SPOC performs better than MOOC: student participation and completion rates are higher.
• The advantages and disadvantages of MOOC and SPOC are analyzed, and it is pointed out that SPOC will not replace MOOC.
• MOOC and SPOC each have their own characteristics; neither is simply better. They play different roles in basic theory education, on-the-job learning and continuing education.
In the 2016 SPOC course, a total of 80 students registered because of the prior regulation that "only sophomore students are allowed to take this course", and the teacher approved all 80. By the sixth week, 47 students remained enrolled. At the end of the semester, 47 students took the final exam and all 47 passed, i.e., 47 students finished the class. Among the selected students, the completion rate thus reached 59%.
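The two completion rates can be reproduced directly from the enrolment figures quoted above (a trivial check; the counts are taken from the text, everything else is a generic helper):

```python
def completion_rate(finished: int, registered: int) -> float:
    """Completion rate as a percentage of initial registrations."""
    return 100.0 * finished / registered

mooc = completion_rate(19, 200)   # 2016 MOOC course: 19 of 200 finished
spoc = completion_rate(47, 80)    # 2016 SPOC course: 47 of 80 finished
print(f"MOOC: {mooc:.1f}%  SPOC: {spoc:.2f}%")
```

The SPOC figure of 58.75% is what the text rounds up to 59%.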
Interviews revealed that many students' initial enthusiasm for the elective was high, but after registering they found that a lack of prerequisite courses made the content difficult, and in the end they simply withdrew. In addition, some students were very interested at the beginning and highly enthusiastic, but weak individual time-management skills led to insufficient investment in the course early on. The withdrawal rate among these students was also very high.
Video on Demand Rates in Different Types of Learning
In managing students, the MOOC platform's teacher only required students, in the form of an announcement, to watch the micro-videos or classroom recordings on the teaching platform on their own, and did not link the quality of students' video viewing to the end-of-term comprehensive evaluation.
SPOC limited the selection and qualification of students, and made clear to them that the length of time spent watching videos and the quality and quantity of their participation in the teaching forum would be automatically recorded and used as sub-indices of the comprehensive evaluation at the end of the semester.
In Table 1, the on-demand ratio is the number of viewing events divided by the number of enrolled students times the number of videos. The full-view ratio is the number of complete viewings divided by the number of enrolled students times the number of videos. The more-than-50 ratio is the number of students whose viewing exceeded the 50 threshold divided by the number of enrolled students times the number of videos.
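All of the Table 1 metrics share one form: a count of events normalized by (students × videos). A minimal sketch of that computation, with hypothetical counts rather than the paper's data:

```python
def ratio(events: int, n_students: int, n_videos: int) -> float:
    """Events per (student x video), the normalization used in Table 1."""
    return events / (n_students * n_videos)

# Hypothetical class: 47 enrolled students, 30 course videos
n_students, n_videos = 47, 30
on_demand_events = 1200   # total viewing events (assumed)
full_views = 900          # complete viewings (assumed)

print(round(ratio(on_demand_events, n_students, n_videos), 3))
print(round(ratio(full_views, n_students, n_videos), 3))
```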
Comparing the learning patterns of the two groups of students shows that all of SPOC's learning indicators are better.
The Rate of Effective Interactions in Different Types of Learning Patterns
Students' posting and replying behaviour under each learning mode indirectly reflects the level and depth of their participation in collaborative knowledge construction. Topic posts and replies related to the curriculum content are defined as "effective posts"; other types are called "invalid posts". From the students' participation in online discussion, the main data are as follows. Table 2 shows that although MOOC students posted a great deal, the proportion of effective posts was low, only 5.20%. SPOC's valid posts accounted for 93.28% of the total, which means that most forum posts were effectively related to the curriculum content. In addition, the number of MOOC topic posts was relatively large and the discussion content was very scattered, while SPOC posts were more focused, with each topic post receiving a relatively high number of responses.
DISCUSSION
Although MOOC and SPOC differ little in technical platform, type of learning resources and curriculum structure, they have their own characteristics in teaching design, teaching-management philosophy, organization of the teaching process, and methods of operation. The differences between the two are mainly reflected in the aspects shown in Table 3.
Now, let us analyze the advantages and disadvantages of MOOC and SPOC.
Share quality resources anytime, anywhere
MOOCs provide a large, free, interactive learning platform that facilitates exchange among students of various colleges and universities (Armando & David, 2014). The network narrows the distance between people and meets students' need to learn and exchange at any time. Learning is no longer limited by geographical location, class time or classroom space: anyone with the will to learn can join the platform, share learning experiences, raise doubts and get answers through communication. Moreover, MOOCs break through the enrollment limits of traditional teaching (Tapson, 2013), allowing all interested students to join the class and study together.
Active learning
MOOCs turn students from passive into active learners. First, relying on the Internet platform, the whole teaching process is saved, and students may review it at any time; this alone resolves most of the doubts arising during learning. Second, after watching a course video, students complete a short test and, according to their own needs, post problems in the learning forum, obtaining answers through discussion. Online discussion effectively helps students abandon the pattern of passive acceptance; the learning outcome is entirely in their own hands.
Multiple evaluation mechanism
The multiple evaluation system of MOOCs is used to solve the "one exam decides everything" problem of the traditional teaching mode and can fully reflect students' learning outcomes. First, there is complete evaluation of the learning process: while students watch videos, the MOOC backend records their learning behaviour, and the routine grade is given according to video completion and short-test results. Second, there is the comprehensive evaluation of the final examination: students take the final test online and the backend keeps the corresponding record. Third, teachers' evaluations are recorded faithfully: teachers assess students according to their classroom performance and the frequency and quality of their forum questions, and these assessments are recorded in the MOOC backend.
Instant feedback
If students encounter doubts during learning, they can raise them at any time in the online forum, where teachers, assistants or classmates from the same cohort can join the discussion and provide answers. In addition, after finishing their homework, students can grade each other's work. Such a model ensures that problems are answered immediately, that key and difficult points are understood in time, and that learning efficiency improves.
The course-withdrawal rate remains high
It was found that the withdrawal rate of MOOCs is higher than in the traditional model. First, learners are prone to inertia: at the beginning of an elective, registrations are numerous and learners are interested and eager to learn, but over time enthusiasm declines rapidly and persistence is poor. Second, the exit threshold and exit cost are low: since leaving a MOOC class carries no restriction and no price, enthusiasm and persistence are further reduced.
The number of enrolled students is too large
Besides the high withdrawal rate caused by students' failure to persist, the large scale of MOOCs also burdens teachers. Because MOOCs allow any interested student to participate, enrollment can become excessive, placing a heavy curriculum burden on teachers. A traditional elective can only accommodate on the order of a hundred students, while some popular MOOCs break through this limit and reach thousands of students or more. Although the basic knowledge points are delivered to students in video form, Q&A, forum management and other related duties still put pressure on teachers.
Authority questioned
After students complete a MOOC, the platform awards the corresponding credits and a certificate, recognizing the whole learning process. However, because MOOCs are not yet sufficiently accepted, some students cannot convert MOOC credits into elective credits at their school. In addition, some employers do not recognize the authority of MOOC courses and question their learning effect and grading system, which dampens students' enthusiasm for MOOCs. Together with the lack of monitoring in online learning, behaviours such as "learning by proxy" and cheating in the learning process also damage the credibility of MOOCs.
SPOC advantages
1. Personalized learning resources
SPOC does not emphasize the variety and completeness of resources, but their personalized characteristics (Armando & David, 2014). According to students' age characteristics and cognitive styles, learning resources are provided respectively as text, PPT and video types, and teaching activities are organized accordingly.
2. Strong segmentation of video resources into knowledge points
On a platform based on the SPOC concept, long recordings of whole-class activities (common on MOOC platforms) should be used as little as possible, in favour of knowledge-oriented micro-videos. Emphasis is placed on short, small, refined video resources, on content targeted to specific cases, and on adaptability to the learners.
3. Preventing lurkers through real-time management
SPOC requires all students to use their real names, which helps prevent the appearance of lurkers (students who only browse without replying) in the learning-community forum. The presence of many lurkers is not conducive to communication among students or to the collision of ideas, and the purpose and meaning of the forum are then lost (Brinton, Rill, & Ha, 2015). Solving the lurker phenomenon is therefore necessary to promote communication and discussion among students, encouraging them to publish their views freely, actively and boldly, and enhancing interaction. Real-time supervision and management can record every learning step of every student and feed those records back to them in time, guiding the learning process, giving full play to the teacher's leading role, and stimulating students' external learning motivation.
Restricted number of students
SPOC resources are enjoyed only by the part of the student body who pay for them, not by other students unable or unwilling to pay, which differs from the resource-sharing concept of MOOC.
Homogeneous student composition
SPOC places limits on students' qualifications; most of the time students were separated into liberal-arts and science streams. Although the first-choice rate was greatly enhanced by this, it is disadvantageous for some cross-major students' learning.
CONCLUSIONS
We cannot simply say that MOOC is better than SPOC, or that SPOC is better than MOOC; we can only say that one or the other is more suitable for particular teaching content, teaching goals and teaching objects.
1. MOOC and SPOC are not alternatives but parallel modes. MOOC fits off-campus, large-scale sharing of educational resources, with educational fairness as its guiding concept, while SPOC is a specialized on-campus education mode.
2. MOOC is well suited to basic theory education, while SPOC applies to professional skills education. MOOC builds a new platform for large-scale popularization of education: through carefully designed videos, learners can quickly grasp the theoretical content of textbooks, and MOOCs help to minimize the rural-urban education divide (Ming, 2017). Professional education requires a great deal of interaction and practice that learners cannot experience in person online, so SPOC is more appropriate.
3. MOOC is more suitable for people with strong self-learning ability, while SPOC suits students with weak self-control. Analysis of 2012 user data found that, among the 10,000 students from 113 countries who completed all MOOC courses at the University of California, Berkeley, about three-quarters of the learners held a college degree and a full-time job. People need varied knowledge in their careers, which university professional education cannot fully supply, and MOOC's massive, shared distance education can help employees reach their career targets; on-the-job and continuing-education students generally perceive themselves as well served by curricula delivered via distance education (Deniz & Zeynep, 2016). Many on-campus students do not yet know what knowledge their future careers will actually require, and even with a perfectly designed MOOC in sight they will not necessarily choose it, so the SPOC pattern is more appropriate for students who need guidance.
Prolonged reorganization of thiol-capped Au nanoparticles layered structures
Prolonged reorganization behaviour of mono-, di-, tri- and multi-layer films of Au nanoparticles prepared by the Langmuir-Blodgett method on hydrophobic Si(001) substrates has been studied using X-ray scattering techniques. The out-of-plane study shows that although at the initial stage the reorganization occurs through compaction of the films with the layered structure unchanged, finally all layered structures modify to a monolayer structure. Due to this reorganization the Au density increases within the nanometer-thick films. The in-plane study shows that inside the reorganized films the Au nanoparticles are distributed randomly and the particle size is modified as the metallic cores of the Au nanoparticles coalesce.
Metal nanoparticles exhibit interesting optical, 1-3 electrical, 3,4 magnetic 5,6 and catalytic 7,8 properties and for that reason can be used in nanotechnology by forming suitable architectures in different dimensions onto a chosen substrate. [9][10][11] Nanoparticles surrounded by a dodecanethiol ligand shell have been used extensively for making such assemblies, 12 and their structures, patterns and morphologies on water and solid surfaces under different experimental conditions have been studied. [13][14][15][16] A monolayer formed by the Au nanoparticles exhibits reversible buckling on the water surface up to a certain surface pressure (π), but under further compression an irreversible monolayer-to-bilayer transformation occurs. 17,18 A trilayer structure has also been observed due to the irreversible transition of the monolayer, 14 and with compression of the monolayer folding, wrinkling and then wrinkling-to-folding transitions have been observed. 19 Moreover, layer-by-layer assembly of thiol-capped Au nanoparticles has also been observed with compression of the monolayer. 20 On a solid substrate, such Au nanoparticles are easily deposited using the Langmuir-Blodgett (LB) method.
13,21,22 During deposition from the water to the solid surface a two-dimensional short-range structural reordering of the Au nanoparticles occurs and the packing symmetry changes from triangular to square-like. 23 Both in-plane and out-of-plane restructuring of a monolayer of Au nanoparticles have been observed due to the evaporation of trapped water. 16 Substrate surface conditions also play an important role in the growth and stability of any nanolayer on them. [24][25][26][27] For organic-capped nanoparticles, which are effectively hydrophobic in nature, the growth, structure and stability on differently passivated Si surfaces have been studied. 28 Differently passivated Si surfaces have different hydrophilic/hydrophobic natures, which effectively control the growth and stability of the nanoparticle films. Due to reorganization over nearly two months, a close-packed layered structure has been observed. 29 In this paper, we show the prolonged reorganization of thiol-capped Au nanoparticle LB films deposited on hydrophobic Si(001) substrates. The structures of the LB films and their evolution with time over nearly twelve months have been monitored using the x-ray reflectivity (XRR) technique. After prolonged reorganization, the structure of the films has also been studied with the grazing-incidence small-angle x-ray scattering (GISAXS) technique. Our study shows that after reorganization all layered structures become monolayer structures and inside the monolayer the gold nanoparticles are distributed randomly. Due to this structural modification the Au density is greatly enhanced within the nanometer-thick films. Moreover, the particle size is modified and on average nanoparticles of three different sizes form due to the coalescence of the metallic cores of the Au nanoparticles.
Dodecanethiol-encapsulated Au nanoparticles were synthesized by a phase-transfer redox reaction using the Brust method. 30 Methanol was added to the toluene solution containing the capped nanoparticles to remove excess reagents, and the nanoparticles were filtered out of the solution. The particles were then redispersed in toluene at the desired concentration (0.95 mg/mL). Transmission electron microscopy (TEM) measurements were carried out with a FEI Tecnai G2 20S Twin electron microscope operated at 200 kV with a resolution of 2 Å to obtain the average diameter of the nanoparticles. The average diameter of the metallic core, determined from the particle size distribution, is around 34 ± 7 Å. The metallic core is encapsulated by dodecanethiols of about 14 Å, so the average diameter of the encapsulated nanocrystal is about 60 Å.
Au nanoparticles were spread from the 0.95 mg/mL toluene solution (600 μL) using a micropipette onto the surface of Milli-Q water (resistivity 18.2 MΩ·cm) in a Langmuir trough (Apex Instruments). The trough was kept undisturbed for 15 min to let the solvent evaporate. The π-A isotherm was recorded at 25 °C. π was measured with a paper Wilhelmy plate and the monolayer was compressed at a constant rate of 47.5 mm²/min. Films were deposited by the LB method on Si(001) substrates at three different surface pressures (π = 8, 16 and 21 mN/m) at room temperature (25 °C). Depositions were carried out using one down-up cycle, i.e., two strokes: in the down stroke the substrate goes from air to water, and in the up stroke it goes from water to air through the nanoparticle monolayer. Two films were also deposited by the LB method at π = 8 mN/m at room temperature using two and four down-up cycles. The speed of both up and down strokes was 2 mm/min. Prior to deposition, the Si(001) substrates were made H-terminated by keeping them in a hydrogen fluoride solution (HF, Merck, 10%) for 3 min at room temperature (25 °C). 26,28 Immediately after the chemical treatment, all substrates were kept in Milli-Q water until LB deposition.
XRR measurements were carried out using both laboratory and synchrotron x-ray sources. In the laboratory, a versatile X-ray diffractometer (VXRD) setup was used; details of the setup and instrumental resolution have been described earlier. 20,26 XRR and GISAXS measurements after prolonged reorganization were performed at the ID10B beamline of the European Synchrotron Radiation Facility (ESRF) in Grenoble, using a high-energy (8 keV, λ = 1.55 Å) synchrotron source. Data were taken in the specular condition, i.e., with the incident angle (θ) equal to the reflected angle (θ) and both in the scattering plane. Under such conditions a nonvanishing wave-vector component q_z exists, equal to (4π/λ)sinθ. The XRR technique essentially provides an electron-density profile (EDP), i.e., the in-plane (x-y) averaged electron density (ρ) as a function of depth (z), in high resolution. 20 From the EDP it is possible to estimate the film thickness, electron density and interfacial roughness. Analysis of the XRR data has been carried out using Parratt's formalism. 31,32 For the analysis, each film has been divided into a number of layers, including roughness at each interface. 20,32,33 In the GISAXS measurements a 2D X-ray detector, PILATUS 300K, was used to record the scattered X-ray radiation. GISAXS images were taken at grazing incidence angles of 0.2°, 0.3° and 1.55°. The direct beam was stopped and the specularly reflected beam was attenuated to avoid saturating the detector.
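Parratt's recursion, cited above for the XRR analysis, can be sketched in a few lines: starting from the substrate, the reflection amplitude at each interface is combined with the Fresnel coefficient and a phase factor for propagation through the layer above it. This is a minimal illustration, not the fitting code used in the work, and the optical constants in the example are generic placeholders.

```python
import numpy as np

def parratt(qz, delta, beta, d, wavelength=1.55):
    """Specular reflectivity |R|^2 of a layer stack via Parratt's recursion.
    delta, beta: refractive-index parts (n_j = 1 - delta_j + i*beta_j) for
    each medium below the vacuum, ordered top layer ... substrate;
    d: thicknesses of the layers only (substrate is semi-infinite)."""
    k0 = 2 * np.pi / wavelength
    kz0 = np.asarray(qz, dtype=complex) / 2.0                 # vacuum
    n = 1.0 - np.asarray(delta) + 1j * np.asarray(beta)
    # z-component of the wavevector in each medium (kx is conserved)
    kz = [kz0] + [np.sqrt(kz0**2 - k0**2 * (1.0 - nj**2)) for nj in n]
    r = 0.0                                                   # below substrate
    for j in range(len(n) - 1, -1, -1):                       # bottom-up
        rf = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])        # Fresnel coeff.
        phase = np.exp(2j * kz[j + 1] * (d[j] if j < len(d) else 0.0))
        r = (rf + r * phase) / (1.0 + rf * r * phase)
    return np.abs(r) ** 2
```

For example, parratt(qz, delta=[1.1e-5, 7.6e-6], beta=[1e-6, 1.7e-7], d=[60.0]) would model a single ≈ 60 Å nanoparticle layer on a Si substrate, producing Kiessig fringes of period ≈ 2π/60 Å⁻¹ in q_z.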
XRR data and the corresponding analyzed curves of the Au nanoparticle LB films deposited by one down-up cycle at three different π values (8, 16 and 21 mN/m), and their time evolution, are shown in Fig. 1. EDPs obtained from the analysis are shown in the insets of the corresponding figures. They clearly show that only a monolayer of Au nanoparticles is deposited at π = 8 mN/m (inset of Fig. 1(a)), whereas a bilayer and a trilayer are deposited at π = 16 and 21 mN/m (insets of Fig. 1(b) and Fig. 1(c), respectively). The time-evolution EDPs show that reorganization of the nanoparticles takes place with time; as a result the total film thickness decreases and the Au density inside the layer increases. The EDPs in the inset of Fig. 1(a) imply that due to prolonged reorganization of the monolayer film, the Au layer density increases from ≈ 0.94 el/Å³ to ≈ 1.25 el/Å³. The EDPs in the insets of Fig. 1(b) and Fig. 1(c) show that for the bilayer and trilayer films two distinct reorganization processes take place. After ≈ 60 days, due to reorganization of the bilayer film, the Au layer density increases from ≈ 1.05 el/Å³ (top) and ≈ 0.78 el/Å³ (bottom) to ≈ 1.17 el/Å³ and ≈ 0.96 el/Å³ respectively and the total film thickness decreases, but the bilayer structure is maintained. In the next step, however, i.e., after prolonged reorganization (≈ 12 months), the bilayer structure modifies to a monolayer structure in which the Au layer density increases to ≈ 1.54 el/Å³. For the trilayer film, as for the bilayer, in the first step the layer density increases with time and the film thickness decreases while the trilayer structure is maintained; after prolonged reorganization for ≈ 12 months, the trilayer structure also collapses into a monolayer structure.
EDPs also show that after prolonged reorganization the average monolayer, bilayer and trilayer film thicknesses decrease by ≈ 9 Å, 22 Å and 45 Å respectively. XRR profiles and the corresponding analyzed curves of the Au nanoparticle LB films deposited by two and four down-up cycles at π = 8 mN/m, and their time evolution, are shown in Fig. 2. EDPs obtained from the analysis, shown in the insets of the corresponding figures, imply that three- and seven-layer structures have formed from the two and four down-up cycles respectively. The time-evolution (≈ 12 months) EDPs show that due to the reorganization the layered Au structures become a monolayer, the Au density increases and the total film thickness decreases. The EDPs in the inset of Fig. 2(a) imply that the Au layer density increases from ≈ 0.91 el/Å³ to ≈ 1.7 el/Å³ and the total film thickness decreases by ≈ 54 Å. The EDPs in the inset of Fig. 2(b) imply that for the seven-layer film the Au layer density increases from ≈ 0.77 el/Å³ to ≈ 1.8 el/Å³ and the total film thickness decreases by ≈ 62 Å. Thus, the out-of-plane structural analysis shows that a denser monolayer-like structure has formed from the layered structures. However, from the XRR study it is not clear whether the size of the nanoparticles is modified during the time evolution, nor whether the particles form any equilibrium two-dimensional patterns. To obtain the particle size and in-plane structural information from the prolonged-reorganized films, we performed GISAXS measurements. GISAXS data obtained from the monolayer, bilayer and trilayer films deposited by one down-up cycle are shown in Fig. 3(a)-3(c) respectively, while the GISAXS data obtained from the three- and seven-layer films deposited by two and four down-up cycles are shown in Fig. 4(a) and 4(b) respectively.
The beam-stopper blocks the direct beam together with the specular and Yoneda peaks, which are related to the incident angle and the critical angle respectively. The absence of spots in the q_x-q_z plane implies that there is no in-plane ordering among the Au nanoparticles after prolonged reorganization. The intensity variation as a function of q_x at q_z = 0.062 Å⁻¹ is shown in the insets of the corresponding figures. Fitting of all the intensity profiles as a function of q_x implies that after the reorganization three different Au particle sizes are present in the five films, with diameters of ≈ 26-28 Å, ≈ 46-62 Å and ≈ 118-162 Å respectively. The error in each diameter value is 5-9%. Thus, after reorganization, on average two populations of relatively larger Au nanoparticles have formed from the pristine nanoparticles, and all these particles are distributed randomly on the solid surface.
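The text does not spell out the model used to fit the q_x intensity profiles; a common minimal choice for dilute, randomly placed particles is the form factor of a homogeneous sphere, whose first minimum at qR ≈ 4.493 ties the measured minima positions to the particle diameter. The sketch below is illustrative only (the 50 Å example radius and function names are ours, not from the paper; a real fit would also include polydispersity and instrumental resolution):

```python
import numpy as np

def sphere_form_factor(q, R):
    """Normalized form factor P(q) of a homogeneous sphere of radius R; P(0) = 1."""
    x = np.asarray(q, dtype=float) * R
    amp = np.ones_like(x)
    nz = x != 0
    amp[nz] = 3.0 * (np.sin(x[nz]) - x[nz] * np.cos(x[nz])) / x[nz] ** 3
    return amp ** 2

# Illustrative particle: diameter 50 Å (R = 25 Å), inside the intermediate
# 46-62 Å range reported after reorganization.
R_true = 25.0
q = np.linspace(1e-4, 0.3, 20000)   # probed q_x range, 1/Å
P = sphere_form_factor(q, R_true)
q_min = q[np.argmin(P)]             # location of the first form-factor minimum
R_est = 4.4934 / q_min              # invert qR ~ 4.4934 to recover the radius
```

Reading off the first minimum of a measured profile in this way gives a quick size estimate consistent with a full fit for monodisperse spheres.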
The reorganized structures, the two different reorganization processes, and the final in-plane organization are shown schematically in Fig. 5. In the initial stage of reorganization the layered structures of Au nanoparticles are maintained, i.e., the mono-, bi- and trilayer structures persist, but the films become more compact due to interpenetration of the ligand shells and filling of the defects that formed during film growth. These structures and this reorganization process are shown in Fig. 5(a) and 5(b). However, after prolonged reorganization for ≈ 12 months, the layered structure is destroyed and a monolayer-like structure forms [Fig. 5(c)]. As the resulting monolayer films are more compact, the Au density inside such thinner films becomes very high. The metallic cores of the Au nanoparticles take on, on average, three different sizes: pristine, intermediate, and large. Since, for the same total volume, the surface-to-volume ratio (and hence the surface energy) of smaller nanoparticles is higher than that of relatively larger particles, it is most probable that the particles coalesce into larger ones to minimize the surface energy. The extent of coalescence, however, depends upon the coating layer and the number of available particles. The room temperature and the long reorganization time probably help the thiol molecules to redistribute on the surfaces of the modified Au nanoparticles. These modified Au nanoparticles are distributed randomly on the silicon surface and do not form any two-dimensional ordering, although under some other experimental conditions 2D ordering has been observed.
[34,35] Thus, it is clear that a multilayer-to-monolayer transformation is the most probable trend for such organo-coated metallic nanoparticles, irrespective of the substrate surface nature (hydrophilic, hydrophobic, etc.) or the initial number of layers, provided sufficient time is allowed for the reorganization. In addition, the metallic cores coalesce to form larger particles so as to minimize the surface energy.
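The surface-energy argument can be illustrated with simple sphere arithmetic, using the midpoints of the diameter ranges quoted above (≈ 27, 54 and 140 Å). The assumption that the metal cores are spherical and that the surface energy scales with total surface area is ours, for illustration:

```python
import math

def merged_count_and_area_ratio(d_small, d_big):
    """Number of small spheres needed to build one big sphere of equal total
    volume, and the resulting total-surface-area ratio (merged / separated)."""
    n = (d_big / d_small) ** 3      # volume conservation
    area_ratio = n ** (-1.0 / 3.0)  # N spheres: N*pi*d^2 ; merged sphere: pi*(N^{1/3}*d)^2
    return n, area_ratio

# Pristine ~27 Å cores merging into the two larger populations reported
# after prolonged reorganization (midpoints ~54 Å and ~140 Å):
for d_big in (54.0, 140.0):
    n, r = merged_count_and_area_ratio(27.0, d_big)
    print(f"{d_big:5.0f} Å core: ~{n:.0f} pristine particles, "
          f"surface area reduced to {100 * r:.0f}% of the separated value")
```

Merging N equal spheres always reduces the total surface area by a factor N^(1/3), which is the driving force for the coalescence described above.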
In conclusion, the reorganization behaviour of Au nanoparticle LB films with different layered structures, deposited on hydrophobic Si(001) substrates, has been studied using the XRR and GISAXS techniques to follow the out-of-plane and in-plane structural evolution with time. The structural information obtained shows that at the initial stage the reorganization proceeds through compaction of the films, keeping the layered structure unchanged, but after prolonged reorganization all layered structures transform into a monolayer structure. Due to this reorganization the Au density inside the nanometer-thick film increases. Moreover, the particle size changes as the metallic cores of the Au nanoparticles coalesce during the reorganization, and these particles are distributed randomly inside the monolayer film. Such substrate-independent reorganization and restructuring behaviour is highly relevant for nanotechnology.
| 3,529.4 | 2013-09-25T00:00:00.000 | [ "Materials Science" ] |
High-fidelity composite quantum gates for Raman qubits
We present a general systematic approach to design robust and high-fidelity quantum logic gates with Raman qubits using the technique of composite pulses. We use two mathematical tools -- the Morris-Shore and Majorana decompositions -- to reduce the three-state Raman system to an equivalent two-state system. They allow us to exploit the numerous composite pulses designed for two-state systems by extending them to Raman qubits. We construct the NOT, Hadamard, and rotation gates by means of the Morris-Shore transformation with the same uniform approach: sequences of pulses with the same phases for each gate but different ratios of Raman couplings. The phase gate is constructed by using the Majorana decomposition. All composite Raman gates feature very high fidelity, beyond the quantum computation benchmark values, and significant robustness to experimental errors. All composite phases and pulse areas are given by analytical formulas, which makes the method scalable to any desired accuracy and robustness to errors.
Composite pulse (CP) sequences feature a unique combination of ultrahigh fidelity, similar to resonant excitation, and robustness to experimental errors, similar to adiabatic techniques. Moreover, CPs offer a flexibility unseen in other control techniques: they can produce broadband (BB), narrowband (NB), passband (PB), and virtually any desired excitation profile. These features render CPs ideal for applications in quantum computation and quantum technologies in general [33].
Quantum technologies use qubits which are implemented as either directly or indirectly coupled two-state quantum systems. For example, in trapped ions, the electronic states of the ions are used as qubits of two types: optical and radio-frequency (rf) qubits. Optical qubits consist of an electronic ground state and a metastable state with lifetimes of the order of seconds, while rf qubits are usually encoded in the hyperfine levels of the electronic ground state of the ion, with lifetimes of thousands of years. Each type has its advantages and disadvantages. It has been shown that by using dressing fields in the rf-qubit configuration, one can suppress decoherence caused by magnetic-field fluctuations by as many as three orders of magnitude [34,35]. Rf qubits can be manipulated directly [36], by Raman transitions [37], or by combinations of these [34,35]. For a directly coupled qubit, just two states suffice and CPs are applied to them directly. For qubit states coupled indirectly, via an ancillary middle state, CPs are scarce because their construction requires controlling more complicated multistate quantum dynamics.
In this Letter, stimulated by the advances described above, we develop a general systematic approach for creating robust and high-fidelity quantum gates in Raman-type qubits. Our method is based on composite pulse sequences adapted for a three-state system. While the vast majority of the literature on composite pulses has focused on two-state systems, studies of CPs in higher-dimensional systems also exist [11,38-45]. Our method uses two powerful mathematical techniques: the Morris-Shore transformation [46] and the Majorana decomposition [47], which map the three-state Raman system onto equivalent two-state systems. Below we briefly introduce these two techniques and then design the single-qubit gates used in quantum computing.
We assume that the Raman qubit consists of two ground states, |0⟩ and |1⟩, coupled to an excited state |2⟩, as illustrated in Fig. 1 (left). Both the Morris-Shore transformation [46] and the Majorana decomposition [47] allow one to reduce the three-state Raman system to a two-state problem, Fig. 1 (right). We note that when the one-photon detuning in Fig. 1 (top left) is large, we can adiabatically eliminate state |2⟩ and obtain an effective two-state system {|0⟩, |1⟩}. However, the large detuning reduces the effective coupling |0⟩ ↔ |1⟩ and increases the gate time. The Morris-Shore transformation and the Majorana decomposition work for any detuning, even on resonance, where the dynamics, and hence the gates, are fastest.
Majorana decomposition. The Majorana decomposition reduces a multistate system with SU(2) symmetry to a two-state problem. Explicitly, it maps the three-state Hamiltonian onto a two-state one [11,42,47]. If the two-state propagator is parameterized by its Cayley-Klein parameters a and b, then the three-state propagator is given by Eq. (4). We shall use this mapping to design high-fidelity composite Raman gates.
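Since Eq. (4) is not reproduced in this excerpt, the sketch below uses the standard spin-1 (symmetric-square) image of an SU(2) propagator with Cayley-Klein parameters a and b; phase conventions may differ from the paper's Eq. (4), but the mapping and the complete-transfer property it implies are generic:

```python
import numpy as np

def su2_to_spin1(a, b):
    """Spin-1 (symmetric-square) image of the SU(2) propagator
    U2 = [[a, b], [-conj(b), conj(a)]].  Standard Majorana-type mapping;
    overall phase conventions may differ from the paper's Eq. (4)."""
    ac, bc = np.conj(a), np.conj(b)
    s2 = np.sqrt(2.0)
    return np.array([
        [a * a,        s2 * a * b,                 b * b],
        [-s2 * a * bc, abs(a) ** 2 - abs(b) ** 2,  s2 * ac * b],
        [bc * bc,      -s2 * ac * bc,              ac * ac],
    ])

# Complete population transfer in the two-state system (a = 0, |b| = 1)
# maps to complete transfer between the extreme states |0> and |2>:
U3 = su2_to_spin1(0.0, -1j)
```

Setting a = 0, |b| = 1 swaps |0⟩ and |2⟩ up to phases, which is exactly the complete-transfer property exploited for the Majorana X gate below.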
We are now ready to construct composite Raman implementations of the basic single-qubit quantum gates: the X, Hadamard, rotation, and phase-shift gates.
X gate. The X gate is defined by the Pauli matrix X = σ_x = |1⟩⟨0| + |0⟩⟨1|, and it is the quantum equivalent of the classical NOT gate. One way to produce the Raman X gate, as seen from Eq. (2), is to choose the Rabi frequency amplitudes as ξ0 = −ξ1 = √2. Then a = −1 and b = 0, and the propagator, Eq. (5), is the X gate for the qubit {|0⟩, |1⟩}. This operation, however, suffers from the drawbacks of resonant excitation: errors in the experimental parameters (Rabi frequencies, pulse durations, detuning) reduce the fidelity. Composite pulses overcome these drawbacks. For Raman transitions, instead of a single pair of pulses, we use a sequence of N pulse pairs with well-defined relative phases. The overall propagator reads U^(N) = U(φ_N) ··· U(φ_2) U(φ_1), where U(φ) is the propagator for a single pulse pair, Eq. (2) or Eq. (4). We measure the performance of the X gate in the figures below by the infidelity D, defined as the distance between the target gate X and the actual propagator U^(N). Morris-Shore. As is evident from Eqs. (1) and (2), a composite sequence in the original basis transforms into a composite sequence in the MS basis. This feature allows us to use the vast library of composite pulses for two-state systems to design Raman composite pulses. We note that the π pulse (5) in the original basis corresponds to a 2π pulse in the MS basis. Hence, our goal is to obtain a robust 2π pulse in the MS basis, which maps onto a robust π pulse in the original basis.
2π CPs are not as ubiquitous in the literature as π CPs. We propose here to create a 2π CP by merging two broadband (BB) π CPs B_N, each consisting of N pulses, where B_N are the BB composite sequences of Eq. (8a) with phases (8b) [48]. Here B = π(1 + ε) is a nominal π pulse (i.e., a π pulse for error ε = 0); for N = 3, we have the famous three-pulse sequence. The CP (7) features error compensation in both the populations and the phases of the propagator, which makes it suitable for gates, because this type of broadband 2π pulse is in fact a special case of the phase-gate composite pulses derived earlier [49]. This sequence applies in the MS basis. To obtain each of the nominal π pulses, we choose ξ0 = −ξ1 = 1/√2. Therefore, each π_{φk} pulse in the MS basis corresponds to the pulse pair (Q⁰_{φk}, −Q¹_{φk}) ≡ Q_{φk} in the original basis, where Q^{0,1} = (π/√2)(1 + ε) are nominal π/√2 pulses, and φk denotes the same phase in the two fields. The first two BB composite Raman sequences read, for N = 5, X10 = Q_0 Q_{2π/5} Q_{6π/5} Q_{2π/5} Q_0 Q_0 Q_{2π/5} Q_{6π/5} Q_{2π/5} Q_0. (9b) The infidelity of the resulting X gate for such sequences is shown in Fig. 2 (top, solid lines).
Nonzero detuning. When a one-photon detuning is present, as illustrated in Fig. 1 (top), we can proceed in the following way. Instead of a = −1, as in the resonant case, we now need to obtain a = −e^{iδ}, as seen from the propagator (2). This can be done by producing a phase gate F = exp[iησ_z] with a phase η = π + δ. A robust composite phase gate can be produced by a sequence of two BB π CPs, the first one with a zero phase and the second with a phase η [49].
If the detuning is small, |∆| ≪ Ω, 1/T, where T is the pulse width, we can replace the sequence of two broadband CPs (7) with phased CPs and thereby obtain an approximation to the phase gate F. Explicitly, the total sequence for N = 3, analogous to the sequence (9a), is given in Eq. (10). The infidelity of the X gate for such sequences is shown in Fig. 2 (top, dashed lines). If the detuning is moderate, |∆| ∼ Ω, 1/T, one can use CPs with double compensation, in the pulse area and the detuning, and produce the X gate as in Eq. (10). For instance, the five-pulse universal CP [50] U5 = B_0 B_{5π/6} B_{π/3} B_{5π/6} B_0 produces a composite X gate in the presence of moderate detuning when applied as the corresponding sequence of pulse pairs. The performance is illustrated in Fig. 3 (top frame). Finally, if the detuning is large, |∆| ≫ Ω, 1/T, one can adiabatically eliminate the excited state |2⟩ and obtain an effective two-state system with the effective two-photon coupling Ω_eff = −Ω_0 Ω_1*/(2∆) [53]. We can directly apply CPs in this system. Therefore, by applying a composite π pulse with phase stabilisation, we achieve an X gate up to a global phase, due to the Stark shift that remains after the adiabatic elimination. One prominent example of such a pulse sequence is the BB1 composite pulse [51], with ζ = arccos(−1/4). We note that here B_φ denotes a nominal π pulse associated with the effective two-photon coupling Ω_eff, the implementation of which requires very large pulse areas, A_k ≫ π (k = 0, 1). The infidelity in this case is plotted in Fig. 3 (bottom). Although the latter sequence contains only 5 pulse pairs, compared to 6 and 10 in the previous ones, it requires a much larger total pulse area in order to realize the effective nominal π pulses, and hence this X gate is much slower.
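The error compensation of the BB1 sequence cited here [51] is easy to check directly with resonant two-state propagators. The following is a minimal sketch in our own parametrization (not the paper's equations), comparing a single error-prone π pulse with its BB1-corrected version under a systematic pulse-area error ε:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])

def pulse(area, phase):
    """Resonant two-state propagator for a pulse of given temporal area and phase."""
    gen = np.cos(phase) * SX + np.sin(phase) * SY
    return np.cos(area / 2) * np.eye(2) - 1j * np.sin(area / 2) * gen

def infidelity(U, V):
    """Trace infidelity 1 - |Tr(U^dag V)|/2, insensitive to global phases."""
    return 1 - abs(np.trace(U.conj().T @ V)) / 2

def bb1_x(eps):
    """BB1-corrected X gate with systematic pulse-area error eps.
    Applied right to left: pi_0, then pi_phi, 2pi_{3phi}, pi_phi,
    with phi = arccos(-1/4); all areas scaled by (1 + eps)."""
    phi = np.arccos(-0.25)
    s = 1 + eps
    return (pulse(np.pi * s, phi) @ pulse(2 * np.pi * s, 3 * phi)
            @ pulse(np.pi * s, phi) @ pulse(np.pi * s, 0.0))

X = pulse(np.pi, 0.0)  # target: -i sigma_x, i.e. X up to a global phase
```

At a 10% area error the composite gate's trace infidelity drops by orders of magnitude relative to the single pulse, illustrating the broadband compensation discussed above.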
Majorana. The X gate can also be produced by using the Majorana decomposition. As seen from Eq. (4), if the Cayley-Klein parameters a and b correspond to complete population transfer in the two-state system (a = 0, |b| = 1), the same is valid in the Raman system as well. Hence, we can again use the BB1 composite pulse (13). However, each of the π_{φk} pulses now corresponds to a pair of R pulses in the original basis, which yields the Majorana composite Raman X gate. We note that this composite sequence has the same total pulse area (5 × √2 π) as the N = 5 sequence (9b) used in the MS approach (2 × 5 × π/√2), and therefore we can compare the performance of the two methods, see Fig. 2. As seen in the figure, by using the CPs adapted for the Raman qubit, we obtain a robust and high-fidelity X gate in either case. We also see that the MS approach achieves a higher fidelity than the Majorana method for the same total pulse area.
Hadamard gate. It reads H = (1/√2)(σ_x + σ_z). As seen from Eq. (4), this gate cannot be generated by the Majorana decomposition, hence we use only the MS approach. It follows from Eq. (2) that for ξ0 = √(2 + √2) and ξ1 = √(2 − √2) the propagator, up to an irrelevant global sign, is the Hadamard transform of the qubit {|0⟩, |1⟩}. As for the X gate, the MS propagator corresponds to a 2π pulse, since ξ = 2. Therefore, we can use the same phases (8b) to build our composite sequence. This time, however, π_{φk} in the MS basis corresponds to the pulse pair [(ξ0 π/2)_{φk}, (ξ1 π/2)_{φk}] ≡ S_{φk} in the original basis, instead of the pair (Q⁰_{φk}, Q¹_{φk}) used for the X gate. Therefore, we can use the composite sequences for the X gate and only change the Rabi frequencies. For example, the X-gate CPs (9) are replaced by the analogous sequences of S pulses, which produce composite Raman Hadamard gates. In Fig. 4 we plot the infidelities of the composite Hadamard gates produced by sequences consisting of 2, 6, and 10 pairs of pulses.
In the presence of detuning, one can proceed in the same way as for the X gate. The 2π pulse, which produces a propagator with a = −1, is replaced by a phase gate, which produces a = −e^{iδ}; alternatively, a universal CP can be used in the case of moderate detuning. For large detuning, adiabatic elimination followed by the half-π BB1 pulse [51] can again be used.
Rotation gate. To produce composite rotation gates, we proceed as for the X and Hadamard gates. Let us set ξ0 = 2 sin(θ/2) and ξ1 = −2 cos(θ/2). This choice produces a 2π pulse in the MS basis, just as before, and the resulting propagator describes a qubit rotation, although not in the usual form e^{iθσ_y}. As an example, for a robust π/3 rotation we use a composite sequence with the same phases as in the previous two subsections, but each MS π_{φk} pulse is produced by the pair [(π/2)⁰_{φk}, (−π√3/2)¹_{φk}] in the original basis. It can be shown that the infidelity of the rotation gate does not depend on the angle θ and is therefore the same as the infidelity of the X and Hadamard gates, since these can be considered as rotations with θ = π/2 and π/4, respectively. Moreover, an analytic formula for the infidelity of the X, Hadamard, and rotation gates can be derived, which demonstrates that the robustness of each composite gate produced by the MS approach is of order O(ε^{2N}). Phase gate. It reads F = exp[iησ_z/2]. The composite version of this gate cannot be produced by the MS approach but only by the Majorana decomposition. We notice that if the propagator (3) is a phase gate with a phase η/2, then the propagator (4) is a phase gate with a phase η. Therefore, in order to produce a composite Raman phase gate we can use the available two-state CPs. A number of composite phase gates have been presented in Ref. [49] and, following the above argument, we can implement them directly for the Raman qubit. As a specific example, the sequence of two three-pulse π CPs, B3(0)B3(η/2), produces a composite Raman phase gate of phase η. This approach can be extended to arbitrarily long sequences by using longer BB CPs of Eq. (8a) with the phases (8b). In Fig. 5 the infidelity of these sequences up to N = 5 is plotted for η = π/4.
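The claim that the X, Hadamard, and rotation gates share the same composite phases and the same rms MS pulse area can be checked from the quoted coupling ratios: with ξ0 = 2 sin(θ/2) and ξ1 = −2 cos(θ/2), the rms coupling ξ = √(ξ0² + ξ1²) equals 2 for every θ (the signs and the ordering of the two couplings depend on phase conventions). A quick check:

```python
import math

def ms_couplings(theta):
    """Raman coupling amplitudes for the rotation gate of angle theta,
    as quoted in the text: xi0 = 2 sin(theta/2), xi1 = -2 cos(theta/2)."""
    return 2 * math.sin(theta / 2), -2 * math.cos(theta / 2)

# X and Hadamard correspond to rotations with theta = pi/2 and pi/4:
for name, theta in [("X", math.pi / 2), ("Hadamard", math.pi / 4)]:
    xi0, xi1 = ms_couplings(theta)
    xi = math.hypot(xi0, xi1)  # rms coupling, always 2 -> a 2pi MS pulse
    print(f"{name}: xi0 = {xi0:.4f}, xi1 = {xi1:.4f}, xi = {xi:.4f}")
```

The θ-independence of ξ is the reason the same composite phases yield the same infidelity for every rotation angle.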
This value of η corresponds to the T gate, which is widely used in quantum computing [33]. As seen from the figure, composite pulses implement robust and high-fidelity phase gates. Discussion and conclusions. In this work, we developed a systematic framework for creating robust and high-fidelity quantum gates in Raman qubits. Our approach uses composite sequences of pulse pairs and is based on two transformations, the Morris-Shore transformation and the Majorana decomposition. These allow the three-state Raman problem to be treated as a two-state system and hence to benefit from the vast number of broadband composite pulses developed for simple two-state systems. We have constructed and numerically demonstrated the X, Hadamard, rotation, and phase gates (the latter including the S, T, and Z gates). The X, Hadamard, and rotation gates, in particular, are constructed in the same manner, using composite sequences with the same phases and the same RMS pulse area of 2π for each pulse pair, but different ratios of the Raman couplings. Implementations for both on-resonance and arbitrarily detuned (small, moderate, and large detuning) ancillary middle states are presented. This general approach can easily be applied to other commonly used gates, such as the Y gate. The proposed composite Raman gates allow one to implement single-qubit operations with ultrahigh fidelity and resilience to experimental errors, as required for efficient quantum computation.
| 3,845.8 | 2020-04-27T00:00:00.000 | [ "Physics" ] |
The Quantum Critical Higgs
The appearance of the light Higgs boson at the LHC is difficult to explain, particularly in light of naturalness arguments in quantum field theory. However light scalars can appear in condensed matter systems when parameters (like the amount of doping) are tuned to a critical point. At zero temperature these quantum critical points are directly analogous to the finely tuned standard model. In this paper we explore a class of models with a Higgs near a quantum critical point that exhibits non-mean-field behavior. We discuss the parametrization of the effects of a Higgs emerging from such a critical point in terms of form factors, and present two simple realistic scenarios based on either generalized free fields or a 5D dual in AdS space. For both of these models we consider the processes $gg\to ZZ$ and $gg\to hh$, which can be used to gain information about the Higgs scaling dimension and IR transition scale from the experimental data.
Introduction
The Higgs boson mass has been measured to be around 125 GeV by the LHC experiments. The appearance of a light scalar degree of freedom is quite unusual both in particle physics and in condensed matter systems. While there is no previous particle physics precedent, some condensed matter systems can produce a light scalar by tuning parameters close to a critical value where a continuous (second-order) phase transition occurs. As the critical point is approached the correlation length diverges, which is an indication that the mass of the corresponding excitation approaches zero. At the critical point the system has an approximate scale invariance, and at low energies we will see the universal behavior of some fixed point that constitutes the low-energy effective theory.
If the system approaches a trivial fixed point then we find "mean-field" critical exponents associated with the Landau-Ginzburg effective theory, and a light scalar excitation. The Higgs sector of the standard model (SM) is, in fact, precisely analogous to a Landau-Ginzburg theory. However, if the system is in the domain of attraction of a non-trivial fixed point then we find non-trivial critical exponents, and potentially no simple particle description.
Phase transitions that occur at zero temperature as some other parameter is varied are referred to as quantum phase transitions (QPTs), since quantum fluctuations dominate over the more usual thermal fluctuations (see e.g. [1] and references therein), and this is the case of interest for particle physics. Experimentally we know that the Higgs is much lighter than our theoretical expectations.
In the SM, varying the Higgs mass parameter in the Lagrangian provides a continuous phase transition where the physical Higgs mass (and VEV) goes through zero; therefore in the SM we are extremely close to the quantum critical point [2] of a QPT with mean-field behavior. Indeed, if the SM is correct up to the Planck scale, then a change of 1 part in 10^28 can push us through the phase transition, so we are very, very close to the critical point. If there is new physics beyond the SM, then the relevant questions are: "Does the underlying theory also have a QPT?" and "If so, is it more interesting than mean-field theory?" At a QPT the approximately scale-invariant theory is characterized by the scaling dimensions, ∆, of the gauge-invariant operators. In the SM we have only small, perturbative corrections to the dimensions of Higgs operators: ∆ = 1 + O(α/4π), corresponding to mean-field behavior. The purpose of this paper is to present a general class of theories describing a Higgs field near a non-mean-field QPT and to explore the observable consequences. In such theories, in addition to the pole corresponding to the recently discovered Higgs boson, there can also be a Higgs continuum, which could potentially start not far above the Higgs mass. The continuum represents additional states associated with the dynamics underlying the QPT, which we assume is described by a strongly coupled conformal field theory (CFT). The Higgs field can create all of these states, both the pole and the continuum. The pole itself could be just an elementary scalar that mixes with some states from the CFT; in this case the hierarchy problem is just like in the SM. Another interesting possibility is that the pole is a composite bound state of the CFT, similarly to composite Higgs models [6]. Most of the discussion here will be general and will not distinguish between these cases.
One result of the presence of the continuum will be the appearance of form factors in couplings of the Higgs to the SM particles. Furthermore, associated with the dynamics of the non-trivial fixed point there will generically be extra states, which will decouple from low energies at some cutoff scale. Depending on how close these states are to the electroweak scale, their effects on the effective theory will be more or less important. This is just as in the SM supplemented by higher dimensional operators, with the extra information that the states of the QPT are expected to couple strongly to the Higgs field and, because of the Higgs' large anomalous dimension, a generic operator with a given number of Higgs insertions will be more irrelevant than the analog one in the weakly coupled case.
The phenomenology of these models will share some features with scenarios where the Higgs is involved with a conformal sector [7][8][9][10][11], however here we will try to formulate a general low-energy effective theory consistent with a QPT and no new massless particles.
As in the SM, our effective theory will not allow us to address the question of how Nature ends up tuned close to the QPT critical point. However, since we seem to be near such a critical point, it is worth considering what effective theories can accommodate a light Higgs and still offer phenomenology distinct from the usual perturbative Higgs models. This is the case of a quantum critical Higgs (QCH) that we will explore in this paper.
The paper is organized as follows: we first discuss the general phenomenology of Higgs form factors, we then present two general classes of models for a Higgs near a QPT, and finally we discuss observable signals at the LHC that can allow us to extract the Higgs scaling dimension and IR transition scale.
Form Factors for Higgs Phenomenology
Our ultimate goal is to investigate scenarios where the Higgs is partially embedded into a strongly coupled sector. We envision that such a sector is approximately conformal at scales well above the weak scale, as expected at a quantum critical point. This would allow the type of scenario outlined in the Introduction: the Higgs has a significant anomalous dimension and a continuum contribution to n-point functions. In order to understand what types of new physics effects could appear, we will first present a model-independent parametrization, based on form factors, of the various amplitudes controlling the main Higgs production and decay processes at the LHC. We will assume that the SM fermions, the massless gauge bosons, and the transverse parts of the W and Z are external to the CFT, or in other words, they are elementary states, while the Higgs (along with the Goldstone bosons associated with the longitudinal components of the W and Z) originates from or is mixed with the strong sector, corresponding to a theory with spontaneously or explicitly broken conformal symmetry. Note that in the case of a fully spontaneous breaking of conformal symmetry an additional massless scalar called the dilaton emerges as the Goldstone-boson for broken scale invariance. The idea that the Higgs pole at 125 GeV itself could be a dilaton has been entertained previously in [12][13][14][15]. However, for realistic non-supersymmetric examples one also needs an explicit breaking of scale invariance to stabilize the symmetry breaking minimum, which generically pushes the dilaton mass to high values [13,14]. We will not consider the case of a Higgs-like dilaton pole at 125 GeV in this paper. This strong CFT sector is characterized by its n-point functions, which will be denoted by blobs in the diagrams below. The scale of conformal symmetry breaking will be parametrized by a single parameter µ. 
For Higgs decays to fermions (i.e. h → bb̄ and ττ), the coupling hff is modified by a form factor, where µ represents the parametric dependence on the scale of conformal symmetry breaking and p_{1,2} are the four-momenta of the two external fermions. In general, a form factor involving n fields depends on n − 1 independent four-momenta, which correspond to n(n − 1)/2 Lorentz invariants. With the three external particles on-shell, the three Lorentz invariants are completely fixed, hence this form factor is simply an effective coupling constant, with p1 · p2 = m_h²/2. For production of the Higgs via gluon fusion, there are modifications due to the non-perturbative sector coupled to the top quark, where p_{1,2} are the (incoming) four-momenta of the two external gluons and ε_{1,2} are their polarization vectors. The restriction to on-shell external states for the Higgs and the gluons again implies that the form factor is simply an effective coupling constant.
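The statement that the on-shell three-point form factor collapses to a constant can be checked numerically; the momenta below are illustrative (Higgs rest frame, fermions treated as massless):

```python
import numpy as np

def minkowski_dot(p, q):
    """Four-vector product with mostly-minus metric: p.q = p0*q0 - p_vec.q_vec."""
    return p[0] * q[0] - np.dot(p[1:], q[1:])

mh = 125.0  # GeV, illustrative
# Higgs at rest decaying to two back-to-back (approximately) massless fermions:
p1 = np.array([mh / 2, 0.0, 0.0,  mh / 2])
p2 = np.array([mh / 2, 0.0, 0.0, -mh / 2])
# (p1 + p2)^2 = mh^2 together with p1^2 = p2^2 = 0 forces p1.p2 = mh^2 / 2,
# so all invariants are fixed and the form factor is a pure number.
```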
There is an analogous formula for the form factor of the Higgs interaction with two photons. However, several contributions, such as the loop of vector bosons, change its value relative to the hgg factor. Of particular interest is the last class of diagrams, where electrically charged states in the strong sector contribute to the low-energy hγγ interaction. This type of contribution is always present when there are states in the non-perturbative sector carrying charges under the SM SU(2)_L × U(1)_Y gauge group, as when the Higgs doublet operator mixes with or emerges from the strong dynamics.
Once again, restriction to on-shell states reduces this form factor to an effective coupling constant.
Off-shell behavior: Momentum-dependent form factors
The previous form factors encode information about the new dynamics beyond the SM. Scaling dimensions of operators and n-point correlators in the strongly coupled sector enter into the corrections to the effective coupling constants, which can be measured at collider experiments. However, in order to fully probe the nature of the quantum critical point we need to uncover the momentum dependence of the form factors, i.e. when the Higgs is off-shell. Next we will outline the general structure of these off-shell observables.
The amplitude for the production of the Higgs through the fusion of massive vector bosons (VBF) has five independent contributions with form factors Γ_i, where p_{1,2} are the (incoming) four-momenta of the two vector bosons V = {W, Z}, and the on-shell Higgs condition fixes p1² + p2² = m_h² − 2 p1 · p2. The fermionic currents J_{1,2} contain the polarization states for the external fermions, with the appropriate Dirac structures that couple to the internal W and Z propagators, G_V(p_i). N_V is an overall normalization set to the SM value. Because of Bose symmetry, the Γ_i form factors are symmetric under the exchange p1 ↔ p2, except Γ_4, which is anti-symmetric. The gauge boson propagators include contributions from the CFT, which are determined to leading order once the Higgs two-point function is known. Typically the W's or Z's are off-shell; the on-shell limit (a.k.a. the effective W limit) is relevant for high momenta, where the Higgs is far off-shell. The Γ_1 form factor is the only one that appears at tree level in the SM, where Γ_{i≠1} = 0. The form factor Γ_2 is singled out as the term transverse to the momenta p1 and p2; thus it is generated by operators involving transverse polarizations.
For Higgsstrahlung, where an off-shell electroweak vector boson produced via Drell-Yan radiates an on-shell Higgs and a massive vector, the amplitude involves the same form factors (up to p2 → −p2) as VBF, since the two processes are related by crossing symmetry. Here J_1 is the current of initial-state fermions and ε_2 is the vector boson polarization vector, and we used ε_2 · p2 = 0. Therefore, Higgsstrahlung contains three form factors, in agreement with [16]. Note that p2² = m_V² and −2 p1 · p2 = m_h² − m_V² − p1². Double Higgs production involves two form factors, where p_{1,2} and p_{3,4} are respectively the four-momenta of the two gluons (incoming) and of the two Higgses (outgoing). Note that p1 · p2 = s/2 and p1 · p3 = (m_h² − t)/2. Because of Bose symmetry between the gluons, Ξ_i(p1 · p2, p1 · p3; µ) = Ξ_i(p1 · p2, p2 · p3; µ). The two different structures are generated already in the SM, although Ξ_2 is suppressed in the large top-mass limit [17].
The last and perhaps most promising case is $gg \to VV$, in particular when $V = Z \to \ell^+\ell^-$: where $p_{1,2}$ are the four-momenta of the two gluons, with polarizations $\epsilon_{1,2}$, and $p_{3,4}$ are the four-momenta of the two massive electroweak vectors, with $\epsilon_{3,4}$ their polarizations. Note that $p_1 \cdot p_2 = s/2$ and $p_1 \cdot p_3 = (m_V^2 - t)/2$. The Latin indices run over a set of three independent particle momenta (e.g. 1 to 3). We have used a slightly redundant notation: some of the form factors vanish identically because of the anti-symmetry of the tensor, $\Theta^{1j}_{2,3} = \Theta^{i2}_{2,3} = 0$; moreover, because of Bose symmetry between the gluons, $\Theta_{3,5}$ can be expressed in terms of $\Theta_{2,4}$, respectively. Analogous Bose symmetry constraints apply to the other $\Theta$'s as well. For recent analyses of this process at the LHC, in the limit $p^2 \ll \mu^2$ where the resulting effective theory is the SM perturbed by higher-dimensional operators, see e.g. [18].
Eventually, with a large integrated luminosity, we will be able to probe $VV$ scattering, $V = \{W, Z\}$. In general there are many form factors in this channel, even restricting to on-shell $V$'s. Likewise for Higgs production in association with a top quark pair, either $gg$- or $q\bar{q}$-initiated. We leave the analysis of these form factors for future work.
Estimation of form factors: A case for a simple parametrization of the leading Higgs processes
Next we will argue that most form factors remain small and can be estimated using insertions of the smallest $n$-point functions, even though the Higgs is (partly) embedded in a strongly coupled sector. In the framework of a QCH we assume that the corrections to Higgs observables originate from a strongly coupled sector that plays some role in electroweak symmetry breaking, and perhaps produces the light Higgs boson resonance (possibly along with a scalar continuum, as we will discuss later). In this paper we assume all other SM fields are external to this nonperturbative sector, with the Higgs acting as a portal to the strong dynamics. Naive dimensional analysis (NDA) provides a guide for our expectations of the size of the form factors discussed above.
We begin our estimates from an effective field theory perspective, where the only light degree of freedom surviving from the strongly coupled sector (below the scale $\mu$) is the Higgs particle. In this limit, the form factors are expected to have asymptoted to their zero-momentum values. In this case, we can estimate the size of the $n$-point Higgs correlators by considering the effect of loops on their renormalization. In NDA the loop corrections should be roughly of the same size as the original $n$-point function. Following the rules of NDA, assuming that the $n$-point function is described by a coupling $\alpha_n$, a typical loop contribution with two insertions of this operator that contributes to the same $n$-point amplitude would be one in which each vertex has $n/2$ external lines, and $n/2$ propagators are exchanged in $n/2 - 1$ loops that are cut off at the scale $\mu$. In this case, the quantum correction to $\alpha_n$ is expected to be roughly $\delta\alpha_n \sim \alpha_n^2 / (16\pi^2)^{n/2-1}$. For this correction to be comparable to the initial coupling we must have $\alpha_n \sim (16\pi^2)^{n/2-1}$.
With this NDA estimate of the $n$-point amplitude in hand, we can see that the $n$-point contribution to, e.g., the gluon fusion process is suppressed by insertions of the perturbative coupling of the top quark to the strongly coupled sector, along with a loop factor that is only partially cancelled by the large coefficient $\alpha_n$.
If the shaded region corresponds to the $n$-point function, there are $n-1$ insertions of the top Yukawa, and $n-2$ loops. There are $n-1$ scalar propagators and $n-2$ fermionic propagators running in these loops. Computing the loops with a hard cutoff at the scale $\mu$ in Eq. (12) yields an estimate for the contribution of the $n$-point correlator to the $ht\bar{t}$ coupling: This is one of the crucial results of this paper, one that allows us to use very simple parametrizations to estimate the leading corrections to Higgs processes in QCH models. This NDA does not rely on whether or not the strong sector is conformal. What this means is that the $n$-point amplitude contributions are increasingly suppressed by perturbative loop factors for increasing $n$, and therefore the leading contribution to the form factor in this case will be due to the Higgs two-point function (in the electroweak broken phase). In this case, the dominant contribution to the form factor is "tree-level", and involves only a single insertion of the top Yukawa coupling along with the full nonperturbative two-point function of the Higgs. For example, non-standard momentum-dependent effects in double Higgs production through gluon fusion would be dominated by the following diagram, which involves the $ggh$ form factor in Eq. (2), now with the Higgs off-shell ($p_1 \cdot p_2 = s/2$), and the trilinear Higgs form factor $F_{hhh}$: where $G(p_1 + p_2)$ is the Higgs propagator. We emphasize again that this result is based on the crucial (but reasonable) assumption of the applicability of our NDA to estimate contributions from the strong sector.
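The parametric size of the elided estimate follows from combining the NDA value $\alpha_n \sim (16\pi^2)^{n/2-1}$ with the $n-1$ Yukawa insertions and $n-2$ loops just counted. The following is a sketch of that counting, with $\mathcal{O}(1)$ factors and the powers of $\mu$ that restore dimensions suppressed:

```latex
\delta\!\left(h t\bar t\right)
\;\sim\;
\lambda_t^{\,n-1}
\left(\frac{1}{16\pi^2}\right)^{n-2}
\alpha_n
\;\sim\;
\lambda_t \left(\frac{\lambda_t}{4\pi}\right)^{n-2}
```

This is consistent with the statement below that higher-point correlators correct the on-shell $ggh$ form factor only at $\mathcal{O}(\lambda_t/4\pi)$.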
The same power counting implies that the form factor $F_{ggh}$ with the Higgs on-shell reduces to the SM value up to corrections of $\mathcal{O}(\lambda_t/4\pi)$ associated with higher-point correlation functions, and up to corrections of $\mathcal{O}(m_h^2/\mu^2)$ due to the non-trivial momentum dependence of such higher-point correlators.
Analogously, the strong sector's contribution to the $gg \to VV$ amplitude is dominated by the following diagram with the associated form factor: Finally, the new physics effects in $VV$ scattering factorize into the product of the VBF form factors.
The Higgs contribution to $VV$ scattering is singled out, even off-shell, and the $s$-channel Higgs amplitude is given by: This probes both the Higgs propagator and the two gauge boson form factors off-shell.
Let us comment on the UV scaling behavior of the form factors for the case of a strong sector that is conformal at high energies. For large momenta, the Higgs propagator scales as $p^{2\Delta-4}$, where $\Delta$ is the dimension of the Higgs operator. The amputated form factors $F_{h \cdots h}$ scale asymptotically as $p^{4-n\Delta}\, v^{m\Delta}$, where $n$ is the number of legs with large momenta while $m$ is the number of VEV insertions at zero momentum. For example, $F_{hhh}$ in Eq. (14) requires one VEV and three Higgs legs, so that $F_{hhh} \sim p^{4-3\Delta} v^{\Delta}$. Analogously, the form factor $F^{\mu\nu}_{VVh}\,\epsilon_\mu \epsilon_\nu$ scales at large momenta as $F_{hhh}$ because of the equivalence theorem that relates the Goldstone bosons inside the Higgs field to the longitudinal polarizations of $V$. For the transverse polarizations, the estimate of the asymptotic behavior involves instead the insertion of the weakly gauged conserved currents of the CFT, schematically $H^\dagger \cdots H J^\mu J^\nu$, so that the scaling of the amputated form factor is $p^{2-n\Delta}\, v^{m\Delta}$. We stress that our power counting implicitly assumes that the IR corrections from insertions of several Higgs VEVs are suppressed, such that the $n$-point function with the least number of critical Higgses (but with an unsuppressed number of derivatives) already captures the leading contribution to the form factors. Such a scaling is realized e.g. in the power counting of [19], where operators with fewer derivatives are suppressed due to a shift symmetry $h \to h + c$.
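The claimed large-momentum scaling of the propagator, $G \sim p^{2\Delta-4}$, can be checked numerically. The functional form used below, $(\mu^2 - p^2)^{2-\Delta}$ with the pole value subtracted, is an assumption based on the generalized-free-field construction discussed later in the text, not a transcription of the paper's equations; the check extracts the log-log slope of $|G|$ at large spacelike momenta.

```python
import numpy as np

def gapped_propagator(p2, mh2=125.0**2, mu2=400.0**2, delta=1.3):
    """Assumed pole-plus-cut propagator: pole at p2 = mh2, cut above mu2,
    residue at the pole normalized to 1.  (The +0j keeps numpy on the
    principal complex branch for spacelike momenta.)"""
    Z = (2 - delta) * (mu2 - mh2) ** (1 - delta)
    return -1j * Z / ((mu2 - p2 + 0j) ** (2 - delta)
                      - (mu2 - mh2) ** (2 - delta))

# Log-log slope of |G| between two large spacelike momentum points:
q2_lo, q2_hi = 1e8, 1e12
slope = (np.log(abs(gapped_propagator(-q2_hi)) / abs(gapped_propagator(-q2_lo)))
         / np.log(q2_hi / q2_lo))
# slope approaches delta - 2 = -0.7 for the default delta = 1.3
```

For $\Delta \to 1$ the same expression collapses to the ordinary massive propagator, so the UV scaling smoothly interpolates to the SM $1/p^2$ behavior.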
We emphasize that these results depend crucially on how the SM fields are embedded into the (strongly interacting) conformal sector. If the top quark were part of the strong dynamics, as is often the case in extensions of the Standard Model, then the higher-point correlation functions would be expected to give contributions that are unsuppressed relative to the leading term in the $\lambda_t$ expansion. In this case, all $n$-point correlators are important for estimating the form factors. In fact, this is the case for the longitudinal $W^\pm$ bosons, which are part of the strong sector (along with the Higgs and the longitudinal $Z$) and may hence strongly affect the rate of $h \to \gamma\gamma$. Of course, the argument above is model dependent. For example, in composite Higgs models where the Higgs emerges as a Goldstone boson, corrections to $h \to \gamma\gamma$ (and $gg \to h$) are protected by the associated Goldstone symmetry. Such a symmetry is preserved by the strong sector, and therefore the generation of terms such as $B_{\mu\nu}^2 |H|^2$ (and $G_{\mu\nu}^2 |H|^2$) is suppressed by insertions of the explicit breaking parameters [20]. In the next section we present a different class of strongly interacting theories that can reproduce the SM predictions at low energies, e.g. in $h \to \gamma\gamma$, even though no global symmetry is at work. Rather than invoking a symmetry, we consider generic theories that are perturbations around generalized free fields [21]. For example, below we consider a strong sector where only the $n = 2$ correlators are non-vanishing, and where higher-point correlators are present only due to the perturbative SM couplings. Another example we consider is one in which higher $n$-point correlators are suppressed because a large-$N$ expansion is possible in the strong sector. In either case, these benchmark theories admit a perturbative expansion, where the smallness of the SM couplings or of $1/N$ suppresses deviations from the new physics sector.
Effectively, the scale suppressing operators associated with higher n-point functions is larger than the scale that controls the two-point function. Hereafter, this latter scale alone will be denoted as µ.
Modeling the Quantum Critical Higgs
The range of possible phenomena associated with generic models of electroweak quantum criticality is large, with only a few constraints imposed by consistency of the theory (e.g. unitarity bounds) on the form factors discussed above. To make concrete predictions we need to make some additional assumptions about the QPT. In this section we present two such models: one based on generalized free fields, and a second based on a 5D realization of those using the AdS/CFT correspondence. These models illustrate the general arguments presented in Sec. 2.3 and serve as toy models for performing concrete calculations in the upcoming section on LHC phenomenology. While not forbidden by experimental results, strong self-interactions in the low-energy effective theory for the Higgs would hinder our ability to make quantitative statements. Of course there is no requirement that strongly coupled dynamics produce a strongly coupled effective theory; in fact there are many counterexamples where a strongly coupled theory produces a weakly coupled effective theory for the low-energy degrees of freedom: Seiberg duality, the $\rho$ meson at large $N$, and the AdS/CFT correspondence. In the latter example, assuming that the low-energy composites of the broken CFT have weak interactions is tantamount to assuming that there is a weakly coupled AdS dual. In the strict large-$N$ limit there are no bulk interactions in the AdS dual, but for large yet finite $N$ we expect that there are perturbative bulk interactions that are inherited by the 4D low-energy effective theory.
The infinite-$N$ limit in AdS/CFT yields a subclass of models that generalize to a broader category of strongly coupled theories: it is possible that the strongly coupled sector is completely specified by its two-point functions, with or without a large-$N$ expansion. Such theories are referred to as models of generalized free fields [21]. Weakly coupling a fundamental light Higgs to such a theory would produce the type of dynamics we are interested in. Of course, such a construction would not on its own resolve the hierarchy problem; however, our motivation is to explore the possible variations of Higgs phenomenology rather than to solve the hierarchy problem.
Generalized free field theory
In theories where $n$-point functions with $n > 2$ vanish, one obtains what is called a "generalized free field theory" [21]. For a more recent discussion of generalized free fields see e.g. [22] and references therein. Since the theory is quadratic in this case, a 1PI effective Lagrangian density can be constructed whose path integral generates the two-point functions of the theory. As an example, we consider an unbroken CFT with a scalar operator $h$ with scaling dimension $\Delta$. The two-point function is then fixed by conformal invariance, and a Lagrangian can be written down that reproduces it. Phenomenological constraints suggest that if there is a strongly coupled sector mixing with the Higgs, then there must either be a gap, or the mixing must be highly suppressed. A simple IR deformation of the above Lagrangian provides a two-point function that features a gap, yet reduces to conformal behavior at high momentum. Here $\mu$ reduces to a mass term (a pole in the two-point function) as $\Delta$ goes to 1, but for other values of $\Delta$ it represents the beginning of a cut. The $\mu$ term gives a contribution to the potential energy (the $p \to 0$ limit) and removes the massless degrees of freedom. In terms of a fundamental CFT description this would correspond to the continuum shifted to start at $p^2 = \mu^2$ rather than at $p^2 = 0$. There are other possibilities [11,24] for the structure of the above quadratic Lagrangian that correspond to differently shaped spectral density functions. The different shapes correspond to different ways in which the behavior of the theory makes the transition from the IR, where conformal symmetry is broken, to the UV, where it is restored. For the sake of simplicity we will use examples based on this simple model, but we want to emphasize again that this is not a necessary or unique choice.
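The elided expressions take a simple schematic form. The following is a hedged sketch, assuming the standard generalized-free-field conventions (overall normalizations are not fixed here): the conformal two-point function, the quadratic Lagrangian that reproduces it, and the gapped IR deformation:

```latex
\langle h(p)\, h(-p)\rangle \;\propto\; \frac{i}{\left(-p^2\right)^{2-\Delta}},
\qquad
\mathcal{L} \;\propto\; -\,h \left(-\partial^2\right)^{2-\Delta} h,
\qquad
\mathcal{L}_{\rm gapped} \;\propto\; -\,h \left(\mu^2 - \partial^2\right)^{2-\Delta} h .
```

For $\Delta \to 1$ the gapped kinetic operator reduces to $\mu^2 - \partial^2$, i.e. an ordinary mass term, while for $1 < \Delta < 2$ the two-point function instead develops a branch cut starting at $p^2 = \mu^2$, as described above.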
The appearance of a continuum in conformal theories is generic; such a CFT continuum does not admit, generically, an interpretation in terms of weakly interacting multiparticle states. Whether or not this continuum survives the spontaneous breaking of the conformal symmetry (or becomes a discretuum or a continuum with a mass gap) depends strongly on the mechanism of conformality breaking and the corresponding CFT dynamics. We will see in the section below that for CFT's with AdS duals a continuum theory generically corresponds to soft-wall type constructions, even though soft walls may also support a mass gap. Hard walls correspond to a discretuum, as in the original RS models, although finite coupling quantum effects in the 5D theory restore the continuum at high scales (where the mode separation becomes small in comparison with the widths).
In general, the two-point function can be formulated in terms of a spectral density function. Of course, with the discovery of the Higgs particle, we insist that the spectral density include at least a pole at 125 GeV, with features that closely resemble those of the SM Higgs. A general two-point function with a pole at $m_h^2$ and a cut beginning at the scale $\mu$ can be written down, and a simple Lagrangian yields a two-point function of this form, where the physical pole mass is $m_h \approx 125$ GeV, and the quadratic terms in Eq. (21) are obtained by expanding the Higgs potential around its minimum, such that $\langle h \rangle = 0$, $h$ being the real part of the fluctuation around the VEV. Note that while the scaling dimension of $h$ is $\Delta$, its engineering dimension is 1. Since $h$ is assumed to be weakly coupled, the scaling dimension of $h^2$ is $2\Delta$ to first order, so the bounds of [25] are not relevant. Perturbative corrections will also give additional contributions to the spectral density, shifting both the location of the branch cut and the overall shape. These effects, though, are both sub-dominant, and we neglect them in our estimation of form factors. It is convenient to work with a canonically normalized Higgs field. To achieve this we set the residue of the pole to 1 by an appropriate choice of normalization, after which the propagator for the physical Higgs scalar takes a simple closed form. This type of propagator has been studied in a variety of papers [7,26,27], including its AdS$_5$ description [28,29].
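A numerical sketch of such a propagator follows. The specific functional form below, with a pole at $m_h^2$, a cut above $\mu^2$, and residue normalized to 1, is an assumption consistent with the generalized-free-field construction described above, not a transcription of the paper's Eq. (23):

```python
import numpy as np

def qch_propagator(p2, mh2=125.0**2, mu2=400.0**2, delta=1.3):
    """Gapped 'quantum critical' Higgs propagator sketch.

    Pole at p2 = mh2, branch cut starting at p2 = mu2; the constant Z
    is chosen so that the residue at the pole equals 1.  The functional
    form is an assumption (see the lead-in text)."""
    p2 = np.asarray(p2, dtype=complex)
    Z = (2 - delta) * (mu2 - mh2) ** (1 - delta)
    return -1j * Z / ((mu2 - p2) ** (2 - delta) - (mu2 - mh2) ** (2 - delta))

def sm_propagator(p2, mh2=125.0**2):
    """Standard Model Higgs propagator (zero width) for comparison."""
    return 1j / (p2 - mh2)
```

For $\Delta \to 1$ the sketch collapses exactly to the SM propagator, and near the pole $(p^2 - m_h^2)\,G(p^2) \to i$, which are the two limits the construction in the text requires.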
Besides modified propagators, the Higgs Lagrangian Eq. (21) also modifies the Higgs interactions. While a detailed derivation of the bounds on the parameters $\mu$, $\Delta$ is beyond the scope of this paper, one can get a good idea of how weakly constrained these parameters are by going to the limit of large $\mu$ compared to the momentum scales relevant for a particular process, and expanding the Lagrangian Eq. (21) in powers of $p^2/\mu^2$. The leading operator (after rescaling the Higgs doublet to have a canonical kinetic term) will be a dimension-six Higgs operator whose coefficient, as expected, vanishes both for $\Delta \to 1$ and $\mu \to \infty$. Using the equations of motion (or field redefinitions) we can gain more insight into the effects of this operator: it will induce 4-Fermi operators, which are however strongly suppressed by the SM Yukawa couplings, and a modification of the Higgs potential (which is very weakly constrained by current data), while the leading effect will be a modification of the Yukawa couplings, where $Y$ are the Yukawa couplings and $V_H$ is the Higgs potential. This will then give rise to a correction to the top Yukawa coupling, which will in turn modify the Higgs production rate via gluon fusion. The resulting correction (expressed in terms of the physical Higgs mass), together with the experimental limit at 1$\sigma$ CL obtained from [31] assuming no direct new-physics contributions to the Higgs coupling to gluons, leads to an experimental bound on the parameters, $\mu/\sqrt{\Delta - 1} \gtrsim 335$ GeV. Going beyond the quadratic terms we can also include small Higgs self-interactions with their associated form factors, but these are model dependent in that they require information beyond simply the two-point function. The simplest models that provide this kind of detail come from the AdS/CFT correspondence, which we discuss in what follows.
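The structure of this expansion can be checked symbolically. The sketch below expands a kinetic operator of the assumed form $(\mu^2 - p^2)^{2-\Delta}$ in powers of $p^2/\mu^2$ and confirms that the coefficient of the dimension-six ($p^4$) term, relative to the kinetic ($p^2$) term, is proportional to $(\Delta - 1)/\mu^2$, vanishing in both limits quoted above. The specific operator form is an assumption based on the generalized-free-field Lagrangian, not the paper's exact expression:

```python
import sympy as sp

p2, mu2, Delta = sp.symbols('p2 mu2 Delta', positive=True)

# Assumed quadratic kinetic operator of the gapped generalized free field
kinetic = (mu2 - p2) ** (2 - Delta)

# Expand in powers of p2 around p2 = 0 (i.e. in p^2/mu^2)
expansion = sp.series(kinetic, p2, 0, 3).removeO()
c_kin = expansion.coeff(p2, 1)   # coefficient of the p^2 (kinetic) term
c_dim6 = expansion.coeff(p2, 2)  # coefficient of the p^4 (dimension-6) term

# Relative size of the dimension-6 operator after canonical normalization:
ratio = sp.simplify(c_dim6 / c_kin)   # -> (Delta - 1) / (2*mu2)
```

The ratio vanishes at $\Delta = 1$ and falls off as $1/\mu^2$, which is why the experimental constraint is naturally expressed in terms of the single combination $\mu/\sqrt{\Delta - 1}$.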
Generalized free fields and AdS/CFT
If we insist that the dynamics is conformal in the UV, as in the examples above, then the high-momentum behavior of all such two-point functions is purely a function of momentum and the scaling dimensions of the fields. In the IR, where the conformal dynamics is presumed to be broken to produce a gap, the specifics of the breaking determine how the theory transitions from SM-like mean-field behavior to exhibiting sensitivity to the scaling dimensions associated with the strongly coupled conformal theory.
The AdS/CFT correspondence [32] offers a framework in which the strongly coupled CFT with generalized free field behavior, its perturbative coupling to the fundamental fields of the SM, and its breaking can all be understood in terms of a weakly coupled 5D theory. We consider a class of 5D models in which a scalar field carrying the same quantum numbers as the Higgs propagates in the bulk of the extra dimension. 5D gauge fields are required for consistency, so that local gauge transformations of the bulk Higgs are appropriately compensated. A soft wall is included to truncate the extra dimension, producing a gap in the spectrum near the TeV scale. The rest of the SM fields are taken to be localized on the UV brane. In general, these can propagate in the bulk as well, but we make the simplifying assumption that only the minimal bulk field content is added to generate non-trivial behavior for the Higgs QPT.
The 5D theory that we consider has the following action: An $SO(4)$ global symmetry is gauged in the bulk, introduced in order to preserve the custodial $SU(2)_L \times SU(2)_R$ symmetry [33] of the SM. Smaller groups are possible, but are difficult to reconcile with electroweak precision constraints without a large separation of scales. The electroweak singlet $\phi$ is a background field whose expectation value determines the bulk Higgs mass, and whose profile determines the properties of the soft wall and the associated gap. The absence of $\mathcal{L}_{\rm int}$, with terms higher than quadratic, in the 5D description would correspond to the generalized free field limit.
Their inclusion generates form factors for non-trivial ($n > 2$)-point correlation functions.
Via the AdS/CFT correspondence, this action, with a constant background field $\phi = m^2$ and neglecting $\mathcal{L}_{\rm int}$, encodes the physics of a large-$N$ 4D strongly coupled CFT containing a scalar operator with a scaling dimension given by $\Delta_\pm = 2 \pm \sqrt{4 + m^2 R^2}$. The 5D gauge fields correspond to the global symmetries of the approximate CFT. At a minimum, it must contain the global symmetries that are gauged in the SM, and phenomenological viability typically forces invariance under custodial $SU(2)_L \times SU(2)_R$. Since we are interested in fields with dimensions $\Delta < 2$, we need to choose boundary conditions [28,34] that project out the solution with the larger root in Eq. (27), which results in the boundary value ($H_0$) of the bulk field ($H$) playing the role of the 4D effective field, rather than the source of the CFT operator as it does when $\Delta > 2$.
For the metric, $g$, we presume the space is asymptotically AdS in a region of the space $z \sim R$. Deviations from AdS grow with increasing $z$, forcing a finite size for the extra dimension and a resulting mass gap for the 5D modes. The precise details of these deformations of AdS determine the spectrum: whether there is a discretuum or a continuum, and the detailed shape of the spectral density. A simple classification of the characteristics of such spacetimes has been given in [29] for the case when the metric is modified by an additional overall soft-wall factor, $ds^2 = a(z)^2 (\eta_{\mu\nu} dx^\mu dx^\nu - dz^2)$, and a bulk Higgs potential $V(H)$ is included. The Higgs spectrum is determined by a Schrödinger-type equation [29] with potential $\hat{V}(z) = \frac{3}{2}\frac{a''}{a} + \frac{3}{4}\frac{a'^2}{a^2} + \hat{M}^2(z)$, where $\hat{M}^2 = a^2 R\, V''(H)$. It was found that the asymptotic behavior of the potential determines the qualitative features of the Higgs spectrum. If $\hat{V}(z) \to \infty$ for $z \to \infty$ there is a discretuum, which is the case for all hard walls as well as soft walls where the warp factor decays sufficiently fast (i.e. $a \sim e^{-(\rho z)^\alpha}$, with $\alpha > 1$). A continuum without a mass gap is obtained for cases where $\hat{V}(z) \to 0$ for $z \to \infty$, as for AdS without an IR brane. Finally, the case of interest here is where a continuum appears with a mass gap $\mu$. This corresponds to $\hat{V}(z) \to \mu^2$ for $z \to \infty$.
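This classification can be illustrated numerically. The sketch below evaluates $\hat{V} = \frac{3}{2}\frac{a''}{a} + \frac{3}{4}\frac{a'^2}{a^2}$ (with the bulk mass term set to zero and $R = 1$, both simplifying assumptions) for three warp factors, using log-derivatives for numerical stability, and reproduces the three asymptotic behaviors described above. The soft-wall gap value $\frac{9}{4}\rho^2$ that emerges for $a = (R/z)\,e^{-\rho z}$ is a property of this illustrative warp, not a normalization taken from the text:

```python
import math

def schrodinger_potential(log_a, z, dz=1e-3):
    """V(z) = (3/2) a''/a + (3/4) (a'/a)^2, computed from w = log a via
    the identity V = (3/2) w'' + (9/4) w'^2 (bulk mass term omitted)."""
    w1 = (log_a(z + dz) - log_a(z - dz)) / (2 * dz)
    w2 = (log_a(z + dz) - 2 * log_a(z) + log_a(z - dz)) / dz**2
    return 1.5 * w2 + 2.25 * w1**2

# Three warp factors, expressed as log a(z) with R = 1:
ads = lambda z: math.log(1.0 / z)           # pure AdS
soft = lambda z: math.log(1.0 / z) - z      # exponential soft wall, rho = 1
hard = lambda z: math.log(1.0 / z) - z**2   # Gaussian decay (alpha = 2)

# Asymptotics as z -> infinity:
#   AdS                -> 0            (gapless continuum)
#   exponential wall   -> (9/4) rho^2  (continuum above a mass gap)
#   faster-than-exp.   -> infinity     (discretuum)
```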
The example corresponding to this case, which will be our canonical example for the AdS dual of the QCH, is given in Eq. (30). It would be very interesting to understand which 4D CFTs, and which deformations of them, correspond to such AdS duals, as well as how generic the case of a mass gap with a continuum is. These problems, however, are beyond the scope of this paper.
In the following we will focus on the case of Eq. (30). By integrating over the bulk, using the solutions of the bulk equation of motion, and rescaling by a factor to convert from 5D to 4D normalization, we obtain a 4D boundary effective theory for $H$. Generically, with a bulk potential, $H$ has a VEV, which we shift away as usual, working in the unitary gauge. With the appropriate background field, $\phi(z)$, we can reproduce Eq. (19) [28,29]. For the soft-wall metric Eq. (30) this corresponds to a profile determined by $\nu = \sqrt{4 + m^2 R^2}$. In this case the normalized boundary-to-boundary propagator is given in terms of the modified Bessel function $K_\alpha$. In the limit $pR,\ \mu R \ll 1$, Eq. (34) reduces to Eq. (19). The brane-to-bulk propagator, relevant for computing $n$-point correlators due to bulk interactions, takes a similar form. Different background fields will naturally yield different two-point correlators and different effective actions, corresponding to different models of IR breaking of conformality. We choose this background as it results in an analytic two-point function, thus making the following discussion as transparent as possible.
To obtain the appropriate Higgs VEV, a bulk potential $V(H)$ must be included in $\mathcal{L}_{\rm int}$; other operators are allowed as well. Once the two-point function is known, gauge invariance fixes the gauge interactions required by minimal coupling [7], i.e. the gauge interactions that saturate the Ward-Takahashi identities.
To obtain more general form factors we can include gauge-invariant higher-dimension operators in $\mathcal{L}_{\rm int}$. For example, if we include a higher-dimension bulk operator that couples two gauge field strengths to the bulk Higgs, then we will have the corresponding 4D interaction in the 1PI effective action (a.k.a. the boundary effective theory). In the limit $p_i \gg \mu$, the form factor $F_{VVh}(p_i; \mu)$ must become conformally invariant, and hence a falling function of momentum. The coupling should also vanish as $\mu \to 0$ if we want to recover a pure CFT. Setting one Higgs field to its VEV ($p = 0$) yields an effective 4D vertex with two gauge bosons and one Higgs, that is, a form factor $F_{VVh}(p_i; \mu)$ that can contribute to VBF, Eq. (4). In a soft-wall AdS model with a conformally flat metric, taking flat zero-mode gauge bosons and with the boundary of AdS$_5$ at $z = R$, the effective 4D vertex is obtained by propagating the Higgses from the boundary into the bulk using Eq. (36) and inserting a VEV at zero momentum for one of them.
Another example of the type of form factor that can arise is found in a generalized AdS model with a bulk quartic interaction, which yields a quartic $H^4$ 4D coupling constant. In order to get the correct value of the Higgs mass we must fix this coupling by equating the zero-momentum limit of Eq. (21) to the negative of the quadratic term of the shifted potential, which reduces to the SM relation $\lambda = m_h^2/2v^2$ in the limit $\Delta \to 1$, or in the limit $\mu \to \infty$.
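The elided relation is not reproduced here, but one hedged candidate consistent with both quoted limits equates the zero-momentum value of the unit-residue quadratic operator, $\propto (\mu^2)^{2-\Delta} - (\mu^2 - m_h^2)^{2-\Delta}$, to $2\lambda v^2$. The function below implements that guess purely as an illustration, not as the paper's Eq. (40), and checks that it collapses to $\lambda = m_h^2/2v^2$ in both limits:

```python
def quartic_coupling(mh=125.0, v=246.0, mu=400.0, delta=1.3):
    """Hypothetical QCH quartic coupling: equate the p -> 0 limit of the
    unit-residue quadratic operator to 2*lambda*v^2.  This specific
    relation is an illustrative assumption (see the lead-in text)."""
    num = (mu**2) ** (2 - delta) - (mu**2 - mh**2) ** (2 - delta)
    den = 2 * v**2 * (2 - delta) * (mu**2 - mh**2) ** (1 - delta)
    return num / den

def sm_quartic(mh=125.0, v=246.0):
    """SM relation: lambda = mh^2 / (2 v^2)."""
    return mh**2 / (2 * v**2)
```

For $\Delta = 1$ the two functions agree exactly, and for $\mu \gg m_h$ the deviation is $\mathcal{O}(m_h^2/\mu^2)$ for any $1 \le \Delta < 2$.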
After setting one Higgs field to its VEV this also yields a cubic 4D interaction, $h^3$, with a form factor obtained by propagating the Higgses into the bulk using Eq. (36) and inserting one VEV at zero momentum. An example is shown in Figure 1.
Adding Yukawa interactions is also straightforward, as long as $\Delta < 1.5$ [35]. In this case the Yukawa coupling is just the fermion mass divided by the VEV $v$.
In both of these examples the form factor at low momentum is almost constant and then peaks for momenta around $\mu$. It would be very inefficient to describe such a form factor by introducing higher-dimension operators if the scale $\mu$ is within reach of the collider.
Direct Signals of Quantum Criticality
A primary focus of Run II at the LHC will be to conduct detailed tests of Higgs phenomenology. The signatures of quantum criticality can manifest in these analyses in several ways, including modifications of on-shell Higgs production and decay, drastic changes in the off-resonance high-momentum behavior of the Higgs two-point function due to e.g. the continuum contributions, and finally modifications of n-point Higgs amplitudes which can result in sizable new physics contributions to, for example, double Higgs production. In the context of a non-mean-field theory description of electroweak symmetry breaking, collider studies of Higgs properties provide data on the scaling dimension of the operator that breaks electroweak symmetry, the threshold scale µ, and on n-point CFT correlators.
The production of new states above $\mu$ will modify the high-energy ($p^2 \gtrsim \mu^2$) behavior of cross sections that involve the exchange of any of the Higgs components. Both the neutral Higgs and the Goldstone bosons eaten by the $W$ and the $Z$ have propagators in the 1PI effective action that differ from the SM (see Eqs. (23) and (54)). In addition, the $n$-point correlation functions between neutral Higgs bosons and/or Goldstone bosons will have a form factor dependence that probes the manner in which the Higgs resonance arises, potentially distinguishing between models where the Higgs particle originates from or is mixed with the CFT.
An example of a potential signal for quantum criticality is the high energy behavior of the gg → ZZ process, which contributes to the "golden" four-lepton signature. At center of mass energies above the threshold for the cut in the Higgs two-point function, enhancements of the gg → h → ZZ amplitude are expected. As in the SM, the Higgs exchange diagrams interfere with a top-box diagram in which the Z bosons are radiated off of virtual top quarks. This process has been studied extensively in the context of SM Higgs analyses [36][37][38]. In Section 4.1, we describe an analysis of the differential rate of gg → ZZ that simultaneously probes the scaling dimension of h and the gap of the approximate CFT, µ.
Another avenue to search for quantum criticality would be studies of the production of multiple on-shell Higgs bosons. The high luminosity LHC run is expected to begin probing double Higgs production towards the close of the LHC program, with a few events expected given SM calculations of the cross section. If the Higgs originates as part of a CFT, or is perturbatively coupled to one, the continuum and/or the form factors associated with the CFT can give non-standard contributions to the double Higgs production amplitudes. We will examine this possibility in Section 4.2.
ZZ production via a quantum critical Higgs
In the QCH framework the diagrams contributing to the $gg \to ZZ$ process are similar to those of the SM.
There are two types of diagrams: a pure SM contribution [36] without a Higgs exchange, and the usual gluon fusion diagram with an $s$-channel Higgs exchange (plus crossings). The structures of the Higgs two-point function and of the $hZZ$ form factor are modified in the quantum critical case, and as a consequence the interference between the two diagrams will be disturbed.
We have discussed two scenarios in which we can obtain the form factors corresponding to dynamics in which there are new physics contributions to Higgs observables off the mass-shell. We will focus on the case of minimal coupling, where a non-standard hZZ form factor is present, Eq.
Double quantum critical Higgs production
The rate of Higgs pair production is an experimental probe that can potentially reveal the intrinsic nature of the Higgs. For example, in models [12][13][14][15] where the Higgs arises from a conformal sector as the dilaton of spontaneously broken scale invariance, the Higgs cubic coupling could be 5/3 that of the SM, even if all linear Higgs interactions are tuned to be precisely SM-like [12].

Figure 2: The effects of including the QCH two-point function in the production of on-shell Z-boson pairs. The effects of varying ∆ (top two plots) and µ (bottom two plots) are shown. On the right-hand side, data are shown as a fraction of the SM result. The effect of the cut associated with the non-standard two-point function is to dramatically enhance the production of Z pairs far away from the Higgs pole. In each plot, for comparison purposes, the effect of a second heavy 500 GeV Higgs with a 100 GeV width is included.

Double Higgs production is also sensitive to higher-point correlators generated by the underlying strong dynamics. In contrast to other processes, for the QCH, double production of on-shell Higgs bosons offers the opportunity to probe the higher $n$-point correlators of the CFT. While it would be extremely interesting to see non-mean-field theory behavior in probes of the two-point function, the higher correlators encode information on the type of CFT we would be dealing with (e.g. large-$N$ theories, where the AdS/CFT correspondence offers a perturbative framework for estimating the higher-point correlators). In order to study such potential effects of quantum criticality, we computed the diagrams relevant for $hh$ production at the LHC. These diagrams are similar to those associated with $ZZ$ production. Of particular note is the fact that the analysis of $Z$ pair production distributions, performed in conjunction with studies of the double Higgs final state, could help to differentiate between the cases of trivial and non-vanishing higher-$n$ correlation functions. If electroweak symmetry breaking occurs via a QPT with non-mean-field behavior, some of the details of the CFT could be extracted from this data.
Conclusions
The puzzle of how the Higgs boson can be so light is still one of the greatest outstanding problems of particle physics, and recent LHC data have only made the problem more severe. In this paper we have taken a bottom-up approach: given that there is a light Higgs, what are the possible consistent low-energy theories? The SM is certainly the best-known example; its crucial feature is that it can be tuned close to a quantum critical point. In general, being near a quantum critical point implies a hierarchy of scales, hence a long RG flow, and ultimately coming close to either a trivial fixed point (mean-field behavior) or a non-trivial fixed point (non-mean-field behavior). This suggests a large class of alternative possible theories: those with quantum critical points and non-mean-field behavior. We have presented an effective theory that describes the low-energy physics of a broad class of such theories, with an arbitrary scaling dimension for the Higgs field. Gauge invariance requires that this scaling dimension also appear in form factors of the gauge couplings. We further showed how such effective theories can be constructed from an AdS$_5$ description, including the generation of form factors that are not determined by gauge invariance alone. Finally, we described how specific processes, $gg \to ZZ$ and double Higgs production, can be used to gain information on the Higgs scaling dimension and form factor dependence, or to put bounds on the mass threshold of the broken CFT states associated with the quantum critical point.

Figure 3: Potential deviations in double Higgs production at the 14 TeV LHC. We consider two different cases. The first (dashed lines) is when only the Higgs two-point function is modified, as when the strong sector that mixes with the Higgs has vanishing n-point correlators for n > 2. The second (solid lines) is when the Higgs quartic coupling (and hence the cubic form factor) comes purely from a four-point correlator in a large N CFT, and thus both the two- and three-point functions for the Higgs boson carry non-trivial momentum dependence. We vary ∆ and µ for both cases, as shown.

A Form factors for generalized free fields: A minimal example

The Lagrangian corresponding to the QPT presented in Section 3.1 can be written down, with $H$ the QCH complex doublet. The Higgs potential, $V(|H|) + \mu^{4-2\Delta} |H|^2$, is such that the Higgs gets a VEV, spontaneously breaking the electroweak symmetry. The Lagrangian for the excitation around the vacuum, $h$, has been given in Eq. (21).
The propagators for the W and the Z in unitary gauge follow from this Lagrangian, while that of the Higgs boson has been given in Eq. (23). In the limits µ² ≫ p² or ∆ → 1 we recover the SM propagators in the unitary gauge.
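As a point of reference, the SM limit referred to here is the familiar unitary-gauge result; the following is the standard textbook expression, not the paper's modified form factor:

```latex
% Standard unitary-gauge propagator for a massive gauge boson V = W, Z,
% to which the QCH propagators reduce in the SM limit.
\begin{equation}
  G_V^{\mu\nu}(p) \;=\; \frac{-i}{p^2 - M_V^2 + i\epsilon}
  \left( g^{\mu\nu} - \frac{p^\mu p^\nu}{M_V^2} \right),
  \qquad V = W^\pm,\; Z .
\end{equation}
```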
Exploring the Likelihood of a Country Being a Tax Haven Using MIMIC Models
Abstract The multiple indicators multiple causes (MIMIC) framework is used to analyze dimensions related to causation and indicators of tax haven status. Robust results were obtained that identify a country’s tax burden and area as causes of a country adopting policies usually observed in tax havens. The level of social security contributions as a proportion of public revenues and the ratio of indirect to direct taxes were found to be statistically significant indicators of tax havens. Data from 68 countries for more than twenty years were analyzed, enabling the results to contribute to a deepening of the current debate about tax havens and their socio-economic profiles.
Introduction
Classifying a country as a "tax haven" is not a task to be taken lightly. There are reputational problems that arise from labeling an economy as a tax haven. Furthermore, to undertake the classification, researchers must account for a multitude of socio-economic factors that change over time.
The study of tax havens has been based on three main dimensions. The first dimension relates to the worldwide consequences of tax haven practices, namely the ultimate consequences for the global financial system (Faith, 1984; Burn, 2006). The second dimension relates to the determinants that move a country to be classified as a tax haven as compared to an offshore location (Dharmapala and Hines Jr, 2009). The third dimension is associated with the suggested policies for regulating the practices of tax havens and offshores (Chavagneux and Palan, 2007). Keeping these three dimensions in mind, it was not possible to find a comprehensive set of empirical studies focused on the public finances of tax havens. This paper introduces the potential uses of multiple indicators multiple causes (MIMIC) models. These models approach the problem of measurability issues (e.g., the shadow economy or the propensity of a country to adopt a tax haven profile) as a latent variable problem. MIMIC models also allow the testing of the statistical significance of the causes of the latent dimension as well as its indicators and consequences (Dybka et al., 2019). Therefore, this paper investigates for the first time, with the use of a MIMIC model, what the driving factors of tax havens are and in which indicators tax havens are reflected.
According to the OECD (2000), about half of international financial flows pass through tax havens. Maurer (1997) also observed that tax havens create local jobs and increase public revenues. The financial systems in tax haven economies tend to be more dynamic, also suggesting that the tax haven option leads to positive effects on local societies. However, the methodological potentialities of working with latent dimensions (namely, a 'hidden' pressure from dynamic neighboring countries or political goals toward rapid growth) have not been properly explored in the literature. Therefore, the intent of this paper is to enrich the debate around the identification of tax havens.
Based upon a literature review, as "causes" of tax haven status, I analyze such variables as the level of taxation as a percentage of GDP and mass dimensions (millions of resident persons; the area of an identified territory) as well as trade openness and GDP growth. As "indicators" reflecting the likelihood that a territory is a tax haven, I consider the ratio of social contributions to public revenues, the level of indirect taxation, the level of taxes on goods and services, the corruption perception index, and the weight of interest payments in budgetary expenses.
The remainder of this work is composed of a review of the literature (Section 2) which details the causes and the indicators of tax havens as described in research papers. Section 3 describes an empirical approach to the methodological discussion of MIMIC models including empirical procedures. Section 4 discusses the results and their robustness, and finally, Section 5 concludes the paper.
Causes and Indicators of Tax Havens
When working with structural equation models, authors like Bollen and Brand (2008) use the term "cause" (or causal variable) to classify certain dimensions which have been associated in the literature with the observed phenomenon. It is relevant to note that this issue, the "tax havens" issue, is a complex phenomenon that cannot be simplified to a unidirectional relation of the type "A causes B". There are three major reasons why there are no simplistic causes of tax havens. The first reason relates to the meaning of the term itself. As is widely recognized nowadays, the expression "tax haven" is a label attributed to a certain economy by a given source. This means that we can study tax havens following the literature of taxonomy. Consequently, the label "tax haven" can be understood as a human construction limited to a certain reality assumed by certain agents. This means that other sources may label the same reality with a different name or expression (Robinson, 1994). The second reason is that the identification of an economy as a tax haven must be the result of a well-defined profile for an area. We cannot just classify a jurisdiction as a tax haven because it has reduced its tax rates for a period, making them more attractive than its neighbors' tax rates. This would be similar to classifying someone as a violent individual because he/she had talked louder than others on a single occasion. Finally, tax havens may be studied as a "dynamic set of social and economic dimensions" which, when tracked over an appropriate period of time, may be correlated with the exhibition/revelation of certain indicators (Faith, 1984).
After this necessary initial explanation of the complex nature of the "causes of tax havens", I now proceed to a review of the literature focused on the set of dimensions that lead jurisdictions to adopt policies that bring them closer to the profile of a tax haven.
Causes
The several works focused on the history of tax havens and on the history of economic thought around tax havens tend to identify particular phases of development during two important moments of economic globalisation: the first occurred in the 19th century, with the expansion of capitalism; the second in the post-World War II period, with the creation of the eurodollar market in the 1950s (Chavagneux and Palan, 2007). It has only been over the last thirty years, however, that tax havens have grown exponentially in number and importance. The liberalization and deregulation of the financial sphere, which began in the early 1980s, have been discussed as major contributors to this growth (Mourao and Raposo, 2013). Therefore, we cannot neglect open trade as an important motivation for the development of tax havens in countries characterized by economies where exports and imports are of high importance.
Many of the territories classified as tax havens exhibit a low magnitude of "mass dimension" variables such as population size and area. Tax havens do not usually need a large local population, a large amount of land, or abundant natural resources. Therefore, I also consider mass dimension variables as causes. Several authors have discussed the rationale for this relationship. Mourao and Raposo (2013) argue that the people of "small" jurisdictions tend to perceive more clearly the expected benefits from the policies that move them toward the status of tax haven; taking the reverse perspective, jurisdictions covering large areas tend to see delays in the expected benefits of a short-term shock (significant inflows of foreign investment, strong economic growth, or a positive stimulus to local employment). Similarly, the larger the population, the greater the difficulties in implementing pro-tax-haven policies. Rikowski (2002) notes that some particular characteristics of national public finances (namely, the existence of modest collected revenues) may drive some countries to develop a profile close to the usual profile of a tax haven. In this vein, tax revenues that represent a low percentage of GDP can be considered a cause of the development of policies related to tax havens. Obviously, the composition and the size of public revenues are considered in light of the composition and size of public expenditures. Jurisdictions are unlikely to maintain public revenues at a level below that of public expenditures; the maintenance of budget deficits is not sustainable for a long period, as the literature has often discussed (Hines Jr, 2004). However, the maintenance of deficits below 1%-2% of GDP and, simultaneously, of public revenues at 30% or less of GDP, has been associated with fiscally competitive countries, particularly those widely known as tax havens (Dharmapala and Hines Jr, 2009).
Finally, tax haven status is used as a vehicle to rapidly boost small economies. Thus small, open, and highly deregulated economies usually take advantage of tax haven activities as a source of foreign direct investment and for the development of their banking systems. Therefore, even though tax havens can reduce the amounts of available money and taxable income in some (medium or large) countries, they can ultimately stimulate the economic growth of small jurisdictions. Therefore, the growth rates of countries must be checked as proxies for alternative causes of becoming tax havens.
Indicators
In the literature, the term 'indicators' refers to variables that change following the occurrence of the phenomenon being analyzed (Dell'Anno and Mourao, 2012). Therefore, when discussing indicators of tax havens we are interested in variables that have been observed to change after the adoption of tax haven policies.
Tax havens tend to use fiscal instruments to attract investments and investors. Given a limited number of taxpayers, to achieve fiscal attractiveness a tax haven tends to generate incentives for capital by lowering income tax rates. Avoiding a concentration of taxes on financial services, tax havens tend to impose higher taxation on goods and services (Becker and Fuest, 2010). Therefore, a good indicator of a tax haven is a low ratio of income taxes to public revenues (Becker and Fuest, 2010).
Focusing on the taxation of goods and services, tax havens accentuate indirect taxes over direct taxes. Although previous studies have identified that this indicator is also associated with "fiscal illusion" (Dell'Anno and Mourao, 2012), authors such as Dharmapala and Hines Jr (2009) have observed that nations with a higher value of indirect taxes, as compared to direct taxes, tend to assume practices related to tax havens. Because tax havens aim to lower direct taxation to enhance fiscal attractiveness, and because of scale effects, they also exhibit a modest share of national income taxed for social security. To attract skilled workers, social security taxes tend to be reduced in countries identified as tax havens or offshores. Additionally, given the low influence of trade unions or lobbying groups, support for the welfare state tends to accompany this trend toward lower social security taxation in tax havens and is generally less significant than in other countries.
Looking at the public expenditures side of tax haven economies, there is a high proportion of expenses taken up by interest payments. As Hines Jr (2004) claimed, open economies like tax havens and offshores tend to manage high economic growth by raising indebtedness. In most tax havens, financial institutions are the core agents of the economy and this proximity of the financial sector to government decision making generates an accentuated exposure to indebtedness practices that raises interest expenses.
The expected effects on institutional issues, like fiscal transparency or the quality of democracy, are not clear. Dharmapala and Hines Jr (2009) commented that the success of tax havens is due to the protection of data related to investors and traders; this protection can mainly be guaranteed by processes that harm the traditional concepts of fiscal transparency (Biondo, 2012). However, most investors clearly prefer to allocate their investments in markets sustained by stable democratic institutions. Therefore, it is not possible to make a clear argument connecting tax havens with fiscal transparency, corruption perception, or quality of democracy.
As Dell'Anno and Mourao (2012) have argued, MIMIC models constitute a particular case of a broader class of models identified as structural equation models (SEM), which are commonly used to model relationships between unobserved dimensions (Hair et al., 1998). MIMIC models have developed from the works of Zellner (1970) and Jöreskog and Goldberger (1975). They have been used in public finance (Dell'Anno and Mourao, 2012;Dell'Anno and Villa, 2013), in corporate finance (Chiarella et al., 1992;Jairo, 2008), and in the economics of institutions (Kuklys, 2004;Dreher et al., 2007).
MIMIC models are composed of two equations: a measurement equation (1) and a structural equation (2). The measurement equation can be described with the following matrix notation (Dell'Anno and Mourao, 2012):

y = λF + ε    (1)

In equation (1), F identifies the unobserved latent variable, which is subject to the column vector of disturbances ε and causes the endogenous indicators y; λ is a (column) vector composed of the regression coefficients.
The structural equation relates F (the unobserved variable) to a set x of exogenous causes (Dell'Anno and Mourao, 2012):

F = β′x + ς    (2)

The structural disturbance is identified by ς, and β is a vector of coefficients describing the relationship between F and the x causes.
The MIMIC models assume that all the variables (F, x, y) have expected means of zero (so the model uses de-meaned variables¹). It is assumed that E(ς) = E(ε) = 0 and that the error terms are not correlated with the causes: E(xε′) = 0 and E(xς) = 0. It is also assumed that the error term ε is not correlated with the latent variable, E(Fε′) = 0, or with the structural disturbance, E(ες) = 0. The variance of the error term was found to be positive in the software used for this analysis, STATA v.15.0 (note: some other statistical packages would provide incorrect identification schemes).
Equations (1) and (2) are estimated by a maximum likelihood estimator. However, to obtain unique solutions to λ and β it is necessary to fix the scale of the unobserved variable by setting one of the coefficients in λ to a constant (usually +1 or −1). For instance, Dell'Anno and Mourao (2012) set −1 as the measurement coefficient of the measurement equation with the highest R-squared value.
The de-meaning of the variables makes it possible to consider heterogeneity across the (cross-sectional) units in the MIMIC model and to apply SEM using panel data analysis (Bollen and Brand, 2008). This process means that instead of the raw variables x and y, we use x*_jit and y*_jit. That is, for the most general model there are j = 1, 2, . . . , 11 observed variables; i = 1, 2, . . . , 68 countries, and t = 1990, . . . , 2015. The raw variables are transformed into the de-meaned ones by:

x*_jit = x_jit − x̄_ji,    y*_jit = y_jit − ȳ_ji

where x̄_ji and ȳ_ji are the means of variable j for country i over the observed period. Because of this standardization, the latent variable F is also estimated as a de-meaned variable; in other words, the MIMIC model estimates a new F*_it = F_it − F̄_i. In this specific case, F*_it represents an index of the likelihood of a country assuming the characteristics of a tax haven. Following previous studies (Mourao, 2008; Dell'Anno and Mourao, 2012), these 68 economies have been reported as comprising a significant sample of the heterogeneous territories that can be identified now and in the observed period. This includes developing and developed economies, OECD and non-OECD countries, established and new political regimes, etc. Additionally, they have been found to allow a reasonable availability of data for a significant number of socio-economic indicators.
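The within-country de-meaning step described above can be sketched in a few lines; a minimal illustration with pandas (the country names, years, and column names are hypothetical, not the paper's dataset):

```python
import pandas as pd

# Hypothetical panel: one row per (country, year), one column per observed variable.
data = pd.DataFrame({
    "country":  ["A", "A", "A", "B", "B", "B"],
    "year":     [1990, 1991, 1992, 1990, 1991, 1992],
    "tax_gdp":  [30.0, 31.0, 29.0, 18.0, 20.0, 19.0],
    "openness": [50.0, 55.0, 60.0, 120.0, 110.0, 130.0],
})

# De-mean each variable within each country: x*_jit = x_jit - mean_t(x_jit).
cols = ["tax_gdp", "openness"]
data[[c + "_dm" for c in cols]] = (
    data.groupby("country")[cols].transform(lambda s: s - s.mean())
)

# By construction, each country's de-meaned series averages to zero.
print(data.groupby("country")[["tax_gdp_dm", "openness_dm"]].mean())
```

The same `groupby(...).transform(...)` pattern extends directly to the full 68-country, 1990-2015 panel before the SEM estimation.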
For the model in this paper, I estimated various specifications (Table 2). The most general specification of the MIMIC model is a MIMIC 5-1-6 (5 causes, 1 latent variable, 6 indicators). For this specification, equation (1) can be rewritten using the 6 indicators, and equation (2) can be rewritten using the 5 causes; the path diagram for the MIMIC 5-1-6 model concretizes this most general specification. Table 2 displays the correlation matrix for the 10 variables being used as causes and indicators. The highest correlation coefficients relate to the correlation between the area of the country (size in km²) and trade openness or between the weight of taxes on GDP and trade openness. Table 3 shows the results of estimating the different specifications of the MIMIC panel data models. Overall, we observe the following values for the RMSEA (root mean squared error of approximation): 0.06 (model 5-1-6), 0.07 (model 4-1-6), 0.08 (model 4-1-5), and 0.01 (model 4-1-4). The fourth model (4 causes and 4 indicators) is the most appropriate for further discussion given that it has the lowest RMSEA and the highest R². Values for other statistical tests such as the comparative fit index (CFI) or Tucker-Lewis index (TLI) are available upon request.
A special cautionary note is needed in explaining the "marginal effects" associated with the estimates in this paper. It can be observed that lower taxes as a percentage of GDP, a smaller area, and a smaller population size, combined with a high degree of trade openness (high values of the trade openness variable), have been estimated to be associated with a greater likelihood of a country being a tax haven. The estimated coefficients can be interpreted following Dell'Anno and Mourao (2012): when the observed value for a given causal variable x is significantly higher than its mean, the effect observed on y (the vector of indicator variables) is greater. Consequently, for instance, higher values of trade openness are related to significantly lower values observed for social security contributions as a proportion of public revenues as well as to reduced scores for a country's CPI. Notes: Numbers between parentheses are standard errors. Statistical significance is denoted by ***, **, and * at the 1%, 5%, and 10% levels, respectively.
Results and Discussion
The overall goodness of fit statistics are highly satisfactory. The test uses the RMSEA: a good fit is implied by a p-value higher than 0.05 (Browne and Cudeck, 1993), which is evident in Table 3. The chi-square values make it possible to reject the null hypothesis of non-conjoint significance of estimated coefficients for the causes and indicators in these models. 2 Smaller countries and a reduced share of tax revenues as a percentage of GDP tend to be conditions more suited to a country assuming tax haven characteristics. Additionally, countries more exposed to international commerce are more likely to be identified as tax havens. Independent of the number of causes (4 or 5) or indicators (4, 5, or 6) included, these models also reveal that tax haven economies tend to decrease the level of contribution to social security (as a share of revenues).
Robustness Checks
To assess the robustness of these results, I follow Buehn and Schneider (2008). Given the correlation between the variables "trade openness" and "taxes as a percentage of GDP", these dimensions were combined into a single measure and the output structure was reassessed. Using the chi-square distribution table, the statistical value for the test represented a non-significant change at the 0.01 level of significance. Thus, the combination of the two dimensions into a single cluster was not warranted quantitatively and the output of the respective MIMIC model is not shown.
Index of tax haven likelihood
These results generally follow the outcomes from previous studies (Mourao and Raposo, 2013;Chavagneux and Palan, 2007) but also introduce direct and relevant challenges for further development. In particular, MIMIC models make it possible to extract scores from the latent variable, enabling researchers to produce an "index of likelihood of being a tax haven" with values for each country.
The estimates in Table 3 produce observed scores for the latent variable, the likelihood of being a tax haven (LTH), for each of the 68 countries between 1990 and 2015. For greater readability, Table 4 presents only the average LTH score for each country across the observed period (full results, by country and year, are available upon request). The results show that the average LTH score is low: 0.099 with a standard deviation of 0.356 (Table 4). This is a consequence of the de-meaning process applied to the variables and to the latent dimension. However, it also means that positive values in the estimates of the index are more significant than negative values.
Nonetheless, some exceptional cases in Table 4 are worth discussing. For example, the minimum mean values are those of Venezuela, Nepal, and Germany. If the outflow of capital is clear evidence of the non-tax haven status of the first two, the presence of Germany in this group is explained by the level of German taxes as a proportion of public revenues as well as the high perception of corruption characterizing German society.
Although not all LTH scores are shown here, all were calculated for each year from 1990 to 2015, and for all 68 countries. However, the scores of Malaysia, Hungary, and Trinidad and Tobago serve as clear examples that validate the estimates of Table 4. In the period observed, these countries were reported as revealing serious issues in terms of tax competitiveness with reforms for enhancing the countries' attractiveness to foreign investors (Goldstein, 2009;Gravelle, 2015;OECD, 2015).
Other interesting cases relate to the position of economies like Denmark (15th) or Sweden (16th), which may seem high given that those countries are among the countries with the highest levels of taxation in the world, whereas lower-ranked countries such as Ireland (26th), Cyprus (42nd), and Luxembourg (56th) tend to be viewed as characterized by low income tax rates. However, note that the MIMIC framework is not based on just one dimension, for example, the taxation level, and these index scores reflect variations in the latent variable.
Conclusion and Further Challenges
This paper researched the possibility of MIMIC models being used to contribute to the ongoing debate about the identification of a country as a tax haven. There is a current (and very diversified) effort to label countries in terms of more or less similarity to a pattern defined as that of a tax haven. This effort has mostly been undertaken by international organizations (e.g., the OECD or the IMF). However, this effort has been criticized because it has often identified a country as a tax haven by considering the set of agreements accepted by that country's ruling entities in terms of fiscal transparency, international delivery of data from banks' customers, and size of financial flows.
MIMIC models allow us to discuss the profile of a country in terms of its likelihood of being a tax haven. By considering the pressures from certain socio-economic dimensions such as the composition of public revenues or the pattern of international trade, this methodology allows us to work with latent dimensions (here, the propensity of a country to assume a profile of a tax haven) and with indicators, that is, variables which are changed because of the action of the latent dimensions.
Using data for 68 countries observed between 1990 and 2015, a panel data MIMIC approach was used. After the application of a robust set of procedures, the results of this work allow us to conclude that the tax burden, a country's area, its level of trade openness and population size are significant causes/sources of pressure to become a tax haven. The proportion of GDP corresponding to contributions to social security, the ratio of indirect to direct taxes, and the perception of corruption are robust indicators of tax havens.
One of the strengths of MIMIC models is the possibility of scoring a latent dimension. Therefore, for this paper, the MIMIC model methodology was used to generate an index describing to what degree a country is like a tax haven. The index provides values for scoring the likelihood of a country holding such a profile in each year of the observed period. Malaysia, Hungary, and Trinidad and Tobago had the highest likelihood scores across the years, whereas Venezuela and Nepal had the lowest scores on average. We can identify two major implications from these results. The first implication is a theoretical one: it can be inferred that the phenomenon of tax havens is not a mere labeling process that depends on the organization in charge of the classification. A country becomes a tax haven in a (long historical) process in which its ruling institutions react to challenges from the surrounding economic structure and from endogenous social patterns. The second implication is an empirical one: the estimated scores show dynamic movement, meaning that countries assume more (or less) intense propensities for tax haven behavior according to a more (or less) defined profile of a tax haven; these propensities are not stable across an observed period. This latter implication launches the first challenge: it could be interesting to enlarge the focus of these results in order to detail the yearly evolution of the estimated scores for the latent dimension for each country. A second challenge relates to the opportunity to observe the degree to which these extractable scores correspond with reports from the OECD (2000) or the International Monetary Fund (200). A third challenge regards the possibility of detailing the dimension of observed taxation, namely examining nominal corporate tax rates (or some measure of effective corporate rates) as causes.
It must not be neglected that the overall value of tax revenues can be a problematic variable, as it can be associated with the level of overall economic development; in particular, less developed countries usually have low levels of tax collection (due to prevalent shadow economy activity and difficulties in collection). Therefore, as data become available, it is suggested that the inclusion of detailed taxation data could enhance the understanding of the causes discussed in this paper. Additionally, dimensions proxying the banking sector size could also be explored, noting that current data availability does not allow this enhancement. Further research could also explore an enlarged discussion considering the possibility of reverse causation (observing, for instance, whether some of the indicators identified here can function as causes of the likelihood of being a tax haven). Finally, besides the opportunity to add other testable causes and indicators, a multi-way principal components analysis could be used as an alternative to the MIMIC models demonstrated here.
Complete spelling rules for the Monster tower over three-space
The Monster tower, also known as the Semple tower, is a sequence of manifolds with distributions of interest to both differential and algebraic geometers. Each manifold is a projective bundle over the previous. Moreover, each level is a fiber compactified jet bundle equipped with an action of finite jets of the diffeomorphism group. There is a correspondence between points in the tower and curves in the base manifold. These points admit a stratification which can be encoded by a word called the RVT code. Here, we derive the spelling rules for these words in the case of a three dimensional base. That is, we determine precisely which words are realized by points in the tower. To this end, we study the incidence relations between certain subtowers, called Baby Monsters, and present a general method for determining the level at which each Baby Monster is born. Here, we focus on the case where the base manifold is three dimensional, but all the methods presented generalize to bases of arbitrary dimension.
1. Introduction

1.1. Motivation. The Monster tower, also known as the Semple tower, lies in the intersection of differential geometry, non-holonomic mechanics, singularity theory, and algebraic geometry. Cartan [2] studied the diffeomorphism group action on jet spaces, which led to developments in the fields of Goursat distributions and sub-Riemannian geometry. Jean [9], Luca and Risler [13], Li and Respondek [12], Pelletier and Slayman [20,21], and others have studied models of various kinematic systems (a car pulling n trailers, motion of an articulated arm, n-bar systems). Montgomery and Zhitomirskii [15] studied the relationship with curve singularities; later, so did we [5,24]. And we discovered in [7] that algebraic geometers have long studied these objects under different names. We have begun pursuing these connections [6] and working with algebraic geometers to consolidate understanding and improve existing terminology and techniques [4]. Here, we study the RVT code for the tower, which is invariant under the action of the diffeomorphism group. This is related to work on the classification problem studied by Mormul [16,17,18], Montgomery and Zhitomirskii [14,15], the authors [5], and others.
In the geometric theory of differential equations, we speculate that there may be some interesting connections between the singularity theory of the Monster tower and the general Monge problem for underdetermined systems of ordinary differential equations with an arbitrary number of degrees of freedom. In [10], the authors derive sufficient conditions, in terms of truncated multi-flag systems, for the existence of a Monge-Cartan parametrization of the general solution of such systems in the regular case. To our knowledge, no connection has been made with the singular theory of multi-flags presented in this note. Similar underdetermined systems of ordinary differential equations are common in geometric control theory when studying flat outputs of nonlinear control systems [22]. A detailed account of the geometry of differential equations in jet spaces can be found in [11], where symmetry methods from contact and symplectic geometry are used to solve non-trivial nonlinear partial and ordinary differential equations.
It remains to investigate the correspondence between finite jets of spatial curves and normal forms of special multi-flags. One should explore the depth of the correspondence between Arnold's A-D-E classification [1] and the listing of normal forms of Goursat multi-flags.
Finally, current work with algebraic geometers [4] extends and generalizes the results of this paper to the case of an n-dimensional base. An interesting open question here concerns the existence of moduli in orbits of the action of the diffeomorphism group of the base space.
Thus, it is apparent that this object is of interest to a variety of pure and applied mathematicians, and that it presents a wealth of interesting problems which have potential to shed light in surprising areas.
1.3. Main Results. The diffeomorphism group of R 3 acts on the Monster tower, and the RVT code is an invariant labeling of orbits. Note that the combinatorial data in the RVT code forces a finite number of inequivalent classes at each level of the tower, but there may be moduli within a given class (see [15]). In [7], we stated the following incomplete spelling rules, which followed from [15]. Theorem 1 ([7]). In the Semple tower with base R 3 , every RVT code must begin with R, and T 1 cannot follow R.
Here, we add the missing rules, yielding the complete description of realizable RVT codes. Our alphabet is the set {R, V, T 1 , T 2 , L 1 , L 2 , L 3 }. Note that these seven letters correspond precisely to the seven possibilities found in Semple's original work [23]. We therefore have the following combinatorial description of the diffeomorphism group orbits.
Theorem 2 (Spelling Rules). In the Semple tower with base R 3 , there exists a point p with RVT code ω if and only if the word ω satisfies:
(1) Every word must begin with R;
(2) R must be followed by R or V;
(3) V and T 1 must be followed by R, V, T 1 , or L 1 ;
(4) T 2 must be followed by R, V, T 2 , or L 3 ;
(5) L 1 , L 2 , and L 3 can be followed by any letter.
For example, the word RV V RV T 1 L 1 T 2 L 3 L 2 is admissible, but V T 2 T 1 RT 2 breaks rules (1)-(4). The following Table 1 summarizes this Theorem. 1.4. Outline. In Section 2, we give the requisite background material and references. We define the Monster tower, Baby Monsters, and the RVT coding system.
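The spelling rules of Theorem 2 amount to a finite successor table, so admissibility of a word such as those above can be checked mechanically. A minimal sketch in Python (the string encoding of the seven letters is ours, not notation from the paper):

```python
# Successor sets transcribing rules (2)-(5) of Theorem 2.
SUCCESSORS = {
    "R":  {"R", "V"},
    "V":  {"R", "V", "T1", "L1"},
    "T1": {"R", "V", "T1", "L1"},
    "T2": {"R", "V", "T2", "L3"},
    "L1": {"R", "V", "T1", "T2", "L1", "L2", "L3"},
    "L2": {"R", "V", "T1", "T2", "L1", "L2", "L3"},
    "L3": {"R", "V", "T1", "T2", "L1", "L2", "L3"},
}

def is_admissible(word):
    """Check a list of letters against spelling rules (1)-(5)."""
    if not word or word[0] != "R":          # rule (1)
        return False
    return all(b in SUCCESSORS[a] for a, b in zip(word, word[1:]))

print(is_admissible(["R","V","V","R","V","T1","L1","T2","L3","L2"]))  # True
print(is_admissible(["V","T2","T1","R","T2"]))                        # False
```

The two calls reproduce the admissible and inadmissible example words given above.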
In Section 3, we describe our main tool, the method of critical hyperplanes. We begin our main example which will inform the rest of the paper. This example - the code RV L 1 T 2 - will lend itself to a model proof of one spelling rule, whose technique can be repeated to obtain the remaining rules. Moreover, this example will serve to demonstrate the ease with which our results could be extended to towers with bases R n for n > 3. We choose to focus on this code because it neatly demonstrates the general method as well as some of the subtleties which abound in this work and thereby necessitate a delicate touch. In particular, the code RV L 1 was studied extensively in [5], so we restate and build upon the work there. We then amend the code by adding T 2 , which is somewhat exotic and interesting but not overly complicated. In Section 4, we restate our main theorem and attend to its proof. We focus on one spelling rule, as the rest are proved in the same fashion, and the proofs are tedious. The main proof proceeds by induction on the number of letters appearing in the code which belong to the set S = {T 2 , L 2 , L 3 }.
2. Background
2.1. The Tower. The Monster/Semple tower is constructed through a series of Cartan prolongations. Begin with a smooth d-dimensional manifold M 0 and a rank r distribution (subbundle of T M 0 ) denoted ∆ 0 . The first prolongation M 1 is the fiber bundle whose elements have the form (p, l), where p is a point in M 0 and l is a line in the subspace ∆ 0 p . The distribution on M 1 is given by ∆ 1 (p,l) = (dπ 1 0 ) −1 (l), where π 1 0 : M 1 → M 0 is the bundle projection. Note that M 1 has dimension d + r − 1, and that ∆ 1 is a rank r distribution.
Iterating the prolongation procedure gives a sequence of manifolds M 0 , M 1 , M 2 , . . .. Every point in M i has the form (p, l), where p is a point in M i−1 and l is a line in the distribution ∆ i−1 p . The dimension of M i is thus d + i(r − 1). The bundle projection map π i i−1 : M i → M i−1 has fibers diffeomorphic to P∆ i−1 p ∼ = RP r−1 . The rank r distribution on M i is given by ∆ i (p,l) = (dπ i i−1 ) −1 (l). The distributions ∆ i are sometimes known as Goursat multi-flags.
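As a quick sanity check on the dimension count dim M i = d + i(r − 1), the arithmetic can be tabulated directly; a small sketch (the function name is ours):

```python
# Dimension bookkeeping for the prolongation sequence: each prolongation
# adds the fiber dimension r - 1 (the fiber is RP^{r-1}).
def dim_level(d, r, i):
    """Dimension of M_i over a d-dimensional base with a rank-r distribution."""
    return d + i * (r - 1)

# The R^3-tower of this paper: d = 3, r = 3, so fibers are RP^2 of dimension 2.
print([dim_level(3, 3, i) for i in range(5)])  # [3, 5, 7, 9, 11]
```

For the R 2 -tower studied in [15] (d = 2, r = 2), the same formula gives dimensions 2, 3, 4, . . ., one new coordinate per level.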
Definition 1. The Monster or Semple tower is the sequence of projective bundles · · · → M i → M i−1 → · · · → M 1 → M 0 . Of particular interest is the case of M 0 = R n and ∆ 0 = T R n . We refer to the consequent tower as the R n -tower or the tower with n-dimensional base. The tower with base M 0 = R 2 and ∆ 0 = T R 2 has been studied extensively [15]. Here, as in [5], we focus on the case M 0 = R 3 and ∆ 0 = T R 3 . However, our methods generalize to the R n -tower for arbitrary n.
To be clear, in the remainder of this paper we are taking M 0 = R 3 and ∆ 0 = T R 3 .
2.2. Regular, Critical, and Vertical Directions and Points. By composing the projection maps π k k−1 , π k−1 k−2 , . . . , π i+1 i we obtain projections π k i : M k → M i , i < k. For p k ∈ M k , we denote π k i (p k ) by p i . The horizontal curves at level i (tangent to ∆ i ) naturally prolong (i.e., lift) to horizontal curves at level k. However, the curves coinciding with fibers of π i i−1 are special -they project down to points and are not prolongations of curves from below. They are called vertical and can themselves be prolonged to (first order) tangency curves, then prolonged again to (second order) tangency curves, and so on. Vertical curves and their prolongations are called critical. If a curve is vertical or critical then we say its tangent directions are as well.
Thus, at each level i ≥ 2 there are vertical directions, and, in addition, at each level i ≥ 3 there are tangency directions different from the vertical direction. At any level, all the remaining (non-critical) horizontal directions are called regular. Finally, we call a point (p, l) ∈ M i regular, vertical, or critical if the direction of l is.
2.3. Baby Monsters and Critical Hyperplanes. Recall that one can apply the prolongation procedure to any smooth manifold F in place of R 3 . In particular, we will prolong the fibers F of the bundle projections π i i−1 , obtaining new subtowers of the Monster tower. We call these subtowers Baby Monsters. Let p i ∈ M i and consider the fiber F i (p i ) = (π i i−1 ) −1 (p i−1 ) through p i . This is an integral submanifold for ∆ i , so we can prolong the pair (F i (p i ), T F i (p i )). Denote the jth prolongation of this pair by (F j i (p i ), δ j i ). Note that the Baby Monster is a subtower of the Monster tower, with dim F j i (p i ) = 2 + j and dim δ j i (q) = 2. While the terminology hyperplane comes from a more general setting, here we will simply refer to critical planes.
2.4. KR Coordinates. It is convenient to work in a canonical coordinate system, called Kumpera-Ruiz or KR-coordinates [8]. This is a generalization of jet coordinates for jet spaces that takes into account the projective nature of the fibers. These coordinates were described in detail for the R 2 -tower in [15] and for our current case, the R 3 -tower, in [7]. We briefly summarize here for completeness, and refer the interested reader to Section 4.2 of [7].
The KR coordinates for M k are of the form (x, y, z, u 1 , v 1 , . . . , u k , v k ). The coordinates u k , v k are affine coordinates for the fiber F k , and there are 3 k charts covering M k , corresponding to the three affine charts needed to cover each coordinate from a lower level. The covector df i is called the uniformizing coordinate in [5]. Dividing the entries in [df i : du i : dv i ] by one of the nonzero covectors yields local affine coordinates for the fiber F i+1 . By convention, we always take u i+1 to be the first (left-most) affine coordinate.
To illustrate the use of these charts, detailed examples are worked below.
2.5. RVT Codes. We observed in [7] that there are only three critical planes within each distribution ∆ i . The tangent space to the fiber is called the vertical plane; the other two arise as prolongations of vertical planes and are called tangency planes. In the most general setting, a tangency hyperplane is any hyperplane with nontrivial intersection with the vertical hyperplane. In our setting, we have the following characterization.
Figure 1. The three critical planes V, T 1 , and T 2 , and their intersections, the distinguished lines L 1 , L 2 , and L 3 .
In this definition, we often drop the explicit dependence on q when the context is clear. Also, in homogeneous coordinates, we cannot have a and b both zero, and we will usually assume without loss of generality that a ≠ 0. Finally, we clarify the terminology. Here V (q) is a linear subspace of ∆ i (q) ⊂ T q M i . When working in homogeneous coordinates, we are identifying this plane with PV (q) ⊂ P∆ i (q) ⊂ M i+1 . Similarly for the other planes and lines in this definition. Note again that this definition has an analogue in [23]. Now a point p i+1 = (p i , l i ) is assigned a letter from {R, V, T 1 , T 2 , L 1 , L 2 , L 3 } according to whether l i lies in one of the critical planes or distinguished lines given in Definition 3. Here, the lines L j take precedence, so l i lying in L 3 is assigned the letter L 3 , even though it also lies in both V and T 2 . If l i does not lie in any of these, then it is regular (see above) and assigned the letter R. If l i is assigned the letter α, then we say that p i+1 is an α point. Note that in [7], the letters T 2 , L 2 , and L 3 were unknown, and the notation was T = T 1 and L = L 1 . All letters besides R are called critical letters. Example 1. Suppose p 3 ∈ M 3 has RVT code ω = RV L 1 . This means that p 3 = (p 2 , l 2 ) with l 2 = L 1 (p 2 ), and p 2 = (p 1 , l 1 ) with l 1 ⊂ V (p 1 ). Every direction in ∆ 1 is regular, so the leading letter R yields no information.
For convenience, sometimes we will also denote by ω the set of all points with RVT code ω. For example, we may write p ∈ RV L 1 T 2 to signify that p has RVT code RV L 1 T 2 .
This coding provides a coarse stratification of points in the Monster/Semple tower. Recall that finite jets of diffeomorphisms act on the tower. Points which lie in the same orbit must have the same RVT code. However, there may exist multiple orbits within the same RVT strata. For details, see [5] or [19].
3.1. Configurations. This method relies on the non-trivial fact that certain critical planes appear over certain points, while others may not. In particular, there are four possible configurations over a point p ∈ M k ; these are shown in Figure 2. We will show how each configuration is possible only if p belongs to certain RVT classes. Specifically, we have Table 2, which is effectively equivalent to Theorem 2. Note that saying that p is an α point is the same as saying that α is the last letter in the RVT code for p.
The remainder of this paper will be dedicated to explaining why these possibilities are exhaustive.
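To illustrate the equivalence between the configurations of Table 2 and Theorem 2: taking the successor sets of the spelling rules as given, the critical planes above an α point can be read off mechanically, since V is always present while T j appears above p exactly when the code of p may be extended by the letter T j . A sketch (the string encoding is ours):

```python
# Successor sets transcribing rules (2)-(5) of Theorem 2.
SUCC = {
    "R":  {"R", "V"},
    "V":  {"R", "V", "T1", "L1"},
    "T1": {"R", "V", "T1", "L1"},
    "T2": {"R", "V", "T2", "L3"},
    "L1": {"R", "V", "T1", "T2", "L1", "L2", "L3"},
    "L2": {"R", "V", "T1", "T2", "L1", "L2", "L3"},
    "L3": {"R", "V", "T1", "T2", "L1", "L2", "L3"},
}

def planes_above(letter):
    """Critical planes present above a point whose code ends in `letter`:
    V is always there; T1, T2 appear iff the code may be extended by them."""
    return ["V"] + [p for p in ("T1", "T2") if p in SUCC[letter]]

for letter in ("R", "V", "T2", "L1"):
    print(letter, planes_above(letter))
```

Running this over all seven letters produces exactly four distinct configurations: {V}, {V, T 1 }, {V, T 2 }, and {V, T 1 , T 2 }, matching Figure 2.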
We now describe the explicit method from which we derive all our results. This will be applied to specific examples shortly. The critical hyperplane method was implicit in parts of [15], made explicit in [7], exploited for the classification problem in [5], and is perfected here. This gives a blueprint for characterizing all Baby Monsters and determining all spelling rules for the R n tower for any n.
Begin with an RVT code ω of a point p ∈ M k . We wish to understand which critical letters can be added to the end of the code (one can always trivially add the letter R). In order to do so, we must understand which critical planes lie above p. Since critical planes live within Baby Monsters, we must determine which Baby Monsters are present, and for those which are, we seek to find the levels at which they were born.
We first determine the local KR-coordinate chart containing p. We can then describe the distribution ∆ k (p) in coordinates. We then choose a critical plane V, T 1 , or T 2 , write it in coordinates as in Definition 3, and trace the coordinate representations backwards, projecting down to lower levels of the tower, one at a time.
If at some level i we find that both fiber coordinates u i and v i are non-vanishing, then our critical plane must arise as the prolongation of the vertical plane V i . Our critical plane therefore lives in the Baby Monster born at level i, and is equal to δ k i (p). This would confirm that the critical plane we chose indeed appears in ∆ k (p).
If, however, we reach the base without finding such a Baby Monster, then the plane we chose cannot exist in ∆ k (p). We can shorten the procedure of tracing each plane back to the base by using previously established configuration possibilities and proceeding inductively.
While this is not an algorithm in the strictest sense, it can theoretically determine which configurations are possible above any given point. As one might suspect, this can at times become extremely tedious, and would not be particularly enlightening for the reader. For this reason, we will focus the remainder of the paper on a few specific examples to demonstrate the efficacy of the method for determining spelling rules, while skipping some of the routine verification that was required to complete our results.
It is obvious that the vertical plane V appears above every point - it is just the tangent space to the fiber. So in the method just described, we need only focus on whether or not T 1 and T 2 exist (here we are concerned with the R 3 -tower; one immediately sees how this method generalizes to the R n -tower). Some of our results here (those needed for the proof of Theorem 2) are summarized near the end of the paper in Table 4. We will prove some of these relations here - the rest are obtained by identical methods.
Example 2 (RV L 1 ). We continue investigating the case begun in Example 1. Suppose p 3 ∈ M 3 has RVT code ω = RV L 1 .
Appearance of T 1 . We now determine which of the critical planes T 1 and T 2 lie above p 3 in ∆ 3 (p 3 ), which is coframed by [dv 2 : du 3 : dv 3 ]. First consider T 1 , given by [a : 0 : b] with a ≠ 0. We assume for now that it exists within some Baby Monster, and we will either find this Baby Monster or derive a contradiction. Since [dv 2 : du 3 : dv 3 ] = [a : 0 : b] here with a ≠ 0, we see that u 3 is identically zero on the Baby Monster, while v 2 and v 3 are not. Now, since ∆ 2 is coframed by [du 1 : du 2 : dv 2 ] near p 2 , and since u 3 = du 1 /dv 2 and v 3 = du 2 /dv 2 , this forces the Baby Monster to have the form [du 1 : du 2 : dv 2 ] = [0 : c : d]. Since this is the form of a vertical plane, we can stop and conclude that T 1 (p 3 ) exists, and lies inside the Baby Monster born at level 2. That is, the plane T 1 (p 3 ) = δ 1 2 , which is the first prolongation of the tangent space to the fiber F 2 (p 2 ). Appearance of T 2 . Next, we repeat this process for T 2 , given by [a : b : 0] with a ≠ 0. We assume for now that it exists within some Baby Monster, and we will either find this Baby Monster or derive a contradiction. Since [dv 2 : du 3 : dv 3 ] = [a : b : 0] here with a ≠ 0, we see that v 3 is identically zero on the Baby Monster, while v 2 and u 3 are not, so the Baby Monster has the form [du 1 : du 2 : dv 2 ] = [c : 0 : d]. Note that unlike the previous case, this is not vertical, so we must continue searching another level down. Since ∆ 1 is coframed by [dx : du 1 : dv 1 ] near p 1 , and since u 2 = dx/du 1 and v 2 = dv 1 /du 1 , this forces the Baby Monster to have the form [dx : du 1 : dv 1 ] = [0 : e : f ]. Since this is the form of a vertical plane, we can stop and conclude that T 2 (p 3 ) exists, and lies inside the Baby Monster born at level 1. That is, the plane T 2 (p 3 ) = δ 2 1 , which is the second prolongation of the tangent space to the fiber F 1 (p 1 ).
Summary. We conclude that both planes T 1 and T 2 occur above a point with RVT code ω = RV L 1 , so that both codes RV L 1 T 1 and RV L 1 T 2 are admissible and realized (assuming temporarily that ω is admissible). Compare this result with Theorem 2 and Figure 2. Also see Figure 3 for an illustration of this situation. We summarize the results of this example in Table 3.
Example 3 (RV L 1 T 2 ). We continue the work from the previous example, and consider the case of p 4 ∈ M 4 with RVT code RV L 1 T 2 . This is admissible by the preceding computations, and indeed, all results from that example hold here. As the general techniques were made explicit there, we omit some tiresome details here.
First, one finds affine coordinates u 4 = du 3 /dv 2 and v 4 = dv 3 /dv 2 for the fiber F 4 (p 4 ). Next, recall that ∆ 3 (p 3 ) is coframed by [dv 2 : du 3 : dv 3 ], and T 2 (p 3 ) locally satisfies dv 3 = 0, with dv 2 non-vanishing and du 3 not identically zero. This implies that v 4 = 0, but u 4 is non-zero. (If u 4 (p 4 ) were zero, then there would be no vertical component, and l 3 would lie in a regular direction instead of in T 2 .) Second, we show that T 2 does occur in ∆ 4 (p 4 ). This computation is nearly identical to those presented in the previous example, so we omit it. One finds that T 2 (p 4 ) = δ 3 1 . Finally, we show that T 1 cannot occur in ∆ 4 (p 4 ). If it did, it would have the form [dv 2 : du 4 : dv 4 ] = [a : 0 : b] with a ≠ 0. But p 4 = (p 3 , l 3 ) with l 3 ⊂ δ 2 1 = T 2 (p 3 ). This implies du 4 | l3 = 0, so u 4 (p 4 ) = 0, which contradicts the fact that u 4 is non-zero in a neighborhood of p 4 .
We have shown that the T 2 critical plane occurs, but T 1 does not, in ∆ 4 (p 4 ) for p 4 in the class RV L 1 T 2 . We conclude that the code RV L 1 T 2 can be amended with letters R, V, T 2 , and L 3 , but not with T 1 , L 1 , or L 2 . Compare with Theorem 2, Figure 2, and the second row of Table 4. Also see Figure 4 for an illustration of this situation.
4. Spelling Rules
In this section we will outline the proof of Theorem 2 from the Introduction, which we restate here.
Theorem 2 (Spelling Rules). In the Semple tower with base R 3 , there exists a point p with RVT code ω if and only if the word ω satisfies:
(1) Every word must begin with R;
(2) R must be followed by R or V;
(3) V and T 1 must be followed by R, V, T 1 , or L 1 ;
(4) T 2 must be followed by R, V, T 2 , or L 3 ;
(5) L 1 , L 2 , and L 3 can be followed by any letter.
Let us begin with an overview of the method of proof. The first two rules are well known and appear in [7] and [3]. Rule (3) can be checked by direct calculation; this is tedious but straightforward and we omit the computation here. The same can be said for the part of rule (5) concerning the letter L 1 . The technique is illustrated by examples in [5] and the three examples above. For example, one finds that for any point p ∈ λL 1 at level k, the plane T 1 (p) is obtained by prolonging the vertical plane from one level below; in other words, T 1 (p) = δ 1 k−1 . Similarly, the plane T 2 (p) is the prolongation of the T 1 plane from one level below. This is independent of the code λ.
To prove the remaining rules, (4) and most of (5), we proceed by induction on the number of letters T 2 , L 2 , or L 3 appearing in the code. This proof is more delicate. Set S = {T 2 , L 2 , L 3 }. For the base case, we must prove that the spelling rules hold for an RVT code ω containing only one letter α ∈ S. For the inductive step, we must prove that the spelling rules hold for an arbitrary code ω, using the inductive hypothesis that the rules hold for any code containing fewer letters α ∈ S. In both steps, we assume without loss of generality that the letter α appears at the end of the code in question.
Unfortunately (but perhaps unsurprisingly given the examples above), this method requires investigating a large number of specific cases, as well as a considerable number of tedious calculations. We therefore spare the reader details of all cases, and the lengthy but routine computations which are required to prove each spelling rule rigorously. Instead, we will focus in detail on one particular rule: the fourth. We hope that this approach will yield sufficient detail to introduce the mechanics of the method to the reader, while sparing the reader dozens of pages of nearly identical calculations. We chose these particular cases as they exhibit generally typical behavior, but with a few of the subtleties which necessitate special care and patience. 4.1. Base Case. We assume rules (1) -(3) have been proved. Here we will provide details for rule (4); the remaining proofs are very similar. To this end, let ω be an RVT code of length k, ending with the letter T 2 . We will show that codes ωR, ωV, ωT 2 , and ωL 3 do occur at level k + 1, while ωT 1 , ωL 1 , and ωL 2 are impossible. We prove this by induction on the number of letters α ∈ S = {T 2 , L 2 , L 3 } appearing in ω.
We first prove the base case. Assume ω = λT 2 , where λ does not contain any letter from S. We will show that rule (4) holds for this ω. We prove this by considering the potential letters preceding T 2 . By rules (2) and (3), T 2 cannot be preceded by R or V or T 1 . Since we have assumed that λ contains no letters from S, we know T 2 cannot be preceded by T 2 , L 2 , or L 3 . We therefore consider the only remaining possibility: T 2 is preceded by L 1 . Note that for convenience we will use λ to denote any sub-code of ω, regardless of its length.
So we proceed assuming our code has the form ω = λL 1 T 2 , where λ contains no elements from S. Thus, the predecessor of L 1 can only be V, T 1 , or L 1 . We have three possible cases.
Case 1. Assume our code has length k and is of the form ω = λV T m 1 L 1 T 2 with m ≥ 0. If m = 0, then V precedes L 1 ; if m ≥ 1, then T 1 does. The third possibility, where L 1 precedes L 1 , is treated as a separate case below.
In fact, we can assume without loss of generality that ω = RV T m 1 L 1 T 2 . This is valid because the plane T 2 (p k ) is the (possibly multi-step) prolongation of some vertical plane from a lower level. That is, T 2 (p k ) = δ j i for some Baby Monster, and this subtower could not have been born at a level below the last letter V in the RVT code. Now consider ω = RV T m 1 L 1 T 2 . We have k = m + 4. We wish to show that the spelling rules hold for ω. That is, we show that the codes ωα are realized for α = R, V, T 2 , L 3 , but are impossible for α = T 1 , L 1 , L 2 . Since there are regular and vertical directions in each distribution plane, it is clear that α = R and α = V are possible. Recall from Definition 3 that L 1 = V ∩ T 1 , L 2 = T 1 ∩ T 2 , and L 3 = V ∩ T 2 . It therefore suffices to show that α = T 2 is possible, while α = T 1 is not.
The proof here is nearly identical to that provided in Example 3. In fact, that example gives precisely the case where m = 0. Recall that in that case, T 1 could not appear and T 2 (p 4 ) = δ 3 1 . For m ≥ 1, we easily verify that, again, T 1 cannot appear, and T 2 (p m+4 ) = δ m+3 1 . Case 2: ω = λL 1 T m 1 L 1 T 2 , m ≥ 1. This case is nearly identical to the previous. Here, one finds again that the vertical plane in ∆ k−m−3 prolongs m + 3 times to give the plane T 2 (p k ).
Case 3: ω = λL 1 L 1 T 2 . The method here is the same as in Case 1, so we will omit some of the readily checked details. Again suppose the length of ω is k. Then ∆ k is coframed by [df k : du k : dv k ], and T 2 (p k ) would have the form [df k : du k : dv k ] = [a : b : 0] with a ≠ 0 and df k = dv k−2 . Its projection in ∆ k−1 will have the form [df k−1 : du k−1 : dv k−1 ] = [a : b : 0] with a ≠ 0 and df k−1 = dv k−2 . Its projection in ∆ k−2 will have the form [df k−2 : du k−2 : dv k−2 ] = [a : 0 : b] with a ≠ 0 and df k−2 = dv k−3 . Finally, its projection in ∆ k−3 will have the form [df k−3 : du k−3 : dv k−3 ] = [0 : a : b] with a ≠ 0. At this point, we can see that this is the vertical plane V (p k−3 ), so we find that T 2 (p k ) does indeed exist in ∆ k , and that it is equal to δ 3 k−3 . A computation similar to this one and those found in Example 3 shows that T 1 (p k ) cannot exist. In short, one repeats this computation beginning with T 1 (p k ) of the form [df k : du k : dv k ] = [a : 0 : b] with a ≠ 0, and at some point a contradiction is obtained in that some coordinate is forced to be both zero and nonzero.
This establishes the base case for the proof of rule (4) by induction. We showed that rule (4) holds for any RVT code containing a single member of S (which, in the context of rule (4), must naturally be the letter T 2 .) These three cases comprise the top three rows in Table 4. The remaining cases are displayed as the lower six rows in Table 4; their proofs are similar.
4.2. Inductive Step. We now take ω to be an arbitrary RVT code of length k. We assume that ω ends with some letter from S, and we will show that the spelling rules hold for ω. Our inductive hypothesis states that the spelling rules hold for any code which contains fewer letters from S than ω does. As above, we will focus on rule (4), so our code should end with the letter T 2 . So we have ω = λT 2 , and our inductive hypothesis allows the assumption that λ satisfies the spelling rules. We wish to show that, at level k + 1, the codes ωα are realized for α = R, V, T 2 , L 3 , but are impossible for α = T 1 , L 1 , L 2 . Since there are regular and vertical directions in each distribution plane, it is clear that α = R and α = V are possible. Recall from Definition 3 that L 1 = V ∩ T 1 , L 2 = T 1 ∩ T 2 , and L 3 = V ∩ T 2 . It therefore suffices to show that α = T 2 is possible, while α = T 1 is not. Now since λ clearly has (exactly one) fewer letters from S than ω does, it must obey the spelling rules by assumption. So T 2 must be preceded by either T 2 , L 1 , L 2 , or L 3 . There are four cases here, but we will give details for just the first and second. The other two are nearly identical.
Since p k ∈ λT 2 T 2 , the same argument shows that T 2 (p k−1 ) consists of the directions [df k−2 : du k−1 : dv k−1 ] = [1 : u k : v k ] with v k = 0 and u k not identically zero. Moreover, we see that df k = df k−1 = df k−2 . Now as an ansatz, suppose T 2 (p k ) indeed appears in ∆ k . Then it would have the form [df k−2 : du k : dv k ] = [a : b : 0] with a ≠ 0. Its projection one level down would have the form [df k−2 : du k−1 : dv k−1 ] = [a : b : 0] with a ≠ 0. We recognize this as T 2 (p k−1 ), which we know exists in ∆ k−1 . Therefore T 2 (p k ) indeed exists as it is the prolongation of T 2 (p k−1 ), and our ansatz is justified.
Finally, assume for sake of contradiction that T 1 (p k ) appears in ∆ k . It would have the form [df k−2 : du k : dv k ] = [a : 0 : b] with a ≠ 0. Its projection one level down would have the form [df k−2 : du k−1 : dv k−1 ] = [a : 0 : b] with a ≠ 0. This forces du k−1 = 0. But we saw above that a local fiber coordinate at p k−1 is u k = du k−1 /df k−2 , and u k is not identically zero. This contradiction disproves the existence of T 1 (p k ) in ∆ k .
Since p k ∈ λL 1 T 2 , we can similarly see that T 2 (p k−1 ) consists of the directions [dv k−2 : du k−1 : dv k−1 ] = [1 : u k : v k ] with v k = 0 and u k not identically zero. Moreover, we see that df k = dv k−2 . Now as an ansatz, suppose T 2 (p k ) indeed appears in ∆ k . Then it would have the form [dv k−2 : du k : dv k ] = [a : b : 0] with a ≠ 0. Its projection one level down would have the form [dv k−2 : du k−1 : dv k−1 ] = [a : b : 0] with a ≠ 0. We recognize this as T 2 (p k−1 ), which we know exists in ∆ k−1 . Therefore T 2 (p k ) indeed exists as it is the prolongation of T 2 (p k−1 ), and our ansatz is justified.
Finally, assume for sake of contradiction that T 1 (p k ) appears in ∆ k . It would have the form [dv k−2 : du k : dv k ] = [a : 0 : b] with a ≠ 0. Its projection one level down would have the form [dv k−2 : du k−1 : dv k−1 ] = [a : 0 : b] with a ≠ 0. This forces du k−1 = 0. But we saw above that a local fiber coordinate at p k−1 is u k = du k−1 /dv k−2 , and u k is not identically zero. This contradiction disproves the existence of T 1 (p k ) in ∆ k .
Geodetic Seafloor Positioning Using an Unmanned Surface Vehicle—Contribution of Direction-of-Arrival Observations
Precise underwater geodetic positioning remains a challenge. Measurements combining surface positioning (GNSS) with underwater acoustic positioning are generally performed from research vessels. Here we tested an alternative approach using a small Unmanned Surface Vehicle (USV) with a compact GNSS/Acoustic experimental set-up, easier to deploy, and more cost-effective. The positioning system included a GNSS receiver directly mounted above an Ultra Short Baseline (USBL) module integrated with an inertial system (INS) to correct for the USV motions. Different acquisition protocols, including box-in circles around transponders and two static positions of the USV, were tested. The experiment conducted in the shallow waters (40 m) of the Bay of Brest, France, provided a data set to derive the coordinates of individual transponders from two-way-travel times, and direction of arrival (DOA) of acoustic rays from the transponders to the USV. Using a least-squares inversion, we show that DOAs improve single transponder positioning both in box-in and static acquisitions. From a series of short positioning sessions (20 min) over 2 days, we achieved a repeatability of ~5 cm in the locations of the transponders. Post-processing of the GNSS data also significantly improved the two-way-travel times residuals compared to the real-time solution.
INTRODUCTION
In plate tectonics, precise positioning of points on the seafloor is a key for applications ranging from precise in situ plate motion to local-fault loading assessment. Since Spiess et al. (1998), numerous studies have demonstrated that combining surface GNSS positioning with underwater acoustic positioning, known as the GNSS/A approach, is an adequate methodology for this purpose.
GNSS/A positioning is generally performed from research vessels, which are precisely positioned by GNSS, offer facilities to deploy acoustic transponders on the seafloor, and are often equipped with an acoustic modem and an inertial system to monitor the ship's motions. However, such vessels may generate unwanted acoustic noise, particularly when maintaining a fixed position above transponders; in addition, the offsets between the GNSS antennas on a mast, the underwater acoustic modem and the inertial system may not be known accurately enough to correct for the lever arms between them. Since GNSS/A data usually need to be simultaneously acquired for several hours above a network of transponders (e.g., Gagnon et al., 2005;Yasuda et al., 2017;Ishikawa and Yokota, 2018), using a large vessel may also not be cost-effective.
To mitigate these drawbacks, we tested a GNSS/A experiment with a small Unmanned Surface Vehicle (USV). Such devices are now commonly used in marine surveys, to retrieve data from seafloor instruments or to directly acquire data (e.g., Berger et al., 2016;Chadwell et al., 2016;Penna et al., 2018;Foster et al., 2020). Our USV was equipped with a GNSS antenna mounted directly above an Ultra Short Baseline (USBL) system integrated with an inertial system (INS). We thus combined a silent vehicle (electrical propulsion) with a compact GNSS/A system whose lever arm is reduced to ∼1 m. Here we report the results from an experiment conducted with this autonomous system in shallow waters to position transponders laid on the seafloor. The acquired data allowed us to test and improve a method for positioning a single transponder that takes advantage of using a USBL instead of a simple acoustic modem, the former providing more information than just two-way travel times.
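The benefit of DOA observations on top of two-way travel times can be illustrated with a toy simulation. The sketch below assumes straight-ray acoustics in a constant sound-speed field and a synthetic box-in circle of USV fixes; the transponder coordinates, noise levels, and the Gauss-Newton scheme are illustrative assumptions of ours, not the authors' actual processing chain.

```python
import numpy as np

# Toy geometry: hypothetical transponder and a box-in circle of USV fixes.
rng = np.random.default_rng(0)
x_true = np.array([10.0, -5.0, -38.0])       # transponder position (ENU, m)

t = np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False)
usv = np.stack([40.0 * np.cos(t), 40.0 * np.sin(t), np.zeros_like(t)], axis=1)

vec = x_true - usv                           # USV-to-transponder vectors
true_rng = np.linalg.norm(vec, axis=1)
ranges = true_rng + rng.normal(0.0, 0.05, t.size)   # ranges = c * TWTT / 2, noisy
doa = vec / true_rng[:, None]                # noise-free direction-of-arrival

# (a) Range-only solution: Gauss-Newton on |x - p_i| = r_i.
x = np.array([0.0, 0.0, -30.0])
for _ in range(10):
    d = np.linalg.norm(x - usv, axis=1)
    J = (x - usv) / d[:, None]               # Jacobian of |x - p_i| w.r.t. x
    dx, *_ = np.linalg.lstsq(J, ranges - d, rcond=None)
    x = x + dx

# (b) Adding DOA: each ping gives a direct fix x ~ p_i + r_i * u_i, and the
# least-squares solution of the stacked linear equations is simply the mean.
x_doa = (usv + ranges[:, None] * doa).mean(axis=0)

print(np.round(x, 2), np.round(x_doa, 2))
```

Both estimates recover the transponder to within a few centimeters here; in practice the DOA rows also stabilize the vertical component, which is weakly constrained by surface ranges alone.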
EXPERIMENTAL SETUP
The principle of a GNSS/A experiment is to use a surface platform as a relay between surface positioning relative to satellites and underwater acoustic positioning relative to transponders fixed on the seafloor. In this experiment, carried out in the Bay of Brest, France, in July 2019, we tested a set of new instruments:
• four CANOPUS transponders developed by the iXblue company (section 2.1);
• a small USV catamaran designed by the L3 Harris-ASV company, equipped with a GNSS receiver and a GAPS integrated USBL/INS system, also from iXblue (section 2.2).
The Underwater Transponders
The CANOPUS transponders (Complex Acoustic Network for Offshore Positioning and Underwater Surveillance) are a new generation of acoustic transponders developed by iXblue. These transponders handle underwater acoustic communication, signal processing, and algorithms, and offer improved performance for acoustic positioning of submarine vehicles or for geodetic experiments. The CANOPUS transponders were developed with a long autonomy (up to 3-4 years) and operate at depths of up to 4,000 m. In July 2019, four CANOPUS transponders were deployed in the shallow waters of the Bay of Brest from R/V Albert Lucas (Figures 1, 2). The transponders were mounted on tripods, placing the acoustic heads 1.5 m above the seabed, and immersed at an average depth of 38 m. The initial objective was to form a 30 m quadrilateral, but unfortunately one of the tripods tipped over during deployment and the final geometry ended up being a nearly isosceles triangle with ∼30 m sides.
The CANOPUS transponders are omni-directional and measure inter-transponder two-way travel times. This information can be used to measure relative displacements between transponders (e.g., Sakic et al., 2016;Lange et al., 2019;Petersen et al., 2019) or to constrain GNSS/A multi-transponder array positioning (e.g., Sweeney et al., 2005;Sakic et al., 2020). They can also communicate with the surface for telemetry or positioning purposes using a USBL or an acoustic modem. The transponders were equipped with temperature and pressure sensors, but this information was not used here, since we collected sound-speed profiles during the acquisition sessions. The transponder inclinometers showed that the tripods remained stable throughout the experiment.
The Surface Platform and Positioning Systems
An innovative aspect of this experiment was to mount the GNSS/A positioning system on an Unmanned Surface Vehicle (USV) named PAMELi (Plateforme Autonome Multicapteurs pour l'Exploration du Littoral-Autonomous Multisensor Platform for Coastal Exploration). The PAMELi project was developed by La Rochelle University for repeated and multidisciplinary monitoring of shallow coastal areas (Chupin et al., 2020). The vehicle, built by ASV, is a small battery-powered catamaran (3 m-long, 1.6 m-wide, weighing 300 kg), remotely controlled from a mother-ship or land through Wi-Fi, GSM, or VHF communications. Capable of cruising at 3-4 kn, it has an endurance of about 8 h. Profiles can be pre-programmed or set up interactively by remote control; in addition, with a propeller on each of its floats, the USV can maintain a stationary position within a radius given by the operator. Data from the mounted sensors can be telemetered to the operator and/or stored internally.
The GNSS receiver was a Spectra SP80, able to track and record signals from several GNSS constellations. The sampling rate was set at 1 Hz during the whole experiment. The Real-Time Kinematic (RTK) positioning mode was used to provide real-time positions to the GAPS system. The GNSS antenna was mounted directly above the underwater acoustic system on the keel of the USV (Figure 2).
The acoustic system was a GAPS (Global Acoustic Positioning System) M7 integrated USBL/INS device, manufactured by iXblue. Such devices are commonly used on oceanographic vessels for precise positioning of underwater devices or vehicles. The GAPS is a 64 cm-high and 30 cm-wide cylinder with four legs (Figure 2). The acoustic signal is emitted by a central acoustic head and received by an antenna made of four hydrophones ca. 21 cm apart. This design allows measuring both the two-way travel times and the direction of the return signal from the interrogated underwater device, here the transponders. In an optimal configuration, i.e., for an SNR ≥ 20 dB, the GAPS has a range accuracy of 2 cm and a bearing accuracy of 0.03°. The signal uses a frequency-shift keying modulation carried by a 26 kHz signal. The GAPS is able to range every 0.8 s; for this experiment, it was configured to range the transponders every 2 s. The ship's motions are corrected for by the GAPS' inertial system (INS), which also filters out spurious real-time GNSS positions. This INS has an accuracy of 0.01° on the heading, roll, and pitch components. With a GNSS receiver connected to the GAPS, the vertical and horizontal displacements of the selected center of mass of the system (acoustic head or INS center) were thus fully constrained (pitch, roll, latitude, longitude, heading). The acoustic system, nearly weightless in seawater, was immersed at the front of the USV, away from the propellers, at ∼1 m depth (Figure 2). Despite this additional keel, the USV remained very maneuverable and operated smoothly in winds up to 12 kn. The GNSS and GAPS data acquisitions were monitored in real time from R/V Albert Lucas. The recorded noise was most of the time below 60 dB re 1 µPa/√Hz, whereas on a regular vessel the noise would range between 70 and 85 dB.
Experimental Protocols
The GNSS/A experiment was carried out from July 23 to 25, 2019 in the Bay of Brest, during the GEODESEA-2019 experiment (Royer et al., 2021). Its goals were (1) to test the CANOPUS transponders and their auxiliary sensors, and (2) to test the feasibility of GNSS/A positioning from a USV. The Bay of Brest provided a convenient area, close to port and sheltered from the open-sea swell. The nearest permanent GNSS station (BRST) was only 8 km away from the deployment area (see also section 3.2). Five vertical sound-velocity profiles were acquired during the experiment using a CTD probe; the profiles are shown in Figure 4. To avoid strong tidal currents, the experiment took place during a neap-tide period (coefficients 50 to 44) and weather conditions were sunny and calm. The tides had a 2-3 m amplitude (i.e., the depth of the transponders varied by that amount about an average depth of 38 m).
During deployment, transponder TP#4 tipped over but, despite its transducer resting on the ground, operated at nominal capacity. Still, this transponder will not be considered here. For the absolute positioning test, three different acquisition protocols were tested (Figures 3, 4):
• In Box-in mode, the USV navigated for about 20-30 min along repeated circles of 10 m diameter (about 1/4th of the water depth) centered on the transponder of interest. The shooting angle w.r.t. the vertical was thus ∼12° and the average slant range was equal to the depth. The circles were traveled both clockwise and anti-clockwise;
• In Static-above mode, the USV remained stationary for 10-15 min within a 3 m circle centered above the considered transponder;
• In Static-slanted mode, the USV remained stationary for 1 h within a 3 m circle centered above the barycenter of the triangle formed by the three upright transponders. Acoustic rays were then slanted by about ∼20°. In this mode, the GAPS ranged each transponder in turn.
Inputs and Outputs
The ultimate objective of GNSS/A positioning is to determine the coordinates of the seafloor transponders. For each transponder i, we note its coordinates X_Ri = [x_Ri, y_Ri, z_Ri]. We assume these coordinates fixed and stable during the whole experiment, since the transponders are installed on rigid and ballasted metal tripods. These coordinates will be derived from the following observations (represented in Figure 5).
FIGURE 5 | Components and vector representation of the GNSS/A system (description in section 3.1).
• The positions of the embarked surface devices X_S = [x_S, y_S, z_S], provided by GNSS observations. Since the USV is moving, X_S is a function of time and thus we have, for each epoch t of the experiment, X_S(t). The embarked devices are, namely:
- the GNSS antenna, whose position is X_GNSS(t);
- the GAPS acoustic head emitting the acoustic pings, whose position is X_AHD(t);
- the GAPS four hydrophones receiving the returned pings, whose positions are X_HPj(t), where j ∈ [1, 4].
• The tie vectors X_MEC (also known as lever arms), which link the different surface devices in the mechanical frame MEC of the USV. These vectors were measured manually on-shore before the USV deployment;
• The attitude of the USV, i.e., the heading α, pitch β, and roll γ angles recorded by the Inertial Motion Unit (IMU) integrated in the GAPS, so that the attitude is known for each epoch t;
• The two-way travel times (TWTT) τ_i measured between the GAPS and the seafloor transponders for each acoustic ping i. Each TWTT is received at instant t_TWTT,rec,i. The emission instant t_TWTT,emi,i of the corresponding ping is determined by the relation t_TWTT,rec,i = t_TWTT,emi,i + τ_i + τ_TAT. The τ_TAT (for Turn Around Time) is a preset delay before the transponder replies to an interrogating signal. Since the four hydrophones record separately, one emitted ping i yields four TWTT values τ_i,j and four distinct reception instants t_TWTT,rec,i,j;
• The directions of arrival (also known as direction cosines, called DOA hereinafter), corresponding to the unit vectors between the GAPS and each transponder. Here, we used the DOA values directly estimated by the GAPS interface. In addition to the TWTTs, for each ping i, the DOA vector is defined as D_i = [d_x,i, d_y,i, d_z,i], which we assumed normalized;
• And a sound-speed profile (SSP) made of two vectors Z and C, respectively the depths and corresponding sound speeds. Since this experiment took place in shallow waters over a short time period, we assumed that the sound-velocity field was homogeneous (Sakic et al., 2018). From the sound-speed profile, a harmonic mean value c̄ can be determined as c̄ = (z_max − z_min) / ∫ dz/c(z). This simplification allowed us to estimate a correction δc to this mean value. When processing each acquisition mode, described hereafter, we applied the nearest available SSP.
To simplify the calculations, the coordinates of the transponders will be determined in a local topocentric reference frame with North, East, and Down axes, hereafter called NED. We arbitrarily chose the NED frame origin [x_0, y_0, z_0] as the center of gravity of the array of the three upright transponders.
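The rotation from an ECEF offset into such a local NED frame can be sketched as follows (an illustrative implementation of the standard formula, e.g., Grewal et al., 2007; the origin coordinates are approximate Bay of Brest values chosen for the example, not the actual frame origin):

```python
import math

def ecef_to_ned(dx, dy, dz, lat0_deg, lon0_deg):
    """Rotate an ECEF offset vector (relative to the NED origin) into NED."""
    lat = math.radians(lat0_deg)
    lon = math.radians(lon0_deg)
    n = (-math.sin(lat) * math.cos(lon) * dx
         - math.sin(lat) * math.sin(lon) * dy
         + math.cos(lat) * dz)
    e = -math.sin(lon) * dx + math.cos(lon) * dy
    d = (-math.cos(lat) * math.cos(lon) * dx
         - math.cos(lat) * math.sin(lon) * dy
         - math.sin(lat) * dz)
    return n, e, d

# Sanity check: a 100 m offset along the local vertical maps to ~(0, 0, -100)
lat0, lon0 = 48.3, -4.5  # approximate Bay of Brest coordinates (assumed)
up = (math.cos(math.radians(lat0)) * math.cos(math.radians(lon0)),
      math.cos(math.radians(lat0)) * math.sin(math.radians(lon0)),
      math.sin(math.radians(lat0)))
n, e, d = ecef_to_ned(100 * up[0], 100 * up[1], 100 * up[2], lat0, lon0)
```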
GNSS Processing
To determine the GAPS position at the emission and reception times in a terrestrial reference frame, the GNSS position of the USV must be transferred to the acoustic system. During the experiment, the former was determined from real-time kinematic (RTK) positioning. Through the cellular network, the SP80 receiver downloaded real-time corrections from the French TERIA network and achieved centimetric positioning accuracy (Chambon, 2019). This method is commonly used in geodetic field experiments whenever an RTK network is accessible. Transferring accurate real-time positions from the GNSS to the GAPS also allowed the INS to be realigned, limiting the effects of drift.
To test the quality of our real-time RTK positioning, the GNSS data were post-processed with the RTKLIB software, using the double-difference method (Takasu and Yasuda, 2009). We used the BRST permanent station as the reference base (Figure 1); this station, located ∼8 km from our working area, is part of the French permanent RGP-GNSS network managed by IGN (the French national mapping agency). Since the real-time coordinates were given in the French national reference system RGF93 (Duquenne, 2018), for the sake of consistency, we computed all post-processed coordinates in this reference system. We considered both the GPS data alone (hereinafter called "GPS-only mode") and all the available data, including those of the Galileo and GLONASS constellations (hereinafter called "multi-GNSS mode"). In the absence of operational IGS multi-GNSS products so far (Mansur et al., 2020;Sośnica et al., 2020), we used the GFZ multi-GNSS products for orbit and clock corrections (Deng et al., 2017;Männel et al., 2020). The other GNSS processing parameters are summarized in Table 1.
Transfer of the GNSS Position to the GAPS
The objective is to determine the GAPS position at the emission and reception instants. To do so, the GNSS-antenna position must be transferred to the different GAPS components (emitter and receivers).
The input data involved in this operation are:
• The positions of the main GNSS antenna in a global Earth-centered, Earth-fixed reference frame ECEF, either geocentric (x_i, y_i, z_i) or geographic (φ_i, λ_i, h_i), at sampling times t_GNSS,i. We call them X_ECEF,GNSS(t_GNSS,i);
• The heading α, pitch β, and roll γ angles of the USV at the sampling times t_INS,i;
• The tie vectors between these devices in the USV internal mechanical frame MEC. If we consider the GAPS IMU reference point as the origin of this frame, then X_MEC,IMU = [0, 0, 0]. The coordinates of the GNSS-antenna reference point (X_MEC,GNSS), of the acoustic head (X_MEC,AHD), and of the four hydrophones (X_MEC,HPj) are thus expressed with respect to the IMU reference point. To simplify the notation, X_MEC,AHD and X_MEC,HPj are hereinafter assimilated to the same vector X_MEC,S.
Thus, the objective is to get, in the NED topocentric reference frame, the coordinates of the GAPS (X_NED,S(t)) at the ping emission (t_emi,i) and reception (t_rec,i) instants. The USV position X_ECEF,GNSS is transferred into the NED frame using the formula described, for instance, by Grewal et al. (2007), and thus we have X_NED,GNSS for any sampling instant t_GNSS,i. We then performed a linear interpolation to obtain the exact positions of the platform at the ping emission and reception instants. Meanwhile, the on-board device coordinates in the MEC frame are transferred to the "instantaneous topocentric frame" iNED. It corresponds to a transformation of the MEC frame whose axes are co-linear to the NED ones, i.e., obtained by applying the USV attitude R to the tie vectors. To do so, we associated a quaternion q(t_INS,i) to each attitude record R(t_INS,i) of the IMU (Großekatthöfer and Yoon, 2012). Since the procedure is the same for the emission and reception instants, we denote both t_emi,i and t_rec,i by t_i. Then, using the Slerp attitude interpolation method (Kremer, 2008), we determine the attitude of the platform at instant t_i, represented by the quaternion q_i.
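The Slerp step can be illustrated with a minimal pure-Python sketch (an assumed implementation, not the authors' code); quaternions are in (w, x, y, z) order and the heading-only example values are invented:

```python
import math

def slerp(q0, q1, f):
    """Spherical linear interpolation between unit quaternions q0 and q1.

    f in [0, 1] is the normalized time (t - t0) / (t1 - t0).
    """
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:              # take the short path on the quaternion sphere
        q1 = tuple(-c for c in q1)
        dot = -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)     # angle between the two attitudes
    if theta < 1e-9:           # nearly identical attitudes: return q0
        return q0
    s0 = math.sin((1.0 - f) * theta) / math.sin(theta)
    s1 = math.sin(f * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

# Heading-only example: 0 and 90 deg yaw, interpolated halfway -> 45 deg
q_0deg = (1.0, 0.0, 0.0, 0.0)
q_90deg = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
q_mid = slerp(q_0deg, q_90deg, 0.5)
yaw_mid = math.degrees(2.0 * math.atan2(q_mid[3], q_mid[0]))
```

Unlike componentwise linear interpolation, Slerp keeps the interpolated quaternion on the unit sphere and sweeps the rotation angle at a constant rate.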
From this operation, the positions of the GNSS antenna and of the GAPS acoustic head and hydrophones in the iNED frame at the transmission and reception instants are determined by rotating the tie vectors with the interpolated attitude quaternion:
X_iNED,GNSS(t_i) = q_i ⊗ X_MEC,GNSS ⊗ q_i⁻¹, X_iNED,S(t_i) = q_i ⊗ X_MEC,S ⊗ q_i⁻¹.
It follows that the vector T_i between the iNED positions of the GNSS and the GAPS is
T_i = X_iNED,S(t_i) − X_iNED,GNSS(t_i).
Then, the NED position of the GNSS can be transferred to the GAPS by translation:
X_NED,S(t_i) = X_NED,GNSS(t_i) + T_i.
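This rotation-plus-translation transfer can be sketched as follows (hypothetical helper names; the tie vectors and positions in the example are invented):

```python
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z): q * v * q^-1."""
    w, x, y, z = q
    vx, vy, vz = v
    # optimized form: t = 2 (u x v); v' = v + w t + u x t, with u = (x, y, z)
    tx = 2.0 * (y * vz - z * vy)
    ty = 2.0 * (z * vx - x * vz)
    tz = 2.0 * (x * vy - y * vx)
    return (vx + w * tx + (y * tz - z * ty),
            vy + w * ty + (z * tx - x * tz),
            vz + w * tz + (x * ty - y * tx))

def transfer_gnss_to_gaps(x_ned_gnss, q_i, tie_gnss, tie_gaps):
    """X_NED,S = X_NED,GNSS + R(q_i) * (X_MEC,S - X_MEC,GNSS)."""
    t_i = quat_rotate(q_i, tuple(s - g for s, g in zip(tie_gaps, tie_gnss)))
    return tuple(p + t for p, t in zip(x_ned_gnss, t_i))

# Example: GAPS 1 m forward of the antenna, USV yawed 90 deg about Down
q_yaw90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
x_gaps = transfer_gnss_to_gaps((10.0, 20.0, 0.0), q_yaw90,
                               (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```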
Least-Squares Model
Finally, we used a least-squares (LSQ) inversion (e.g., Strang and Borre, 1997;Ghilani, 2011) to estimate the desired parameters, namely the transponder coordinates X_Ri and the sound-speed correction δc. The observables, in the least-squares sense, are the TWTTs τ_i and the DOAs D_i, with associated observation functions f_TWTT and f_DOA:
τ_i = f_TWTT(X_Ri, δc) = ( ‖X_Ri − X_AHD(t_emi,i)‖ + ‖X_HPj(t_rec,i,j) − X_Ri‖ ) / (c̄ + δc),
D_i = f_DOA(X_Ri) = (X_Ri − X_S(t_rec,i)) / ‖X_Ri − X_S(t_rec,i)‖.
To establish the Jacobian matrix A, we need the partial derivatives of f_TWTT and f_DOA with respect to the estimated parameters. Then, the problem is solved with an approach similar to the one described by Sakic et al. (2020). The adjustment δX on the a priori values X_0 of the transponder coordinates and the sound-speed correction is given by the relation
δX = (AᵀPA)⁻¹ AᵀPB,
where A is the Jacobian in the neighborhood of X_0, P is the weight matrix (equal to the identity if the DOAs are ignored), and B corresponds to the differences between the observations and the theoretical quantities determined by f_TWTT(X_0) and f_DOA(X_0). In the end, the observation residuals are given by
V = B − A δX.
Since the algorithm needs several steps k to converge, we used an iterative process where the estimated values become the new a priori values at step k + 1, so that X_0,k + δX_k = X_0,k+1. The iterations stop when the convergence criterion is met, in our case when δX_k < 10⁻⁵ m. This generally occurs after the fourth or fifth iteration.
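A stripped-down version of this iterative inversion, reduced to the TWTT observable and the three transponder coordinates, can be sketched as below. All simplifications are for illustration only: straight rays, co-located emitter and receiver, a fixed mean sound speed, unit weights, and a simulated box-in geometry.

```python
import math

C = 1500.0  # assumed mean sound speed (m/s), held fixed in this sketch

def solve3(a, b):
    """Solve a 3x3 linear system a * x = b by Gauss-Jordan elimination."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def gauss_newton(surface_pts, twtts, x0, tol=1e-5, max_iter=20):
    """Iteratively adjust the transponder position from TWTT observations."""
    x = list(x0)
    for _ in range(max_iter):
        ata = [[0.0] * 3 for _ in range(3)]
        atb = [0.0] * 3
        for (sx, sy, sz), tau in zip(surface_pts, twtts):
            d = math.dist(x, (sx, sy, sz))
            resid = tau - 2.0 * d / C                  # B = observed - modeled
            row = [2.0 * (x[k] - p) / (C * d)          # d(tau)/d(x_k)
                   for k, p in enumerate((sx, sy, sz))]
            for i in range(3):
                atb[i] += row[i] * resid
                for j in range(3):
                    ata[i][j] += row[i] * row[j]
        dx = solve3(ata, atb)                          # normal equations
        x = [xi + di for xi, di in zip(x, dx)]
        if max(abs(di) for di in dx) < tol:            # convergence criterion
            break
    return x

# Simulated box-in: 16 surface points on a 5 m-radius circle above the target
true_pos = (12.0, -7.0, 38.0)
angles = [k * math.pi / 8.0 for k in range(16)]
pts = [(true_pos[0] + 5.0 * math.cos(a),
        true_pos[1] + 5.0 * math.sin(a), 1.0) for a in angles]
obs = [2.0 * math.dist(true_pos, p) / C for p in pts]
est = gauss_newton(pts, obs, x0=(10.0, -5.0, 35.0))
```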
Outlier Detections
To eliminate outliers, both in the TWTTs and the DOAs, we use the MAD (Median Absolute Deviation) method (Leys et al., 2013). For a set of observations L, the MAD is defined as
MAD(L) = median( |l_i − median(L)| ).
Then, for each observation l_i (where l_i ∈ {τ_i, d_x,i, d_y,i, d_z,i}), we form the normalized deviation
M_i = b |l_i − median(L)| / MAD(L),
where b is a coefficient related to the statistical distribution of the data considered (Rousseeuw and Croux, 1993). If the distribution is normal, b ≈ 0.67449. Then, if M_i > s, l_i is eliminated as an outlier for the next iteration; s is a threshold, and typically we take s = 3 if the distribution is normal.
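The MAD screening can be sketched in a few lines (an illustrative implementation; the TWTT values are invented, with one spurious range inserted):

```python
from statistics import median

def mad_outliers(values, b=0.67449, s=3.0):
    """Flag outliers by normalized deviation from the median.

    b assumes a normal distribution; s is the rejection threshold.
    Returns a same-length list of booleans (True = outlier).
    """
    med = median(values)
    mad = median([abs(v - med) for v in values])
    # M_i = b * |l_i - median| / MAD, compared against the threshold s
    return [b * abs(v - med) / mad > s for v in values]

twtts = [0.0504, 0.0502, 0.0503, 0.0505, 0.0901, 0.0503]  # one spurious TWTT
flags = mad_outliers(twtts)
```

The median-based scale makes the test robust: a single wild value barely moves the median or the MAD, so it cannot mask itself the way it would inflate a mean-and-standard-deviation test.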
Parameterization
The data acquired during the experiment were processed with the model described in the previous sections. The observed TWTTs were weighted by an a priori standard deviation ς_TWTT = 2 × 10⁻⁵ s, corresponding to the time precision given in the GAPS data-sheet. We tested different configurations in which the sound-speed correction δc is estimated or not. We also tested the contribution of the DOAs to the precision and repeatability of the transponder positions. We thus processed the DOAs in three different ways: (1) not used in the inversion, (2) taken into account with a loose a priori standard deviation ς_DOA,l = 10⁻², or (3) considered as fully constrained with ς_DOA,c = 10⁻³. Like the DOAs, the ς_DOA values are unitless and were chosen based on the direction-cosine variations for direction-of-arrival uncertainties δθ at ≈45°. For δθ = 1° and 0.1°, we found ς_DOA = cos(45° + δθ) − cos(45°) ≈ 10⁻² and 10⁻³, respectively.
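These a priori values can be checked numerically (a small illustrative computation, not part of the processing chain):

```python
import math

def sigma_doa(dtheta_deg):
    """Direction-cosine variation at 45 deg for an angular error dtheta."""
    return abs(math.cos(math.radians(45.0 + dtheta_deg))
               - math.cos(math.radians(45.0)))

loose = sigma_doa(1.0)    # ~1e-2: the loosely constrained case
tight = sigma_doa(0.1)    # ~1e-3: the fully constrained case
```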
Test of Different Parameterizations in a Box-In Mode
To test whether the DOAs improve the positioning accuracy, we first exploited the observations made in box-in mode on Day 2. As we focused on the acoustic positioning algorithm, we used the post-processed GPS-only data as the surface positioning solution. We processed each box-in with the six parameterizations described in section 4.1, made up of three ς_DOA modes and two δc modes (estimated or not). The acquisition durations were on the order of ∼25 min, as summarized in Table 2A, along with the number of recorded TWTTs and the percentage of TWTT outliers. When errors are independent from epoch to epoch and follow a Gaussian distribution, having more data reduces the final position error. However, noise in GNSS positioning is known to be colored (Williams, 2004), and ocean properties, such as current, salinity, or temperature, do not evolve as white noise either. Therefore, subsampling the data from the experiment may give results that are highly sensitive to the selected subsampling windows. To test the accuracy (or appropriateness) of the model used in the least-squares inversion, we extracted successive, non-overlapping subsets from the experimental dataset and observed how the resulting transponder position changed between subsets. We can then derive a standard deviation for each parameterization. For each transponder, we divided the total acquisition period into five data subsets, each containing the same number of USV circles around the transponder. Figure 6 shows the resulting locations for the three transponders. Each parameterization is represented by a different color and symbol. The "tripod" symbols represent runs where the δc estimation is disabled, and the triangles represent runs where δc is estimated. The light tripods and empty triangles represent the LSQ inversion results for each of the five data subsets, and the bold/thick equivalent symbols represent the arithmetic means of the five subsets, with their standard deviations.
Filled star symbols in matching colors represent the results of the LSQ inversion for the entire period, with their formal standard deviations. Standard deviations for each parameterization are listed in Table 3.
First of all, solutions where δc is estimated and the DOAs are not used or loosely constrained yield high standard deviations and a poor compatibility with their complete-period counterparts. This is due to a complete trade-off between the TWTTs and the sound speed. Such parameterizations should thus be avoided. Solutions where δc is not estimated while the DOAs are either unused or loosely constrained give almost equal values (within a millimeter). In a box-in mode, we can conclude that using loose or no constraints on the DOAs yields equivalent results, provided δc is not estimated. It is worth noting that when the DOAs are constrained, the standard deviations are smaller than for the two previous solutions, and the difference with respect to the whole-period solution also decreases, which shows the best stability among these parameterizations.
When the DOAs are constrained, we note a difference for transponders 1 and 3 between solutions depending on whether δc is estimated or not, even if the subset standard deviations are slightly higher when δc is estimated. This dispersion may be due to the relatively small number of TWTTs in each data subset (∼440), which prevents a reliable estimation of the δc parameter. Nevertheless, the residual sum of squares remains smaller by about 2-10% (which is expected, since adjusting an additional parameter reduces the residuals). Thus, in a box-in mode, a solution with constrained DOAs and estimated δc is considered the optimal parameterization.
Note that even if the horizontal standard deviation is mostly below 10 cm, the standard deviation on the depth can reach the meter level. This is due to the high dependence of depth on sound velocity. Nevertheless, the parameterization with constrained DOAs and estimated δc also tends to reduce the dispersion on the vertical component.
Repeatability in Static Mode
The repeatability of the different parameterizations in a static mode can be evaluated from the data collected on Day 2, when two sessions in static-slanted mode were recorded, in the morning (AM) and in the afternoon (PM) (Figure 3B), along with a station above each transponder (Figure 3C). Table 2B summarizes the acquisition sessions, the number of recorded TWTTs, and the outlier ratios. The results are presented in Figure 7 and Table 4.
The three sub-figures in Figure 7 show the resulting locations for the three transponders in static mode. Symbols represent the different parameterizations and colors, the different static acquisition modes. The horizontal and vertical bars give the LSQ formal standard deviations. The black star shows the position estimated from the box-in mode in section 4.2.
Parameterizations where δc is estimated and the DOAs are loosely constrained yield a repeatability of several decimeters between the three sessions, up to a meter when the DOAs are not used and δc is estimated. The dispersion of the solutions decreases depending on whether the DOAs are not used, loosely, or fully constrained, showing that taking the DOAs into account improves the position determination. The best repeatability is obtained with constrained DOAs and δc not estimated. The dispersion then ranges between 1 and 6.5 cm on the North component and 6 and 13.4 cm on the East component. Moreover, these solutions are the most consistent with the best solution obtained in a box-in mode (section 4.2). The dispersion with constrained DOAs is two to three times greater when estimating δc. Thus, in a static mode, estimating the sound speed is not optimal, as it does not improve the solution. Moreover, the solutions without DOAs are constrained only by the small USV displacements induced by the waves and the currents (a perfectly still USV on a perfectly flat sea would lead to a singular design matrix and thus to an under-determined problem). In general, a static acquisition is not optimal for geodetic applications due to the lack of constraint on the vertical component of the USV motion; the addition of a depth sensor (echosounder or pressure sensor) would be needed.
FIGURE 6 | Positions of the three transponders based on the box-in acquisitions, in a local topocentric reference frame. The thin tripod symbols and empty triangles represent the LSQ inversion results for each of the five data subsets. The equivalent bold/thick symbols represent the arithmetic means of the five subset-derived positions with their standard deviations. The colored star represents the result of the LSQ inversion and its formal standard deviation, based on the entire dataset; colors correspond to the different parameterizations used. For the δc estimation, "I" stands for used, and "O" for not used. A few outlier solutions are outside the frame.
Repeatability of Box-In Mode
To further test the repeatability of the box-in mode, we compared the sessions between Day 1 and 2, and particularly the effects of the USV direction when circling the transponders. Thus, we processed separately, for both days and for transponders 1 and 3, periods when the USV was rotating clockwise (CW) or anticlockwise (ACW), and when clockwise + anticlockwise (CW + ACW) acquisitions were combined (Table 2C and Figure 8).
Unfortunately, the raw GNSS observations are not available for the first day (and thus could not be reprocessed), so all these tests use the real-time RTK positions of the USV. Despite the short duration of each session, the results display a very good repeatability between Days 1 and 2 for the combined CW + ACW sessions (Table 5B). The differences are smaller than 3 cm on the North and East components, except for transponder 3, which shows a difference of 7.2 cm on the East component. This difference could be explained by the poor repeatability of the ACW rotation. As expected, the CW + ACW solution is located in the middle of the individual CW and ACW solutions. It is also worth noting that the rotation direction seems to influence the repeatability of the box-in (Table 5A). The horizontal difference is about 5 cm, and up to 8.5 cm for transponder 3 on Day 1. This difference may be due to changes in the water column between successive CW and ACW acquisitions or to an unidentified bias in the lever arms; both effects would average out when combining CW and ACW sessions.
Influence of the GNSS Solution on Seafloor Positioning
To evaluate the effects of GNSS positioning (section 3.2), namely real-time RTK, GPS-only, and multi-GNSS, on the overall solution, we analyzed the TWTT residuals after the least-squares inversion. We considered the three transponder box-in modes presented in section 4.2 and tested the three different GNSS solutions. The inversion is based on constrained DOAs and an adjusted δc. The results are shown in Figure 9 and Table 6. For better readability, the TWTT residuals are converted into distances using the estimated sound speed.
We can see that the GNSS solution has an effect on the TWTT residuals. The standard deviations for transponders 1 and 3 are respectively ∼3.5 and ∼4.5 cm smaller for the post-processed solutions than for the real-time one. The multi-GNSS solution also yields slightly smaller residuals than the GPS-only solution, but the improvement is not significant. For transponder 2, the post-processed vs. real-time difference is less pronounced (∼5 mm) and the GPS-only solution gives smaller residuals. Overall, the post-processed solutions provide smaller TWTT residuals than the real-time solution.
DISCUSSION
In line with previous experiments (Chadwell et al., 2016;Iinuma et al., 2021), this study confirms the feasibility of GNSS/A positioning from a USV. In addition to an easier implementation at a reduced cost, the small size of the USV avoids the complex topometric survey otherwise needed to determine the lever arms of the system (e.g., Chadwell, 2003). Here, the lever arms were measured directly on-shore with a simple ruler, with adequate accuracy. Nevertheless, an a posteriori adjustment of the lever arms in the LSQ model can be valuable (Chen et al., 2019). Regarding the absolute positioning, we used a simple linear interpolation to determine the USV's position at the ping emission and reception epochs. This approach is sufficient for the static modes, since the GNSS position sampling rate is high (1 Hz) and the USV displacements are relatively small (sub-meter level). Nevertheless, a Lagrangian interpolation would be more appropriate when the USV is moving (box-in mode).
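The gain from a higher-order interpolation can be illustrated on a synthetic circular trajectory (all values invented for the example): a degree-2 Lagrange polynomial through three 1 Hz GNSS epochs tracks the curved motion better than the linear scheme.

```python
import math

def lagrange3(ts, ys, t):
    """Evaluate the degree-2 Lagrange polynomial through three samples at t."""
    out = 0.0
    for i in range(3):
        li = 1.0
        for j in range(3):
            if j != i:
                li *= (t - ts[j]) / (ts[i] - ts[j])
        out += ys[i] * li
    return out

def northing(t):
    """Invented USV northing while circling: 5 m radius, 0.5 rad/s."""
    return 5.0 * math.sin(0.5 * t)

ts = [0.0, 1.0, 2.0]                    # 1 Hz GNSS epochs
ys = [northing(t) for t in ts]
t_ping = 1.5                            # acoustic-ping epoch between samples
linear = 0.5 * (ys[1] + ys[2])          # linear between the two neighbors
quad = lagrange3(ts, ys, t_ping)        # 3-point Lagrange interpolation
truth = northing(t_ping)
```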
USV platforms may revolutionize seafloor geodesy in the near future. In addition to more frequent and spatially denser GNSS/A observations, combining multiple platforms (USVs and a ship) can allow a simultaneous monitoring of the sound-speed field in the ocean (Matsui et al., 2019;Ishikawa et al., 2020). This parameter undoubtedly remains the most critical for an accurate underwater geodetic positioning.
Using a USBL (here a GAPS) instead of a simple acoustic modem allows measuring DOAs in addition to TWTTs. Integrating these observations in the least-squares inversion improves the transponder positioning accuracy. We have shown that, both in box-in and static acquisitions, taking the DOAs into account improves the repeatability of the estimated positions between different sessions. This experiment in shallow water (∼40 m) is a proof-of-concept. The repeatability of the sessions in box-in mode is about 5 cm. Such accuracy is not sufficient to measure plate motions or fault slips, which are on the order of a few mm/yr to cm/yr (e.g., Bürgmann and Chadwell, 2014), unless measurements are repeated very often and over half a decade or more. Despite the significant contribution of the DOAs, longer acquisition sessions (i.e., continuous over a few hours) would be necessary in a full-scale experiment. The effect would be to average out the GNSS positioning and acoustic propagation errors, along with internal-wave effects. Moreover, the poor repeatability of static acquisitions clearly shows that additional observations, such as depth, are required to efficiently estimate the sound speed. Since the DOA accuracy is a function of the water depth, DOAs in deep waters may not be as critical as in shallow waters for the precision of seafloor positioning. Further investigations are needed to assess their actual contribution in the deep ocean. In any case, one of the objectives of this work was to explore the contribution of such information, and we believe that DOAs would improve seafloor positioning accuracy, for instance, in an experiment using a single seafloor transponder and several ranging mobile platforms.
They could also be of value for observing fast and/or repeated co-seismic displacements of the seafloor (on the order of several centimeters within a few days), where active tectonics occurs in shallow waters, as for instance off the Vanuatu Islands (e.g., Ballu et al., 2013) or near the Saintes Archipelago (West Indies) (e.g., Bazin et al., 2010).
In this paper, we chose to adjust the sound speed by a simple constant, since the acquisition sessions were short (15 min to 1 h). The sound-speed variability between the beginning and end of a session could thus be neglected to first approximation. However, for longer sessions, adjusting this parameter with a sine, polynomial, or spline function should be preferred (Fujita et al., 2006;Yasuda et al., 2017;Chen et al., 2018;Liu et al., 2019). Moreover, in deeper waters, a more accurate ray tracing should also replace the straight-ray approximation (Chadwell and Sweeney, 2010;Sakic et al., 2018).
CONCLUSION
This experiment in the Bay of Brest, France, was meant to be a proof-of-concept for underwater geodetic positioning from an Unmanned Surface Vehicle. The experimental setup comprised three acoustic transponders on the seafloor and an integrated USBL/INS system coupled with a GNSS receiver mounted on a USV. The locations of the transponders were derived from the recorded two-way travel times between the USV and the transponders, and from the directions of arrival of the returned signals. The GNSS receiver, supplemented by the inertial system, provided the surface positioning. During the experiment and this study, different acquisition trajectories were compared: box-in circles and stations above or slanted relative to the transponders. This paper describes a method to calculate the position of the USBL acoustic head from GNSS observations and attitude measurements. A least-squares model is developed to determine the transponder positions from TWTT and DOA observations, and from an estimation of the acoustic signal propagation speed. Using DOAs improves the repeatability of transponder positioning in box-in and static acquisitions. For single-transponder localization, box-in provides better results than a static acquisition. Over all the sessions spanning 2 days, the resulting positioning repeatability is about 5 cm, despite the short duration of the GNSS/A sessions (∼20 min each). We also demonstrated a smaller dispersion of the TWTT residuals when a post-processed GNSS solution is used instead of the real-time one.
FIGURE 9 | Histograms of the TWTT residuals (converted to distances) based on the different GNSS solutions for the three transponders in box-in mode.
DATA AVAILABILITY STATEMENT
The datasets used in this study can be found in the SEANOE database (www.seanoe.org/data/00674/78593/, doi: 10.17882/78593). Python 3 source codes developed for this article are freely available from an online GitHub repository upon request to the corresponding author.
AUTHOR CONTRIBUTIONS
J-YR and VB conceived, organized, and conducted the experiment. J-YR, VB, TC, MB, CC, P-YM, and PU participated in the data acquisition at sea. PS, VB, and CC elaborated the processing strategy. PS designed and implemented the model, and produced the results. CC post-processed the GNSS data and pre-processed the acoustic data. TC was in charge of setting up and piloting the USV PAMELi. MB designed the transponder layout and acquired the sound-speed profiles. P-YM and PU provided advice and expertise on the GAPS and CANOPUS transponders. J-YR, VB, PS, CC, P-YM, and PU discussed the results, contributed to, and edited the article. All authors contributed to the article and approved the submitted version.
$\tau\to \nu_\tau\rho^0\pi^-$ decay in the Nambu - Jona-Lasinio model
Within the context of an extended Nambu - Jona-Lasinio model, we analyze the role of the axial-vector $a_1(1260)$ and $a_1(1640)$ mesons in the decay $\tau\to\nu_\tau \rho^0\pi^-$. The contributions of pseudoscalar $\pi$ and $\pi (1300)$ states are also considered. The form factors for the decay amplitude are determined in terms of the masses and widths of these states. To describe the radial excited states $\pi (1300)$ and $a_1(1640)$ we introduce two additional parameters which can be estimated theoretically, or fixed from experiment. The decay rate and $\rho\pi$ mass spectrum are calculated.
I. INTRODUCTION
Semihadronic decay modes of the tau lepton remain to the present day a topic of interest to theoreticians as well as experimentalists [1]. One mode of particular interest is the decay τ → ν τ π + π − π − . This decay is governed by the axial-vector hadronic current j A µ and provides a unique possibility to scrutinize our understanding of chiral dynamics in the energy range of 1 − 2 GeV, where perturbative QCD methods are not applicable. There are several resonances at these energies with quantum numbers J P C = 0 −+ , 0 ++ , 1 ++ , 2 ++ . The nature of some of these states is not yet well understood.
The specific mode τ → ν τ ρ 0 π − → ν τ π + π − π − , which is the main subject of our present investigation, is most suitable to study the role of 0 −+ and 1 ++ states in the hadronization process. Besides the pion, these are π(1300), a 1 (1260), and a 1 (1640) resonances. In the Nambu -Jona-Lasinio (NJL) model a 1 (1260) is considered to be a member of the basic axial-vector nonet, i.e. a 1 (1260) is a pure qq state, with a 1 (1640) being its first radial excitation. The pseudoscalar π(1300) is the first radial excitation of the pion. One of our goals here is to clarify the role of these resonances in the τ → ν τ ρ 0 π − decay.
In fact, the considered qq picture agrees with the leading order of the 1/N c expansion [at large N c , where N c is the number of colors, mesons are pure qq states, rather than, for instance, qqqq states [2,3]]. Of course, a more detailed description of these states would require the implementation of mixing scenarios, in which the qq components mix with four-quark components. This step requires taking into account the next-to-leading order 1/N c corrections and will not be considered here. Let us also note that for the a 1 (1260) axial-vector meson there is no established understanding of whether it is a quark-antiquark state or a dynamically generated hadronic molecule [4][5][6]. Thus, it is useful to study how far one can go with the qq picture of a 1 (1260).
Another goal of this work is to attract the attention of experimentalists to the important information contained in the specific mode τ → ν τ ρ 0 π − → ν τ π + π − π − , which is shown to be sensitive only to the a 1 (1260) and a 1 (1640) contributions. The experimental data on the spectral function [see Fig. 3] would clarify the specific role of the a 1 (1640) state. It is quite difficult to study the a 1 (1260) − a 1 (1640) interference through a fit of the 3π invariant mass spectra of the τ → ν τ π + π − π − mode, because the corresponding amplitude has too many parameters to fit [7]. The major subprocess of the channel τ → ν τ ρ 0 π − → ν τ π + π − π − is the τ → ν τ ρ 0 π − decay. The amplitude of this three-particle decay has far fewer parameters.
The relevant approximation to this question is the 1/N c expansion, which provides solid theoretical grounds for the description of the qq resonance states. In accord with this idea, all qq meson states [including qq resonances] are stable, free, and non-interacting at N c = ∞. It is from the point of view of the 1/N c expansion that the theoretical idea of an on-shell ρ(770) state makes sense, and the τ → ν τ ρ 0 π − decay amplitude, at leading order, can be described by tree Feynman diagrams.
The τ → ν τ ρ 0 π − → ν τ π + π − π − mode contains all the necessary information about the τ → ν τ ρ 0 π − decay, which relates our study to experiment. For instance, the sequential decay formula [8], which is correct in the narrow width approximation, together with the fact that the ρ 0 decays into π + π − with essentially 100% probability, yields Γ(τ → ν τ ρ 0 π − ) = Γ(τ → ν τ ρ 0 π − → ν τ π + π − π − ). Since the latter value can be extracted from the data on τ → ν τ π + π − π − , the theoretical estimate of Γ(τ → ν τ ρ 0 π − ) has a definite meaning [e.g., recently [9], the theoretical result for Γ(η → ργ) has been used to quantify Γ(η → ργ → π + π − γ)]. Note that a similar situation occurs for the τ → ν τ ρ 0 K − decay, where the sequential decay mode τ → ν τ ρ 0 K − → ν τ π + π − K − has already been measured [the PDG quoted value is Br(τ → ν τ ρ 0 K − → ν τ π + π − K − ) = (1.4 ± 0.5) × 10 −3 [10]]. The measurement of Br(τ → ν τ ρ 0 π − → ν τ π + π − π − ) will not only fill the gap in the existing data but, as shown in this work, will also clarify the role of the a 1 (1640) resonance in the underlying chiral dynamics. In the NJL model, there is a nonlocal extension which deals with the excited states of the 0 −+ , 0 ++ , 1 −− and 1 ++ ground state nonets [11,12]. The nonlocal four-quark interactions lead to a nonlocal effective meson Lagrangian which describes the physics of these excited states. Nonetheless, here we apply a more modest description, which should ideally arise from [11,12] in the large N c limit; namely, we suppose that the excited states at leading 1/N c order can be described by a local Lagrangian, as, for instance, in the extended linear sigma model approach [13]. In this case the propagators of the excited states have the same form as the ground state propagators, but with different couplings to the weak axial-vector current. Such a simplified treatment of excited states is not new.
Notably, this is exactly how the contribution to the τ → ν τ πππ amplitude from the vector ρ(1450) resonance exchange was estimated in [14]. One of the first theoretical studies of the role of the a 1 (1260) axial-vector state in the τ → ν τ ρ 0 π − decay is presented in [15], where current algebra sum rules were used to clarify whether experimental data are compatible with a contribution of the a 1 (1260) resonance to the τ → ν τ ρ 0 π − decay mode. This decay has also been considered in [16], where the a 1 (1260) dominance was revealed. There is no doubt, nowadays, about the dominant role of a 1 (1260) in this process. However, we still need to understand the nature and parameters of the a 1 (1260) resonance. Besides that, the role of its radially excited state a 1 (1640) must be clarified. Measurements of the branching ratio and the ρ 0 π − mass spectrum can provide insight into this issue.
In this paper we calculate the τ → ν τ ρ 0 π − decay amplitude in the framework of the NJL model with SU (2) × SU (2) chiral symmetry. The first attempt to use the NJL approach to study this decay was made in [17], where the finite terms in the derivative expansion of the quark loops corresponding to the a 1 ρπ and ρππ vertices were taken into account. Although the analysis in [17] allows one to reproduce the experimental value of the τ → ν τ ρ 0 π − decay width [mainly due to the contribution of the finite terms], the procedure of extracting these finite terms used in [17] is not compatible with the chiral symmetry restrictions imposed on such contributions by the chirally invariant Schwinger-DeWitt expansion at large distances [the consistent Schwinger-DeWitt approach also requires taking into account the finite terms of the self-energy diagrams, which redefine the coupling constants of the theory; this was not done there]. In contrast, we do not consider here the contributions of the problematic finite terms, but show instead that the decay can be described by the standard effective meson Lagrangian [18][19][20], provided the first radial excitations π(1300) and a 1 (1640) are taken into account.
The material of the paper is presented in the following way. In Sec. II we describe the relevant meson vertices of the effective Lagrangian, obtain the amplitude of the τ → ν τ ρ 0 π − decay, and discuss the partial conservation of the axial-vector current (PCAC). This important relation should be fulfilled in the chiral approach. In Sec.
III the radially excited states are considered. We show that the inclusion of these states can be done without contradicting the PCAC condition. In Sec. IV we introduce the momentum dependent widths of the resonances and calculate the differential decay width of the process. In Sec. V the results of our numerical calculations are presented in Tables I and II and Fig. 3. We conclude with Sec. VI.
II. LAGRANGIAN AND AMPLITUDE
Our starting point is the effective meson Lagrangian obtained on the basis of the NJL model with global U (2) R × U (2) L chiral symmetric four-quark interactions, which also possesses the gauge SU (2) L × U (1) R symmetry of the electroweak interactions. For convenience, we refer to the paper [21], which contains all the necessary details related to the derivation of this effective Lagrangian.
The appropriate weak hadronic part of the Lagrangian density is given by expression (2), in which the weak charged lepton current couples to the hadronic fields; m ρ is the mass of the ρ(770) meson, g ρ is the coupling which arises due to a redefinition of the spin-1 fields, and f π = 92 MeV is the pion decay constant.
Notice that the Lagrangian density (2) has the standard form of axial-vector dominance, i.e. it does not have a contact term ∼ ρ 0 π + . In the covariant formulation [21] the corresponding part of the hadronic weak axial-vector current j A µ is proportional to π + (ρ 0 µ − ∂ ν ρ 0 µν /m 2 ρ ). This factor is zero on the mass shell of the ρ-meson. This does not mean that the matrix element ρπ|j A µ |0 has no contact part. As Eq. (7) below makes clear, it does [see the first term ∼ g µν ]. This term effectively originates from the a 1 -exchange contribution ρπ|a 1 a 1 |j A µ |0 . Thus the fact that the on-shell ρ(770) cancels the direct term ∝ ρπ in j A µ is part of the fundamental mechanism responsible for the PCAC relation in the model.
The hadronic weak axial-vector current j A µ in formula (2) can also be obtained in the standard noncovariant approach [22] by using the variational method of Gell-Mann and Lévy [23,24]. However, this requires some work, because a direct application of this technique leads only to the pion exchange. To arrive at axial-vector dominance one should use the Lagrangian equations for the axial-vector field, and then neglect the total derivatives of the antisymmetric tensors. Another way to obtain the Lagrangian density (2) is described in [25].
Thus we need only two additional vertices to find the amplitude of the τ → ν τ ρ 0 π − decay. The first is the a 1 ρπ vertex, given by the Lagrangian density (3), where m a1 = √ Z m ρ is the mass of the a 1 (1260) meson; it is assumed that all fields are contracted with Pauli matrices, for instance, π = π i τ i , and the trace is calculated over products of tau-matrices. Notice that the Lagrangian density (3), obtained in the covariant approach [21], coincides with the result of the standard noncovariant approach [22].
The second vertex that we need, in the covariant approach, is given by a further Lagrangian density, which simplifies on the mass shell of the ρ-meson. With these ingredients we can write the amplitude, where Q, Q ′ , p and q are the 4-momenta of the particles. The corresponding Feynman diagrams are shown in Figs. 1 and 2.
In this way we obtain the amplitude, whose pure hadronic part is given by the 4-vector F µ in Eq. (7). Here ε ν (p) is a polarization vector of the ρ-meson, and k = q + p. The invariant subsidiary condition on the components of the vector state is assumed, p ν ε ν (p) = 0. Notice that k µ F µ = πρ|∂ µ j A µ |0 is dominated by the pion pole, in accord with PCAC.
We further stress the presence of a contact contribution in (7). The first term with g µν results from the diagram of Fig.1. Its appearance is partly due to the first term of the Lagrangian density (3).
The most general Lorentz-covariant form of F µ is a decomposition in independent form factors. In particular, the NJL model yields the form (10), where g A = 1/Z. Thus, the NJL approach is quite restrictive: the form factor F + does not contribute at leading order in the 1/N c and derivative expansions. All form factors are functions of the single variable k 2 .
III. RADIALLY EXCITED STATES AND PCAC
The region between 1.2 − 1.8 GeV of the ρ 0 π − spectrum is still poorly described by the standard NJL model. One can improve the description of the τ → ν τ ρ 0 π − decay amplitude by including the contributions of the radially excited states of the pion and the a 1 (1260) meson, i.e. the π ′ = π(1300) and a ′ 1 = a 1 (1640) resonances. Following [14] we perform the substitutions (11) in the pion and a 1 (1260) propagators. Notice that the limit β → 0 corresponds to the case without excitations; the other limits, m π ′ → m π and m a ′ 1 → m a1 , lead to the same result. The substitutions are written in terms of physical states; therefore, the coupling β is the only parameter which absorbs contributions arising from the redefinition of the primary meson fields [this includes the diagonalization of the π − π ′ and a 1 − a ′ 1 quadratic forms and the pseudoscalar–axial-vector mixing effects].
As a result, the factor 1/(1 + β) rescales the contribution of the ground state.
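Although the explicit form of (11) is not reproduced in the text, the properties stated here — the 1/(1 + β) rescaling of the ground state, recovery of the pure ground-state case at β → 0, and the same result in the limit of degenerate masses — are all consistent with the generic two-pole substitution used for the ρ(1450) in [14]. Schematically, in our notation (a sketch rather than the paper's exact equation, written for the a 1 channel; the pion case is analogous with β π ):

```latex
\frac{1}{k^2 - m_{a_1}^2}
\;\longrightarrow\;
\frac{1}{1+\beta_{a_1}}
\left[ \frac{1}{k^2 - m_{a_1}^2}
     + \frac{\beta_{a_1}}{k^2 - m_{a_1'}^2} \right]
```

At β a 1 ′ = 0 only the ground-state pole survives, and at m a 1 ′ → m a1 the bracket collapses to (1 + β a 1 ′ )/(k 2 − m 2 a1 ), cancelling the prefactor, exactly as stated above.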
In making these replacements, we must ensure that the substitutions (11) do not destroy the PCAC condition. For that, together with (11), one should modify the contact term by adding a constant δ which approximates the higher-mass 1 ++ contribution to F 0 in such a way that the PCAC condition is fulfilled. Indeed, in this case the modified hadronic part of the amplitude, A → A ′ , F µ → F ′ µ , changes accordingly. The divergence of the hadronic current, k µ F ′ µ , should vanish in the chiral limit m π → 0. This requirement is fulfilled only if δ is uniquely fixed. As a result we obtain a modified PCAC relation, which, in particular, tells us that in the chiral limit β π = f π ′ /f π → 0, where f π and f π ′ are the weak decay constants of the π and π ′ mesons.
Thus, the consideration above shows that to lowest order in 1/N c our procedure introduces only four additional parameters: the two masses of π(1300) and a 1 (1640) resonances and two mixing parameters β π and β a 1 which should be fixed theoretically or from experimental data.
IV. DECAY WIDTH
Let us now proceed with the calculation of the decay rate. For that we need the appropriate spin-averaged matrix element squared, where we average over the initial τ -lepton states. It is easiest to perform the calculation in invariant form before specializing to the rest frame of the tau. The invariant Mandelstam variables are s = (Q − Q ′ ) 2 = k 2 , t = (Q − q) 2 , and u = (Q − p) 2 . This gives an expression in which the NJL model relation (1 − g A )m 2 ρ = g 2 ρ f 2 π has been used, and where the Källén function λ(x, y, z) is defined as λ(x, y, z) = (x − y − z) 2 − 4yz.
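As a quick numerical sanity check, the Källén function defined above is totally symmetric in its arguments and vanishes at the two-body threshold; a minimal sketch (illustrative masses in GeV):

```python
def kallen(x, y, z):
    # Kallen (triangle) function: lambda(x, y, z) = (x - y - z)^2 - 4yz,
    # equal to x^2 + y^2 + z^2 - 2xy - 2yz - 2zx (fully symmetric).
    return (x - y - z) ** 2 - 4.0 * y * z

m_rho, m_pi = 0.775, 0.140        # GeV, illustrative values
s_thr = (m_rho + m_pi) ** 2       # rho-pi threshold

# lambda(s, m1^2, m2^2) vanishes at s = (m1 + m2)^2 (up to rounding).
print(kallen(s_thr, m_rho**2, m_pi**2))
```

The symmetry means any permutation of the three arguments gives the same value, which is a convenient unit test when the function appears in phase-space factors.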
In the physical range (m ρ + m π ) ≤ √ s ≤ m τ the form factors in Eq. (13) contain zero-width a 1 , a ′ 1 and π ′ propagator poles, which lead to divergent phase-space integrals in the calculation of the τ → ν τ ρ 0 π − decay width. In order to regularize the integrals one should include the finite widths of these resonances through the typical Breit-Wigner form of the propagators. This is a step beyond the leading order of the 1/N c expansion which we are forced to take in connection with the above-mentioned problem. We consider the substitutions (18). A k 2 dependence of Γ R (k 2 ) is required by unitarity. The description of a set of resonances with the same quantum numbers as a sum of Breit-Wigner amplitudes may violate unitarity and is a good approximation only for well-separated resonances with little overlap. This condition is fulfilled here. Following [15], we have chosen to use the form (19), where Γ R = Γ R (m 2 R ). The function Γ R (k 2 ) has a threshold factor in the proper position, i.e. at k 2 = (m ρ + m π ) 2 . The value of k 2 R is determined by the integral condition (20). In the narrow-width approximation this equation is automatically fulfilled. If the resonance is broad, Eqs. (19) and (20) make our results less sensitive to the details of Γ R (k 2 ). A rigorous form can only be obtained if the total width is completely understood, which is not the case at the moment. The differential decay rate can be written in the form (21), where t ± (s) are the kinematic boundaries of the t-integration. The integral over t in (21) can be done explicitly, and integrating the resulting expression over s one finally obtains the τ → ν τ ρ 0 π − decay width.
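The regularization described here can be sketched numerically. The snippet below does not use the specific threshold form of Γ R (k 2 ) from [15] (which is not reproduced in the text); instead it assumes a generic phase-space-motivated running width, purely to illustrate how a k 2 -dependent width with its threshold at (m ρ + m π ) 2 turns the zero-width pole into an integrable Breit-Wigner denominator:

```python
import numpy as np

def kallen(x, y, z):
    # Kallen (triangle) function: lambda(x, y, z) = (x - y - z)^2 - 4yz
    return (x - y - z) ** 2 - 4.0 * y * z

m_rho, m_pi = 0.775, 0.140      # GeV, illustrative masses
m_R, gamma_R = 1.230, 0.400     # a1-like mass and on-shell width (GeV)
m_tau = 1.777                   # GeV
s_thr = (m_rho + m_pi) ** 2     # rho-pi threshold, where Gamma_R(k^2) vanishes

def running_width(k2):
    """Toy k^2-dependent width (assumed form, NOT Eq. (19)): zero below
    threshold, scaled by the rho-pi two-body momentum above it."""
    if k2 <= s_thr:
        return 0.0
    p = np.sqrt(kallen(k2, m_rho**2, m_pi**2)) / (2.0 * np.sqrt(k2))
    p_R = np.sqrt(kallen(m_R**2, m_rho**2, m_pi**2)) / (2.0 * m_R)
    return gamma_R * (p / p_R) * (m_R / np.sqrt(k2))

def bw2(k2):
    """Squared modulus of the Breit-Wigner denominator's inverse."""
    return 1.0 / ((k2 - m_R**2) ** 2 + k2 * running_width(k2) ** 2)

# With a finite width the pole is integrable: a simple trapezoidal
# quadrature over the physical range gives a finite number.
k2_grid = np.linspace(s_thr + 1e-6, m_tau**2, 2000)
vals = np.array([bw2(k2) for k2 in k2_grid])
integral = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(k2_grid)))
print(integral)
```

Replacing the toy width with the actual form (19), constrained by the condition (20), would reproduce the procedure used in the paper.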
V. NUMERICAL RESULTS
Our considerations above show that we are able to describe the tree-level τ → ν τ ρ 0 π − decay amplitude in terms of the known masses m π , m ρ , m a1 , m π ′ , m a ′ 1 , m τ , two mixing parameters β π , β a ′ 1 , and three widths Γ a1 , Γ a ′ 1 , Γ π ′ . The value of g A is not free, due to the mass formula g A m 2 a1 = m 2 ρ which is valid in the NJL model. This parameter is also related to the value of the constituent quark mass m in the case of exact isospin symmetry, m = m u = m d . In the following we will vary the value of m a1 in the interval 1120 MeV ≤ m a1 ≤ 1300 MeV; the parameter g A is changed correspondingly. The upper boundary is inspired by the recent measurements of the COMPASS Collaboration, m a1 = 1299 +12 −28 MeV, Γ a1 = 380 ± 80 MeV [26,27]. The lower boundary is a result of a comparison between the theoretical m 2 3π spectra of the τ → ν τ π + π − π − decay [14] and the ALEPH data [28], which yields m a1 = 1120 MeV and Γ a1 = 483 ± 80 MeV. The PDG averaged values, m a1 = 1230 ± 40 MeV, Γ a1 = 250 − 600 MeV [10], and the parameters extracted by the JPAC group, m a1 = 1209 ± 4 +12 −9 MeV and Γ a1 = 576 ± 11 +80 −20 MeV [29], are also considered.
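The mass formula g A m 2 a1 = m 2 ρ fixes g A once m a1 is chosen; a trivial numerical check for the boundary values of the scan (ρ mass taken at its PDG value, purely illustrative):

```python
m_rho = 775.26  # MeV, PDG rho(770) mass

# g_A = m_rho^2 / m_a1^2 for the m_a1 values considered in the scan.
for m_a1 in (1120.0, 1230.0, 1299.0):
    g_A = m_rho**2 / m_a1**2
    print(f"m_a1 = {m_a1:7.1f} MeV  ->  g_A = {g_A:.3f}")
```

Since g A = 1/Z, this also fixes the renormalization factor Z for each choice of m a1 .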
A. Ground states contribution
We start our numerical calculations from the simplest case, in which only the ground states are considered. For this purpose we use the form factors given by Eq. (10), modified by the substitutions (18). As can be seen from Table I, the higher the value of m a1 and the lower the value of Γ a1 , the better the agreement with experimental data.

TABLE I. The width of the τ → ντ ρ 0 π − decay obtained in the NJL model with only the ground state contributions. The first two columns contain the phenomenological input values of ma 1 and Γa 1 taken from [10,[26][27][28][29]].

We reach this conclusion by confronting our results with the old value of the branching ratio Br(τ → ν τ ρ 0 π − ) = (5.4 ± 1.7)%, quoted by the PDG [30] [presently, the PDG does not give data on the τ → ν τ ρ 0 π − mode]. This corresponds to the decay width (26). It is worth pointing out that the latest measurements of the CLEO Collaboration of the τ → ν τ πππ decay [31,32] and the results of the COMPASS experiment in diffractive production [26,27,33] can be used [although this is a model dependent procedure which is also influenced by the production mechanism] to extract the branching ratios of the specific decay channels. In particular, the JPAC group [29] made a rough estimate for the dominant ρπ S-wave channel. The branching ratio found is 60% − 80%, corresponding to the decay width Γ JPAC τ →ντ ρ 0 π − = (1.26 − 1.69) × 10 −10 MeV.
In the sets (a) and (b) of Table I we use the input data of the COMPASS Collaboration [26,27]. Two values of Γ a1 are considered: the lowest one, Γ a1 = 300 MeV, and the central one, Γ a1 = 380 MeV. The PDG averaged values (c) and (d) [10] are selected in the same way. The input (e) is taken in accord with the results of the JPAC group [29]. In the set (f) the output of the analysis [14] is used.
To summarize the above, it should be noted that the ground state contribution, in which the a 1 (1260) exchange dominates, is too low to explain the experimental data on Γ τ →ντ ρ 0 π − . The best estimate here is given by set (c), but even this prediction of the NJL model is slightly below the lower boundary of the experimental value (26). Therefore, one should take into account the excited states, which also contribute to the τ → ν τ ρ 0 π − decay amplitude at leading 1/N c order.
B. Excited states contribution
Let us turn now to the study of the contributions of the excited π(1300) = π ′ and a 1 (1640) = a ′ 1 states, to clarify their role in the decay τ → ν τ ρ 0 π − .
The characteristics of π(1300) quoted by the PDG are m π ′ = 1300 ± 100 MeV and Γ π ′ = 200 − 600 MeV [10]. The impact of this state on the τ → ν τ ρ 0 π − decay width is controlled by the parameter β π . In accord with the PCAC relation, one can expect that |β π | ∼ (m π /m π ′ ) 2 = 0.01 [11]. This is too small to have an appreciable impact on the τ → ν τ ρ 0 π − decay width. Hence, the only contribution which may affect the description presented in Table I is that of the excited a 1 (1640) state.
The PDG lists the a 1 (1640) as "omitted from summary table"; nonetheless it gives the world-average values m a ′ 1 = 1654 ± 19 MeV and Γ a ′ 1 = 240 ± 27 MeV. On top of that, the COMPASS Collaboration has recently reported the Breit-Wigner a ′ 1 -resonance parameters m a ′ 1 = 1700 +35 −130 MeV and Γ a ′ 1 = 510 +170 −90 MeV [27]. It is worth mentioning that the parameter β a ′ 1 is not as strongly suppressed as β π . This follows from the crude estimate |β a ′ 1 | ∼ (m a1 /m a ′ 1 ) 2 = 0.56. Therefore one can expect that the mixing (11) gives a visible effect [note that a similar estimate for the parameter β ρ , considered in [14] in the context of an effective description of the role of the ρ ′ = ρ(1450) excited state of ρ(770), gives |β ρ | ∼ (m ρ /m ρ ′ ) 2 = 0.28, in harmony with their result β ρ = −0.25, obtained by fitting experimental data]. In the following, the free parameter β a ′ 1 will be fixed in accord with our estimate above, β a ′ 1 = −0.56. Notice the increase of the impact of the ground state a 1 (1260) due to the factor 1/(1 + β a ′ 1 ) in (11). A similar effect took place for the ground ρ(770) state contribution when the excited state ρ ′ = ρ(1450) was taken into account [14].
Our goal now is to show that the known experimental data allow for a meaningful evaluation of the impact of the a ′ 1 -resonance propagator on the τ → ν τ ρ 0 π − decay. To this end, we consider sets of experimentally known characteristics of the a 1 and a ′ 1 resonances. The results of these numerical calculations are collected in Table II. In the sets (a) and (b), the data of the COMPASS Collaboration are considered [27]. Notice that COMPASS has performed the most advanced partial-wave analysis so far of diffractively produced π + π − π − final states, using the isobar model. That has allowed them, in particular, to determine the mass and width of the a 1 and a ′ 1 resonances with high confidence. Their interpretation of a ′ 1 as the first radial excitation of a 1 is in line with our theoretical considerations.
In Fig. 3 we show the typical behaviour of the spectral function (24) for case (b), which agrees well both with the experimental value (26) and with the JPAC estimate (27). The a 1 (1640) resonance contributes mostly through its interference with a 1 (1260), and this interference is destructive.

TABLE II. The width of the τ → ντ ρ 0 π − decay is shown for comparison. The masses and widths are given in MeV.

In the sets (c) and (d) the PDG averaged values are considered [10]. The width of a 1 shows large uncertainties, but large values of the width are known to be ruled out by the COMPASS measurements. Thus, in our estimates, we use two values, Γ a1 = 250 MeV and 400 MeV; the latter value is preferable. The destructive interference suppresses the a 1 -exchange contribution Γ (a1) τ →ντ ρ 0 π − by about 20%. In the set (e) we use the a 1 parameters extracted by the JPAC group, m a1 = 1209 ± 4 +12 −9 MeV and Γ a1 = 576 ± 11 +80 −20 MeV [29]. In the set (f) the data of the theoretical fit for the ground state of a 1 are considered [14]. In both cases the characteristics of a ′ 1 are taken from the PDG [10].
Let us summarize the results presented in Table II and Fig. 3.
2) The contribution of the π(1300) resonance to the τ → ν τ ρ 0 π − decay is negligible. This is a direct consequence of the PCAC relation. In particular, the value β π = 0 in Table II can be replaced by β π = −(m π /m π ′ ) 2 = −0.01 without a noticeable effect. To have a noticeable effect, the value of β π would have to be about β π = −0.4. At this stage, however, we do not see any valid theoretical reason why |β π | should be so large.
3) The comparison of Tables I and II shows that the inclusion of the excited axial-vector a 1 (1640) state is a necessary element for a successful description of the τ → ν τ ρ 0 π − decay width. The reason for the improvement lies in the substitutions (11) and (18), which substantially increase the contribution of the a 1 ground state, although this growth is partly compensated by the destructive interference with the excited a ′ 1 state. This conclusion agrees with the CLEO Collaboration result [31]: their studies show that adding the a ′ 1 term to the Breit-Wigner function significantly improves the agreement with the τ → ν τ 3π data.
4) The set (c) overestimates the τ → ν τ ρ 0 π − decay rate. This is a consequence of a very low a 1 -resonance width, Γ a1 = 250 MeV. The other sets, with larger values of Γ a1 , are in agreement with the experimental value (26). Apparently, this indicates that the value Γ a1 ≈ 400 MeV is the preferable one. This observation is consistent with the determinations from COMPASS.
5) The spectral distribution shown in Fig. 3 is a prediction of the NJL model. We expect that this result can be checked in studies of the tau decay into three pions and a neutrino, in which events with pion pairs around the ρ(770) mass would be selected.
VI. CONCLUSIONS
We have used the covariant approach [21] to describe the weak interactions of mesons at leading order in the 1/N c and derivative expansions. It has been shown that in this approximation the axial-vector current is dominated by the a 1 (1260) and pion exchanges. However, we have found that the resulting τ → ν τ ρ 0 π − decay width is too low if the physical value of Γ a1 is considered.
The contributions of the first radial excitations of the pion and the a 1 states have been taken into account to improve the description. For that we supplemented the regular π and a 1 propagators with new terms corresponding to the propagators of the excited π(1300) and a 1 (1640) states. Our treatment of these excitations is similar to the successful description of the ground ρ(770) and excited ρ(1450) vector resonances in [14]. The momentum dependent off-shell widths of all resonances have been approximated by the functions introduced in [15]. This procedure can be further refined as soon as new, more precise experimental data on the τ → ν τ ρ 0 π − decay are reported.
As a result, we find that the contribution of the π(1300) resonance is negligible, and conclude that the channel τ → ν τ ρ 0 π − → ν τ π − π − π + is a source of rather clean information on the a 1 (1260) and a 1 (1640) states. The a 1 (1260) resonance dominates the intermediate process, while the a 1 (1640) contributes less than 20%. In Table II, we present our estimates for the decay width Γ(τ → ν τ ρ 0 π − ) corresponding to the different input values of the a 1 and a ′ 1 characteristics. The spectral distribution shown in Fig. 3 can be used for comparison with the data, as soon as they become available.
Our result indicates the important role which the a 1 (1640) state plays in the theoretical description of the τ → ν τ ρ 0 π − decay. It means, in particular, that one should carefully estimate its contribution and role in the τ → ν τ π − π − π + decay. This will be done elsewhere. The results obtained here could be useful for such studies.
ACKNOWLEDGMENTS
The author has benefited from innumerable discussions with M. K. Volkov and B. Hiller. A conversation with H. G. Dosch on the subject of the PCAC relation in the presence of the radially excited states π(1300) and a 1 (1640) is gratefully acknowledged. I would also like to thank P. Roig for useful correspondence. This paper was completed at the Institute of Modern Physics of the Chinese Academy of Sciences in Lanzhou, and I would like to thank P. M. Zhang and L. Zou for their warm hospitality and support. I acknowledge networking support by the COST Action CA16201.
bcRep: R Package for Comprehensive Analysis of B Cell Receptor Repertoire Data
Immunoglobulins, as well as T cell receptors, play a key role in adaptive immune responses because of their ability to recognize antigens. Recent advances in next generation sequencing have also improved the quality and quantity of individual B cell receptor repertoire sequencing. Unfortunately, appropriate software to exhaustively analyze repertoire data from NGS platforms, without limits on the number of sequences, has been lacking. Here we introduce a new R package, bcRep, which offers a platform for comprehensive analyses of B cell receptor repertoires, using IMGT/HighV-QUEST formatted data. Methods for gene usage statistics, clonotype classification, as well as diversity measures, are included. Furthermore, functions to filter datasets, to compute summary statistics on mutations, and visualization methods are available. To compare samples with respect to gene usage, diversity, amino acid proportions, similar sequences or clones, several functions are provided, including distance measurements and multidimensional scaling methods.
Introduction
The immune system is a complex network of cells and organs that mainly defends the body against pathogens [1]. Lymphocytes, in particular B and T cells, are the major cellular components of the adaptive immune response. The highly diverse Immunoglobulins (IG) and T cell receptors (TR) provide specific immune reactions due to pathogen recognition.
Major advances in next generation sequencing (NGS) led to possibilities of deep sequencing of B and T cell receptor repertoires. Among others, immune repertoires of disease models [2,3], as well as changes during aging [4] are of main interests.
Existing tools like IMGT/HighV-QUEST (tested version: 3.3.5; [5]) process raw IG/TR NGS data, extracting the V (variable), D (diversity) and J (joining) regions and defining special sequence parts like complementarity determining regions (CDR) or framework regions (FR). However, to interpret these sequences and compare them among study groups, further analyses are required. Additionally, online tools for B and T cell repertoire analysis are available (e.g. Change-O, iRAP, IMEX, MiXCR or VDJtools [6][7][8][9][10]). Unfortunately, most of them are limited either in the number of input sequences or in the number of analysis methods. Furthermore, the user is restricted to the output format generated by the program, and options for individual output modification are usually lacking. Whereas Change-O was designed to track somatic hypermutations of BCRs, iRAP was developed to characterize repertoire-level dynamics and diversity of B and T cell immune repertoires. IMEX analyzes diversity and clones from IMGT/HighV-QUEST data, while MiXCR concentrates on processing raw data into quantitated clonotypes. VDJtools can use several types of input, but also focuses mainly on clonotype data. Table 1 provides a comparison between bcRep and other selected IG analysis tools, like Change-O, iRAP and IMEX. bcRep comprises many functions in one package, where otherwise several tools would be required.
Here, we present a new R package [11], bcRep, for the analysis of IG repertoires. It comprises methods to combine and read IMGT/HighV-QUEST output files, and several methods to study not only clones, but also the total set of input sequences or subsets of sequences. Sequences can be filtered for their functionality or junction frame usage, and clones also for their size. Gene usage, as well as (silent and replacement) mutations and diversity, can be analyzed. Clonotypes can be classified and compared between different samples. Several dissimilarity and distance measurements are available to analyze relations between gene usage or sequence data of different samples (beta diversity). Samples can not only be analyzed individually, but also compared with each other.
Methods
In the following we describe data formats used as input and methods implemented in bcRep.
An overview of all functions can be found in Table 2. The R package vignette provides a more detailed overview of the usage of the functions and their outputs or visualization methods. Parallel processing is possible for some methods using the doParallel package [12]. The number of computing cores is set by the user (single core processing by default). S1 Table provides information about computational time and memory usage for the more complex functions.
Input data
The input data for bcRep are output tables of IMGT/HighV-QUEST. In total, IMGT/HighV-QUEST returns 10 tables (plus a parameter table and, in some cases, individual files). The tables required as input for each function are described in the corresponding help file. Functions to combine the output from several IMGT/HighV-QUEST output folders and to read in these tables are provided:
> combineIMGT(folders = c("pathTo/IMGT1a", "pathTo/IMGT1b", "pathTo/IMGT1c"), name = "NewProject")
> readIMGT("PathTo/file.txt", filterNoResults = TRUE)
While reading input tables, sequences without any information (marked as "no results" in the "D-GENE and allele" column) can be excluded. IMGT/HighV-QUEST gives no results when:
1. The D gene and allele reference directory of the analyzed IGH sequences cannot be managed by the IMGT/GENE database.
2. The 3'V-REGION of the V gene and allele and/or the 5'J-REGION of the J gene and allele are identified imprecisely.
Mutation analysis
Basic summary statistics about mutations, like R/S ratios (the ratio of replacement to silent mutations), are provided. IMGT/HighV-QUEST already provides tables containing general information about silent and replacement mutations, but no statistics. Silent mutations can be further analyzed by studying the proportions of mutations from one nucleotide to another, to find silent mutations that appear more often than others in a given set of sequences. Further methods investigate nucleotide distributions in the environment of mutated positions: three positions up- and downstream of the mutated position are considered and ratios of mutation from one nucleotide to another are returned. This gives an overview of nucleotides that may appear more frequently at positions around mutations. Additionally, replacement mutations can be analyzed further; here we concentrate on the appearance of certain mutations. Proportions of mutations resulting in amino acid replacements (reference amino acid according to the germline identified by IMGT) are calculated to find substitutions that appear more often than others. Fig 2 provides an example of the analysis of replacement mutations in CDR1 regions. The percentages are color coded; darker colors represent higher percentages. Amino acids of the germline sequence are placed in rows, the mutated ones in columns. Furthermore, replacement mutations resulting in hydropathy, chemical or volume changes can be highlighted. In the given example, mutations from serine (S) to threonine (T) or asparagine (N) appear most frequently (dark gray squares), but only the mutation from S to N also implies a hydropathy change (orange dots).
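bcRep implements these statistics in R; as an illustration only, the two summaries described above (the R/S ratio and the proportions of particular amino-acid substitutions) can be sketched in Python as follows, using hypothetical mutation calls:

```python
from collections import Counter

def rs_ratio(n_replacement, n_silent):
    """Ratio of replacement to silent mutations (R/S)."""
    return n_replacement / n_silent if n_silent else float("inf")

def substitution_proportions(pairs):
    """Proportion of each germline->mutant amino-acid substitution,
    as in the CDR1 replacement-mutation heatmap described above."""
    counts = Counter(pairs)
    total = sum(counts.values())
    return {sub: n / total for sub, n in counts.items()}

# Hypothetical mutation calls: (germline AA, mutated AA)
pairs = [("S", "T"), ("S", "T"), ("S", "N"), ("A", "V")]
props = substitution_proportions(pairs)  # e.g. S->T accounts for half the calls
```

The proportion table corresponds to one cell of the heatmap per germline/mutant pair; the field names and example data here are invented for illustration.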
Clone analysis
Clonotypes can be classified using different criteria regarding the complementarity determining region 3 (CDR3) and the V and J genes. A threshold for CDR3 sequence identity can be chosen to either allow only identical CDR3 sequences (identity = 100%) or include possible somatic hypermutations (identity < 100%). Having the same V genes is a mandatory criterion; requiring the same J genes is optional. Thus the user can select how strongly CDR3 identity shall be weighted and whether sequences must share not only the same V genes, but also the same J genes. For instance, iRAP considers same V, D and J genes and 100% CDR3 amino acid sequence identity. Change-O provides several methods to define clones: assigning total Ig sequences into clones considering same V and J genes and junction length with a specified substitution distance model, or defining clones by specified distance metrics on CDR3 sequences and cutting hierarchical clustering trees.
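As an illustration of the classification criteria described above (bcRep itself implements them in R), a minimal greedy sketch in Python; the record fields and example sequences are hypothetical:

```python
def cdr3_identity(a, b):
    """Percent identity of two equal-length CDR3 sequences."""
    if len(a) != len(b):
        return 0.0
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

def assign_clones(records, identity=85.0, use_j=True):
    """Greedy clonotype assignment: a sequence joins an existing clone if it
    shares the V (and optionally J) gene and exceeds the CDR3 identity
    threshold with any member; otherwise it founds a new clone."""
    clones = []
    for rec in records:
        key = (rec["v"], rec["j"]) if use_j else (rec["v"],)
        for clone in clones:
            if clone["key"] == key and any(
                cdr3_identity(rec["cdr3"], m["cdr3"]) >= identity
                for m in clone["members"]
            ):
                clone["members"].append(rec)
                break
        else:
            clones.append({"key": key, "members": [rec]})
    return clones

recs = [
    {"v": "IGHV1-2", "j": "IGHJ4", "cdr3": "CARDYW"},
    {"v": "IGHV1-2", "j": "IGHJ4", "cdr3": "CARDFW"},   # 5/6 positions identical
    {"v": "IGHV3-23", "j": "IGHJ4", "cdr3": "CARDYW"},  # different V gene
]
```

With identity = 100% the second record forms its own clone; with a relaxed threshold of 80% it merges with the first, while the third record always stays separate because its V gene differs.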
A function to look for clones shared between at least two samples is provided as well; it uses the same criteria as described above (clones). Additionally, a summary function is implemented, which returns the number of clones per sample and the number of clones shared between different groups of samples.
Further clone features like copy number, CDR3 length, functionality, junction frames and gene usage can be analyzed and visualized. Filtering methods for clone size, functionality and junction frame usage are provided, as well.
Functionality in dependence of the CDR3 length distribution can be visualized using the function plotClonesCDR3Length() (Fig 3).
Diversity analysis
Functions for amino acid distributions, as well as diversity measurements are implemented.
A diversity index is a quantitative measure that reflects how many different types exist in a dataset; in our case, types refer to amino acids per position. Simultaneously, it takes into account how evenly the basic entities are distributed among those types. There are several diversity indices, each a simple transformation of the effective number of types, and each index can be interpreted as a measure corresponding to some real phenomenon.
The true diversity depends only on the sequence or amino acid frequencies and an exponent q, and not on the functional form of the index [13]. In almost all cases, nonparametric diversity indices are monotonic functions of sum(p_i^q), or limits of such functions as q approaches unity. The true diversity of order q is

qD = (sum_{i=1}^{n} p_i^q)^(1/(1-q)),

where D is the effective number of types, q the order, p_i the relative abundance of species i and n the total number of species observed [13]. This means that when calculating the diversity of a set of sequences, it does not matter whether one uses Simpson concentration, inverse Simpson concentration or Shannon entropy; after conversion all give the same diversity. Table 3 shows conversions of common diversity indices to true diversities [13]. Diversities can be expressed in terms of the diversity index itself (x) or the proportions of the species (p_i) [13].
The order of a diversity indicates its sensitivity to common and rare amino acids [13]. The diversity of order zero (q = 0) is completely insensitive to species (sequence or amino acid) frequencies and is better known as species richness [13]. Orders less than unity give diversities that disproportionately favor rare amino acids, while all values of q greater than unity disproportionately favor the most common species (sequences or amino acids) [13]. In the case of q = 1, all species are weighted by their frequency, without favoring rare or common ones [13]. Regardless of q, the true diversity always gives exactly n when applied to a community of n equally common species.
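The true diversity of order q can be computed directly from the relative abundances; a minimal Python sketch of the calculation (bcRep performs it in R via trueDiversity()), including the q -> 1 limit via Shannon entropy:

```python
import math

def true_diversity(freqs, q):
    """Hill number qD for relative abundances p_i (summing to 1).
    q=0: species richness; q->1: exp(Shannon entropy); q=2: inverse Simpson."""
    p = [f for f in freqs if f > 0]
    if q == 1:
        # limit as q approaches unity: exponential of Shannon entropy
        return math.exp(-sum(x * math.log(x) for x in p))
    return sum(x ** q for x in p) ** (1.0 / (1.0 - q))

# n equally common types give diversity n for every order q
p = [0.25] * 4
```

For four equally frequent amino acids the diversity is 4 at q = 0, 1 and 2, illustrating the order-independence for even communities stated above.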
Diversity indices are calculated for sequences of the same length. Considering somatic hypermutations, deletions and insertions, it is difficult to assign CDR3 sequences to their native sequence and length; that is why diversity indices are calculated for each position. When visualizing the results, figures for each sequence length (x-axis: sequence position, y-axis: diversity index) or one figure including mean diversities and standard deviations (x-axis: sequence length; y-axis: mean diversity index) can be returned. An example is given in Fig 4, where mean diversity indices are compared between two samples (red and blue). Diversity is alike in both samples, except for longer sequences (with a length of 21 to 26 amino acids). For these lengths, the CDR3 sequences of sample A are more diverse than those of sample B, and the standard deviations also differ. The corresponding functions for one or several samples are:
> trueDiversity(sequences = NULL, aaDistribution.tab = NULL, order = c(0, 1, 2))
> compare.trueDiversity(sequence.list = NULL, comp.aaDistribution.tab = NULL, order = c(0, 1, 2), names = NULL, nrCores = 1)
> plotCompareTrueDiversity(comp.tab = NULL, mean.plot = T, colors = NULL, title = NULL, PDF = NULL)
Table 3. Conversion of specific diversity indices to true diversity indices [13].
Index | x | Diversity in terms of x | Diversity in terms of p_i
Species richness | x = sum_i p_i^0 | x | sum_i p_i^0
Shannon entropy | x = -sum_i p_i ln p_i | exp(x) | exp(-sum_i p_i ln p_i)
Simpson concentration | x = sum_i p_i^2 | 1/x | 1/sum_i p_i^2
Inverse Simpson concentration | x = 1/sum_i p_i^2 | x | 1/sum_i p_i^2
Furthermore, a function calculating the Gini index, which measures the inequality of the clone size distribution, is provided. The Gini index is bounded between zero and one: an index of zero represents a set of equally distributed clones, all having the same size, whereas a Gini index of one points to a set dominated by a single clone [16]. The corresponding function is:
> clones.giniIndex(clone.size = NULL, PDF = NULL)
Fig 5 gives an example of Gini indices for three different samples. Sample A has a Gini index of 1, representing a set in which a single clone includes all sequences. Sample B is still dominated by big clones (with many sequences), but also has some clones with only few sequences (Gini index = 0.8). Sample C has a Gini index of 0.3, which means that the clones are roughly equally distributed, although some bigger clones exist.
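The Gini index on clone sizes can be sketched as follows; this is a Python illustration of the standard computation on sorted sizes, and the exact bcRep (R) implementation in clones.giniIndex() may differ in detail:

```python
def gini_index(clone_sizes):
    """Gini index of clone-size inequality: 0 = all clones equal in size,
    values near 1 = repertoire dominated by one large clone."""
    x = sorted(clone_sizes)
    n = len(x)
    total = sum(x)
    if n == 0 or total == 0:
        return 0.0
    # mean-absolute-difference formulation on ascending-sorted sizes
    cum = sum((2 * i - n + 1) * v for i, v in enumerate(x))
    return cum / (n * total)
```

Four equally sized clones give an index of exactly 0, while a repertoire where one clone holds 97% of the sequences approaches the maximum attainable value for four clones (0.75).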
Comparison of different samples
Several functions are available to compare data of different samples. For example, gene usage, amino acid distribution and diversity can be compared and the results visualized across different samples. These functions need an input list containing sequence information from at least two individuals.
Additionally, clone sets of different samples can be compared. This function helps to analyze whether there are so-called "public clones" that are shared among several samples, or only "private clones" that represent each sample uniquely.
Dissimilarity/distance measurements and multidimensional scaling
For gene usage, as well as for sequence data, several dissimilarity and distance functions are provided. With these functions, relationships between several samples can be analyzed (beta diversity). Dissimilarity and distance measurements describe numerically how different two objects are: higher values describe greater distance/dissimilarity, while small distances are equivalent to high similarity. For example, the Levenshtein distance [17], which counts the minimum number of single-character edits between two sequences, is two for the sequences "AABBCC" and "ABBBBC", because two substitutions are needed (second position A -> B, fifth position C -> B). The longest common substring algorithm [18], in contrast, returns a distance of four for this example (the longest common subsequence being ABBC).
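The Levenshtein example above can be reproduced with the standard dynamic-programming recurrence; this is a Python sketch for illustration, whereas bcRep delegates these calculations to the R stringdist package:

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))  # distances from empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# The example from the text: two substitutions separate the sequences
d = levenshtein("AABBCC", "ABBBBC")  # -> 2
```

The same recurrence underlies the Damerau-Levenshtein and optimal string alignment variants mentioned below, which additionally allow transpositions.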
Studying distances between sequences can be done by either analyzing all input sequences together or analyzing subsets of sequences of the same length. Based on the R package stringdist [19], dissimilarity or distance indices like Levenshtein, cosine [20], q-gram [21], Jaccard [22], Jaro-Winkler [23], Damerau-Levenshtein [24], Hamming [25], optimal string alignment [19] and longest common substring can be calculated. The indices are described in more detail in the help files of the bcRep and stringdist packages. For instance, the Hamming distance only counts character substitutions between two sequences of the same length, whereas the Levenshtein distance also takes deletions and insertions into account. The optimal string alignment additionally allows one transposition of adjacent characters, and the full Damerau-Levenshtein distance allows multiple substring edits. The q-gram, cosine, Jaccard and Jaro-Winkler distances are based on more complex algorithms.
For gene usage data, a table containing the gene proportions of the different samples is required as input. With samples in rows and genes in columns, distances between samples based on gene usage can be analyzed; transposing this table yields distances between genes, based on the samples. Dissimilarity or distance measurements like Bray-Curtis [26], Jaccard or cosine are provided using implementations from the R packages vegan [27] and proxy [28]. Bray-Curtis is often used for abundance data, whereas the Jaccard distance uses presence/absence data.
Furthermore, these results can be used to perform multidimensional scaling (e.g. principal coordinate analysis, PCoA) and to visualize levels of similarity. Ordination methods like PCoA display the information contained in a distance matrix.
In the following example a distance matrix (cosine distance) is calculated, based on IGHV gene usage data of 42 samples. Afterwards PCoA is used to visualize the relationships between those samples. The 42 samples belong to two groups, for example a case and a control set.
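As an illustration of the first step of this example, pairwise cosine distances between gene-usage profiles can be sketched in Python (the usage proportions below are hypothetical); the resulting matrix could then be passed to a PCoA routine such as R's stats::cmdscale:

```python
import math

def cosine_distance(u, v):
    """1 minus the cosine similarity of two gene-usage profiles."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return 1.0 - dot / (nu * nv)

def distance_matrix(profiles):
    """Pairwise cosine distances between samples
    (rows = samples, columns = IGHV gene proportions)."""
    n = len(profiles)
    return [[cosine_distance(profiles[i], profiles[j]) for j in range(n)]
            for i in range(n)]

usage = [[0.5, 0.3, 0.2],   # hypothetical sample 1: three IGHV gene proportions
         [0.5, 0.3, 0.2],   # sample 2, identical usage to sample 1
         [0.1, 0.1, 0.8]]   # sample 3, dominated by the third gene
D = distance_matrix(usage)
```

Identical profiles get distance 0 and the divergent third sample a clearly larger distance, which is exactly the structure PCoA then projects into two dimensions for visualization.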
Conclusion
The bcRep package offers a new platform for comprehensive B cell receptor repertoire analysis. It combines several methods to summarize sequence characteristics of the underlying dataset in detail. Computation time can be reduced using parallel processing; however, this still depends on the number of cores provided for the analysis and on the underlying computer architecture. bcRep can be used by scientists new to IG repertoire analysis, as well as by advanced users. Functions can be applied without reformatting the input data, and most results can be visualized with the plotting routines included in this package. Advanced programmers can use the provided functions as an entry point for more in-depth analyses. A wide spectrum of methods for analyzing individual samples, as well as for comparing several samples, is provided.
In the future, we plan to continue adding new methods for diversity analysis, for clustering sequences into groups and for comparing repertoires, as well as methods for processing FASTQ or FASTA files.
Supporting Information S1 Table. Computational time and object sizes of selected bcRep functions. Only the more complex functions with high computational costs were chosen. Characteristics are shown for three samples with 1) only few sequences (Sample 1, n = 31 901 sequences), 2) a moderate number of sequences (Sample 2, n = 323 560 sequences) and 3) many sequences (Sample 3, n = 928 225 sequences). Computational time is represented by CPU elapsed time (seconds) and memory by object size (Megabytes). For all functions only one core was used (no parallel processing). System features and selected parameters for functions are shown separately. (PDF)
"Biology",
"Computer Science"
] |
A novel prognostic classification integrating lipid metabolism and immune co-related genes in acute myeloid leukemia
Background As a severe hematological malignancy in adults, acute myeloid leukemia (AML) is characterized by high heterogeneity and complexity. Emerging evidence highlights the importance of the tumor immune microenvironment and lipid metabolism in cancer progression. In this study, we comprehensively evaluated the expression profiles of genes related to lipid metabolism and immunity to develop a prognostic risk signature for AML. Methods First, we extracted the mRNA expression profiles of bone marrow samples from an AML cohort in The Cancer Genome Atlas database and employed Cox regression analysis to select prognostic hub genes associated with lipid metabolism and immunity. We then constructed a prognostic signature from the hub genes significantly related to survival and validated its stability and robustness using three external datasets. Gene Set Enrichment Analysis was implemented to explore the underlying biological pathways related to the risk signature. Finally, the correlations between the signature, immunity, and drug sensitivity were explored. Results Eight genes were identified from the analysis and verified in clinical samples, including APOBEC3C, MSMO1, ATP13A2, SMPDL3B, PLA2G4A, TNFSF15, IL2RA, and HGF, and used to develop a risk-scoring model that effectively stratified patients with AML into low- and high-risk groups with significant differences in survival time. The risk signature was negatively related to immune cell infiltration. Patients with AML in the low-risk group, as defined by the risk signature, were more likely to respond to immunotherapy, whereas those at high risk responded better to specific targeted drugs. Conclusions This study reveals the significant role of lipid metabolism- and immune-related genes in prognosis and demonstrates the utility of these signature genes as reliable bioinformatic indicators for predicting survival in patients with AML.
The risk-scoring model based on these prognostic signature genes holds promise as a valuable tool for individualized treatment decision-making, providing valuable insights for improving patient prognosis and treatment outcomes in AML.
Introduction
Acute myeloid leukemia (AML) is a clinically, epigenetically, and genetically heterogeneous disease with poor outcomes (1). Despite being initially sensitive to chemotherapy, most patients with AML ultimately experience relapse and die of progressive disease. Therefore, there is an urgent need for alternative treatment solutions. Advances in the epigenomic and genomic characterization of AML have paved the way for the development and approval of novel targeted agents (2). Immunotherapy is also a promising strategy for long-term disease control. However, acquired resistance to targeted agents and a low response to immunotherapy still cause treatment failure (3). Thus, novel therapeutic targets and prognostic biomarkers are urgently required to guide clinical practice and predict the survival of patients with AML.
Emerging evidence suggests that metabolic disruptions, particularly those involving certain metabolites and associated pathways, are crucial factors in the development and progression of leukemia. Lipids and their derivatives play critical roles in energy generation and form the structural basis of cellular and organelle membranes. Extensive research conducted over many years has explored the impact of lipid metabolism on AML, leading to recent breakthroughs (4). As a lipid category, fatty acids represent an appealing therapeutic target that supports increased biomass, membrane biogenesis, energy production, and lipoprotein generation in dividing AML cells (5). AML is associated with the overexpression and constant activation of sphingosine kinase 1, an enzyme responsible for producing sphingosine 1-phosphate from sphingosine. Remarkably, the inhibition of sphingosine kinase 1 induces apoptosis in AML blasts and leukemic stem cells obtained from patients (6,7). Consequently, control of lipid metabolism reprogramming has emerged as a promising therapeutic target for improving the prognosis of individuals diagnosed with AML. We therefore previously constructed a prognostic signature with high specificity and sensitivity for estimating the prognosis of patients with AML based on lipid metabolism-related genes (LMRGs) (8). Consistent with the findings of other studies, interventions aimed at modulating lipid metabolism have the potential to affect not only tumor cells but also immune cells (9,10). We found that the lipid metabolism-related risk signature was closely associated with the immune tumor microenvironment (TME) and the response to immunotherapy in AML.
As with solid tumor cells, AML cells are capable of creating an immunosuppressive microenvironment in which both adaptive and innate immune responses are profoundly disrupted (11,12). Emerging evidence indicates that lipids are crucial for driving this dysregulated state. In acidic, hypoxic, and nutrient-deficient TMEs, both cancer and immune cells tend to depend on lipids for energy storage, building cellular membranes, and generating signaling molecules. Consequently, the dysregulation of lipid metabolism within the TME can have a profound impact on tumorigenesis, subsequent progression, and metastasis. Within this complex TME, lipids act as double-edged swords, capable of either supporting antitumor or promoting protumor immune responses (9,12). These contradictory results present a dilemma, as simply inhibiting or stimulating a single lipid metabolic pathway within the TME fails to achieve optimal results. Models constructed with a single feature also exhibit relatively weaker validity and robustness than those constructed with multiple features. Therefore, there is an urgent need for a comprehensive multi-feature signature model specifically tailored to patients with AML, along with an exploration of its prognostic implications.
In this study, we integrated genes related to immunity and lipid metabolism to develop a prognostic signature based on the interactions between antitumor immunity and lipid metabolism.
Data collection and preparation
The clinical data and RNA-sequencing profiles of the patients with AML (Supplementary Table 1) came from The Cancer Genome Atlas (TCGA) database (https://www.cancer.gov/tcga/). Prior to analysis, all transcriptome data in fragments per kilobase of transcript per million mapped reads were log-transformed and subsequently converted to transcripts per million. Baseline features of the AML patients involved in the risk signature are displayed in the Supplementary Material. For external validation, three independent datasets (GSE12417, GSE37642, and GSE71014), along with their clinical data, were acquired from the GEO database, available at https://www.ncbi.nlm.nih.gov/geo/.
Identification of immune-and lipid metabolism-related prognostic genes
Here, we adopted a comprehensive approach to identify genes associated with lipid metabolism. Specifically, we included all genes from 34 LMRG sets sourced from the Molecular Signature Database (MSigDB; available at https://www.gsea-msigdb.org/gsea/msigdb/) (13). By merging these gene sets, we derived a final set of 1,996 LMRGs. For detailed information regarding the LMRG sets, please refer to Supplementary Table 3. A collection of 1,793 immune-related genes (IRGs) was acquired from the ImmPort database, available at https://www.immport.org/ (14). Details of the IRGs are displayed in Supplementary Table 4. The LMRGs and IRGs were integrated for a prognostic analysis of AML, and 180 prognostic genes (p < 0.01) were obtained for the subsequent analyses.
Development and validation of a prognostic lipid metabolism and immune co-related signature
A total of 144 samples from the AML cohort in the TCGA database were randomly divided into training (N = 72) and validation (N = 72) datasets in a 1:1 ratio. First, we used univariate Cox regression to identify LMRGs and IRGs with a prognostic role in the training dataset. Then, using least absolute shrinkage and selection operator (LASSO) Cox regression analysis with the R package "glmnet" (R version 3.6.1), a novel risk-scoring model with eight genes was developed as follows:
Risk score = 0.188873061 × expAPOBEC3C + 0.176721847 × expMSMO1 + 0.096045519 × expATP13A2 + 0.077828708 × expSMPDL3B + 0.071836509 × expPLA2G4A + 0.027983123 × expTNFSF15 + 0.022815855 × expIL2RA − 0.044508523 × expHGF
Subsequently, patients with AML in the training dataset were classified into low- and high-risk groups by the median risk score. Kaplan-Meier survival analysis was performed to compare the two risk groups, and receiver operating characteristic (ROC) curves were constructed to assess the validity of the risk signature.
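As an illustration, the published risk-score formula and the median split can be expressed directly in code. This is a Python sketch using the coefficients quoted above; the gene expression inputs are hypothetical, and the original analysis was done in R with "glmnet":

```python
# Coefficients taken from the risk-score formula in the text
COEFS = {
    "APOBEC3C": 0.188873061, "MSMO1": 0.176721847, "ATP13A2": 0.096045519,
    "SMPDL3B": 0.077828708, "PLA2G4A": 0.071836509, "TNFSF15": 0.027983123,
    "IL2RA": 0.022815855, "HGF": -0.044508523,  # HGF is the only protective gene
}

def risk_score(expr):
    """Weighted sum of the eight signature genes' expression values."""
    return sum(COEFS[g] * expr.get(g, 0.0) for g in COEFS)

def classify(scores):
    """Split samples into high-/low-risk groups by the median risk score."""
    s = sorted(scores)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return ["high" if x > median else "low" for x in scores]
```

Note the sign of the HGF coefficient: higher HGF expression lowers the score, consistent with its hazard ratio below 1 reported in the Results.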
The validity of the risk signature was verified using samples from the GSE12417, GSE37642, and GSE71014 cohorts. The same analyses used for the training dataset were applied to calculate the risk scores of the samples from the GEO cohorts.
Clinical correlation and subgroup analyses
To assess the clinical significance and prognostic utility of the risk signature, we extracted the clinical data of the 144 patients with AML in the TCGA database; these variables included age (>= 60 years or < 60 years), gender (female or male), chromosome status (normal or abnormal), and gene mutation status (FLT3, NPM1, RAS, and IDH1 mutated or not) (Supplementary Table 5). Kaplan-Meier curves were then generated to explore the prognostic role of each gene included in the risk signature (15).
Functional enrichment analysis
The TCGA database contained genomic data from 144 samples in the AML cohort, which were classified into high- and low-risk groups based on their risk scores. Using GSEA v4.1.0 software (https://www.gsea-msigdb.org/gsea/index.jsp), the hallmark gene set (h.all.v7.2.symbols.gmt) was employed for enrichment analysis, with the phenotypic label being the high-risk group versus the low-risk group. The number of permutations was 1,000, while all other settings were kept at default values (13). Statistically significant findings were defined as p < 0.05 and q < 0.05.
Nomogram construction and assessment
By integrating the risk scores and clinical data of the 144 patients with AML in the TCGA database, we constructed nomogram survival models for overall survival (OS) with the "rms" R package, incorporating both univariate and multivariate results. The calibration curve estimate was then adjusted for optimism using a bootstrap procedure (16). In addition, ROC curves were generated to validate the predictive capacity of the risk signature together with the clinical characteristics.
Immune infiltration and immunotherapy response analysis
The 144 patients with AML in the TCGA database were classified into low- and high-risk groups by the median risk score. The CIBERSORT algorithm was used to estimate the infiltration levels of various immune cell types (17). Tumor immune dysfunction and exclusion (TIDE) data for AML were acquired from http://tide.dfci.harvard.edu/. The TIDE algorithm generates TIDE scores to evaluate the response to immunotherapy agents in patients with cancer (18); lower TIDE scores indicate better outcomes. The immunotherapy response of each patient was evaluated from the gene expression profiles.
Pharmaceutical screening
The 144 patients with AML in the TCGA database were classified into low- and high-risk groups by the median risk score. We then employed the "pRRophetic" R package with the Genomics of Drug Sensitivity in Cancer (GDSC) database to determine the differing drug susceptibilities of the high- and low-risk groups. The half-maximal inhibitory concentration (IC50) value, which indicates the concentration at which cell growth is inhibited by 50%, was used as the metric of drug sensitivity (19,20). Stringent filtering conditions (p < 0.01) were applied.
Quantitative real-time PCR
Quantitative real-time PCR was carried out in accordance with a previous study (21). Samples from healthy donors and patients with AML were collected at Henan Cancer Hospital, with approval by the Medical Ethics Committee of The Affiliated Cancer Hospital of Zhengzhou University (approval no. 2023-KY-0104-001). The PCR primers were purchased from Sangon Biotech (Sangon, Zhengzhou, China), and the primer sequences used in this study are shown in Supplementary Table 6.
Construction of an eight-gene signature with high accuracy of prognosis prediction
Briefly, 1,996 LMRGs and 1,793 IRGs in AML were included, of which 180 candidate prognostic genes were identified using univariate Cox regression analysis (Figure 1A). LASSO Cox regression analysis then identified eight crucial genes for the lipid metabolism- and immune-related prognostic signature according to the optimal λ value (Figures 1B, C). Among them were five LMRGs (MSMO1, ATP13A2, SMPDL3B, PLA2G4A, and TNFSF15) and three IRGs (APOBEC3C, IL2RA, and HGF). Except for HGF, all seven other signature genes are detrimental factors with a hazard ratio (HR) > 1. The risk score for each AML sample in this study was calculated by the formula described in Section 2.3.
The median risk score was used as the cut-off value to classify the training TCGA cohort into high- and low-risk groups (Figure 2A). The scatter plot indicated that high-risk patients were significantly associated with a higher mortality rate than low-risk patients (Figure 2B). The gene expression heatmap illustrates that, except for HGF, all seven other signature genes were upregulated in the high-risk group (Figure 2C). Kaplan-Meier curve analysis demonstrated that high-risk patients suffered significantly worse survival outcomes than low-risk ones (Figure 2D). The AUC reached 0.807, 0.848, and 0.843 at 1, 3, and 5 years, respectively (Figure 2E). In addition, results for the testing and entire datasets were consistent with those from the training dataset (Figures 3A-E). These results demonstrated that the prognostic signature showed great specificity and sensitivity in estimating the prognosis of patients with AML.
External validation of the risk signature in the GEO cohorts
To validate the predictive reliability of this prognostic signature, we screened and included three GEO datasets as external validation cohorts. After calculating the risk scores for each sample in these datasets, we assigned patients to high- and low-risk groups by the median cut-off value of these scores. Survival analyses performed on all three validation datasets consistently demonstrated that high-risk patients with AML experienced significantly worse OS outcomes than low-risk ones (GSE37642, p = 0.00041; GSE71014, p = 0.0098; GSE12417, p = 0.046) (Figures 4A-C).
Correlation between the clinical characteristics and prognostic signature
To assess the clinical significance and prognostic utility of the risk signature, Kaplan-Meier curves were generated within clinical subgroups. These variables included age (>= 60 years or < 60 years), gender (female or male), chromosome status (normal or abnormal), and gene mutation status (FLT3, NPM1, RAS, and IDH1 mutated or not). The results revealed that, regardless of the clinicopathological features, high-risk patients tended to have the worst OS outcomes, indicating the stable performance of the prognostic risk signature (Figures 5A-N).
Nomogram analysis
Univariate and multivariate Cox regression analyses were performed to explore whether the risk signature and clinical characteristics were independent prognostic factors (Figures 6A, B). In addition, a nomogram was developed using age and risk score to predict the 1-, 3-, and 5-year survival rates of patients with AML, in which a higher total score indicated worse survival. The results showed that the prognostic signature had the greatest impact on OS (Figure 6C). Meanwhile, the calibration curves demonstrated strong agreement between the predicted and observed OS at 1-, 3-, and 5-year intervals, indicating the excellent predictive accuracy of the prognostic signature (Figures 6D-F). Furthermore, the 1-, 3-, and 5-year survival ROC analyses showed that the AUCs for the nomogram and risk score were superior to those of the other variables, such as age, chromosomal status, sex, and FLT3, NPM1, RAS, and IDH1 mutations (Figures 6G-I). These results showed that the nomogram and risk score provided higher practical value for prognostic prediction than the other variables.
Biological functions and pathway analysis
GSEA was performed between the two risk groups to identify the underlying biological functions and pathways associated with the risk score. The results indicated that the interferon-γ, inflammatory, and interferon-α responses, as well as TNF-α signaling via NF-κB, complement, IL2-STAT5 signaling, IL6-JAK-STAT3 signaling, allograft rejection, hypoxia, and KRAS signaling pathways were enriched; these are central in mediating host responses to inflammation and antitumor immunity (Figure 7).
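The core of GSEA is a running-sum enrichment score over the ranked gene list. A simplified, unweighted sketch of that statistic is shown below (real GSEA weights hits by correlation and estimates significance by permutation; the gene names here are illustrative):

```python
# Simplified (unweighted) GSEA enrichment score: walk down the ranked list,
# incrementing the running sum on set members and decrementing otherwise;
# the enrichment score is the maximum deviation of the sum from zero.

def enrichment_score(ranked_genes, gene_set):
    hits = sum(1 for g in ranked_genes if g in gene_set)
    misses = len(ranked_genes) - hits
    step_hit, step_miss = 1.0 / hits, 1.0 / misses
    running, best = 0.0, 0.0
    for g in ranked_genes:
        running += step_hit if g in gene_set else -step_miss
        if abs(running) > abs(best):
            best = running
    return best

# Illustrative ranking (most up-regulated in high-risk first) and gene set.
ranked = ["TNF", "IL6", "STAT3", "ACTB", "GAPDH", "TUBB"]
inflammatory = {"TNF", "IL6", "STAT3"}
es = enrichment_score(ranked, inflammatory)
# Set members concentrated at the top of the ranking push ES toward +1.
```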
Correlation between the prognostic signature and tumor immune microenvironment
As the antitumor immunity-related signaling pathways were significantly enriched in the GSEA, we evaluated the correlation of the prognostic risk signature with the immune state of each patient with AML. The CIBERSORT algorithm was used to estimate the infiltration levels of various immune cell types in the TME. The results demonstrated that high-risk patients had lower fractions of activated dendritic cells, CD56dim NK cells, effector memory CD4 T cells, macrophages, immature B cells, MDSCs, NK cells, NK T cells, neutrophils, T follicular helper cells, plasmacytoid dendritic cells, and type 1 T helper cells (Figure 8A). The immune scores and TIDE scores of each sample were then calculated, and the results demonstrated that the high-risk samples had lower immune scores and higher TIDE scores than the low-risk samples (Figures 8B, C), indicating that high-risk patients were associated with an enhanced tumor immune escape ability. Moreover, we assessed the disparity in response rates to immunotherapy between the two risk groups. Notably, the samples from the low-risk group exhibited higher immunotherapy response rates than those from the high-risk group (Figure 8D). Based on these outcomes, we ascertained that the risk signature could indicate immune cell infiltration and the response to immunotherapy in AML.
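The group-level comparison of immunotherapy response rates (as in Figure 8D) reduces to a responder proportion per risk group. A trivial sketch, with fabricated responder/non-responder labels standing in for the TIDE-derived predictions:

```python
# Compare predicted immunotherapy response rates between risk groups.
# 'R' = predicted responder, 'NR' = non-responder; labels are fabricated
# for illustration (in the paper they derive from TIDE scores).

def response_rate(samples):
    return sum(1 for s in samples if s == "R") / len(samples)

low_risk = ["R", "R", "NR", "R", "NR", "R"]
high_risk = ["NR", "NR", "R", "NR", "NR", "NR"]
low_rate = response_rate(low_risk)
high_rate = response_rate(high_risk)
# The reported pattern: the low-risk group (lower TIDE scores, less immune
# escape) shows the higher predicted response rate.
```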
Drug sensitivity analysis
Thereafter, the pRRophetic package was used to further analyze sensitivity to antitumor drugs based on the IC50 values available in the GDSC database for patients with AML (19, 20). In our study, we identified a total of 198 small-molecule compounds that exhibited significantly different responses between the high-risk and low-risk groups (Supplementary Table 7). The results showed that the high-risk group had lower sensitivity to BI2536 (PLK1 inhibitor) and SB-505124 (TGFβR inhibitor), whereas it was sensitive to several other drugs such as AZD2014 (mTOR inhibitor), pictilisib (PI3Kα/δ inhibitor), MK-2206 (Akt inhibitor), dactolisib (dual pan-class I PI3K and mTOR kinase inhibitor), afatinib (EGFR inhibitor), rapamycin (FRAP/mTOR inhibitor), and taselisib (PI3K inhibitor targeting PIK3CA mutations), even though none of these is currently used in the treatment of AML (Figure 9). The outcomes of our study offer promising molecular candidates for targeted therapy in patients with AML.
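pRRophetic-style analyses compare predicted log(IC50) values between groups, where a lower IC50 means higher sensitivity. A pure-Python sketch of the comparison step, with invented predictions in place of the model output trained on GDSC cell lines:

```python
# Per-group mean of predicted ln(IC50): the group with the lower mean is
# the more sensitive one. Values are invented placeholders; in the paper
# they come from pRRophetic ridge models trained on GDSC cell-line data.

def mean(xs):
    return sum(xs) / len(xs)

pred_ic50 = {  # hypothetical predicted ln(IC50) of an mTOR inhibitor
    "high_risk": [1.2, 0.8, 1.0, 0.6],
    "low_risk":  [2.1, 1.9, 2.4, 2.0],
}
more_sensitive = min(pred_ic50, key=lambda g: mean(pred_ic50[g]))
# Matches the reported direction for e.g. AZD2014: high-risk samples show
# the lower predicted IC50, i.e. greater sensitivity.
```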
Discussion
Here, we studied the role of LMRGs and IRGs in the prognosis of patients with AML. By analyzing large-scale genomic and clinical datasets from the TCGA and GEO databases, we identified an eight-gene signature that demonstrated robust prognostic value and potential clinical applications in AML.

FIGURE 5 Relationships between the prognostic signature and clinicopathological characteristics.

We performed additional analysis of the expression of the eight signature genes in the high- and low-risk groups across multiple cohorts, including TCGA, GSE12417, GSE37642, and GSE71014. The findings demonstrated
that MSMO1, ATP13A2, SMPDL3B, PLA2G4A, TNFSF15, APOBEC3C, and IL2RA were upregulated in the high-risk group, whereas HGF was downregulated. Survival analysis indicated that patients with high expression of these signature genes, except for HGF, experienced worse OS outcomes. These results provide further evidence that these genes may function as detrimental factors, while HGF may serve as a protective factor (Supplementary Figures 1 and 2). The relative expression of these eight signature genes was also examined in the clinical samples (Supplementary Figure 3).
APOBEC3C is a member of the APOBEC family that plays important but distinct roles in host defense and mediates C-to-T mutagenesis in cancers. A previous study indicated a negative correlation between APOBEC3C mRNA expression and base substitution mutations in estrogen receptor-negative breast cancer (22). Qian et al. found that APOBEC3C was significantly upregulated in pancreatic ductal adenocarcinoma compared with normal pancreatic tissues and predicted worse survival rates (23). Jiang et al. found that increased APOBEC3C expression was related to hematopoietic stem and progenitor cell proliferation and an increased C-to-T mutational burden during disease progression in patients with myeloproliferative neoplasm (24). Methylsterol monooxygenase 1 (MSMO1), an intermediate enzyme involved in cholesterol and fatty acid biosynthesis, acts as a novel mediator of chemoresistance in cancer (25). A previous study revealed that MSMO1 plays crucial roles in tumorigenesis and progression and is a promising prognostic biomarker for cervical squamous cell carcinoma (26).
As a negative regulator of Toll-like receptor signaling, sphingomyelin phosphodiesterase acid-like 3B (SMPDL3B) plays a crucial role in innate immunity at the interface of membrane biology. Qu et al. demonstrated that SMPDL3B expression indicates poor prognosis and contributes to AML progression (29).
The cytosolic phospholipase PLA2G4A is crucial for the pathogenesis of FLT3-ITD-mutated AML (30). Higher PLA2G4A expression results in worse OS and is associated with mutations in NRAS, which are known to contribute to the development of myelodysplastic syndrome (31).
Tumor necrosis factor superfamily member 15 (TNFSF15) promotes lymphatic metastasis by upregulating vascular endothelial growth factor-C in a lung cancer mouse model (32). Lu et al. showed that increased TNFSF15 expression indicates worse prognosis in oral cancer (33).
Excessive expression of IL2RA, the gene encoding the alpha chain of the interleukin-2 receptor, has been linked to chemotherapy resistance and unfavorable outcomes in AML (34). IL2RA enhances cell proliferation and cell cycle activity while suppressing apoptosis in both human AML cell lines and primary cells. In two genetically modified mouse models of AML, IL2RA hampered cell differentiation, facilitated stem cell-like characteristics, and was essential for leukemia development. Antibodies targeting IL2RA have demonstrated the ability to inhibit leukemic cells without affecting normal hematopoietic cells, and their combined effects with other anti-leukemic agents have shown potential synergy. Consequently, IL2RA is a promising therapeutic target in AML because it regulates key processes such as proliferation, differentiation, apoptosis, stem cell-related properties, and leukemogenesis (35).
As a multifunctional cytokine, hepatocyte growth factor (HGF) regulates cell growth, movement, and tissue regeneration in various epithelial cells (36). HGF binds to its receptor c-Met and activates its kinase activity, initiating signaling pathways such as JAK/STAT3, PI3K/Akt/NF-κB, and Ras/Raf. Aberrations in the HGF/MET pathway act as diagnostic, predictive, and prognostic biomarkers for cancers (37). HGF has been discovered to regulate the activity of various immune cell types, including B cells, T cells, and natural killer cells, which are important components of the anti-tumor immune response. By enhancing immune surveillance and anti-tumor effects, HGF may contribute to reducing the risk of AML development or progression. It is worth noting that the exact mechanisms by which HGF influences AML risk are still being investigated, and further studies are required to fully reveal its role in the disease. Nonetheless, the association between HGF and a reduced risk of AML highlights the potential importance of this growth factor in the development and treatment of the disease.

The risk score defined by the prognostic signature in this study effectively stratified patients with AML into low- and high-risk groups with significantly different survival outcomes. These results are consistent with those of the external validation cohorts from the GEO datasets. Regardless of age, sex, cytogenetic abnormalities, or gene mutations, patients in the high-risk group consistently exhibited worse OS outcomes, further supporting the reliability and generalizability of the prognostic risk signature.
To enhance the clinical utility of our findings, we constructed nomograms that integrated the risk scores derived from the eight-gene signature with other clinical factors. The ROC and calibration curves further confirmed the higher predictive accuracy of the prognostic signature and nomograms compared with clinical variables such as age, sex, cytogenetic abnormalities, and gene mutations, indicating their potential as reliable tools for personalized treatment decision-making.
GSEA between the two risk groups sheds light on the underlying biological mechanisms associated with the prognostic signature. Many antitumor immunity-related pathways were enriched, suggesting the involvement of immune dysregulation in AML prognosis. This could explain the differences in immunotherapy and treatment response between the two risk groups.
The correlation between immune cell infiltration and the risk score was then explored. The low-risk group showed higher proportions of effector memory CD4 T cells, macrophages, NK cells, NK T cells, T follicular helper cells, type 1 T helper cells, and other immune cell subtypes. The negative correlation between immune cell infiltration and the risk score suggests that patients in the high-risk group may have an impaired immune status. The immune and immune escape scores were then calculated, and the results demonstrated a poorer immune state and stronger immune escape ability in the high-risk group, which may affect the response to immunotherapy. Furthermore, the high-risk group showed a notable decrease in the expression of common immune checkpoints such as PD1, PDL1, PDL2, and CTLA4 (Supplementary Figure 4). These findings indicate that the identified signature holds promise as a valuable tool for assessing the effectiveness of immunotherapy in individuals with AML. Additionally, our prediction of immunotherapy response rates further verified this conclusion, showing that low-risk patients had higher response rates than high-risk patients. This finding highlights the potential importance of immune modulation in AML treatment. Future research could focus on understanding the underlying mechanisms that contribute to immune suppression in high-risk patients and explore strategies to enhance immune cell function in these individuals.
In line with the potential impact on the immunotherapy response, we evaluated the sensitivity of AML patients to antitumor drugs using the pRRophetic package. Our results indicated that high-risk patients exhibited higher sensitivity to several candidate drugs, most of which target the PI3K-AKT-mTOR signaling pathway. This finding could be relevant for treatment selection and personalized therapeutic approaches in AML. The PI3K-AKT-mTOR signaling pathway is one of the most frequently dysregulated pathways in human cancers, including AML, and is involved in the control of cell metabolism, proliferation, movement, growth, survival, and many other cellular processes (38). Inhibition of the PI3K-AKT-mTOR pathway is an important strategy for tumor therapy. However, the effects of these inhibitors vary greatly among patients with AML (39, 40). So far, no clear mutational characteristics or other pathological features have been identified that predict treatment response. Our results provide a valuable tool for individualized treatment decisions involving these drugs in AML.
It is important to acknowledge the limitations of this study. First, although we utilized large-scale datasets for the analysis, the retrospective study design may introduce inherent biases. Prospective studies are warranted to validate our findings and to assess the clinical utility of the prognostic signature and nomograms for real-time patient management. Further functional experiments and in-depth mechanistic investigations are required to elucidate the precise roles of the identified LMRGs and IRGs in AML pathogenesis and treatment responses.
In conclusion, our study presents a comprehensive analysis of the prognostic value and clinical implications of an eight-gene signature derived from LMRGs and IRGs in AML. This signature effectively stratified patients into high- and low-risk groups, demonstrating significant differences in survival outcomes and potential implications for immune cell infiltration, treatment response, and drug sensitivity. This opens up avenues for studying the interplay between lipid metabolism and immune dysregulation, which may uncover novel therapeutic targets. Future investigations could explore the manipulation of lipid metabolism pathways as a means to modulate immune responses and improve treatment outcomes in AML. Overall, the findings of this study have several broader implications: they aid in personalized risk assessment for AML patients, guiding treatment decisions toward immunotherapy or targeted drugs based on risk group assignment.
FIGURE 1 Development of the prognostic risk signature in the training dataset. (A) The least absolute shrinkage and selection operator (LASSO) model was subjected to ten-fold cross-validation for variable selection. (B) LASSO coefficient profile of the identified crucial genes. (C) Coefficient profile of the eight prognostic genes.
FIGURE 2 Performance of the prognostic signature in the training dataset. (A) The risk curve of each AML sample, defined by risk score. (B) Scatter plots showing the survival status of each sample. (C) Heat map of the expression of the eight selected genes. (D) Kaplan-Meier survival curves for the two risk groups. (E) The receiver operating characteristic (ROC) curves for overall survival at 1, 3, and 5 years.
FIGURE 3 Performance of the prognostic signature in the testing and entire datasets. (A) The risk curve of each AML sample, defined by risk score. (B) Scatter plots showing the survival status of each sample. (C) Heat map of the expression of the eight selected genes. (D) Kaplan-Meier survival curves for the two risk groups. (E) The receiver operating characteristic (ROC) curves for overall survival at 1, 3, and 5 years.
FIGURE 6 Construction and validation of the nomogram. (A, B) Univariate and multivariate Cox regression of the prognostic signature and clinical characteristics. (C) The nomogram developed to estimate the survival probabilities of patients with AML. (D-F) Calibration plots of the agreement between the predicted and observed overall survival at 1, 3, and 5 years. (G-I) The ROC curves for overall survival at 1, 3, and 5 years.
"Medicine",
"Biology"
] |
Use of Business Analytics in Accounting Firms—Taking Deloitte as an Example
This paper aims to introduce the business analytics strategies that Deloitte, one of the most prominent public accounting firms, has applied to manage hundreds of thousands of clients and discover valuable insights within the organization to increase efficiency and improve risk management. According to Deloitte's official website, 76% of audit committee members within the organization believe that advanced technology should be used more extensively [1]. The insights gained from data analytics are thus endorsed by auditors, the largest professional group in the accounting firm. This paper introduces insight-driven methods used in the company, including profit model, network, structure, process, product performance, product system, service, channel, brand, and customer engagement, each of which can be integrated into a business analytics approach. Lastly, this paper discusses some limitations associated with data analysis in the firm.
INTRODUCTION
This paper focuses on the practical use of "Big Data" in Deloitte, since it has generated significant business management insights and led companies to reevaluate the efficiency of their daily work. The differences between traditional and emerging auditing practices across various stages of the process are discussed. The traditional record-to-report approach to auditing limits the visibility of the data. The new approach not only increases auditors' efficiency in inputting, accessing, and analyzing clients' financial reports, but also keeps employees attuned to a broader business calendar. Moreover, the Customer Value Model helps to calculate the total value each client is projected to bring over their lifetime before Deloitte makes contact, re-evaluated every six to twelve months, since the company prefers to work with clients whose value increases over time. Customizing its approach to detecting fraud helps Deloitte better manage and examine a large number of clients.
OVERVIEW OF DELOITTE
Big data plays an intriguing role in the success of Deloitte, one of the Big Four, from financial performance management and advanced forecasting to fraud and forensics. Deloitte has switched from the traditional auditing process to continuous auditing by using software to do the retroactive and repetitive work. Previously, auditors had to manually input data to generate monthly, semiannual, or annual reports for clients, which was inefficient. Ratio, trend, and regression analysis are the basic statistical techniques used in a traditional auditing process. Data modeling not only improves efficiency but also enhances the transparency of transactions and information sharing between employees. Making the data perceivable to employees helps them improve their analytical skills as well as their decision-making ability. The premise is that accountants have the technical skills to run large data sets with statistical analysis tools. Therefore, the company should keep investing in educating decision makers to be familiar with the models used in the accounting industry.
ADVANCED ANALYTICS IN FINANCIAL SERVICE
Auditing is a significant part of accounting since it plays an initial role in analyzing an organization's financial status. As accounting journals become larger and more complex, leading audits can use models to reduce the margin of omissions, errors, or fraud. Deloitte's audit methodology recognizes the advances in statistical science and data management. Table 1 shows how transformational applications change the auditing process by comparing traditional audits with leading audits. With software and the auditor's capability to analyze entire sets of transactions rather than a sample, outliers are identified more quickly and accurately [3]. With the help of big data applied in auditing practice, accountants now move from doing repetitive accounting tasks to creating financial plans and offering insights to their clients. Therefore, the advanced technologies not only improve task efficiency but also increase the competitiveness of accountants, because they can offer advice that is more specific to each client's needs. The use of business analytics also helps clients engage in a tax management approach. Deloitte has faced the complexity of the changing tax code; a proper application can track tax rates and calculate possible tax savings from a set of manually input rules. The results of the analysis can be visually presented to clients in an intelligible format [6].
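Testing the full population of transactions rather than a sample makes outlier flagging a direct computation. A minimal z-score sketch follows; the ledger amounts and the deviation threshold are illustrative, not part of any actual Deloitte methodology:

```python
import math

# Flag transaction amounts more than `z_cut` population standard deviations
# from the mean. Analyzing every transaction (not a sample) is what lets a
# leading audit surface each outlier. Data and threshold are illustrative;
# a single huge outlier inflates the standard deviation, so the cut-off
# here is deliberately loose.

def flag_outliers(amounts, z_cut=2.0):
    n = len(amounts)
    mu = sum(amounts) / n
    sd = math.sqrt(sum((a - mu) ** 2 for a in amounts) / n)
    return [i for i, a in enumerate(amounts) if abs(a - mu) > z_cut * sd]

ledger = [102.0, 98.5, 101.2, 99.9, 100.4, 5000.0, 97.8, 103.1]
suspicious = flag_outliers(ledger)  # flags the 5000.0 entry at index 5
```

Robust statistics (median absolute deviation) would resist the masking effect of extreme values better; the z-score version is shown only because it is the simplest full-population test.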
ENTERPRISE FRAUD MANAGEMENT (EFM)
The anti-fraud programs created by Deloitte involve various aspects, from prevention and detection to response. Enterprise Fraud Management takes an integrated view of an organization, calculating fraud risks and patterns at an enterprise level. Applying machine learning to business, the Enterprise Fraud Management model can detect abnormal events with specific characteristics associated with fraud in the past, such as intentional omission, manipulation, and misappropriation of funds [4]. Deloitte held its Financial Crime Strategy conference in 2014. It recognized critical observations relating to financial crime strategy formulation, including that the financial and reputational cost of non-compliance is increasing, and that technology is indispensable in targeting potential risk and allowing companies to combat financial crime. Figure 1 shows that most delegates believe that effective analytics delivers value across a range of financial crime processes and helps reduce monetary and relational losses. Classification is used in machine learning to group data together by sets of criteria, such as a change in receivables, a change in inventory, or a change in payment method. Features capture the nature of the business, for example whether the client is in the manufacturing, computer, or construction industry. The goal of this model is to distinguish fraudulent transactions from thousands of unorganized ones, and the model can return a result in a short time. Another model used in detecting fraud is Forensic Data Analytics. It enables periodic monitoring of controls using technology, and it also investigates fraud using signals from suspected and confirmed fraud cases. It is important for a company to continue improving these measurements because fraud risk assessment becomes more daunting with large amounts of data.
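A classifier over the kinds of features just described can be sketched as a weighted score with a decision threshold. The weights, feature names, and threshold below are invented for illustration; a production EFM system would learn them from labeled historical fraud cases rather than hand-code them:

```python
# Toy scoring classifier over the features mentioned in the text (change in
# receivables, change in inventory, change in payment method). Weights and
# the decision threshold are invented placeholders; a real EFM model would
# fit them with a supervised learner on confirmed fraud labels.

WEIGHTS = {"recv_change": 0.5, "inv_change": 0.3, "new_payment_method": 1.0}
THRESHOLD = 1.0

def fraud_score(tx):
    return sum(WEIGHTS[f] * tx.get(f, 0) for f in WEIGHTS)

def classify(transactions):
    return ["fraud" if fraud_score(t) > THRESHOLD else "normal"
            for t in transactions]

batch = [
    {"recv_change": 0.2, "inv_change": 0.1, "new_payment_method": 0},
    {"recv_change": 1.5, "inv_change": 0.9, "new_payment_method": 1},
]
labels = classify(batch)  # the second, anomalous transaction is flagged
```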
Figure 2 demonstrates that executives at more effective organizations anticipated that fraud incidents were less likely to occur over the next year compared with organizations with less effective fraud controls. The gap between the two types of companies reflects improved employee sensitization and awareness of fraud.
CUSTOMER VALUE MEASURE
A lack of customer relationships can be harmful to an organization in the long term, especially in a financial services firm like Deloitte, because there are many competitors in the environment. An organization must measure each client's lifetime value to the business for differentiated treatment, diagnosis, analysis, and understanding of valuable customers. Deloitte separates customer value into two sides, experience value and business value, because the company wants to know how satisfied customers are, how much a client promotes the company, and the cost to serve them. Deloitte has developed several approaches to understanding its business partners, such as the Next Best Action (NBA) approach, which implements a customer-level decision engine that optimizes offers to clients based on the likelihood of a positive response as well as business priority. The program satisfies both the organization's and the customers' needs. Deloitte also adopts an "always-on marketing" approach, which refers to an optimal state in which an organization consistently delivers the most relevant message and experience. Pricing, promotion, reliability, and service quality must be retained at a high level of satisfaction to ensure the organization's future growth, and a comprehensive data analysis can surface all of this information for clients across different demographics. The customer lifetime value (CLV) metric predicts the net present value of future profitability for each customer using regression trees with high-quality inputs. The micro-segmentation model is based on linear algebra and calculates the average of clients' credit scores over a fixed-length period [5]. The metric can be difficult to calculate since there are a variety of formulas depending on how many aspects of a client Deloitte wants to take into consideration when defining a high-value customer. Figure 3 is an example of a customer lifetime value formula.
It is based on the profitability of the company and the count of unique customers, measuring retail profit per transaction and sales per unique customer. Thus, CLV addresses the heart of a client's performance in growth and profitability.
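The CLV idea described above, a net present value of projected future profits per customer, can be sketched directly. The retention rate, discount rate, horizon, and profit figures below are placeholder inputs, and the Figure 3 formula may differ in its exact terms:

```python
# CLV as the net present value of expected future profit: each year's
# profit is weighted by the probability the client is still retained and
# discounted back to today. All inputs are illustrative placeholders.

def customer_lifetime_value(annual_profit, retention, discount, years):
    return sum(
        annual_profit * (retention ** t) / ((1 + discount) ** t)
        for t in range(1, years + 1)
    )

clv = customer_lifetime_value(annual_profit=10_000, retention=0.9,
                              discount=0.08, years=5)
# Retention and discounting together make CLV strictly less than the naive
# annual_profit * years total.
```

Comparing CLV across clients every six to twelve months, as the paper describes, then amounts to re-running this calculation with updated retention and profit estimates.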
LIMITATIONS AND SOLUTIONS
Even though business intelligence has helped Deloitte grow, developing technology brings many issues to deal with, such as data breaches, high prices, difficulty in analyzing various sources, and resistance to technology adoption. Since Deloitte has diverse types of clients, sensitive information could potentially be breached by accessing a laptop or bypassing network security remotely. Deloitte has done an excellent job of ensuring the security of its confidential data by using two-factor authentication, which involves supplementary information such as an access code sent by text, or personal information checks, before clients' information can be accessed [2]. Based on Figure 4, another issue regarding the application of business analytics is that most executives are uncomfortable using data, either because they are simply not familiar with the techniques, they think the company is not an insight-driven organization, or they do not want to take on more responsibility. However, the company can solve this problem by giving incentives to learn new analytics skills, communicating different perspectives on how data can be useful, and upgrading the technology to support the implementation of new analytical tools. This technical transition requires years of training and adjusting. Once the data-driven approach becomes mainstream, managers and employees will understand that analytics can help them make better decisions in improving customer service, identifying business processes, informing marketing, and deriving performance measurements. Since Deloitte has already established a model to examine customer value, it is crucial to develop another model to personalize client engagements, for example, sending text messages according to clients' needs and providing loyalty benefits for retention.
CONCLUSION
With the emergence of business analytics, a lot of repetitive work in public accounting firms has been replaced by data models. Even though it takes years to transition an entire organization to a data-driven decision-making approach, the impact on the company's profitability is long-lasting. All the models introduced in this paper, such as the Next Best Action (NBA), tax management, always-on marketing, customer lifetime value, and Enterprise Fraud Management approaches, require different machine learning algorithms to make accounting information more accurate and analytical. However, this paper does not explain the technical aspects of these algorithms. The information provided in this paper implies that big data plays a huge role in accounting practices. Even though it remains unknown whether current auditing procedures will change because of big data, auditors will certainly need to develop a thorough understanding of the presentation of big data in financial statements, and the integration of big data into detailed audit practice is on its way [7]. Besides, it is also expected that more software will be developed to support more complex transactions. Top accounting firms like Deloitte must integrate more techniques into their business models. Meanwhile, users have to be aware of challenges such as the quality, source, and choice of data.
ACKNOWLEDGEMENT
I would like to show my deepest gratitude to the accounting professors at the University of Connecticut, who have guided me in completing the research paper.
"Business",
"Computer Science"
] |
Some mathematical properties of a barotropic multiphase flow model
We study a model for compressible multiphase flows involving N immiscible phases, where N is arbitrary. This model boils down to the Baer-Nunziato model when N = 2. For the barotropic version of the model, and for more general equations of state, we prove the weak hyperbolicity property, the convexity of the natural phasic entropies, and the existence of a symmetric form.
Introduction
The modeling and numerical simulation of multiphase flows is a relevant approach for a detailed investigation of some patterns occurring in many industrial sectors. In the nuclear industry, for instance, some accidental configurations involve three-phase flows, such as the steam explosion: a phenomenon consisting in violent boiling or flashing of water into steam, occurring when the water is in contact with hot molten metal particles of "corium", a liquid mixture of nuclear fuel, fission products, control rods, structural materials, etc., resulting from a core meltdown. We refer the reader to [3,12] and the references therein for a better understanding of that phenomenon.
The modeling and numerical simulation of the steam explosion is an open topic up to now. Since the sudden increase of vapor concentration results in huge pressure waves including shock and rarefaction waves, compressible multiphase flow models with unique jump conditions, and for which the initial-value problem is well posed, are mandatory. Some modeling efforts have been provided in this direction in [10,9,5,14]. The N-phase flow models developed therein consist in an extension to N ≥ 3 phases of the well-known Baer-Nunziato two-phase flow model [1]. They consist in N sets of partial differential equations (PDEs) accounting for the evolution of phase fraction, density, velocity and energy of each phase. As in the Baer-Nunziato model, the PDEs are composed of a hyperbolic first order convective part consisting in N Euler-like systems coupled through non-conservative terms, and zero-th order source terms accounting for pressure, velocity and temperature relaxation phenomena between the phases. It is worth noting that the latter models are quite similar to the classical two-phase flow models in [4,2,7].
In [6], two crucial properties have been proven for a class of two phase flow models containing the Baer-Nunziato model, namely, the convexity of the natural entropy associated with the system, and the existence of a symmetric form. As recalled in that paper, such properties are well understood for systems of conservation laws since Godunov [8] and Mock [13], but remain an open question for non conservative and non strictly hyperbolic models such as those considered here.
In the present paper, we prove the convexity of the entropy and the existence of a symmetric form for a multiphase flow model with N phases, where N is arbitrarily large. We restrict the study to the case where the interfacial velocity coincides with one of the phasic material velocities. We consider two versions of the model: firstly, the model with a barotropic pressure law introduced in [10], and secondly, a similar model with a more general equation of state.
The barotropic multiphase flow model
We consider the following system of partial differential equations (PDEs), introduced in [10] for the modeling of the evolution of N distinct compressible phases in one space dimension, for k = 1, .., N, x ∈ R and t > 0. The model consists of N coupled Euler-type systems. The quantities α_k, ρ_k and u_k represent the mean statistical fraction, the mean density and the mean velocity in phase k (for k = 1, .., N). The quantity p_k is the pressure in phase k. We assume a barotropic pressure law for each phase, so that the pressure p_k is a given function of the density, ρ_k → p_k(ρ_k), with the classical assumption that p_k'(ρ_k) > 0. The mean statistical fractions and the mean densities are positive, and the following saturation constraint holds everywhere at every time: Σ_{k=1}^N α_k = 1. Thus, among the N equations (1a), N − 1 are independent, and the main unknown U is expected to belong to the physical space Ω_U of states such that 0 < α_2, .., α_N < 1 and α_k ρ_k > 0 for all k = 1, .., N.
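A minimal sketch of the convective part of system (1), assuming a Baer-Nunziato-type structure with interfacial velocity u_1 and interface pressures P_kl (assumptions based on the description above, relaxation source terms omitted):

```latex
% Sketch of system (1), assumed form, for k = 1,..,N:
\begin{aligned}
&\partial_t \alpha_k + u_1\,\partial_x \alpha_k = 0, &&\text{(1a)}\\
&\partial_t (\alpha_k \rho_k) + \partial_x (\alpha_k \rho_k u_k) = 0, &&\text{(1b)}\\
&\partial_t (\alpha_k \rho_k u_k)
 + \partial_x \big(\alpha_k \rho_k u_k^2 + \alpha_k p_k\big)
 + \sum_{l=1,\, l \neq k}^{N} P_{kl}(U)\,\partial_x \alpha_l = 0. &&\text{(1c)}
\end{aligned}
```

With the closure (3) for the interface pressures, the non-conservative contributions cancel upon summation over k, so that total momentum is conserved.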
Following [10], we make the following choice for the closure laws of the so-called interface pressures P_kl(U): P_1l(U) = p_l(ρ_l) for l = 2, .., N, and P_kl(U) = p_k(ρ_k) for k = 2, .., N and l ≠ k. Observing that the saturation constraint gives Σ_{l=1, l≠k}^N ∂_x α_l = −∂_x α_k for all k = 1, .., N, the momentum equations (1c) can be simplified accordingly.
Eigenstructure of the system
The following result characterizes the wave structure of system (1). Theorem 2.1. System (1) is weakly hyperbolic on Ω_U: it admits the 3N − 1 real eigenvalues σ_1(U) = .. = σ_{N−1}(U) = u_1, σ_{N−1+k}(U) = u_k − c_k and σ_{2N−1+k}(U) = u_k + c_k for k = 1, .., N. The corresponding right eigenvectors are linearly independent if, and only if, |u_k − u_1| ≠ c_k for all k = 2, .., N. (6) The characteristic field associated with σ_1(U), .., σ_{N−1}(U) is linearly degenerate, while the characteristic fields associated with σ_{N−1+k}(U) and σ_{2N−1+k}(U) for k = 1, .., N are genuinely non-linear. When (6) fails, the system is said to be resonant.
Proof. In the following, we write p_k and c_k instead of p_k(ρ_k) and c_k(ρ_k) for k = 1, .., N in order to ease the notations. Choosing the variable U = (α_2, .., α_N, u_1, p_1, .., u_N, p_N)^T, the smooth solutions of system (1) satisfy the equivalent quasi-linear system ∂_t U + A(U) ∂_x U = 0, where A(U) is the block matrix (7). Defining M_k = (u_k − u_1)/c_k, the Mach number of phase k relative to phase 1 for k = 2, .., N, the matrices A, B_1, .., B_N and C_1, .., C_N are given as follows.
where δ_{p,q} is the Kronecker symbol: for p, q ∈ N, δ_{p,q} = 1 if p = q and δ_{p,q} = 0 otherwise. Since A is diagonal and C_k is R-diagonalizable with eigenvalues u_k − c_k and u_k + c_k, the matrix A(U) admits the eigenvalue u_1 (with multiplicity N − 1) and the eigenvalues u_k − c_k and u_k + c_k for k = 1, .., N. In addition, A(U) is R-diagonalizable provided that the corresponding right eigenvectors span R^{3N−1}.
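The block structure above makes the spectrum easy to check numerically. Below is a minimal sketch for N = 2, assembling the 5 × 5 convective matrix in the variables (α_2, u_1, p_1, u_2, p_2). The acoustic blocks C_k are taken as the standard Euler blocks in (u_k, p_k) variables (an assumption, since the exact entries are not reproduced here), and the coupling columns B_k are filled with placeholder values; they do not affect the eigenvalues, because the matrix is block lower triangular:

```python
import numpy as np

def acoustic_block(u, rho, c):
    """2x2 convective block of one phase in (u, p) variables (assumed form)."""
    return np.array([[u, 1.0 / rho],
                     [rho * c**2, u]])

def convective_matrix(u1, rho1, c1, u2, rho2, c2, b1=1.0, b2=1.0):
    """Assemble A(U) for N = 2 in the variables (alpha_2, u_1, p_1, u_2, p_2)."""
    A = np.zeros((5, 5))
    A[0, 0] = u1                      # alpha_2 is advected with u_1
    A[1:3, 0] = b1                    # coupling of phase 1 to alpha_2 (placeholder)
    A[1:3, 1:3] = acoustic_block(u1, rho1, c1)
    A[3:5, 0] = b2                    # coupling of phase 2 to alpha_2 (placeholder)
    A[3:5, 3:5] = acoustic_block(u2, rho2, c2)
    return A

u1, rho1, c1 = 1.0, 1.0, 2.0
u2, rho2, c2 = 0.5, 0.8, 1.2
eig = np.sort(np.linalg.eigvals(convective_matrix(u1, rho1, c1, u2, rho2, c2)).real)
expected = np.sort([u1, u1 - c1, u1 + c1, u2 - c2, u2 + c2])
print(np.allclose(eig, expected))  # True
```

Because A(U) is block lower triangular, its spectrum is the union of {u_1} and the spectra of the C_k blocks, in agreement with the 3N − 1 eigenvalues listed above.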
The right eigenvectors are the columns of a block matrix R(U) whose blocks A', B'_1, .., B'_N and C'_1, .., C'_N are defined below for k = 2, .., N. The first N − 1 columns are the eigenvectors associated with the eigenvalue u_1. For k = 1, .., N, the N + 2(k − 1)-th and N + (2k − 1)-th columns are the eigenvectors associated with u_k − c_k and u_k + c_k, respectively. We can see that R(U) is invertible if and only if M_k^2 ≠ 1 for all k = 2, .., N, i.e. if and only if the inequations (6) hold. Moreover, the first N − 1 eigenvectors R_j(U) satisfy R_j(U) · ∇_U(u_1) = 0. Hence, the field associated with the eigenvalue u_1 is linearly degenerate. Now we observe that all the acoustic fields are genuinely non-linear, since ∇_U(u_k ± c_k) · R(U) ≠ 0 for all k = 1, .., N. Proposition 2.2. The linearly degenerate field σ_1(U) = .. = σ_{N−1}(U) = u_1 admits 2N independent Riemann invariants. The computation is tedious but straightforward.
Mathematical Entropy
An important consequence of the closure law (3) for the interface pressures P_kl(U) is the existence of an additional conservation law for the smooth solutions of (1). Defining the specific internal energy of phase k, e_k, by e_k'(ρ_k) = p_k(ρ_k)/ρ_k^2, and the specific total energy of phase k by E_k = u_k^2/2 + e_k(ρ_k), the smooth solutions of (1) satisfy phasic energy identities. Summing for k = 1, .., N, the smooth solutions of (1) are seen to satisfy an additional conservation equation which expresses the conservation of the total mixture energy. As regards the non-smooth weak solutions of (1), one has to add a so-called entropy criterion in order to select the relevant physical solutions. For this purpose, we prove the following result.
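Assuming the closure (3), the interface-pressure contributions cancel upon summation over the phases, so the mixture energy balance (10) is expected to take the conservative form:

```latex
% Sketch (assumed form) of the mixture energy conservation law (10)
\partial_t \Big( \sum_{k=1}^{N} \alpha_k \rho_k E_k \Big)
+ \partial_x \Big( \sum_{k=1}^{N} \alpha_k \big(\rho_k E_k + p_k\big)\, u_k \Big) = 0.
```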
Proposition 2.3. For each k = 1, .., N, the fractional energy (α_k ρ_k E_k)(U) is a non strictly convex function of U. Consequently, the total mixture energy, defined by Σ_{k=1}^N (α_k ρ_k E_k)(U), is also a non strictly convex function of U. In the light of (10), the total mixture energy is a mathematical entropy of system (1).
Proof. The monophasic mathematical entropy of phase k is the phasic energy expressed in the conservative variables of phase k. Without loss of generality, we can rearrange the components of U and assume that the Hessian matrix has a block-diagonal structure for k = 2, .., N. Let (a, b^T)^T ∈ R^{3×1} be given, with a ∈ R and b ∈ R^{2×1}. Since S_k''(V_k) is a positive matrix by the strict convexity of the monophasic mathematical entropy S_k, the right hand side is positive, which yields the positivity of the Hessian of (α_k ρ_k E_k)(U) and hence the (non-strict) convexity of (α_k ρ_k E_k)(U) for k = 2, .., N.
The Hessian matrix (α_1 ρ_1 E_1)''(U) has an analogous structure. Defining A_1, B_1 and C_1 as in (11), let a_k ∈ R for k = 2, .., N and b_k ∈ R^{2×1} for all k = 1, .., N be given. An easy computation then shows that, since S_1''(V_1) is a positive matrix by the strict convexity of the monophasic mathematical entropy S_1, the right hand side is positive, which yields the positivity of the matrix (α_1 ρ_1 E_1)''(U). The convexity of the total mixture energy is a direct consequence of the convexity of all the fractional specific energies. Thus, the total mixture energy is non strictly convex.
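The convexity claim can be probed numerically. The sketch below (an illustration, not the paper's proof) takes an assumed power-law barotropic e.o.s. p(ρ) = κρ^γ, expresses the phasic energy in the variables (α, αρ, αρu), and checks that a finite-difference Hessian has a non-negative spectrum. Since the map is positively homogeneous of degree one, one eigenvalue is (numerically) zero, which is exactly the non-strict convexity:

```python
import numpy as np

kappa, gamma = 1.0, 1.4

def e(rho):
    """Specific internal energy for p = kappa*rho**gamma, so that e'(rho) = p/rho^2."""
    return kappa * rho**(gamma - 1.0) / (gamma - 1.0)

def F(w):
    """Phasic energy alpha*rho*E in the variables (alpha, m, q) = (alpha, alpha*rho, alpha*rho*u)."""
    alpha, m, q = w
    return q**2 / (2.0 * m) + m * e(m / alpha)

def hessian(f, w, h=1e-5):
    """Central finite-difference Hessian of f at w."""
    n = len(w)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            wpp = np.array(w, float); wpp[i] += h; wpp[j] += h
            wpm = np.array(w, float); wpm[i] += h; wpm[j] -= h
            wmp = np.array(w, float); wmp[i] -= h; wmp[j] += h
            wmm = np.array(w, float); wmm[i] -= h; wmm[j] -= h
            H[i, j] = (f(wpp) - f(wpm) - f(wmp) + f(wmm)) / (4.0 * h**2)
    return H

w = np.array([0.3, 0.5, 0.2])          # (alpha, alpha*rho, alpha*rho*u)
eigs = np.linalg.eigvalsh(hessian(F, w))
print(eigs.min() > -1e-4)              # non-negative spectrum up to FD noise
```

The zero eigenvalue corresponds to the direction w itself (1-homogeneity gives H·w = 0), so the energy is convex but not strictly convex, as stated.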
Symmetrizability
Definition 2.1. The system (1) is said to be symmetrizable if there exist a symmetric positive definite matrix P(U) and a symmetric matrix Q(U) such that the smooth solutions of (1) satisfy P(U) ∂_t U + Q(U) ∂_x U = 0. Since the total mixture energy defined in the previous section is not strictly convex, we cannot use it to prove the symmetrizability of system (1) by multiplication by its Hessian matrix. However, we can find a suitable positive definite matrix P(U) which symmetrizes the system. Theorem 2.4. System (1) is symmetrizable as long as the non resonance condition (6) holds.
Proof. Let us define U = (α_2, .., α_N, u_1, p_1, .., u_N, p_N)^T. The smooth solutions of system (1) satisfy ∂_t U + A(U) ∂_x U = 0, where the matrix A(U) is given in (7). We seek a symmetric positive definite matrix P(U) that symmetrizes the system, in block form. We can easily see that the matrix P_k C_k is symmetric for all k = 1, .., N. A necessary and sufficient condition for Q(U) to be symmetric is a compatibility relation between D_k, B_k and C_k, for all k = 1, .., N. The matrix C_k^T − u_1 I_2 is a 2 × 2 matrix whose determinant is c_k^2(M_k^2 − 1), where M_k = (u_k − u_1)/c_k is the relative Mach number of phase k. Hence, the matrices C_k^T − u_1 I_2 are invertible if and only if the non resonance condition (6) holds. Assuming (6), the matrix D_k is uniquely determined. An easy computation shows that the matrix (C_k^T − u_1 I_2)^{−1} P_k is symmetric, and we get that D_k^T B_k = B_k^T (C_k^T − u_1 I_2)^{−1} P_k B_k is also symmetric. Thus, condition (6) is a necessary and sufficient condition for the matrix Q(U) to be symmetric. The matrix P(U) is clearly symmetric. Therefore, it remains to prove that there exists θ > 0 such that P(U) is positive definite. Let x = (a^T, b_1^T, .., b_N^T)^T ∈ R^{(3N−1)×1}\{0}, with a ∈ R^{(N−1)×1} and b_k ∈ R^{2×1} for k = 1, .., N. By the Cauchy-Schwarz inequality, x^T P(U) x is bounded from below by a polynomial of degree 2 in |a|, whose reduced discriminant ∆' can be estimated, again by the Cauchy-Schwarz inequality. Since D_k D_k^T is symmetric and P_k is symmetric positive definite, there exists an invertible 2 × 2 matrix Q_k which simultaneously diagonalizes these two matrices. Hence, choosing θ larger than the two eigenvalues of Nδ_k for all k = 1, .., N (observe that these eigenvalues only depend on U and not on the vector x), we get that ∆' < 0, and therefore x^T P(U) x > 0 for all x ∈ R^{(3N−1)×1}\{0}.
The multiphase flow model with energies
We still consider the evolution of N distinct compressible phases in one space dimension, now governed by a multiphase flow model in which the evolution of the phasic energies obeys additional PDEs, for k = 1, .., N, x ∈ R and t > 0. The saturation constraint is still valid, and the main unknown U is expected to belong to the corresponding physical space Ω_U. Defining e_k := E_k − u_k^2/2, the specific internal energy of phase k, the pressure p_k = p_k(ρ_k, e_k) is now given by an equation of state (e.o.s.) as a function defined for all positive ρ_k and all positive e_k. We assume that, taken separately, all the phases follow the second principle of thermodynamics, so that for each phase k = 1, .., N, there exists a positive integrating factor T_k(ρ_k, e_k) and a strictly convex function s_k(ρ_k, e_k), called the (mathematical) specific entropy of phase k, satisfying the Gibbs relation with temperature T_k. Finally, the closure laws for the interface pressures P_kl(U) are given by: P_1l(U) = p_l(ρ_l, e_l) for l = 2, .., N, and P_kl(U) = p_k(ρ_k, e_k) for k = 2, .., N and l = 1, .., N, l ≠ k. (15) Observing that the saturation constraint gives Σ_{l=1, l≠k}^N ∂_x α_l = −∂_x α_k for all k = 1, .., N, the momentum equations (12c) can be simplified, and in the same way, the energy equations (12d) can be simplified.
Eigenstructure of the system
The following result characterizes the wave structure of system (12). Theorem 3.1. System (12) admits the following 4N − 1 eigenvalues: σ_1(U) = .. = σ_{N−1}(U) = u_1, σ_{N−1+k}(U) = u_k − c_k, σ_{2N−1+k}(U) = u_k and σ_{3N−1+k}(U) = u_k + c_k for k = 1, .., N. If c_k(ρ_k, e_k)^2 > 0, then system (12) is weakly hyperbolic on Ω_U in the following sense: all the eigenvalues are real, and the corresponding right eigenvectors are linearly independent if, and only if, |u_k − u_1| ≠ c_k for all k = 2, .., N. (20) The characteristic fields associated with σ_1(U), .., σ_{N−1}(U) and σ_{2N−1+k}(U) = u_k for k = 1, .., N are linearly degenerate, while the characteristic fields associated with σ_{N−1+k}(U) and σ_{3N−1+k}(U) for k = 1, .., N are genuinely non-linear. When (20) fails, the system is said to be resonant.
Remark 3.1. The condition c_k(ρ_k, e_k)^2 > 0 is a classical condition that ensures hyperbolicity for monophasic flows. In general, assuming U ∈ Ω_U is not sufficient to guarantee that c_k(ρ_k, e_k)^2 > 0. For the stiffened gas e.o.s. for instance, where the pressure is given by p_k = (γ_k − 1)ρ_k e_k − γ_k p_∞,k, with γ_k > 1 and p_∞,k ≥ 0 two constants, a classical calculation yields ρ_k c_k(ρ_k, e_k)^2 = γ_k(γ_k − 1)(ρ_k e_k − p_∞,k). Hence, the hyperbolicity of the system requires a condition more restrictive than the mere positivity of the internal energy, which reads: ρ_k e_k > p_∞,k.
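As a consistency check, the stated expression follows from the standard stiffened-gas relations p_k = (γ_k − 1)ρ_k e_k − γ_k p_∞,k and c_k^2 = γ_k(p_k + p_∞,k)/ρ_k:

```latex
\rho_k c_k^2 \;=\; \gamma_k\,(p_k + p_{\infty,k})
\;=\; \gamma_k\big[(\gamma_k - 1)\rho_k e_k - \gamma_k p_{\infty,k} + p_{\infty,k}\big]
\;=\; \gamma_k(\gamma_k - 1)\big(\rho_k e_k - p_{\infty,k}\big),
```

so ρ_k c_k^2 > 0 holds if and only if ρ_k e_k > p_∞,k, as stated.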
Proof. We choose the variable U = (α_2, .., α_N, u_1, p_1, s_1, .., u_N, p_N, s_N)^T, and we write p_k and c_k instead of p_k(ρ_k, e_k) and c_k(ρ_k, e_k) for k = 1, .., N in order to ease the notations. The smooth solutions of system (12) satisfy an equivalent quasi-linear system ∂_t U + A(U) ∂_x U = 0 (see Section 3.2 for the entropy equations on s_k for k = 1, .., N), where A(U) is a block matrix. Defining M_k = (u_k − u_1)/c_k, the Mach number of phase k relative to phase 1 for k = 2, .., N, the matrices A, B_1, .., B_N and C_1, .., C_N are given as follows.
A = diag(u_1, .., u_1) ∈ R^{(N−1)×(N−1)}. Since A is diagonal and C_k is R-diagonalizable if c_k^2 > 0, with eigenvalues u_k − c_k, u_k and u_k + c_k, the matrix A(U) admits the eigenvalue u_1 (with multiplicity N), the eigenvalues u_k − c_k and u_k + c_k for k = 1, .., N, and u_k for k = 2, .., N. In addition, A(U) is R-diagonalizable provided that the corresponding right eigenvectors span R^{4N−1}. The right eigenvectors are the columns of a block matrix R(U) whose blocks A', B'_1, .., B'_N and C'_1, .., C'_N are defined accordingly. We can see that the N-th component of R_j(U) is zero, which implies that, for all 1 ≤ j ≤ N − 1 and for j = N + 2, R_j(U) · ∇_U(u_1) = 0. Hence, the field associated with the eigenvalue u_1 is linearly degenerate. In the same way, since the N + 2(k − 1)-th component of R_{N+2k}(U) is zero, the field associated with the eigenvalue u_k is linearly degenerate. Now we observe that all the acoustic fields are genuinely non-linear. Proposition 3.2. The linearly degenerate field σ_1(U) = .. = σ_{N−1}(U) = σ_{2N}(U) = u_1 admits 3N − 1 independent Riemann invariants.
Mathematical Entropy
A consequence of the second law of thermodynamics (14) and the closure laws (15) is that the specific phasic entropies satisfy convection equations. We then have the following result. Proposition 3.3. For each k = 1, .., N, the fractional specific entropy (α_k ρ_k s_k)(U) is a non strictly convex function of U. Consequently, the total mixture entropy, defined by Σ_{k=1}^N (α_k ρ_k s_k)(U), is also a non strictly convex function of U. In the light of (22), the fractional specific entropies are mathematical entropies of system (12).
Symmetrizability
We have the following symmetrizability result for system (12). Theorem 3.4. System (12) is symmetrizable as long as the non resonance condition (20) holds. Proof. As in the barotropic case, we seek a symmetrizer P(U) in block form, where θ ∈ R_+, I_{N−1} is the (N − 1) × (N − 1) identity matrix and, for k = 1, .., N, D_k is a 3 × (N − 1) matrix determined from B_k and C_k; a necessary and sufficient condition for the 3 × 3 matrix C_k^T − u_1 I_3 to be invertible is the non resonance condition (20). As in the proof of Theorem 2.4, we can show that Q(U) = P(U)A(U) is symmetric and that P(U) is a symmetric positive definite matrix provided that θ is large enough.
Conclusion
For both the barotropic and non barotropic multiphase flow models described in (1) and (12), we have proven the weak hyperbolicity, the existence of convex mathematical entropies, as well as the existence of a symmetric form. This last property is valid only far from resonance, i.e. as long as the considered models remain in their domain of hyperbolicity. These properties have been obtained for any admissible phasic equation of state (increasing phasic pressure laws for the barotropic system and, for the system with energies, equations of state abiding by the second law of thermodynamics). What is more, the proven properties extend to the two and three dimensional versions of these models thanks to their frame invariance.
An important consequence of the symmetrizability, via Kato's theorem on quasi-linear symmetric systems [11], is that, far from resonance, there exists a unique local-in-time smooth solution to the Cauchy problem. The possibility of blow-up in finite time still holds, with the additional restriction due to the non resonance conditions (6) and (20).
"Mathematics"
] |
Efficient [Fe-Imidazole@SiO2] Nanohybrids for Catalytic H2 Production from Formic Acid
Three imidazole-based hybrid materials, coded as the IGOPS, IPS and impyridine@SiO2 nanohybrids, were prepared via the covalent immobilization of N-ligands onto a mesoporous nano-SiO2 matrix for H2 generation from formic acid (FA). BET and HRTEM demonstrated that the immobilization of the imidazole derivative onto SiO2 has a significant effect on the SSA, average pore volume, and particle size distribution. In the context of FA dehydrogenation, their catalytic activity (TONs, TOFs), stability, and reusability were assessed. Additionally, the homologous homogeneous counterparts were evaluated for comparison purposes. Mapping the redox potential of the solution (Eh vs. SHE) revealed that the poly-phosphine PP3 plays an essential role in FA dehydrogenation. On the basis of performance and stability, [Fe2+/IGOPS/PP3] demonstrated superior activity compared to the other heterogeneous catalysts, producing 9.82 L of gases (VH2 + CO2) with TONs = 31,778, albeit with low recyclability. In contrast, [Fe2+/IPS/PP3] showed the highest stability, retaining considerable performance after three consecutive uses. [Fe2+/impyridine@SiO2/PP3] showed decreased activity (VH2 + CO2 = 7.8 L) and was not recyclable, while the homogeneous equivalent, [Fe2+/impyridine/PP3], was completely inactive. Raman, FT/IR, and UV/Vis spectroscopy demonstrated that the reduced recyclability of the [Fe2+/IGOPS/PP3] and [Fe2+/impyridine@SiO2/PP3] nanohybrids is due to the reductive cleavage of their C-O-C bonds during catalysis. An alternative grafting procedure, applied here to the grafting of IPS, is proposed, resulting in its higher stability. The accumulation of water derived from the substrate feed inhibits catalysis. In the case of the [Fe2+-imidazole@SiO2] nanohybrids, simple washing and drying result in their re-activation, overcoming the water inhibition.
Thus, the low-cost imidazole-based nanohybrids IGOPS and IPS are capable of forming [Fe2+/IGOPS/PP3] and [Fe2+/IPS/PP3] heterogeneous catalytic systems with high stability and performance for FA dehydrogenation.
Introduction
The clean energy potential of molecular hydrogen (H2) has garnered significant interest due to its favorable characteristics, such as its energy density, which is 2.6 times greater than that of gasoline, and the absence of toxic byproducts during combustion [1][2][3]. However, free H2 does not exist on Earth, and a primary energy source is required for its production. Within the concept of a cyclic economy, the production of H2 fully reliant on renewable sources includes two independent processes. The first process involves the generation of H2 through the dehydrogenation of a hydrocarbon substrate, while the second involves the reduction of CO2 to produce hydrocarbon fuels [4,5]. This technology has the potential to revolutionize the industry, as a significant majority of H2 generation, specifically 96%, currently relies on non-renewable sources such as fossil fuels [6]. Formic acid (FA) is a highly promising substrate for providing H2, owing to its favorable cost and simplicity of handling [7,8]. The decomposition of FA occurs via two possible pathways; it is imperative to avoid Reaction (1), as the produced CO is detrimental to the functionality of fuel cells due to its toxic nature. Reaction (2) is a thermodynamically allowed pathway, as evidenced by its negative Gibbs free energy change of −32.9 kJ/mol. However, the reaction is kinetically hindered, necessitating the use of a catalyst to accelerate the process [9]:

HCOOH (l) → H2O (l) + CO (g), ΔG° = −12.4 kJ/mol (1)
HCOOH (l) → H2 (g) + CO2 (g), ΔG° = −32.9 kJ/mol (2)

Since 1967, with the first reported catalytic system for FA dehydrogenation [10], numerous studies have been conducted to identify highly effective homogeneous and heterogeneous catalysts that can selectively produce H2 and CO2 from FA under mild conditions.
Complexes of Ir [11][12][13], Ru [14][15][16][17][18][19] and Rh [20] have been extensively investigated as noteworthy catalysts. Beyond the nature of the metal, the electronic and steric properties of the organic ligand play a crucial role in determining a catalyst's reactivity and in regulating metal-substrate interactions during catalysis [8]. Within this context, N,N' bidentate ligands, including imidazole and pyridyl groups, have proved to be very effective due to the nitrogen-atom donor capacity of the ligand towards the metal center [21]. Several studies by the group of Himeda [11,[22][23][24] on Ir complexes bearing different N,N' bidentate ligands (imidazole and pyridyl moieties) have demonstrated the notable catalytic activity and stability of these compounds. In addition to nitrogen donor ligands, imidazolium-based ionic liquids (ILs) have been observed to function as effective reaction media by aiding the stabilization of various transition metal catalysts and supporting catalyst recyclability [25]. The reversible decomposition of FA into CO2 and H2 in the ionic liquid (IL) 1,3-dipropyl-2-methylimidazolium formate was investigated by Yasaka et al. in 2010 [26]. The group of Deng [16] utilized the commercially accessible IL 1-butyl-3-methylimidazolium chloride (BMimCl) as a solvent for the decomposition of FA, employing the Ru-based catalyst [{RuCl2(p-cymene)}2] with iPr2NEt/HCOONa as a base. The experimental setup yielded 725 mL of gas within a 2 h timeframe, resulting in a TON(2h) value of 240. Berger et al. [27] reported a catalytic system consisting of RuCl3 dissolved in the ionic liquid 1-ethyl-2,3-dimethylimidazolium acetate as the solvent. The resulting catalyst (RuCl3/[EMMIM][OAc]) achieved a turnover frequency (TOF) of 150 h−1 at 80 °C, and it could be recycled for up to 10 cycles [27].
Even though a multitude of research studies unequivocally indicate that ionic liquids (ILs) bearing imidazolium moieties exhibit exceptional properties as reaction media [25], the time-consuming synthesis process and high cost of ILs limit their use [28].
Most catalytic systems which exhibit high efficiency in producing H2 consist of centers of noble metals that are both scarce and costly. However, the scientific community has begun synthesizing catalysts utilizing non-noble transition metals due to their cost-effectiveness, non-toxicity, and abundance. First-row transition metals such as iron (Fe) [29], cobalt (Co) [30], and nickel (Ni) [31], bearing diverse σ-donor ligands, have effectively catalyzed FA dehydrogenation, a process previously restricted to precious metals [32][33][34]. More recently, Beller et al. synthesized the non-precious Mn(pyridine-imidazoline)(CO)3Br complex for FA dehydrogenation, producing 14 L of H2 + CO2 within 3 days. Although the activity was satisfactory, the complex produced more than 2500 ppm of poisonous CO [35].
Despite the good catalytic performance of homogeneous molecular systems, they exhibit a deficiency in their capacity for recycling, which can be overcome by grafting the catalytic metal complexes onto a solid matrix [36]. The properties required for catalyst supports include chemical stability, a high specific surface area, and the ability to disperse molecular units on their surface. Porous silicas exhibit a significant portion of the aforementioned properties, such as large pore size and high specific surface area [37]. Furthermore, silica can be easily manipulated through the modification of its synthetic parameters, such as temperature, reaction time, and the amount of silica source, as well as of the calcination conditions [38]. To date, there have been a limited number of immobilized homogeneous metal catalysts utilized for FA dehydrogenation. For instance, the group of Laurenczy [39] immobilized a Ru-phosphine homogeneous catalyst onto various materials, such as resins, polymers, and zeolites, through ion exchange, coordination, or adsorption, albeit with unsatisfactory catalytic performance. Manaka et al. [40] found that the immobilized [Cp*Ir(pyridylimidazoline)(H2O)]@SiO2 has the same activation energy Ea as its homogeneous counterpart. However, the reduction in collision frequency resulted in a decrease in reaction velocity, with the authors stating that efficient agitation control is necessary in order to implement the immobilized complex in future H2 technology [40]. In a recent study, a hybrid catalyst [Ir_PicaSi_SiO2], comprising the Cp*Ir(R-pica)X complex immobilized onto mesoporous silica, was examined, showing satisfactory activity but low stability [41]. To the best of our knowledge, our laboratory was the first to immobilize the non-precious complexes Fe2+-RPPh2 and Fe2+-polyRPhphos onto a mesoporous SiO2 surface.
Fe 2+ /RPPh 2 @SiO 2 has the remarkable ability to produce a maximum of 14 L of H 2 within 6 h, whereas the homogeneous Fe 2+ /RPPh 2 was completely inactive [42].
In this study, we present three imidazole-based hybrid materials, namely the IGOPS, IPS and impyridine@SiO2 nanohybrids, prepared by means of the covalent immobilization of N-ligands onto a mesoporous nano-SiO2 matrix. We show hereafter that their integration with low-cost Fe2+, in combination with a polydentate alkyl-phenyl-phosphine ligand (PP3), produces efficient and reusable heterogeneous catalytic systems for H2 production from formic acid. The role of the PP3 phosphine in catalysis is investigated and discussed. IGOPS was the best among the nanohybrids, contributing to the formation of 8.42 L of gases (H2 + CO2) within 4 h, while IPS showed remarkable stability. The drop in catalytic efficiency was investigated, and it was attributed to (i) the accumulation of H2O derived from the FA stock, which contains 2.5% water; we demonstrate herein that the inhibition of catalysis by H2O is reversible and can be overcome by a straightforward washing and drying procedure of the [Fe2+-imidazole@SiO2] nanohybrids; and (ii) the reductive cleavage, during catalysis, of the C-O-C bond of the IGOPS and impyridine@SiO2 nanohybrids. An alternative grafting procedure is suggested to avoid the fragile C-O-C bond; this was applied for the IPS nanohybrid, where the C-O-C group was replaced by an aliphatic C-C-C, resulting in the high durability of IPS in catalysis. Overall, we demonstrate here that the use of nanohybrids in conjunction with non-noble metals such as Fe2+ in FA dehydrogenation catalysis for H2 production has high potential, offering flexible, convenient, and low-cost alternatives. Details about the synthetic procedure of impyridine@SiO2 are provided in the Supplementary Material (Figure S1) [43,44]. TGA, FT-IR and Raman measurements confirmed the successful synthesis of the nanomaterials and are provided below.
Characterization Techniques
A Nicolet IS5 system equipped with OMNIC FTIR Software 9.2.86 was used to acquire FT/IR spectra in the range of 4000 to 400 cm−1, with a resolution of 2 cm−1 and 100 scans. Raman spectra were recorded using a HORIBA Xplora Plus Raman spectrometer connected to an Olympus BX41 microscope. A 785 nm diode laser was employed as the excitation source, and the laser beam was focused on the sample through the microscope. Before measurement, each powder material was formed into a pellet by gently pressing it between two glass plates. We employed a 15 mW laser and discovered via trial and error that, at this low intensity, the crystal phase stayed unaltered. Typically, Raman spectra with a reasonable signal-to-noise ratio were collected as 15 accumulations of 30 s each.
The monitoring of the Fe2+ species detected in the solution after the end of the catalytic reaction was realized using a Lambda 35 Perkin Elmer UV/Vis spectrometer. Thermogravimetric analysis (TGA-DTA) was performed using a SETARAM TGA 92 analyzer with a heating rate of 10 °C/min from 25 °C to 800 °C and a flow rate of 20 mL/min for the oxygen carrier gas. The organic loading of imidazolium in IGOPS was 0.82 mmol/g, in IPS it was 0.45 mmol/g, while the impyridine loading of the impyridine@SiO2 nanoparticles was 0.24 mmol/g. The measurement of the specific surface area (SSA) and pore size of the nanomaterials was conducted using a Quantachrome NOVAtouch_LX2, recording the N2 adsorption-desorption isotherms at a temperature of 77 K. The SSA was determined by utilizing the adsorption data points within the 0.1-0.3 range of relative pressure (P/Po). The analysis of the pore radius was conducted using the BJH method [45] within a range of 0.35-0.99 P/Po. The morphology of the nanomaterials was examined through high-resolution transmission electron microscopy (HRTEM) with a Philips CM 20 microscope operated at 200 kV, offering a resolution of 0.25 nm. Prior to the measurements, the samples went through a mild grinding process using a mortar and were subsequently loaded in a dry state onto a Lacey Carbon support film with a mesh size of 300 (Cu). The recorded images were analyzed using the Gatan Digital Micrograph 3.9 software.
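The conversion from TGA mass loss to ligand loading can be sketched as follows. The grafted-fragment molar mass used here is an assumption for illustration only: the value of ~171 g/mol is back-calculated so that a 14% loss reproduces the reported 0.82 mmol/g loading of IGOPS; the actual fragment masses are not given in the text.

```python
def loading_mmol_per_g(mass_loss_fraction, fragment_molar_mass):
    """mmol of grafted organic fragment per gram of hybrid material,
    from the TGA mass-loss fraction and an assumed fragment molar mass (g/mol)."""
    return mass_loss_fraction / fragment_molar_mass * 1000.0

print(round(loading_mmol_per_g(0.14, 170.7), 2))  # ~0.82 for IGOPS
```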
Catalytic Experiments
At a temperature of 80 °C (±1 °C), catalytic reactions were conducted in a double-walled thermostated reactor under Ar gas and constant stirring. The reactor was connected online to a GC system (Shimadzu GC-2014 gas chromatograph with a thermal conductivity detector, GC-TCD, equipped with a Carboxen-1000 column) for the analysis and quantification of the produced gases, while the total volume of evolved gases was measured with a manual gas burette. In a typical catalytic experiment, 7.5 µmol of Fe(BF4)2·6H2O and 15 µmol of IGOPS, IPS or impyridine@SiO2 were added to a 7 mL propylene carbonate/FA mixture (5 mL/2 mL). After 10 min of vigorous stirring, 7.5 µmol of PP3 was introduced to the reaction. For the calculation of TONs and TOFs, the procedure described in [17,42] was followed (Supplementary Material). The redox potential (Eh) was measured using a Metrohm platinum electrode (type 6.0401.100) versus a standard hydrogen electrode (SHE) that had been calibrated with a Ferri/Ferro solution.
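The TON bookkeeping follows [17,42]; as a rough, hedged sketch (assuming ideal-gas behavior at 25 °C / 1 atm and an equimolar H2/CO2 mixture from selective dehydrogenation, which is not necessarily the exact procedure of those references), the turnover number can be estimated from the measured gas volume:

```python
V_M = 24.45  # L/mol, molar volume of an ideal gas at 25 C and 1 atm

def turnover_number(v_gas_L, n_catalyst_mol):
    """Moles of H2 produced per mole of Fe catalyst (equimolar H2 + CO2 assumed)."""
    n_h2 = (v_gas_L / 2.0) / V_M   # half of the collected volume is H2
    return n_h2 / n_catalyst_mol

# Example: 9.82 L of evolved gas with 7.5 umol of Fe(BF4)2.6H2O
print(round(turnover_number(9.82, 7.5e-6)))
```

Under these assumptions the example gives a TON on the order of 2.7 × 10^4; the reported TON of 31,778 was obtained with the full procedure of [17,42], which this sketch does not reproduce.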
Continuous operation system: Upon the consumption of approximately half of the FA (1 mL), resulting in the production of 1200 mL of gases (H2 + CO2), an additional 1 mL of FA was introduced. This process was repeated each time 1200 mL of gases had been produced, until the reaction stopped. In this way, the catalyst's performance was believed to be optimized by avoiding the imposition of extreme pH changes.
Recycling experiments: When catalytic gas evolution stopped, the solid catalyst was collected by centrifugation (4000× g, 15 min), washed with 8 mL of methanol and dried overnight at 100 °C. The collected solid was used a second time under the same catalytic conditions (continuous operation system), with no further addition of Fe2+ or the [imidazole derivative@SiO2] nanohybrid. This procedure was repeated until the reaction stopped.

Figure 1 depicts the surface functionalization of the SiO2 NPs, as shown by the color change of the white pristine SiO2 particles to yellow and orange-brown. The BET results (Figure 1) demonstrate that grafting reduces the specific surface area by 25%, accompanied by a reduction in pore diameters; for further information, see Figures S3 and S4 in the Supplementary Material and Table 1. The top row of Figure 1 depicts TEM images of SiO2 and the functionalized SiO2-imidazole derivative hybrids. The SiO2 nanoparticles (Figure 1a, upper left) have a spherical shape, merging to create chain-like agglomerates, while the average size is 25 nm; an increase was noticed upon immobilization of the imidazole derivative (e.g., size of impyridine@SiO2 = 35 nm, Figure S5a-d). As a result of the grafting, the accessible pores were filled (the average pore volume decreased from 0.71 cc g−1 for SiO2 to 0.41, 0.45 and 0.59 cc g−1 for IGOPS, IPS and impyridine@SiO2, respectively), making the modified surface more compact (Figure 1b-d).
As a result of the grafting, the accessible pores were filled (the average pore volume decreased from 0.71 cc g −1 for SiO2 to 0.41, 0.45, 0.59 cc g −1 for IGOPS, IPS and impyridine@SiO2, respectively), making the modified surface area more compact (Figure 1b-d). Thermogravimetry of nanohybrid SiO 2 -imidazole derivatives show increasing mass loss, accompanied by exothermic-endothermic curves in all cases. The exothermic changes are due to the combustion of the organic groups, while the endothermic ones are due to the presence of organic solvents that may be present in the sample. More specifically, the IGOPS nanohybrid ( Figure 2a) provides a wide exothermic curve in the range of 250-450 • C, with a maximum at 380 • C, which corresponds to the weight loss of imidazole groups on the SiO 2 surface. Organic loading is 14% corresponding to 0.82 mmol of imidazole/g of SiO 2 . The endothermic curve at a temperature of 50 • C corresponds to the presence of the organic solvent, and it is not included in the calculation of the organic loading (see Table 1). In a same way, the organic loading of IPS and impyridine@SiO 2 is equal to 5% and 6% (range of the peak, 250-350 • C, with a maximum at 300 and 290 • C for IPS and impyridine@SiO 2 , respectively), corresponding to 0.45 and 0.24 mmol organic ligand/g of the modified material, respectively. Thermogravimetry of nanohybrid SiO2-imidazole derivatives show increasing mass loss, accompanied by exothermic-endothermic curves in all cases. The exothermic changes are due to the combustion of the organic groups, while the endothermic ones are due to the presence of organic solvents that may be present in the sample. More specifically, the IGOPS nanohybrid ( Figure 2a) provides a wide exothermic curve in the range of 250-450 °C, with a maximum at 380 °C, which corresponds to the weight loss of imidazole groups on the SiO2 surface. Organic loading is 14% corresponding to 0.82 mmol of imidazole/g of SiO2. 
Figure 3 depicts the FTIR spectra of the hybrid materials IGOPS, IPS and impyridine@SiO2, as compared to nonfunctionalized SiO2 and powders of free imidazole and impyridine. SiO2 is defined (black line) by the 465, 811 and 1080 cm−1 peaks, which may be attributed to the asymmetric stretching vibrations of the Si-O-Si and Si-O bonds [46]. Imidazole and impyridine show the characteristic bending vibration of the N-H bond at 1550 cm−1 [47]. The peaks in the regions of 3120-2840 cm−1 and 2900-2700 cm−1 are attributed to the stretching vibrations of aliphatic and aromatic C-H bonds [47]. Stretching vibration modes of the C-C and C-N bonds of the imidazole rings appear in the regions of 1500-1400 cm−1 and 1335-1250 cm−1, respectively [48]. The FTIR spectra of the IGOPS, IPS and impyridine@SiO2 hybrid materials are characterized by the asymmetric stretching vibrations of Si-O-Si and Si-O bonds (1075, 460 cm−1 and 805 cm−1) derived from the silica support [49]. The downward shift, i.e., −5 cm−1, of those bands suggests the vibrational interaction of nano-SiO2 with the imidazole functionalities [44]. In the case of the IGOPS and impyridine@SiO2 hybrids, the appearance of a band at 1320 cm−1 is indicative of the C-O stretching bond present in their molecular structure (see the Supplementary Material, Figure S2). In the FT-IR spectra of all hybrid materials, the bands observed in the regions of 1500-1400 cm−1 and 1335-1250 cm−1 are assigned to the C-C and C-N bonds of imidazole rings [43]. Interestingly, these bands of IGOPS are more intense in comparison with those of the IPS and impyridine@SiO2 nanohybrids; this is due to the higher organic loading of IGOPS of 14% vs.
5% and 6% for IPS and impyridine@SiO2, respectively. Overall, the current FTIR measurements indicate the covalent attachment of the imidazole and impyridine compounds onto the SiO2 surface of the IGOPS, IPS and impyridine@SiO2 nanohybrids.
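The organic loadings derived from the TGA mass losses can be sanity-checked with a one-line conversion. A minimal sketch follows; the grafted-fragment molar mass is our assumption (the text does not state fragment identities), taking the IPS fragment as 3-(imidazol-1-yl)propyl, roughly 109 g/mol:

```python
# Back-of-envelope check of the TGA-derived organic loadings (a sketch;
# the grafted-fragment molar mass is an assumption, not given in the text).
def loading_mmol_per_g(mass_loss_pct, fragment_molar_mass):
    """Convert a TGA mass-loss percentage into mmol of grafted ligand
    per gram of hybrid material: (pct / 100) * 1000 / M."""
    return mass_loss_pct * 10.0 / fragment_molar_mass

# IPS: assumed 3-(imidazol-1-yl)propyl fragment, ~109.2 g/mol.
ips_loading = loading_mmol_per_g(5.0, 109.2)  # ~0.46 mmol/g
```

The computed value is close to the reported 0.45 mmol/g for IPS, which supports the stated percentages; the same formula applied to IGOPS and impyridine@SiO2 requires their (heavier) linker fragments.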
In addition to FT-IR, Raman spectroscopy is a very sensitive method used to study the internal siloxane configurations and surface silanol groups of the silica supporting nanomaterial. It also makes it possible to detect the distinctive vibrations of imidazole and impyridine. The Raman spectra of the pristine compounds, as well as of the IGOPS, IPS and impyridine@SiO2 nanohybrids, are depicted in Figure 4a. Moreover, the silica matrix shows a characteristic band at ~800 cm−1 attributed to the symmetric stretching vibration of Si-O-Si [49]. In the hybrid materials, interestingly, most of these peaks disappear, while downward or upward shifts of different modes can be observed. For example, the Si-O-Si breathing mode upshifted from 490 cm−1 to 505 cm−1, while the C-H out-of-plane deformation peaks (600-850 cm−1) disappeared in the case of the IGOPS and IPS nanomaterials. Impyridine@SiO2 maintains these peaks, probably because it contains two aromatic rings (pyridine plus imidazole), in contrast with IGOPS and IPS, which contain only imidazole. However, the bands attributed to imidazole, i.e., v(C-N) (1310-1404 cm−1, deformation) and v(C-C) (1429-1781 cm−1, aromatic ring), are maintained and shifted. Moreover, impyridine@SiO2 and IGOPS demonstrate a new peak at ~1190 cm−1, which can be attributed to the vibration mode of the ether group (v(C-O)) that they bear in their molecular structure (Figure S2a).
Optimization of Catalytic Procedure
To check whether the sequence in which the chemicals are added affects the catalytic performance, various experimental procedures were performed, as shown in Figure S6 of the Supplementary Material. The optimum was obtained when the chemicals were added to a propylene carbonate/FA mixture (5/2 v/v) in the following order: source of Fe2+, imidazole-based nanohybrid and PP3. Interestingly, when PP3 is not added last, both gas production and the reaction rate are greatly reduced (see Figure S6a,b). This effect could be attributed to the polydentate nature of PP3, which probably creates a saturated environment around Fe2+, preventing the approach of the other catalytic components [29]. Moreover, considering that PP3 is necessary to initiate gas evolution, it probably plays another role beyond ligation, i.e., adjusting the solution's potential for the catalytic reaction [51]. In addition, the molar ratio of [Fe2+/IGOPS material/PP3] was investigated (see Figure S6c); the optimum catalytic behavior is exhibited by the ratio [Fe2+/IGOPS material/PP3] = [7.5/15/7.5 µmol]. Homogeneous catalytic systems with imidazole or impyridine as nitrogen-based ligands are affected in the same way by the sequence of reagent addition and their molar ratio; that is, the optimum is obtained by following the addition order of Fe2+, imidazole-based ligand and PP3 with the ratio [Fe2+/imidazole-based ligand/PP3] = [7.5/15/7.5 µmol]. Therefore, we maintained the above experimental conditions throughout our study.
Catalytic Results
Catalytic gas evolution, as monitored by GC-TCD, revealed that the produced gas consisted exclusively of H2 and CO2 with a constant ratio of [H2/CO2 = 1/1] during the catalytic reaction [14,17,18]. The present catalytic systems are thus highly selective, which is crucial for fuel cell applications, as no CO was detected. All the catalytic data presented herein were derived from the average of at least three experimental runs with a standard error of 5%. Figure S7a compares the heterogeneous systems with the homogeneous imidazole and impyridine counterparts. Interestingly, a higher production rate was achieved when imidazole was in the homogeneous phase, producing V(H2 + CO2) = 2380 mL within 40 min, which corresponds to a 100% yield. A 10% decrease in catalytic gas production was observed in the case of [Fe2+/IGOPS/PP3] (V(H2 + CO2) = 2142 mL within 40 min), while [Fe2+/IPS/PP3] produced almost the same yield but at a lower rate (the reaction was completed in 75 min). In the case of impyridine@SiO2 (Figure S7b), the corresponding homogeneous impyridine had a performance of almost zero, producing only 20 mL of gas in total, in contrast with the homologous heterogeneous counterpart, which presented a satisfying catalytic activity of V(H2 + CO2) = 1750 mL within 55 min. We referred to this analogous behavior in our previous work [42], where we proved that the immobilization of PPh3 onto the SiO2 surface generates an active Fe2+/RPPh2@SiO2 heterogeneous catalytic system which produces up to 14 L of H2, whereas the corresponding homogeneous Fe2+/RPPh2 was completely inactive; this was attributed to the considerable reduction in the activation energy barrier which occurred after the ligand's grafting onto SiO2 [42].
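The 100% yield figure can be checked against the FA dehydrogenation stoichiometry (HCOOH → H2 + CO2). The sketch below assumes 2 mL of FA (from the 5/2 v/v propylene carbonate/FA mixture), an FA density of 1.22 g/mL, 97.5% purity (given the 2.5% water content of the stock) and an ambient molar volume of ~24.45 L/mol; all of these values are our assumptions:

```python
# Rough stoichiometric yield check for complete FA dehydrogenation.
# Assumed values (not stated explicitly in the text): density, purity, Vm.
FA_ML, FA_DENSITY, FA_PURITY, FA_MW = 2.0, 1.22, 0.975, 46.03
VM_ML_PER_MOL = 24450.0  # ideal gas at ~25 degC, 1 atm

n_fa = FA_ML * FA_DENSITY * FA_PURITY / FA_MW  # ~0.052 mol of HCOOH
v_gas_ml = 2 * n_fa * VM_ML_PER_MOL            # 1 H2 + 1 CO2 per FA
```

The computed total volume comes out around 2.5 L, within roughly 7% of the reported 2380 mL for 100% yield, which is reasonable given the assumed density, purity and temperature.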
In the context of studying the catalytic reaction and the effect that each component has on the potential of the solution, the solution potential, Eh (vs. the standard hydrogen electrode, SHE), was mapped; the results are given in Figure 5.
The data in Figure 5 show that, before the reaction began, the redox potential of the solution had positive values, indicating the highly oxidizing environment created by the solvent (propylene carbonate) and substrate (formic acid). The addition of Fe2+ changes Eh to more positive values, e.g., from +366 mV vs. SHE to +489 mV vs. SHE for the catalytic system [Fe2+/IGOPS/PP3]. A small decrease in Eh is observed when an IGOPS, IPS or impyridine@SiO2 nanohybrid containing reductive imidazole groups is added. Remarkably, the polydentate alkyl-phenyl-phosphine ligand (PP3) greatly decreases Eh, resulting in a reducing environment with slightly negative Eh values. This change is accompanied by gas generation, indicating the initiation of the catalytic reaction. After 10 min, Eh became more negative, with [Fe2+/impyridine@SiO2/PP3] having a higher value (Eh = −65 mV vs. SHE). As the reaction progressed, the Eh continued to present negative values, with those of the [Fe2+/impyridine@SiO2/PP3] system being the lowest. The homogeneous imidazole and impyridine counterparts present a similar trend, with impyridine having more negative values (see Figure S8 of the Supplementary Material). From the above results, it seems that the poly-phosphine ligand (PP3) is necessary for reaction initiation, resulting in a negative Eh value. However, the generation of a reduced or highly reduced environment does not ensure catalytic reactivity and/or performance; see, for example, the homogeneous [Fe2+/impyridine/PP3] system. From a mechanistic point of view, a slow, rate-determining step for catalytic FA dehydrogenation is the β-H abstraction from the formate coordinated on the Fe center [52], which is probably triggered by the phosphine's addition. As a result, CO2 is emitted and an Fe-H species is formed, which is then protonated, resulting in H2 production [53].
In order to investigate the performance of the catalytic systems upon the continuous feeding of FA, after the catalytic conversion of the initial 1 mL of FA, a new amount of FA (1 mL) was introduced to the catalytic reaction without any further addition of reagents. The catalytic gas production data in Figure 6a indicate that the [Fe2+/IGOPS/PP3] nanohybrid generated a total gas volume of V(H2 + CO2) = 8.42 L after the continuous addition of 9 mL FA, presenting total TONs = 22,953 and TOFs = 5571 h−1. An inferior performance, with a decrease of approximately 15%, was noted for [Fe2+/IPS/PP3] (TONs = 17,938, TOFs = 4599 h−1, V(H2 + CO2) = 6.58 L). Comparatively, the generation rate of the homogeneous counterpart [Fe2+/imidazole/PP3] showed higher TONs and TOFs (see Table 2). On the other hand, the homogeneous impyridine had almost zero catalytic efficiency in comparison with heterogeneous impyridine@SiO2, which had satisfactory activity, with TOFs = 4228 h−1 (TONs = 20,718, V(H2 + CO2) = 7.6 L). Indeed, in all catalytic systems, the gas evolution stopped after a sufficient amount of FA was added (e.g., for [Fe2+/IGOPS/PP3], after the addition of 9 mL of FA). This could be due to the accumulation of H2O, which is present at a concentration of 2.5% [v:v] in the FA stock obtained from the supplier, as we examined in our previous work [42]. To verify this hypothesis here, small quantities of H2O were added to the catalytic system [Fe2+/IGOPS/PP3] under normal operating conditions. The inhibiting role of H2O was confirmed, since the gas production rate diminished after the addition of 200 µL of H2O and ceased when a total amount of 400 µL was added (Figure S9, Supplementary Material).
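The reported turnover numbers are internally consistent with the measured gas volumes and the 7.5 µmol Fe2+ loading from the optimized ratio. A sketch of the arithmetic, assuming an ideal-gas molar volume of 24.45 L/mol (~25 °C, 1 atm; our assumption):

```python
# Sanity check: TON of [Fe2+/IGOPS/PP3] from the total gas volume.
V_TOTAL_L = 8.42       # total V(H2 + CO2) after feeding 9 mL of FA
N_FE_MOL = 7.5e-6      # Fe2+ amount from the [7.5/15/7.5 umol] ratio
VM_L_PER_MOL = 24.45   # assumed molar volume at ~25 degC, 1 atm

n_h2 = (V_TOTAL_L / 2.0) / VM_L_PER_MOL  # H2/CO2 = 1/1, so half the gas is H2
ton = n_h2 / N_FE_MOL                    # moles of H2 per mole of Fe
```

The result is within a few turnovers of the reported TONs = 22,953, confirming that the TON is referenced to the Fe2+ amount.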
In homogeneous catalytic systems (imidazole or impyridine), the suppressive impact of H2O is not easily overcome. Nevertheless, as we demonstrate hereafter, the current heterogeneous systems provide a low-cost solution for removing this inhibiting effect: after H2 production ceased, e.g., after continuous H2 production from 9 mL of FA by [Fe2+/IGOPS/PP3], 7 mL of FA by [Fe2+/IPS/PP3] and 8 mL of FA by [Fe2+/impyridine@SiO2/PP3], the suspension was centrifuged and rinsed, and the resulting catalyst was reused, resulting in continuous H2 production from new quantities of added FA (Figure 6). This demonstrates that the decrease in catalytic performance after the dehydrogenation of approximately 8 mL of FA was not due to irreversible damage to the catalyst, as it can be resolved by means of a straightforward washing and drying procedure.
Within this context, we recovered and reused the present heterogeneous systems. Thus, when catalytic gas evolution stopped, the solid catalyst was collected via centrifugation, washed with methanol and employed for a second use under the same catalytic conditions, with no further Fe addition. In the case of the IGOPS nanohybrid (see Figure 7a), 41.6 mg was recovered and applied for a second use, producing 1.1 L of gases within 1 h, with TONs = 5597 and TOFs = 5597 h−1, and an average production rate of 30 mL/min (Table 3). Subsequently, when the catalysis stopped again, 21 mg of the solid catalyst could be recovered, washed and applied for a third use, providing TONs = 3228 within 1 h (Table 3). Overall, the [Fe2+/IGOPS/PP3] system was reused three times with no further [Fe2+/IGOPS] addition, providing 9.82 L of gases and 31,780 TONs, with the best performance achieved within the first use (Table 3). In contrast, [Fe2+/IPS/PP3] presents an inferior performance during the first use, but it seems to maintain its efficiency over the second and third uses, recycling 62.7 and 35.3 mg, respectively, and producing, in total, 9.1 L of gases and 29,260 TONs (Figure 7b, Table 3). The nanohybrid impyridine@SiO2 was practically non-reusable. Based on the data in Table 3, the catalytic performance of the reused [Fe2+/IGOPS/PP3] gradually decreased. To check whether the observed loss of activity is due to the loss of catalyst mass, the volume of gas produced in the recycling runs, i.e., the second and third use, was normalized to the nanohybrid material mass used for the first run (see Figure 7a, dotted lines).
This analysis demonstrates that reasons beyond catalyst mass loss are responsible for catalyst deactivation.
In order to investigate the drop in catalytic efficiency in the case of [Fe2+/IGOPS/PP3] and [Fe2+/impyridine@SiO2/PP3], two independent protocols were established: (i) monitoring the leaching of Fe2+ species into the solution after the end of the reaction using UV/Vis spectroscopy. According to [54] and our previous study [18], the UV/Vis spectra of Fe2+/PP3 complexes exhibit a prominent peak at 510 nm (as depicted in Figure S10 of the Supplementary Material), which is attributed to the occurrence of MLCT transitions [55]. When the solid catalyst was removed by filtration after the end of the reaction, the characteristic band at 510 nm did not appear, proving that the decrease in catalytic efficiency is not attributable to the leaching of Fe2+ atoms. (ii) Using FT-IR and Raman spectroscopy once the reaction was completed and the catalytic materials were recovered, it was revealed that the C-O bond (1320 cm−1 in FT-IR and 1190 cm−1 in Raman) of IGOPS and impyridine@SiO2 vanished (Figures S11a-c and S12a-c of the Supplementary Material). IGOPS and impyridine@SiO2 bear a glycidyl group with a C-O-C bond due to the grafting method applied, while IPS bears only a propyl group (see Figure S2 of the Supplementary Material). It seems that the C-O-C bond is less stable under reducing conditions; as a result, a release of [Fe2+-L, where L = imidazole derivative ligand] occurred, explaining the lower catalytic activity of IGOPS during the second and third use and the non-reusability of the impyridine@SiO2 system.
Comparison of [Fe-Imidazole@SiO 2 ] Nanohybrids with Other Immobilized Catalysts
The first heterogeneous systems used for FA dehydrogenation were metal particles, not complexes [56], primarily operating at high temperatures (T > 200 °C) and pressures, with FA in the gas phase. Since 2008, liquid-phase reactions that can proceed at near-ambient temperatures have been presented by the scientific community [57]. Utilizing the benefits of homogeneous catalytic metal complexes, an alternative strategy is the grafting of the metal complex onto a solid matrix. The first attempt to do this was made by the research group of Laurenczy in 2009 [39], immobilizing, by means of various techniques such as ion exchange and adsorption, a homogeneous Ru[meta-trisulfonated triphenylphosphine] complex on different supports, including polymers and zeolites. In some instances, satisfactory catalytic activity was obtained, with the highest TOF of approximately 427 h−1 observed for the zeolite PB Na-BEA (Table 4). Leaching of the catalytically active complex from the surface was a significant disadvantage, as it caused the progressive deactivation of the catalyst. In order to overcome these limitations, the same research group, in a more recent work, immobilized a Ru(II)-phosphine catalyst onto mesoporous silica supports. The heterogeneous catalytic complex MCM41-Si-(CH2)2PPh2/Ru-mTPPTS achieved a TOF = 2780 h−1 within 150 min [58]. Another example of an immobilized catalyst, consisting of a Ru metal center with sulfur ligands covalently bonded to a SiO2 support, was the Ru-S-SiO2 compound, with a moderate activity of TOF = 344 h−1 [59]. Ir complexes immobilized on SiO2 matrixes were shown to be the most promising, with TOFs > 10,000 h−1 [40,41], despite the high cost [8]. The impact of the central metal cation (Rh and Ir) was investigated by Yoon et al. [60] using half-sandwich Rh(III) or Ir(III) catalysts immobilized on bipyridine-based covalent triazine frameworks with tunable dimensions (bpy-CTFs).
They found that the Ir4.7@bpy-CTF400 and Rh1.7@bpy-CTF400 heterogeneous catalysts presented the highest H2 yields, with initial TOFs = 2860 h−1 and 1760 h−1, respectively. To the best of our knowledge, our research group was the first to present a cheap, non-noble-metal [Fe-phosphine@SiO2] catalyst with satisfactory performance (TONs = 8041 and TOFs = 4308 h−1 for Fe/polyRPhphos@SiO2) [42]. Overall, it seems that the present low-cost imidazole-based nanohybrids IGOPS and IPS linked with the non-noble Fe2+ metal may constitute heterogeneous catalytic systems with excellent stability and performance for FA dehydrogenation.
Supplementary Materials: The following supporting information can be downloaded at: www.mdpi.com/xxx/s1, Figure S1: Schematic illustration of the synthesis of an impyridine@SiO2 nanohybrid, Figure S2:
Conflicts of Interest:
The authors declare no conflict of interest.
Characterization of Sensorineural Hearing Loss in Children with Alport Syndrome
Most adults with Alport syndrome (AS) suffer from progressive sensorineural hearing loss. However, little is known about the early characteristics of hearing loss in children with AS. As part of the EARLY PRO-TECT Alport trial, this study was the first clinical trial to investigate hearing loss in children with AS over a timespan of up to six years. Nine of 51 children (18%) had hearing impairment. Audiograms were divided into three age groups: in the 5-9-year-olds, the 4-pure-tone average (4PTA) was 8.9 decibels (dB) (n = 15) in those with normal hearing and 43.8 dB (n = 2, 12%) in those with hearing impairment. Among the 10-13-year-olds, the 4PTA was 4.8 dB (healthy, n = 12) and 41.4 dB (hearing impaired, n = 6, 33%). For the 14-20-year-olds, the 4PTA was 7.0 dB (healthy, n = 9) and 48.2 dB (hearing impaired, n = 3, 25%). On average, the hearing thresholds of the hearing-impaired group increased, especially at frequencies between 1 and 3 kHz. In conclusion, 18% of children developed hearing loss, with the maximum hearing loss in the audiograms at 1-3 kHz. The percentage of children with hearing impairment increased from 10% at baseline to 18% at the end of the trial, as did the severity of hearing loss.
Introduction
Alport Syndrome (AS) is a rare genetic disorder of type IV collagen formation leading to progressive renal failure, ocular problems and the development of high-frequency sensorineural deafness [1][2][3][4][5][6][7]. Although approximately 70% of patients with AS suffer from progressive sensorineural hearing loss, little is known about the early development and characteristics of hearing loss in children with AS [2].
Pathogenic variants in the COL4A3, COL4A4 and COL4A5 genes encoding type IV collagen cause AS [2,3,8]. AS can be inherited in an X-linked (XLAS) or autosomal recessive (ARAS) form, or patients with a single heterozygous mutation can present as autosomal dominant AS (ADAS). Pathogenic variants in the COL4A5 gene cause XLAS, which accounts for 80-85% of all patients with AS [9,10]. The remaining 15-20% of AS cases are inherited autosomally and are caused by pathogenic variants in COL4A3 and COL4A4 [1,11]. Individuals with heterozygous AS variants also have an increased risk of end-stage renal failure (ESRF) [12]. AS is diagnosed clinically with the help of the patient's history, physical examination, detailed family history, renal biopsy and genetic testing [13].
Type IV collagen is an important component of the structure and function of the respective basement membranes in the kidney, cochlea and eyes. Type IV collagen, composed of six different α chains, assembles into three different heterotrimers (α1α1α2, α3α4α5 and α5α5α6), which are tissue-specific [14]. The α1α1α2 heterotrimer is a component of all basement membranes but can be partially replaced by more stable α3α4α5 heterotrimers in mechanically particularly stressed areas such as the renal glomerular basement membrane (GBM) in the kidney, the cochlea and the eyes [15,16]. The α3α4α5 chains form a triple-helix structure with a tight twisting of the collagen chains due to the presence of glycine at every third amino acid position. Many pathogenic gene variants in AS are glycine missense mutations, which lead to kinking of the triple-helix structure. Other variants lead to premature chain termination and faster degradation. Patients with splice-site or truncating variants, or with variants located at the 5′ end of the gene, have a significantly increased risk of extrarenal involvement. In addition, patients with hearing loss are more likely to develop ESRF early, which makes extrarenal involvement an important prognostic factor [11].
In the glomeruli of the kidney, absence or deficiency of α3α4α5 type IV collagen results in decreased mechanical stability and splitting of the GBM, finally leading to hematuria, proteinuria and progressive renal failure [16]. Similarly, in the cochlea, the developmental isotype switch between the α1α1α2 and α3α4α5 isoforms in the basement membranes of the spiral ligament and spiral limbus and in the basilar membrane underneath the Organ of Corti does not take place in AS, likely resulting in defects of cochlear homeostasis or micromechanics [17].
Hearing loss due to AS has never been described as congenital, and patients usually pass newborn hearing screening. It is usually first detected by audiometry in late childhood or early adolescence, presenting as a bilateral reduction in sensitivity to mid and high frequencies [18][19][20][21]. The risk of developing hearing loss before the age of 30 has been reported as 60% for patients with missense variants and up to 90% for patients with other variants in XLAS [11]. In general, previous studies have shown that approximately 70% of adult patients with AS develop hearing loss over time [2]. There is no specific treatment for delaying hearing loss in AS. Angiotensin-converting enzyme inhibitors (ACEi) are the standard off-label therapy for delaying renal failure in patients with AS. Registry data have shown that the progression of the renal manifestation can be delayed if treatment with ACEi is started before the glomerular filtration rate (GFR) has dropped below 60 mL/min [22].
To clarify whether an even earlier start to therapy is safe and effective, the EARLY PRO-TECT Alport trial (NCT01485978) was initiated in 2012 [23]. The trial was the first randomized, placebo-controlled trial to evaluate the safety and efficacy of renin-angiotensin-aldosterone system (RAAS) inhibition in children. Indicating the safety and efficacy of nephroprotective therapy, results for the primary endpoints have recently been published [24]. The aim of the present study is to describe and assess the secondary endpoint, hearing function characteristics, of children with AS who participated in the EARLY PRO-TECT Alport trial.
Patient Characteristics
The following results are based on 51 of the 66 patients (77%) from the EARLY PRO-TECT Alport trial for whom additional data regarding hearing function were obtained (Table 1). The 2 female and 49 male patients had a mean age of 9.0 ± 4.2 years at baseline, when 18 patients were in AS stage 0, 23 patients were in stage I and 10 patients were in stage II. The mode of inheritance was X-linked in 82% (42/51), autosomal in 16% (8/51) and unknown in one patient (2%). The median albuminuria at baseline was 61 mg albumin/gCrea (IQR 227.4 mg albumin/gCrea). Eighteen of the 51 patients reported relatives with hearing loss (35%). Within the trial, 35 of 51 patients were openly treated with ramipril and 16 of 51 patients entered the randomization arm (seven patients received placebo and nine patients received ramipril).
Clinical Audiological Characteristics
Hearing loss was diagnosed in nine of 51 patients (18%), while 39 of 51 children had normal hearing (77%). The audiological report was not conclusive in three patients (6%): a previously normal-hearing eight-year-old did not cooperate during audiometric testing; in a three-year-old child, pure-tone audiometry was not performed, but the report suggested reduced amplitudes of otoacoustic emissions; and in a third patient, only an ambiguous transient evoked otoacoustic emission (TEOAE) recording without interpretation was transmitted.
The youngest child with hearing impairment was a seven-year-old girl with XLAS. The mode of inheritance was X-linked in six and autosomal recessive in three of the nine children with hearing loss. Hearing was assessed in eight of nine patients using audiograms; in one patient, hearing loss was documented in the medical history without a severity grading. Severity of hearing loss was determined by the 4-pure-tone average (4PTA) of the better ear (normal hearing: 4PTA ≤ 25 dB; mild hearing loss: 26-40 dB; moderate hearing loss: 41-60 dB; severe hearing loss: 61-80 dB; profound hearing loss: >80 dB). At baseline, one child had mild hearing loss and four children had moderate hearing loss. Three children developed hearing loss during the trial; all three developed mild hearing loss, and one of the three progressed to moderate hearing loss within two years.
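The severity grading described above can be expressed as a small helper. This is a sketch assuming the conventional audiological frequency set of 0.5, 1, 2 and 4 kHz for the 4PTA; the trial's exact frequency set is not stated in this excerpt:

```python
# Sketch of the 4PTA computation and the severity bands used in this study.
# Assumption: 4PTA averages thresholds at 0.5, 1, 2 and 4 kHz (convention).
def four_pta(thresholds_db):
    """Mean of the four pure-tone thresholds (dB HL) of one ear."""
    assert len(thresholds_db) == 4
    return sum(thresholds_db) / 4.0

def classify(pta_better_ear_db):
    """Map the better-ear 4PTA to the severity bands defined in the text."""
    if pta_better_ear_db <= 25:
        return "normal"
    if pta_better_ear_db <= 40:
        return "mild"
    if pta_better_ear_db <= 60:
        return "moderate"
    if pta_better_ear_db <= 80:
        return "severe"
    return "profound"
```

For example, better-ear thresholds of 35/40/45/55 dB give a 4PTA of 43.75 dB, which falls in the moderate band, matching the range reported for the hearing-impaired groups here (41.4-48.2 dB).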
The genotype-phenotype correlations described for AS are based on the progression of the kidney disease.
Correlation between Renal Function and Hearing
As all children with AS in the EARLY PRO-TECT Alport trial were at the early stages of renal disease with a normal glomerular filtration rate, renal function was assessed by the amount of albuminuria: in 36 children, an audiogram and a test for albuminuria were performed simultaneously. In two patients with normal hearing, albumin excretion was not available at the time of the audiogram. Patients with normal hearing (n = 28) showed a median albuminuria of 45.8 mg albumin/gCrea, while the median albuminuria of children with hearing impairment (n = 8) was higher (300 mg albumin/gCrea) (Figure 1). Log-transformed albuminuria differed significantly (p ≤ 0.05) between children with hearing loss and children with normal hearing.
At baseline five of the nine children with hearing loss were in stage I of AS (microalbuminuria: 30-300 mg albumin/g creatinine (gCrea)) and the remaining four hearing impaired children were in stage II of AS (proteinuria: >300 mg albumin/gCrea). Hearing loss was not observed in children in stage 0 (microhematuria without microalbuminuria).
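The group comparison above can be reproduced in outline; a minimal sketch with hypothetical individual values (the trial itself reports only the group medians, 45.8 and 300 mg albumin/gCrea, and the p-value):

```python
import math
from statistics import median

# Hypothetical individual values chosen only to reproduce the reported
# group medians; they are not the trial's raw data.
normal_hearing = [12.0, 31.5, 45.8, 60.2, 110.0]   # mg albumin/gCrea
hearing_loss = [95.0, 210.0, 300.0, 450.0, 980.0]  # mg albumin/gCrea

# Albuminuria is compared on a logarithmic scale, as in the text.
log_normal = [math.log10(v) for v in normal_hearing]
log_impaired = [math.log10(v) for v in hearing_loss]

print(median(normal_hearing), median(hearing_loss))
```

The log transform is appropriate here because albuminuria spans roughly two orders of magnitude across the cohort.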
Audiograms
In 38 children (30 patients with normal hearing and eight patients with hearing loss), one or several audiograms were available. The audiograms were divided into three age groups, 5 to 9, 10 to 13 and 14 to 20 years (Figure 2). 30 patients had one audiogram, seven patients had an additional follow-up audiogram and one patient with hearing loss had an audiogram in each age group (three audiograms). The average time between examinations was 40 months. Accordingly, including nine follow-up audiograms, a total of 47 audiograms were analyzed. The mean 4PTA of the 5-9 years old children was 8.9 dB (n = 15) for patients with normal hearing and 43.8 dB (n = 2) for patients with impaired hearing. For the 10-13 years old children, the mean 4PTA was 4.8 dB for children with normal hearing (n = 12) and 41.4 dB for children with impaired hearing (n = 6). For the 14-20 year old children, the mean 4PTA was 7.0 dB (normal hearing; n = 9) and 48.2 dB (hearing impaired; n = 3). Among the 5-9-year olds, the proportion of hearing loss was lower (12%) than among the 10-13- and 14-20-year olds (33% and 25%). The mean 4PTA of the 14-20 year old children was higher in comparison to the 10-13 year old children. The follow-up of five hearing impaired children showed an annual progression of hearing loss between 0.3 dB and 9.7 dB.
The audiograms showed clear differences between children with and without significant hearing loss (Figure 3). Thresholds of normal-hearing subjects were typically 5-10 dB across all frequencies. Only four of 36 audiograms classified as normal-hearing according to WHO guidelines showed a 4PTA between 16 and 25 dB, of whom two later developed hearing loss, one patient was not followed up and in one patient the threshold increase was attributed to acute middle ear effusion. The average tone audiograms of children with hearing loss showed a typical broad mid-cochlear dip with a maximum at 1-3 kHz (trough-type). The curve of the 5-9 year old patients has a maximum dip at 1-2 kHz. In the 10-13 year old patients, the dip spread to 1-4 kHz. In the oldest group of the 14-20 year old patients, there was additional high-frequency hearing loss at 6 and 8 kHz (plateau-type). The maximum hearing loss did not exceed 60 dB.
Out of eleven audiograms with a 4PTA > 25 dB, six were classified as symmetrical and five as asymmetrical. From a total of 22 audiograms (eleven per side) from eight children with impaired hearing, the maximum hearing loss was in the mid-frequency range in 64% (14/22), and in 18% (4/22) it was in the high-frequency range (Table 2). In 18% (4/22) the maximum hearing loss was between the mid and high frequencies. The 22 tone audiograms assumed the following configurations: twice a flat-type, once a descending-type, nine times a trough-type with a depression in the mid-range frequencies ("cookie bite"), and ten times a plateau-type, in which thresholds were normal for low frequencies and elevated in both the middle and high frequency regions. Table 2. Classification of maximum hearing loss per ear.
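A rule of the kind used to sort audiograms into flat-, trough-, descending- and plateau-type configurations can be sketched as follows; the 10 dB margin and the function name are illustrative assumptions, not the study's exact criterion:

```python
def classify_configuration(low, mid, high, margin=10):
    """Classify an audiogram curve from its mean thresholds (dB HL).

    `low`, `mid`, `high` are mean hearing losses in the low, middle and
    high frequency bands. The `margin` cut-off is illustrative.
    """
    if mid - low >= margin and mid - high >= margin:
        return "trough-type"      # mid-frequency dip ("cookie bite")
    if mid - low >= margin and high - low >= margin:
        return "plateau-type"     # normal low, elevated mid and high
    if high - low >= margin:
        return "descending-type"  # loss confined to high frequencies
    return "flat-type"
```

For example, `classify_configuration(10, 45, 20)` flags the broad mid-cochlear dip described for the younger patients, while elevated mid and high bands yield a plateau-type.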
Long Term Tracking of Hearing Impairment in Individual Patients
In two patients we were able to analyze the course of hearing impairment over time-spans of 8 years and 4 years, respectively. Hearing loss of patient A was found before AS was diagnosed, when he was eight years old, and was treated with hearing aids. The first audiogram showed a symmetrical broad trough-type with a maximum at 1-2 kHz (Figure 4A). The medical history of the patient indicated regular language development in early childhood, suggesting normal hearing function. Consistent with an impairment of active cochlear amplification, Transient Evoked Otoacoustic Emissions (TEOAE) were not detectable on either side. Over the years the maximum hearing loss increased in the high frequencies.
The 4PTA changed from a minimum of 43.8 dB to a maximum of 50 dB. Overall, hearing loss hardly changed between the ages of 8 and 16 in the frequencies of 0.25 to 2 kHz, but increased to 12.5 dB at 3 kHz, to 20 dB at 4 kHz, 15 dB at 6 kHz and 25 dB at 8 kHz.
Patient B developed bilateral hearing loss during the trial. At the age of 8 years, the audiogram showed mild threshold elevations, but hearing loss was not significant according to the WHO classification, and TEOAE were present in the right ear for all frequencies and in the left ear at 2-4 kHz. The perception of monosyllabic words (Göttinger speech test 2) was slightly impaired, with 80/90/90% correct discrimination at 55/65/80 dB. At the age of 10 years, hearing loss had progressed, especially in the right ear, and hearing aids were prescribed. In addition to renal symptoms and hearing loss, the patient also suffered from dyslalia and hyperopia. Within four years, hearing loss (4PTA) increased by 35 dB (right ear) and 37.5 dB (left ear). The last audiogram, at the age of twelve, showed a symmetrical hearing loss with a broad trough-type pattern with a maximum at 2 kHz (Figure 4B).
Discussion
The EARLY PRO-TECT Alport trial evaluated the safety and efficacy of early therapy with Ramipril in children with AS. This was the first clinical trial to investigate hearing loss in children with AS (as a secondary end-point). The proportion of children with hearing loss in our trial was lower (18%) than described in the literature (about 40% of 11-year-old children with hearing loss) [11,21]. In adults, previous studies showed that approximately 70% of patients with AS develop hearing loss over time. Possible reasons for the smaller proportion of hearing loss in our trial could be the early disease stages and the milder variants in our study population.
Hearing loss in AS is never congenital and is described as first appearing in late childhood or early adolescence [18-21]. In the EARLY PRO-TECT Alport trial, hearing loss was not observed before children reached elementary school age, with the youngest child with severe hearing loss being seven years old. The small number of cases enables only a limited description of the progression of hearing loss. In all of our affected patients, hearing loss did not exceed 60 dB, which is consistent with the literature [11]. According to our data, bilateral sensorineural hearing loss is progressive, increasing between 0.3 dB and 9.7 dB per year.
Based on these data, we would suggest audiometric testing in normal-hearing AS patients every three years. Children with minimal threshold elevations (4PTA between 16 and 25 dB) should be followed closely, as they have a higher risk of progression to manifest hearing loss. Hearing impaired patients should be fitted bilaterally with hearing aids, for which routine technical and audiometric check-ups are typically performed at least once per year. Rehabilitation with hearing aids is usually successful and, despite the fact that the 2 kHz region is particularly important for speech perception, deficits in language acquisition are exceptional in AS.
Our study has several limitations. First, due to limited financial resources in this government-sponsored trial, we could only recommend a hearing test including 4PTA at the start and end of the trial and after three years, but were not able to include this as part of the official study protocol. This translated into many hearing tests in most of the children, but not into a complete data set. Second, the EARLY PRO-TECT Alport trial included toddlers with a limited ability to perform hearing tests. Finally, the trial included children with AS at very early stages of disease; it therefore included a number of children who are likely to develop hearing impairment only later in their course of disease. Consequently, we expected the number of children with AS and hearing loss to be lower than in a cohort of adult AS patients.
Hearing loss is greatest in the mid-frequency range with maximal hearing loss centered around 2 kHz. Hearing loss in AS used to be described as symmetrical [25]; however, in our study only six of eleven audiograms were symmetrical. With age, there is often an additional loss in the high-frequency regions, transforming the audiograms from a broad trough-type towards a gradually sloping pattern. The reason for the tonotopic pattern is unknown. The audiograms are clearly distinct from classical noise-induced hearing loss, which usually presents with a sharp notch at 4 kHz. Animal experimental data suggest that AS may lead to an increased vulnerability to noise-induced hearing loss [26]. Noise exposure could thus possibly be one factor in explaining the variability in the hearing phenotype in humans, and it is conceivable that the tonotopic frequency range of exaggerated noise-induced damage differs from normal ears. Further studies are required to address these questions both in animal experiments and in clinical datasets.
Regarding the mechanism of hearing loss, early human (and mouse) temporal bone studies demonstrate primarily atrophy of the stria vascularis, but also loss of inner and outer hair cells, and sometimes neural degeneration, whereas a later study described damage near the basilar membrane [27][28][29][30]. Our clinical data would be consistent with global defect of cochlear function, either due to altered cochlear micromechanics or due to stria deficiency. Where tested, changes in otoacoustic emissions, speech perception and auditory brainstem responses were as expected for the pure tone audiometry results of the same patients.
Children with impaired hearing had higher amounts of albuminuria, which corresponds with the association between hearing loss and faster loss of kidney function in AS described in the literature [5]. In our trial, hearing impaired children have a higher amount of albuminuria compared to children without hearing loss, but not all children with high amounts of albuminuria showed hearing loss. Possible reasons for this discrepancy could be the severity of the variants causing AS or external factors such as noise exposure.
Patients
The primary endpoints on renal function of the EARLY PRO-TECT Alport trial have been published recently [24]. Briefly, EARLY PRO-TECT Alport was the first randomized and placebo-controlled trial to evaluate the safety and efficacy of the ACEi Ramipril in children with AS, with a treatment period of up to 6 years between 2012 and 2019. The ethics approval code is 11/6/11. This study is registered with ClinicalTrials.gov, NCT01485978.
Children with definite diagnosis of AS aged between 24 months and 18 years and normal glomerular filtration rate were included in the trial.
Stages of AS were defined as:
• Stage 0: Microhematuria without microalbuminuria
• Stage I: Microalbuminuria: 30-300 mg albumin/g creatinine (gCrea)
• Stage II: Proteinuria: >300 mg albumin/gCrea

Written informed consent was obtained from all legal representatives and from all patients who were six years old or older. Children were either randomized, treated with Ramipril or placebo, or openly treated with Ramipril. Children needed to be untreated with an ACEi and to be in stages 0 or I of disease to qualify for randomization. ACEi-pretreated children, children in stage II of disease or those for whom legal representatives denied randomization could be openly treated with Ramipril. During the trial, clinical information about the patient and the medical history of the family were collected using a standardized questionnaire. Family history was considered positive when a family member mentioned any symptoms of hearing loss. All collected data were pseudonymized.
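The staging rule above can be expressed as a short function; a minimal sketch in which the function name and the `microhematuria` flag are illustrative, with the thresholds taken from the stage definitions:

```python
def alport_stage(albuminuria_mg_per_gcrea, microhematuria=True):
    """Stage of AS from albuminuria (mg albumin/gCrea), per the trial's definitions."""
    a = albuminuria_mg_per_gcrea
    if a > 300:
        return "II"   # proteinuria
    if 30 <= a <= 300:
        return "I"    # microalbuminuria
    if microhematuria:
        return "0"    # microhematuria without microalbuminuria
    return "unstaged"
```

For example, the median albuminuria of the normal-hearing group (45.8 mg/gCrea) falls in stage I, and that of the hearing-impaired group (300 mg/gCrea) sits at the stage I/II boundary.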
In 51 of 66 patients, additional data regarding hearing function was obtained from medical reports from specialists in pediatric audiology or otolaryngology, including medical history and audiograms. The data regarding hearing function was collected in the context of regular patient care (hearing test every three years recommended). A pre-existing diagnosis reported in the medical history or an audiogram with a 4PTA > 25 dB of the better ear was classified as hearing loss. In 38 of 51 children, one or several audiograms were available. Genetic testing for the underlying Alport variant was performed in all children included in this present study.
Audiograms
The 4PTA of the better ear was used to classify hearing loss according to the WHO criteria (1997) into mild (26-40 dB hearing loss (HL)), moderate (41-60 dB HL), severe (61-80 dB HL) and profound (>81 dB HL). The audiogram curves were divided into flat-type, trough-type and plateau-type configurations [20,21]. The frequencies of the audiograms were divided into low (0.125, 0.25 and 0.5 kHz), middle (1, 2 and 3 kHz), middle-high (2, 3 and 4 kHz) and high (4, 6 and 8 kHz) frequency regions. 4-Pure tone average (4PTA) was calculated as the mean hearing loss of the frequencies 0.5, 1, 2 and 4 kHz. Air conduction thresholds were used for analysis, unless an air-bone gap ≥10 dB in two neighboring frequencies indicated additional conductive hearing loss. In these cases, bone-conduction thresholds were used. Hearing loss was considered symmetrical if the difference in the same frequency between right and left ear was less than 15 dB [21]. To correlate hearing function with kidney function, each audiogram was matched with albuminuria values obtained within six months from the audiogram.
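The audiometric definitions of this section can be captured in a few lines; a minimal sketch with hypothetical function names, using the 4PTA frequencies, the WHO (1997) grades and the <15 dB symmetry criterion stated above:

```python
def four_pta(thresholds_db):
    """4-pure tone average: mean threshold at 0.5, 1, 2 and 4 kHz (dB HL)."""
    return sum(thresholds_db[f] for f in (0.5, 1, 2, 4)) / 4

def who_grade(pta_better_ear):
    """WHO (1997) severity grading applied to the better ear's 4PTA."""
    if pta_better_ear <= 25:
        return "normal"
    if pta_better_ear <= 40:
        return "mild"
    if pta_better_ear <= 60:
        return "moderate"
    if pta_better_ear <= 80:
        return "severe"
    return "profound"

def is_symmetrical(right, left):
    """Symmetrical if right/left differ by < 15 dB at every shared frequency."""
    return all(abs(right[f] - left[f]) < 15 for f in right.keys() & left.keys())
```

For instance, thresholds of 40/45/50/40 dB at 0.5/1/2/4 kHz give a 4PTA of 43.75 dB, i.e. a moderate hearing loss.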
Conclusions
Our long-term follow-up data originating from a clinical trial confirm that inner ear deafness in children is a very important early sign of AS, which can also be considered a prognostic factor for progressive kidney disease. Patients with Alport syndrome should have audiometric check-ups to ensure adequate early treatment with hearing aids, and check-ups should start just before elementary school age. In a child with hearing loss and hematuria, genetic testing should exclude or diagnose AS as the underlying disease. Future studies should place a special focus on the sociocultural burden and pathogenesis of hearing loss in children with AS, which limits quality of life before the kidney problems themselves become apparent.
Pure and Twisted Holography
We analyze a simple example of a holographically dual pair in which we topologically twist both theories. The holography is based on the two-dimensional N = 2 supersymmetric Liouville conformal field theory that defines a unitary bulk quantum supergravity theory in three-dimensional anti-de Sitter space. The supersymmetric version of three-dimensional Liouville quantum gravity allows for a topological twist on the boundary and in the bulk. We define the topological bulk supergravity theory in terms of twisted boundary conditions. We corroborate the duality by calculating the chiral configurations in the bulk supergravity theory and by quantizing the solution space. Moreover, we note that the boundary calculation of the structure constants of the chiral ring carries over to the bulk theory as well. We thus construct a topological AdS/CFT duality in which the bulk theory is independent of the boundary metric.
Introduction
In the last twenty-five years, our understanding of quantum gravity has improved significantly. Major steps forward include the construction of statistical mechanical models of the thermodynamics of supersymmetric black holes [1] and the identification of concrete examples of holography in quantum gravity [2]. In the present paper, we wish to add a twist to the latter story.
Holographically dual theories of gravity in anti-de Sitter space and conformal field theories on the boundary are under more calculational control in the presence of extended supersymmetry. Moreover, extended supersymmetry in the boundary conformal field theory is known to allow for topological twisting [3]. The latter procedure identifies a topological field theory that captures limited aspects of the original field theory, and that is much simpler. Thus, it is a natural question to ask for a holographic duality of topological theories, obtained by topologically twisting a holographic pair with extended supersymmetry. Indeed, the idea to twist the AdS/CFT correspondence is in the air of the times (see e.g. [4][5][6][7][8]).
We take a novel attitude towards identifying such a holographic pair. We first formulate a supersymmetric version of the proposed duality between the bosonic Liouville conformal field theory and a quantum gravity dual in anti-de Sitter space in three dimensions [9]. The gravitational bulk theory, which by definition matches the consistent and unitary dual Liouville theory, has peculiar properties that were scrutinized in [9]. In this paper, we extend the logic of this proposed duality to a theory with extended supersymmetry. Thus, we define a quantum theory of supergravity in three-dimensional anti-de Sitter space as the holographic dual to the N = 2 Liouville superconformal field theory. Again, the attitude is that since the latter is a well-defined, unitary and consistent conformal field theory, the bulk gravitational dual shares these properties. We call the resulting bulk theory three-dimensional supersymmetric Liouville quantum gravity.
We thus generate a minimal holographic pair to topologically twist. Topologically twisting the N = 2 Liouville theory is standard [10][11][12], yet subtle because of its non-compact nature.
The twisted theory of supergravity in the bulk is not standard; indeed, one of our main motivations for this work was to better understand the meaning of topologically twisting quantum theories of gravity in anti-de Sitter space. After defining the bulk theory, we can ask to what extent we can corroborate the duality through independent calculations performed on the boundary and in the bulk supersymmetric theory of quantum gravity in $AdS_3$.
The plan of the paper is as follows. In section 2 we define the bulk supersymmetric Liouville quantum gravity theory and identify some of its properties using holography. We then twist the bulk supergravity theory in section 3. We define the theory through twisted boundary conditions and argue that a topological conformal algebra emerges as the asymptotic symmetry algebra. We determine the spectrum of the topological theory both from a boundary and from a bulk perspective. We also summarize why the structure constants of the topologically twisted pair are bound to match. We wrap up with a summary and a discussion of related open research directions in section 4.
A Simple Supersymmetric Holography
We study the simplest supergravity theory in three-dimensional anti-de Sitter space that gives rise to a supersymmetric conformal field theory on the boundary with extended supersymmetry. The latter symmetry will allow us to topologically twist the theory. The supergravity theory has been analyzed in [13]. Appropriate boundary conditions were prescribed and the asymptotic superconformal symmetry algebra was derived [13,14]. The supergravity theory with the least number of fields and an N = 2 superconformal symmetry is constructed using two osp(2|2, R) Chern-Simons theories [13]. We first review and update the relation between the supergravity and the boundary conformal field theory actions. We then provide a large class of exact solutions. Finally, we propose to extend the duality between the supergravity and the super Liouville theory to the quantum realm.
The Bulk and Boundary Actions
We recall and mildly refine the analysis of the bulk supergravity action and its boundary reduction performed in [13]. To that end, we start with the supergravity action in three dimensions. We prescribe a negative cosmological constant $\Lambda = -l^{-2}$, where $l$ is the radius of curvature of the locally $AdS_3$ space-time. The supergravity action $S$ can be written as the difference of two $osp(2|2,\mathbb{R})$ Chern-Simons actions [13,15-17], defined on a line times a disk, with the level $k_R$ given in terms of the three-dimensional Newton constant $G_N$ by $k_R = l/(2G_N)$. The $osp(2|2,\mathbb{R})$ valued connections $\Gamma$ and $\bar\Gamma$ can be decomposed in terms of the Lie algebra generators, where the index $\alpha$ takes the two values $\alpha = 1,2$. These generators satisfy the $osp(2|2,\mathbb{R})$ commutation relations, in which the metric $\eta_{\alpha\beta} = \delta_{\alpha\beta}$ and its inverse $\eta^{\alpha\beta}$ can be used to lower and raise the $\alpha$ indices, and the $\lambda$ matrix is related to the two-dimensional epsilon symbol through the equation $\lambda_\alpha{}^\gamma \eta_{\gamma\beta} = \epsilon_{\alpha\beta}$. See [13] for more details. The $sl(2,\mathbb{R})$ components $A^a$ and $\tilde A^a$ of the connections are related to the dreibein $e^a_\mu$ and the Hodge dual of the spin connection $\omega^a_\mu$ through the formulas $A^a_\mu = \omega^a_\mu + \frac{1}{l} e^a_\mu$ and $\tilde A^a_\mu = \omega^a_\mu - \frac{1}{l} e^a_\mu$. We pick a bulk space-time of the form of a real line times a disk, with a cylindrical boundary, and choose a radial coordinate $r$ that increases towards the boundary. The connections $\Gamma$ and $\bar\Gamma$ satisfy generalized Brown-Henneaux boundary conditions at large radius $r$ [13,14], where the boundary light-cone coordinates are $x^\pm = t \pm \varphi$ and the coordinate $\varphi$ is compact with the identification $\varphi \equiv \varphi + 2\pi$. The fluctuating components of the metric, gravitinos and gauge field on the boundary are given by the quantities $L$, $\bar L$, $Q^{+\alpha}$, $\bar Q^{-\alpha}$, $B$ and $\bar B$, which are arbitrary functions of the boundary coordinates $x^\pm$.
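For orientation, the Chern-Simons action takes the standard form on a supergroup; a sketch, with the overall sign and the supertrace normalization assumed here and to be checked against the conventions of [13]:

```latex
S = S_{CS}[\Gamma] - S_{CS}[\bar\Gamma], \qquad
S_{CS}[\Gamma] = \frac{k_R}{4\pi} \int_{\mathbb{R}\times D}
\operatorname{str}\!\left( \Gamma \wedge d\Gamma
 + \tfrac{2}{3}\, \Gamma \wedge \Gamma \wedge \Gamma \right),
\qquad k_R = \frac{l}{2 G_N}.
```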
In order to make the action and the boundary conditions $\Gamma_- = 0 = \bar\Gamma_+$ compatible, one has to add a boundary term to the supergravity action $S$, where $\Sigma_2 = \mathbb{R}\times S^1$ is the asymptotic cylinder at $r \to \infty$. The extra term ensures that the variation of the total action is zero when the equations of motion and boundary conditions are satisfied. The time components $\Gamma_0$ and $\bar\Gamma_0$ of the connections in the total action (again denoted $S$) are Lagrange multipliers that implement the zero flux constraints $F_{r\varphi} = 0 = \bar F_{r\varphi}$. By solving these constraints one finds the spatial components $\Gamma_i$ of the gauge connections, for $i = r, \varphi$, of pure gauge form in terms of elements $G_{1,2}$ of the group $OSp(2|2,\mathbb{R})$ or a finite cover. Near the boundary the fields $G_{1,2}$ behave such that the boundary conditions (2.5) are indeed satisfied. Following the approach for the bosonic case [15], the action can be written as a difference of two chiral supersymmetric Wess-Zumino-Witten actions (2.10), where $\Gamma[g]$ is the Wess-Zumino term in the Wess-Zumino-Witten action. As we did in the bosonic case [9], we gauge the symmetry that rotates the zero modes of the fields anti-diagonally, such that the zero modes are locked and give rise to a single boundary field with one set of zero modes and both left moving and right moving oscillations. This is crucial to our purposes. The action is then a non-chiral supersymmetric Wess-Zumino-Witten action which only depends on a new group valued variable $g = g_1^{-1} g_2$. The boundary conditions induce a Drinfeld-Sokolov reduction of the degrees of freedom. Indeed, the boundary conditions in terms of the new variables take a constrained form, where $(\cdots)^{(a)}$ represents the component corresponding to the generator labeled by $a$.
We apply a Gauss type decomposition to the group element $g$, namely $g = g_+ g_0 g_-$ (2.14), where the group element factors are parameterized by the fields $\rho$, $\theta$ and $\psi^{\pm\alpha}$. In terms of these coordinates the action becomes (2.16), while the boundary conditions are expressed using the variable $u = \exp(\theta\lambda) = \cos\theta + \sin\theta\,\lambda$. The equations of motion for the fields $\rho$, $\theta$ and $\psi^{\pm\alpha}$ that follow from the action (2.16) can be simplified using the constraints deduced above from the boundary conditions. The resulting equations are the same as the equations of motion one obtains starting from the N = 2 super Liouville action (2.20).
Discussion
We reviewed the classical link between the supergravity action and the Liouville action with extended supersymmetry [13]. We pause to make various conceptual remarks on this connection, and prepare the ground for treating the quantum theories. Firstly, we note that the Liouville interaction potential naturally appears in the Chern-Simons formulation of the gravitational theory. (To obtain fields in a more standard normalization, with the self-dual radius set equal to $\sqrt{\alpha'} = \sqrt{2}$, one would define new fields $\rho_{stan} = \sqrt{k_R}\,\rho$ and $\theta_{stan} = \sqrt{k_R}\,\theta$, and similarly for the fermions.) For interesting discussions of how this relates to the boundary term in the action in the metric formulation, we refer to [20,21]. In the following, the presence of the potential terms is essential. Secondly, we note that the bulk gravitational theory is characterized by two integers. The first integer identifying our theory is the level $k_R$. The fact that it is an integer follows from our choice of gauge group, which we take to have a compact $SO(2) = U(1)$ factor. Indeed, the level $k_{U(1)}$ of a $U(1)$ Chern-Simons theory is an integer. The $U(1)_R$ part of the supersymmetric gravitational Chern-Simons action thus enforces the quantization of the R-symmetry level $k_R$. The second integer arises as follows.
The angular coordinate θ can be chosen to be identified modulo 2πN where N is an integer. For simplicity, we take the integer N to be positive. Thus, our theory is characterized by the pair of positive integers (k R , N).
To summarize, we have reviewed the analysis of [13] that identifies the classical actions of supergravity and the extended supersymmetric Liouville theory living on the boundary. We have proposed to glue the left and right moving zero modes in such a manner as to reproduce an aspect of the consistent Liouville conformal field theory spectrum on the boundary. We analyze the quantum theory further in subsection 2.3, but first we obtain a large class of exact classical solutions.
The Exact Solutions
For three-dimensional gravity with a negative cosmological constant, the Fefferman-Graham expansion for a metric solution to Einstein's equations terminates. The explicit all order solution in the bosonic case was determined in [23]. The method exploited the Chern-Simons formulation of the three-dimensional Einstein-Hilbert action. Presently, we demonstrate that this method can be extended to the supergravity case. We solve the Chern-Simons equations of motion with the boundary conditions given above, and thus provide an all order (truncating) solution to the Fefferman-Graham expansion in supergravity. Firstly, we impose a radial gauge condition compatible with the boundary conditions (2.5). The Chern-Simons equations of motion, or flatness conditions, then determine the connections. The components with $\mu = -$ and $\nu = r$, combined with the boundary condition $\Gamma_- = 0$, indicate that the connection component $\Gamma_-$ vanishes everywhere. The remaining equations (2.25) imply, firstly, that the connection component $\Gamma_+$ does not depend on the light-cone coordinate $x^-$ and, secondly, fix its radial dependence in terms of a function of $x^+$ alone, which is related to the boundary fields $L$, $Q^{+\alpha}$ and $B$. One can similarly obtain the solution for the connection $\bar\Gamma$. The metric and the R-symmetry gauge fields follow. The boundary energy-momentum tensor component $T$ and the R-symmetry current $J$ are related to the boundary functions $L$, $\bar L$, $B$, $\bar B$ through the equations of [13]. These currents and the supercurrents satisfy an asymptotic N = 2 superconformal algebra [13]. Therefore, the metric and the R-symmetry gauge fields can be written in terms of the energy-momentum tensor and the R-symmetry current. In similar fashion the gravitini are related to the boundary fields $Q^{+\alpha}$ and $\bar Q^{-\alpha}$ as well as the boundary supercurrents.
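In the bosonic sector, the metric written in terms of the boundary functions is expected to reduce to the familiar Banados form; a sketch, with the overall normalization of $L(x^+)$ and $\bar L(x^-)$ and the sign conventions assumed:

```latex
ds^2 = l^2 \left[\, dr^2
 - \left( e^{r}\, dx^+ - e^{-r}\, \bar{L}(x^-)\, dx^- \right)
   \left( e^{r}\, dx^- - e^{-r}\, L(x^+)\, dx^+ \right) \right].
```

Expanding the product reproduces the leading $-l^2 e^{2r}\, dx^+ dx^-$ term together with the $L\,(dx^+)^2$ and $\bar L\,(dx^-)^2$ deformations, truncating at order $e^{-2r}$ as the text states.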
Thus, we have obtained an exact classical solution for the Fefferman-Graham expansion for the minimal N = 2 superconformal AdS 3 supergravity.
Supersymmetric Liouville Quantum Gravity
In [9] a bulk theory of quantum gravity in three-dimensional anti-de Sitter space-time was defined as the dual of the bosonic Liouville conformal field theory. The latter theory is unitary and consistent, and its spectrum and three-point functions are explicitly known. These properties are thus inherited by the dual bulk theory. Various original characteristics of the resulting bulk theory were discussed in [9], to which we refer for a broader discussion that also largely applies to the present generalization. Let us only mention the lack of a microscopic as well as a macroscopic picture of black hole thermodynamics in this theory.
We extend the approach of [9] to include supersymmetry. We consider the N = 2 superconformal Liouville theory on the two-dimensional boundary to be the definition of a quantum theory of supergravity in the anti-de Sitter three-dimensional bulk. The classical actions agree, as demonstrated in [13] and recalled in subsection 2.1. We also identify the measures of the quantum theories. Our discussion of the quantum mechanical model is mainly based on the boundary conformal field theory since it is considerably better understood than the (reduced) quantum Chern-Simons theory on the super group.
Matching Bulk and Boundary Parameters
We parameterize the central charge of the N = 2 Liouville conformal field theory as c = 3 + 6/k where we will refer once more to the parameter k as the level. 4 Semi-classically, the gravitational level k R = l/(2G N ) is related to the central charge by the formula c = 3k R [13,14]. At large central charge and large level k R , we therefore have the relation k R ≈ 2/k.
In the quantum theory, where k ren R denotes the level of the quantum U(1) R current and the renormalized value of the cosmological constant in Planck units, the central charge must still be related to the level through the relation c = 3k ren R by the structure of the N = 2 superconformal algebra. Thus, the relation k ren R = 1 + 2/k is exact. 5 We have thereby identified bulk and boundary central charges, and therefore a first parameter of both theories.
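The parameter matching of this subsection can be collected in one place (all relations as stated above):

```latex
c = 3 + \frac{6}{k} \quad \text{(N=2 Liouville)}, \qquad
c = 3 k_R^{\mathrm{ren}} \quad \text{(N=2 superconformal algebra)}
\;\;\Longrightarrow\;\;
k_R^{\mathrm{ren}} = 1 + \frac{2}{k},
\qquad
k_R = \frac{l}{2 G_N} \approx \frac{2}{k} \quad \text{(semi-classically)} .
```

In particular, the semi-classical bulk regime of large k_R corresponds to small Liouville level k and large central charge c.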
The level k of the Liouville theory and therefore its central charge can be arbitrary. In particular, the level k can be small and therefore we can reach the semi-classical regime where the central charge c is large. Note that when we take into account the quantization condition on the U(1) Chern-Simons level k R , we find that the level k is twice the inverse of an integer.
We remark in passing that the identification of the level k R with (twice) the inverse of the level k is reminiscent of the FZZ or T-duality between Liouville theory and the cigar coset conformal field theory. This duality here obtains a holographic counterpart. The bulk is semiclassical when the central charge and level k R are large, while the coset curvature interactions are small when the level k is large and the central charge c is close to three. 4 It is the level of the sl(2, R) algebra that governs the parent Wess-Zumino-Witten model of the T-dual coset conformal field theory. 5 It is natural to ask how the renormalized radius k ren R is related to the classical coefficient k R in the Chern-Simons action. The Chern-Simons level may be one-loop perturbatively renormalized by the dual Coxeter number which is 1 for osp(2|2) [24]. If we assume this to be the case, we obtain k ren R = k R + 1 for positive levels. Then c = 3k ren R = 3k R + 3 = 3 + 6/k and k R = 2/k is the exact relation between the classical coefficient k R and the level k. While this is natural from a boundary current algebra perspective, it is hard to solidly justify from a three-dimensional path integral perspective on Chern-Simons theory on super groups. See Appendix E of [22] for a detailed critical discussion.
Secondly, we note that the radius of the U(1) R direction that corresponds to the angular direction in the N = 2 Liouville conformal field theory is determined by the action (2.16) and the equivalence relation on the angular coordinate θ. Since we chose a radius which is N times the minimal radius in the bulk theory, we have a radius N α ′ /k in the Liouville theory. This matches the semi-classical identification of the radius from the action in equation (2.20).
The Spectrum
Now that we have matched the two parameters of the bulk and the boundary theories, we can exploit our knowledge of the quantum theory on the boundary to make statements about the holographic dual in the bulk. The spectrum of N = 2 Liouville theory as well as its three-point functions are known. We review only a few salient features of the conformal field theory and remark on their gravitational counterparts. It will certainly be interesting to explore the dictionary further.
In this subsection, we concentrate on the spectrum and remark on its gravitational counterpart. The spectrum of N = 2 Liouville theory operators is classified in terms of N = 2 superconformal primaries. Moreover, it is convenient to parameterize the spectrum of N = 2 superconformal primaries in terms of variables natural in the cigar coset model SL(2, R) k /U(1). Superconformal primary operators are characterized by their conformal dimension h and R-charge q. We parameterize these charges in the NS sector in terms of the sl(2, R) spin j and u(1) charge m (see e.g. [25][26][27][28] for background). Since the theory has N = 2 superconformal symmetry, spectral flow will determine the spectrum in the R sector. The spectrum consists of continuous representations with sl(2, R) spin of the form j = 1/2 + is with s real; for these states there is a considerable gap with respect to the non-normalizable SL(2, C) invariant vacuum. We recall that the continuous states create hyperbolic monodromies [29] and correspond to black hole geometries in the bulk [30]. In contrast to the bosonic theory discussed in [9], the supersymmetric theory at hand contains black hole primary states with spin, when the boundary excitation has non-zero winding. Since the left and right conformal dimensions are positive in the unitary N = 2 Liouville theory, the cosmic censorship bound for spinning black holes is satisfied in the bulk. Furthermore, there are discrete representations in the spectrum of the N = 2 Liouville conformal field theory [25][26][27]. These operators have quantized spins j and angular momenta m whose absolute value ranges from j to +∞. The range of allowed spins is 1/2 ≤ j ≤ (k + 1)/2. These discrete representations correspond to elliptic monodromies and can be identified with bulk geometries that correspond to particles with spin. 6 This too contrasts with the bosonic set-up [9].
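Since the explicit charge formulas were not reproduced above, we note the commonly used NS-sector cigar expressions at zero winding; conventions vary between [25][26][27][28], so this is indicative rather than a verbatim restoration:

```latex
h = \frac{m^2 - j(j-1)}{k} , \qquad q = \frac{2m}{k} .
% Continuous representations: j = 1/2 + i s with s real.
% Discrete representations: 1/2 \le j \le (k+1)/2 .
```

One checks that the chiral primary condition h = q/2 then reduces to m² − m = j(j − 1), consistent with the identification j = m used for the chiral/chiral spectrum below.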
The Chiral/Chiral Spectrum
We argued above that the radius of the angular direction of the supersymmetric Liouville theory is R = N α ′ /k . The multiple N determines the Witten index [31,32] and therefore the number of chiral primaries. The left and right moving quantum numbers m and m̄ are quantized accordingly. To list chiral-chiral primary operators, we impose the relation h = q/2, which in turn forces the equality j = m for a chiral primary on the left. A similar reasoning holds on the right, such that for chiral-chiral primary operators we have m = m̄ and the winding w equals zero. The number of allowed chiral/chiral primaries equals N since the range of allowed discrete spins j is of width k. 7 We will study the bulk counterpart to the spectrum of chiral primaries in subsection 3.3.
In summary, we have briefly discussed a few consequences of the proposed holographic duality between three-dimensional supersymmetric Liouville quantum gravity and the boundary conformal field theory. The duality certainly deserves further scrutiny -we concentrate on defining and analyzing its topological counterpart in the remainder of the paper.
A Twisted Holography
In section 2 we set up a simple holography with extended supersymmetry. In this section, we topologically twist the two members of the duality. In a first part, we twist the bulk supergravity theory by defining a twisted set of boundary conditions in the presence of a non-trivial boundary metric. We argue that the boundary conditions give rise to a topological N = 2 superconformal algebra as the asymptotic symmetry algebra. Secondly, we review how every theory with N = 2 superconformal symmetry in two dimensions gives rise to a topological quantum field theory. We recall properties of the observables, corresponding to chiral ring elements in the physical theory. Finally, we argue that the topologically twisted theories match in both their observables and their structure constants.
The Twisted Supergravity Theory
In this subsection, we describe the topologically twisted supergravity theory. In order to identify the twisted theory, we study the holographic duality in the presence of a non-trivial background boundary metric. Indeed, we know from the boundary perspective that the difference between the physical and the topological theory lies in the manner in which they couple to a boundary metric. We concentrate on the case where the boundary metric is conformally flat for simplicity. Importantly, we propose the boundary conditions that will give rise to the topologically twisted boundary theory. We then compute the action and verify that it is equivalent to the topologically twisted boundary action.
As a by-product, we make several observations. Firstly, the bulk action and boundary conditions in the presence of a conformally flat metric can be obtained by a formal gauge transformation from the standard case. 8 The boundary Liouville action in the presence of a non-trivial conformally flat boundary metric is obtained by a field redefinition closely related to the formal bulk gauge transformation. Secondly, this observation holds both in the correspondence between pure gravity and bosonic Liouville theory and in the relation between pure supergravity and supersymmetric Liouville theory. Thirdly, we show that a further formal gauge transformation in the bulk transports us from the bulk extended supergravity theory to its topologically twisted version. The latter satisfies new boundary conditions. 9 We confirm that the bulk theory gives rise to a boundary action which is topologically twisted, and of total central charge zero.
After this conceptual introduction, it is time to delve into the details. Concretely, we propose that the bulk supergravity theory after topological twisting and coupling to the conformally flat boundary metric g (0)µν = exp(2ω)η µν corresponds to a bulk Chern-Simons theory of the type discussed in section 2, supplemented with the new boundary conditions: These boundary conditions are related to those described in equation (2.5) by the gauge parameter: and a similar gauge transformation on the right where the group valued factors f 1,2 and h are given by Crucially, the factors e ±ω iT 2 are responsible for the topological twisting. 10 Under this gauge transformation, the original Chern-Simons action becomes a Chern-Simons action for the gauge transformed fields plus a boundary term linear in the fields and a term which only depends on the gauge transformations. We will drop the latter term since it contains no dynamical degrees of freedom. Indeed, we consider the metric to be a static background. To make the action compatible with the boundary conditions on the gauge transformed fields, we not only need to add the extra term given in section 2.1, but also a term whose variation cancels the variation of the additional boundary term. Since this term is linear in the fields, the additional term will serve to cancel it. In summary, we can start from the same action as in subsection 2.1, but for the gauge transformed fields. Making use of this formal connection, one can work out the consequences on the various steps of the derivation of the boundary action reviewed in section 2. These steps lead to the new glued Wess-Zumino-Witten field: As a consequence, the boundary action undergoes the shift of fields: where the hatted variables correspond to the Gauss decomposition (2.14) of the group valued fieldĝ. 11 The fermions ψ ± andψ ± are defined by and the same definition holds for their hatted counterparts. 
The resulting boundary action follows; the ω dependence arises from the metric, its determinant and the two-dimensional Ricci scalar R (2) . The boundary action equals the action of the topologically twisted N = 2 Liouville theory [37] after the field redefinition. Finally, we analyze how the energy-momentum tensor depends on the background field ω after the shift (3.6). A useful point of view is that the ω dependence arises from rendering the derivatives in the energy-momentum tensor covariant. As such, the term proportional to ∂ω∂θ in the shifted energy-momentum tensor must arise from a two-derivative term acting on θ (where the second derivative of the scalar must be made covariant). This reasoning, combined with the fact that the only linear way to shift the energy-momentum tensor by symmetry currents is by the derivative of the R-current, fixes the shifted energy-momentum tensor. The same shift can be gleaned from the term proportional to the field θ in the twisted action (3.8).
The Topological Conformal Field Theory on the Boundary
We turn to recall how to topologically twist the boundary theory. It is understood that a two-dimensional theory with N = 2 superconformal symmetry provides a starting point for defining a topological quantum field theory [10,11]. Indeed, the twisted energy-momentum tensor component T top = T + ∂J/2 gives rise to a zero central charge conformal field theory that is independent of the metric [10,11]. The BRST charge whose cohomology defines the space of observables of the resulting topological quantum field theory is Q = G + where the supercurrent G + of positive R-charge is a current of dimension one in the twisted theory. The current G − of negative U(1) R charge becomes a current of dimension two after twisting, and is the pre-image of the BRST exact energy-momentum tensor: The observables of the theory originate in the chiral-chiral ring elements of the physical theory when we assume that the left and right topological twist are of the same type. The energymomentum tensor is BRST exact and this indeed implies that after coupling to the metric on the boundary, there is no metric dependence in the topological theory. In the topological Landau-Ginzburg model that we deal with presently, the action is in fact not BRST exact, but rescaling the boundary metric inside the action still leads to localization of the path integral on constant field configurations [38]. 12 The localization of correlators was thoroughly exploited in the solution to topological N = 2 Liouville theory [12]. There, the theory with Witten index N equal to the positive integer level k was solved. The topologically twisted N = 2 Liouville theory that we have at hand is a generalization of the one studied in [12]. We momentarily digress to sketch the generalization of the analysis of [12] to include all cases of interest here. Firstly, we allow for a general positive level k.
Secondly, we allow for any Witten index N. As a consequence, the Landau-Ginzburg superpotential of the N = 2 Liouville theory can be written in terms of the chiral superfield Y, which is related to the Liouville superfield Φ by Y −1 = exp( (1/N) √(k/2) Φ ) (in the conventions of [12], to which we refer for background). This operator is well defined due to the particular choice of radius. The spectrum and superpotential lead to (strictly normalizable) chiral/chiral primary operators of the form Y −2 , Y −3 , . . . , Y −N . This description agrees with the one given in subsection 2.3. To find agreement with the strictly normalizable NS-NS sector states of [32], these operators must act on the almost-normalizable chiral state at the bottom of the continuum with j = 1/2 = m. The resulting states have R-charges n/N + 1/k where n = 1, 2, . . . , N − 1. When we compute the two-point function in the chiral state at the bottom of the continuum, corresponding to the operator e Φ/√(2k) , we find a selection rule from anomalous R-charge conservation. 13 Because of the form of the superpotential (3.11) and the chiral ring operators Y −j , we surmise that the resulting topological conformal field theory and its deformations are governed by the N-th reduced KdV integrable hierarchy. It would be good to substantiate this prediction along the lines of [12,39].
The Gravitational Chiral Primaries
In subsections 2.3 and 3.2, we described the chiral ring elements from the perspective of the N = 2 superconformal Liouville theory on the boundary. In this subsection we investigate the chiral ring from the viewpoint of the bulk supergravity theory in three-dimensional anti-de Sitter space-time. We demonstrate that a quantization of the classical supergravity chiral solution set agrees with the space of observables in the twisted topological conformal field theory.
To set up the calculation, we remind the reader of a property of the chiral ring [40]. Chiral primary states are defined to be the states annihilated by all the operators G ± r>0 as well as G + −1/2 . Such states obey the equality h = q/2. In fact, the opposite is also true: a state |φ⟩ obeying the equality h = q/2 can be proven to be a chiral primary state [40]. Indeed, first note that the condition h = q/2 is equivalent to operator equalities on the state. Consider the states J n>0 |φ⟩ which have conformal dimension h − n and charge q. These states do not satisfy the unitarity bound h ≥ q/2 and thus have to be zero. Then, one can use the commutation relation [J n , G ± r ] = ±G ± n+r to obtain the vanishing of the states G + n−1/2 |φ⟩ = J n G + −1/2 |φ⟩ − G + −1/2 J n |φ⟩ = 0, (3.14) for n > 0. Therefore the state |φ⟩ is chiral primary. In summary, if we demand that a configuration is annihilated by the operators G + −1/2 and G − 1/2 , then it is a chiral primary solution. Thus, we look for the supergravity solutions that are annihilated by these two operations. We perform the calculations in the physical theory of section 2.
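The first step of the argument, the equivalence of h = q/2 with operator equalities, follows from a standard N = 2 superconformal algebra anticommutator; we restore it here since the displayed equations were elided:

```latex
\{ G^-_{1/2} , \, G^+_{-1/2} \} = 2 L_0 - J_0
\;\;\Longrightarrow\;\;
\big\| G^+_{-1/2} |\phi\rangle \big\|^2 + \big\| G^-_{1/2} |\phi\rangle \big\|^2
 = \langle \phi | \, ( 2 L_0 - J_0 ) \, | \phi \rangle
 = (2h - q) \, \langle \phi | \phi \rangle .
```

In a unitary theory the left-hand side is a sum of norms, so h = q/2 forces both G + −1/2 |φ⟩ = 0 and G − 1/2 |φ⟩ = 0, which is the starting point for the unitarity argument in the text.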
The variation of the supergravity fields under supersymmetry transformations with parameter ǫ α is given in [13]. The variation generated by the operator G + −1/2 corresponds to the parameter choice ǫ 1 = iǫ 2 = (1/2) exp(iθ/2). Demanding that the variation δB equals zero then provides the constraint Q +1 = iQ +2 on the gravitini. That constraint also guarantees that the variation δL vanishes. Finally, the demand δQ +α = 0 gives rise to only one independent equation. Defining the tensor L̃ = L + (2π/(2k R )) B 2 [13], the equation simplifies. A similar analysis leads to the equations Q +1 = −iQ +2 as well as an analogous differential equation. If we require the bulk solution to be invariant under both transformations generated by the charges G + −1/2 and G − 1/2 , the solutions satisfy the relations (3.24), which correspond to the classical values in planar coordinates. These are the expectation values corresponding to the insertion of a state with conformal weight h at 0 and ∞, and of charge q = 2h. In other words, these classical values do correspond to chiral primary sources, as we wished to demonstrate. The quantization of the space of chiral solutions can be understood as follows. The quantization of the parameter q = 2h is implied by the quantization of the U(1) R charge. The periodicity of the angular coordinate θ and the consequent periodicity of the U(1) R Wilson line prescribe that the coefficient 2h of the U(1) R connection A (R) is quantized in units of 1/N. Otherwise, the source that generates the classical configuration (3.24) would not be gauge invariant in the quantum theory. The constraint of bulk gauge invariance agrees with the periodicity constraint in the dual theory. The bounds on the spin j are more intricate to argue. We provide a sketch of the reasoning that leads to those bounds. The lower bound is a consequence of only allowing normalizable discrete representations as sources. The upper bound can most easily be viewed as a consequence of spectral flow [41].
The action of spectral flow and its relation to boundary spectral flow was analyzed in [13]. We surmise that the boundary argument for the upper bound on j can be carried over to the bulk. Clearly, these arguments use in part the microscopic description of the bulk theory implied by its definition as the dual of N = 2 Liouville conformal field theory. Thus, the analysis of the chiral solution space illustrates well at which stage the definition of the bulk theory through the boundary quantum field theory intervenes.
The Bulk Chiral Ring
Chiral primary operators have an operator product at coinciding points which turns the set of chiral primaries into a ring. In the case where the level k is an integer and the Witten index N is chosen equal to the level, the topological Liouville theory and its deformations were carefully studied in [12]. The chiral ring structure constants at the conformal point were fixed by anomalous U(1) R charge conservation. We claimed that the same property holds in the theory with more general Witten index N in subsection 3.2. In the following, we stress that the bulk supergravity theory is subject to the same anomalous R-charge conservation argument.
Indeed, the anomalous R-charge conservation follows from contour deformation on the sphere, as well as from the anomalous transformation of the R-current under conformal transformations. The latter is an immediate consequence of the form of the topological energy-momentum tensor. As a result of the twist, the conformal transformation property of the U(1) R current changes due to the third order pole in its operator product expansion with the energy-momentum tensor, proportional to c/3. As a consequence, the charge conservation rule on a boundary sphere constrains the R-charges q i of the vertex operator insertions. Equivalently, the anomalous conservation rule comes from the term (3.27) in the action, which arose upon twisting. This term then feeds, on the sphere, into the conservation rule that follows from integrating over the zero mode of the field θ, again leading to anomalous R-charge conservation. An important point to realize is that the same argument applies, mutatis mutandis, to the bulk supergravity theory. Indeed, the above reasoning depended only on the symmetry structure of the theory, which is known to be present in the bulk supergravity theory as well. Equivalently, we derived the term (3.27) from the twisted bulk supergravity theory in subsection 3.1. In summary, anomalous R-charge conservation fixes the structure constants at the conformal point both on the boundary and in the bulk.
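For concreteness, the conservation rule alluded to above takes the standard form for a topologically twisted N = 2 theory on the sphere; we restore it here since the displayed equation was elided (here g denotes the genus, and c = 3 + 6/k as in section 2):

```latex
T^{\mathrm{top}} = T + \tfrac{1}{2} \, \partial J
\quad \Longrightarrow \quad
\sum_i q_i = \frac{c}{3} \, (1 - g)
\;\;\xrightarrow{\;g = 0\;}\;\;
\sum_i q_i = \frac{c}{3} = 1 + \frac{2}{k} .
```

The same rule, applied in the bulk via the symmetry structure described in the text, fixes the structure constants of the bulk chiral ring.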
It is important to note that in our duality, we assume that the bulk and boundary measures of integration in the path integral are the same. When applying localization in the style of [38], we assume this in particular for the zero mode integration over bosonic as well as fermionic zero modes.
Finally, we make the observation that when we talk about the gravitational side, we have made it manifest that what we mean by a topological theory of gravity in anti-de Sitter spacetime is a theory that is twisted in such a manner as to become independent of the boundary value of the metric.
Conclusions
We studied a pure supergravity AdS/CFT correspondence in three dimensions. Firstly, we defined a quantum theory of supergravity as the holographic dual to the two-dimensional N = 2 Liouville superconformal field theory on the boundary. Secondly, we topologically twisted the conformal field theory and the bulk theory. We analyzed how the known boundary prescription for the topological twist is reflected in the bulk theory of gravity, thus providing valuable insight into the topological twisting of quantum theories of gravity with extended supersymmetry. In particular, the boundary theory is topological when it is independent of the boundary metric, and for the gravity dual, the same property must hold. Thus, by a topological theory of gravity in anti-de Sitter space, we mean a bulk theory of quantum gravity with negative cosmological constant that is independent of the boundary metric.
The topologically twisted N = 2 Liouville theory, as well as its deformations were solved for in [12] for positive and integer level k equal to the Witten index N. To reach the semiclassical limit of our theory of gravity, we need a small level k. We sketched the generalization of the topological Liouville theory to these circumstances. It will be interesting to flesh out the sketch along the lines of [12,39]. Mapping out the gravitational equivalent of the space of deformations of the topological quantum field theories should be even more interesting. A hitch in this program potentially arises from the fact that the parameterization of the bulk metric and measure in terms of the Liouville field still requires a better conceptual understanding. These exercises are bound to contain lessons for general classes of topological holographic dualities, and perhaps for the untwisted dualities as well.
It could be interesting to apply the formalism of [7] for bulk path integral localization to the example at hand. A judicious choice of background could be the BTZ black hole, while the equivariant BRST charge would be related to the super isometry generated by the charge G + −1/2 . It should be noted that the asymptotic symmetry transformation of fluctuations in our example depends on those fluctuations [13]. This non-linearity is introduced via the condition that asymptotic boundary conditions must be preserved, and dealing with this non-linearity requires a slight generalization of [7]. Certainly, a worthwhile project is to apply the lessons learned in this example to AdS 3 superstring theory. That will provide a topological AdS/CFT duality in a context that incorporates a microscopic and macroscopic description of black hole entropy.
In summary, we believe the field of topological AdS/CFT is promising and hope our contribution spawns more conceptual entries in the holographic dictionary.
Study on the Properties of Wire Rope Grease Added with Lithium Grease
Hydrocarbon grease is usually used to lubricate and protect wire rope, but several problems have been found in service, such as poor resistance to high- and low-temperature working conditions and inadequate waterproof performance. In this paper we employ lithium grease to lubricate wire rope. By comparing the performance of the two greases experimentally, we conclude that lithium grease has advantages that hydrocarbon grease lacks; the properties of wire rope grease were successfully improved by adding some lithium grease, and the percentage added is a key factor.
Introduction
Wire rope is one of the key factors for the normal operation of much lifting equipment, and is widely used in mining, forestry, industrial production, agriculture, transportation and housing construction [1]. Extreme temperatures, dust and corrosion, water washing and heavy loads are common in these working environments, and many authors believe that proper lubrication can improve the service life of the wire rope [2]. Wire rope is composed of continuous wire strands wound around a central core (Figure 1). The individual wires and strands rub against each other in order to adjust themselves to the curvature of the rope wound around the core. Research shows that the lifetime of a properly lubricated wire rope is at least about twice that of an improperly lubricated one [3].
Firstly, the working environment (extreme weather, rain and snow, wind and sand, corrosive gases and so on) imposes some unique requirements on the selection of wire rope grease [4]. The grease should not become brittle in freezing weather, should not run off at high temperature, and should not be thrown off during long periods of operation; wire rope grease should therefore have high- and low-temperature resistance, adhesion, water resistance and shear stability.
Secondly, lubricating performance is essential for a moving wire rope: lubrication prevents wear on the individual wires and strands [5,6]. Hydrocarbon grease is usually used for the anticorrosion and lubrication of wire rope. It is prepared by blending solid hydrocarbons and mineral oil in a certain proportion; during cooling a network structure forms in which the mineral oil loses its flowability, yielding the grease. The grease has the following advantages: 1) It has excellent chemical and colloidal stability and will not soften due to degradation and decomposition of the thickener. 2) It is almost insoluble in water and non-emulsifying, preventing water and air from reaching the surface of the wire rope [7]. 3) With functional additives, the grease shows better lubricating, anti-rust and anti-wear performance [8].
General-purpose lithium grease is suitable for lubricating rolling and sliding bearings and other friction parts of mechanical equipment because of its good lubricating, water-resistance, mechanical-stability and anti-rust properties [9]. When external forces are applied, its soap particles are relatively soft and deform easily; lithium greases are widely used in industry but rarely for wire rope lubrication [10]. In this paper, we therefore add lithium grease to hydrocarbon grease in an attempt to improve the performance of wire rope grease.
Specimen Preparation
The preparation of the wire rope grease: two equal parts of base oil were weighed and heated to 100°C with stirring for 1 hour; thickener A and thickener B were then added to the two base oils separately, the mixtures were heated to 145 ± 5°C, and stirring was continued for 2 hours. The samples were then cooled and analyzed. The preparation of the lithium grease: about half of the base oil and 12-hydroxystearic acid were added to the kettle and heated to about 90°C; the lithium hydroxide aqueous solution was then added and the mixture heated to 100°C with stirring for 30 minutes, then to about 170°C to dehydrate for 10 minutes, and finally to 200-205°C for 5-10 minutes. The remaining half of the oil was added and the mixture cooled.
Properties Comparison of Lithium Grease and Hydrocarbon Grease for Wire Rope
Dropping point, low-temperature performance and the slide test are the three main performance measures in the analysis of wire rope grease. In summer, the temperature of a coating on the surface of a wire rope can reach 50-60°C after sun exposure and frictional heating; a higher dropping point guarantees that the grease does not flow or drip and that it adheres firmly to the surface of the wire rope to lubricate and protect it. In winter, a wire rope grease with excellent low-temperature performance still lubricates and protects the wire rope below -30°C without cracking or dropping off. Adhesion is an important property of lifting-rope grease in field operation: a grease with good adhesion adheres to the surface of the wire rope without shedding, protecting and lubricating the rope effectively for a long time and improving its service life. A comparison of the properties of the lithium soap and hydrocarbon thickeners is shown in Table 6. The results show that the lithium grease performed excellently in the wire rope grease performance tests. The grease prepared with thickener A shows good low-temperature performance; its low dropping point indicates that it is better used in relatively low-temperature situations. The grease prepared with thickener B has high- and low-temperature performance that meets the requirements for wire rope grease, but compared with lithium grease, this semi-solid hydrocarbon lubricant becomes brittle below -40°C. Lithium grease, however, not only performs well at high temperature but also shows better anti-brittleness at low temperature, and it performed well in the slide test.
Water-Resistance Test
At present, domestic standards do not require testing the water-resistance property of wire rope grease. Hildebrant and Slack of the Imperial Oil Company of Canada proposed testing the water-spray resistance of wire rope grease at the 58th annual meeting of the international grease association [3]; the ASTM D4049 method is equivalent to the domestic standard SH/T 0643-1997 (2005) test method for the water-resistance performance of wire rope grease, used to assess the product's performance in the presence of water. Thickener B has better water resistance, while the lithium soap has poor water resistance: under water washing, 83.9% of the lithium grease sample was washed away, which would leave the wire rope exposed or semi-exposed to rusting and breakage. A blend of lithium grease and hydrocarbon grease was therefore expected to solve this problem.
Blending Experiments with Lithium Grease and Hydrocarbon Grease
The lithium grease performed well in the wire rope grease tests, but its water resistance is poor. Accordingly, hydrocarbon grease and lithium grease were mixed in various proportions and the properties of the blends were tested; Table 7 shows the experimental scheme and test results. As Table 7 shows, after adding 5% lithium grease to the hydrocarbon grease the blend remained non-slip on all six sides of the slide test, i.e., the slide performance was not affected by the addition of lithium grease, while the dropping point, low-temperature performance, and water resistance all improved, giving the product a wider operating temperature range. Compared with the hydrocarbon wire rope grease, the water resistance was not reduced by the addition of lithium grease. Since both greases use the same base oil, the difference is attributed to the thickeners. The fibers of thickener A and of the lithium soap were therefore examined by scanning electron microscopy, as shown in Figure 2 and Figure 3, respectively.
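The blend compositions can be stated as a simple mass balance. The sketch below just computes the component masses for a given lithium-grease fraction; the helper function and the 1000 g batch size are illustrative assumptions, while the 5% proportion is the one discussed above:

```python
def blend_masses(batch_g: float, lithium_frac: float):
    """Split a grease batch into lithium-grease and hydrocarbon-grease
    masses for a given lithium mass fraction (0..1)."""
    if not 0.0 <= lithium_frac <= 1.0:
        raise ValueError("lithium fraction must be between 0 and 1")
    lithium_g = batch_g * lithium_frac
    return lithium_g, batch_g - lithium_g

# Assumed 1000 g batch at the 5% lithium addition discussed above.
li_g, hc_g = blend_masses(1000.0, 0.05)
print(f"lithium grease: {li_g:.0f} g, hydrocarbon grease: {hc_g:.0f} g")
```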
As can be seen from the micrographs of thickener A and the lithium-based thickener, the latter has a longer, twisted fiber structure, whereas the fibers of thickener A are short and unevenly distributed but lie close to one another in a denser arrangement; this compact fiber network prevents water from penetrating and thus gives better water resistance. The long, twisted fibers of the lithium-based thickener are prone to rolling up under water washing, so it has poor water resistance. When the lithium-based thickener is mixed into hydrocarbon thickener A at low content, however, it is dispersed among the hydrocarbon fibers; surrounded by this compact structure, it is difficult to move under water washing, and the compact structure wraps the large lithium fibers so that they do not move. The water resistance is therefore improved.
Increasing the Proportion of Lithium-Based Grease in the Blend
Samples with increasing proportions of lithium grease in the blend were prepared and their properties tested, as shown in Table 8. As the table shows, the slide tests are still passed as the proportion of lithium-based thickener increases, the dropping point rises, and the low-temperature properties improve, but the water resistance deteriorates. From the electron micrographs of the two thickeners, it can be concluded that as the proportion of lithium-based thickener increases, the hydrocarbon thickener A becomes dispersed between the twisted lithium fibers, forming a structure similar to a net holding stones. When washed, the twisted-fiber net that forms the main structure moves; under the weight of the dense hydrocarbon fibers this movement increases, so more sample is lost and the water resistance worsens.
Conclusion and Suggestions
From our exploratory experiments on using lithium grease to lubricate wire rope, we draw the following conclusions. Compared with hydrocarbon grease, lithium-based grease not only performs well at high temperature but also shows better resistance to embrittlement at low temperature, and it performed well in the slide test. When lithium grease is added to hydrocarbon grease at a proportion of 5%, the resulting sample performs better than pure hydrocarbon grease, especially in water resistance. As the proportion of lithium grease is increased further, the dropping point of the sample rises, but its water resistance deteriorates greatly, becoming worse than that of pure lithium grease, with water washing weight loss exceeding 98%. It is therefore suggested that adding a small amount of lithium grease to hydrocarbon grease can improve both the high- and low-temperature performance of the samples and their resistance to water spray.
A Systemic-Relational Ethical Framework for Aquatic Ecosystem Health Research and Management in Social–Ecological Systems
This paper argues that if the goal of slowing global ecological degradation, and of sustained improvement in aquatic ecosystem health, is to be achieved, then a departure is required from the traditional, discipline-focused approach to aquatic ecosystem health research and management. It argues that a shift needs to be made towards systemic, integrative, and holistic approaches, drawing on diverse disciplines, with values and ethics as fundamental to such approaches. The paper proposes the systemic-relational (SR) ethical framework for aquatic ecosystem health research and management as an essential contribution to addressing the potential intractability of the continuing deterioration of aquatic ecosystem health. The framework recognises the centrality of values in aquatic ecosystem health management, and the role of ethics in negotiating, and constructively balancing, conflicting values to realise healthy ecosystems in social–ecological systems (SES). The implications of the framework for the research-practice interface, decision making, policy formulation, and communication are discussed.
[Figure caption fragment: Benefits and costs arising from ecosystem (dis)services flow into societal constituencies and affect well-being, manifested in physical, economic, and social capitals. Values, institutions, management, and governance contexts are the key influencers of the drivers of pressures and of the flows of benefits and costs. The framework was developed from the basic idea of the ecosystem service cascade model [43].]
Introduction
Aquatic ecosystems are critical to socio-economic development, and society relies on them for a variety of ecosystem services [1]. However, the continuing global deterioration of aquatic ecosystem health, despite some progress in the science and practice of ecosystem health, presents a potentially intractable wicked problem. Part of this intractability arises from insufficient appreciation, by both scientific (e.g., ecologists) and practice (e.g., planners, policy makers, and resource managers) stakeholders, of the complexity inherent in the interconnectedness and interdependence of the ecological and social subsystems within any social-ecological system (SES) context. The conceptualisation of the SES recognises that the ecological and social subsystems together form an inseparable, integrated, coupled unitary system, i.e., the SES. We argue here that if the goal of slowing ecological degradation, and of sustained improvement in aquatic ecosystem health, is to be achieved, then a departure is required from the traditional, discipline-focused approach to aquatic ecosystem health research and management. A shift needs to be made towards systemic, integrative, and holistic approaches, drawing on diverse disciplines, with values and ethics as fundamental to such approaches. We take values to be what specific societal groupings or constituencies express or believe, at a generalised level, to be good or bad, and ethics as a systematic concern with the principles by which conduct, morals, and values are clarified and justified, as we seek to distinguish between right and wrong in our behaviour towards other people and towards nature [2].
In this paper, we propose the systemic-relational (SR) ethical framework for aquatic ecosystem health research and management as an essential contribution to addressing the potential intractability of the continuing deterioration of aquatic ecosystem health. The paper has six main parts. In Section 1 (this introduction), we revisit and analyse the concept of ecosystem health and place it within the SES context, arguing that it is an integrative rather than a purely ecological framing. In Section 2, we present the traditional view of ecosystem health and the tools, indicators, and approaches currently used for its assessment. We posit that the traditional biophysical approach addresses only the ecological component of the SES, is therefore not sufficiently integrative, and is thus unlikely to slow the current trajectory of deterioration of ecosystem health. We then present (in Section 3) the SR framework as an integrative framework for aquatic ecosystem health research and management in the SES. In Section 4, we argue that value judgement permeates all aspects of the use and protection of aquatic ecosystems, inasmuch as societal and professional value judgements are involved in defining an acceptable ecosystem health condition. The implications of the SR framework are discussed in Section 5, in terms of the imperative for integrative assessment tools, the flow of information, the ethics of sharing the benefits and costs of ecosystem (dis)service flows, the UN Sustainable Development Goals (SDGs), and a call for collaborative research and transformative communication. We conclude (in Section 6) by emphasising that policy, managerial, and research efforts need to be redirected to focus on the SES as an integrated whole, seeing it as the unit of worth towards which decision-making and developmental and preserving action are directed.
The ethically grounded SR approach recognises that the ecological and social-economic components together form an integrated and dynamic complex system, and that these two major components are in ongoing complementary and co-supportive interaction, with multiple, cross-scale dynamic feedbacks [2]. The approach recognises the centrality of values in aquatic ecosystem health management, and the role of ethics in negotiating, and constructively balancing, conflicting values in order to realise healthy ecosystems. It postulates that within the SES, values relate to and are derived from the SES as a whole, from its components, from the relationships between the components, and from the emergent properties of the system [2]. In this paper, we thus propose an SR ethical framework for integrative and holistic aquatic ecosystem health research and management in the SES. Before laying out the perspective of the SR framework on aquatic ecosystem health research and management, it is critical to briefly analyse what is meant by aquatic ecosystem health, as well as the approaches and indicators with which it is currently assessed.
In the 1990s, ecologists led the debate on whether or not the concept of ecosystem health, being value-laden, was amenable to scientific analysis [3][4][5]. Since then, it has gained scientific acceptability, and ecologists, policy makers, and resource managers now widely and frequently use the term to describe ecological conditions, using a variety of indicators and endpoints [6,7]. It is not our intention to provide a review of aquatic ecosystem health research, as this has been done elsewhere (e.g., [7]), but to provide some analytical consideration of the concept of ecosystem health as a point of departure for our argument. We agree with [5,8], who distinguish aquatic ecosystem integrity from health: the former refers to a state or condition in which the natural processes, structure, dynamics, activities, functions, and all related biophysical attributes of an ecosystem are maintained with no or minimal human influence, shaped only by natural evolutionary and biogeographical processes; the latter (health) refers to a human construct (i.e., value-laden) describing a preferred or an acceptable condition of an ecosystem that has been influenced by humans [5,9]. Thus, the concept of ecosystem health recognises that humans are an integral component of the ecosystem and that the continuing supply (sustainability) of ecosystem services is necessary for sustainable human development [10]. Ecosystem health can therefore be viewed as having two components: the biophysical component, referring to the state of the biological, physical, and chemical conditions of the ecosystem, and the social-economic component, which depends on the supply of vital ecosystem services to meet human social-economic development needs. The continuing supply of ecosystem services depends on the maintenance and functionality of the processes, organisation, structure, and function of the biophysical component of the ecosystem [9].
Thus, a "healthy" aquatic ecosystem is one that has the capacity to provide social-economic benefits (benefits flowing from ecosystem services) while still being able to sustain its ecological functioning.
Inherent in the concept of ecosystem health are the notion of human dependence on aquatic ecosystems and the capacity of human activities to alter ecosystem properties, so that sustainability can only be achieved if there is a balance between human use of ecosystems and their protection. The achievement of this balance must be guided by ethical considerations, taking into account the distribution of costs and benefits between all stakeholders (both present and future generations) and environmental externalities. This involves selecting and invoking criteria to relate societal values to each other, and to determine the relative value given to human and non-human components of the ecosystem, as well as to shorter- and longer-term sustainability agendas.
With regard to access to the benefits of aquatic ecosystem services, integrated water resource management (IWRM) ushers in an interest-based, consensus-seeking approach to negotiating access to aquatic ecosystem services. However, the ethical challenge is not only to ensure equal participation by all interested and affected stakeholders, e.g., the privileged versus the marginalised, the weak versus the strong, the urban versus the rural dwellers, the informed versus the uninformed. That is, the challenge is not only a matter of facilitation to bring out the various viewpoints in spite of 'unequal starting blocks'; it is how to reconcile different principles for taking account of differing viewpoints in taking matters further, such as allowing for those unequal starting blocks and for trade-offs between different value positions. Addressing the underlying power relations, e.g., between the privileged and the marginalised, that shape negotiated access to the benefits of ecosystem services presents a fundamental ethical challenge to resource managers, policy makers, and decision makers [11]. In this regard, a distinction between practical and ethical challenges needs to be made. With good facilitation skills, managers can ensure that all stakeholders air their views, and that all values are documented and accommodated. However, beyond this facilitation process, what is done with the multiple interests, perspectives, and values of both the ecological and social subsystems, how values are traded off, and by what principles and criteria this is done, are fundamental ethical challenges that need to be addressed if the benefits accruing from ecosystem services are to be accessed in a manner that is ethically sound.
By access to the decision-making process relating to ecosystem services, we mean the creation of an enabling environment that empowers all stakeholders to have an equal voice and influence over decision-making around resource control, use, and allocation, as well as over the sharing of costs and benefits. A place at the table does not, however, guarantee effective participation in negotiations about a project's outcome. The danger remains that "participatory approaches may mask different levels of power and influence, exaggerate the level of agreement reached, and expose disadvantaged groups to manipulation and control by more powerful stakeholders" [12]. Stakeholder empowerment must therefore be seen as a precondition for effective engagement and participation [13]. Differences in empowerment between stakeholders can be perceived as threats to mutual cooperation, as weaker groups may feel alienated and become resistant to cooperation. In [14], it was noted that although relatively well-endowed stakeholders in the Sabie Catchment in South Africa, for example, appeared to cooperate in decision-making around aquatic ecosystem services within their catchments, their appetite for sustained and broad-based cooperation with other stakeholders dwindled over time because of their perception of risks regarding returns on the time and effort invested in cooperative deliberations. Thus, risk perceptions relating to the importance and value of participating in vehicles for natural resource management need to be addressed as a way of broadening participation in decision-making processes.
A social-ecological system (SES) view of the concept of aquatic ecosystem health has several implications for the research-practice interface. First, the centrality of values regarding multiple claimants to ecosystem services at any given time and place, and the need to identify criteria by which the values underpinning such claims can be judged, not only between societal constituencies, but also between societal and ecological constituencies. Second, a clear recognition of the interdependence and mutually constitutive nature of the health of the biophysical and social-economic subsystems as components of the SES. Third, the inherent and inescapable dependence of humans on aquatic ecosystems, through the exertion of pressure and consequent ecosystem responses, which may supply ecosystem services and disservices [15,16]. Fourth, the ethical implications raised by the value-laden term "acceptable ecosystem health condition", signifying the centrality of collaboration between ecologists and other stakeholders as a prerequisite for addressing deteriorating ecosystem health. And finally, the imperative for holistic and integrative approaches to addressing the complex, interwoven challenges of deteriorating aquatic ecosystem health.
We argue that ecologists should take advantage of the full scope of the concept of aquatic ecosystem health to engage meaningfully in research capable of informing practice, particularly with regard to decision making, policy formulation, and resource management; doing so while taking cognisance of the complexity of the SES is where the SR ethical framework for aquatic ecosystem health research and management holds significance. Thus, this paper argues for an SR ethical framework as an enabler of holistic and integrative research and management of aquatic ecosystem health, taking cognisance of the complexities of the SES within which ecosystems should be viewed and managed.
A View of Aquatic Ecosystem Health from the Traditional Ecological Perspective
Traditionally, ecologists have been the primary leaders within the science of ecosystem health, which has had the biophysical component of the SES as its focus. To this end, different approaches and indicators have been developed to assess, evaluate and indicate the health of the biophysical component. The biophysical approach to aquatic ecosystem health research focuses on assessing and indicating the health of ecosystem structure and function [17] without explicit consideration of ecosystem services flow, benefits, and their impact on the social component of the SES. The aquatic ecosystem structure, which relates to the organisation, patterns, abundances, heterogeneity, distribution, and diversity of the biotic and abiotic components of the ecosystem, forms a critical part of the biophysical ecosystem health assessment. Chemical, biological, and physical indicators are the pillars of structural ecosystem health research and management [18,19]. These indicators of ecosystem health are used to assess the departure of the current state of the system from a reference, predevelopment and/or baseline condition. Metals, pesticides, nutrients, dissolved oxygen, pH, temperature, turbidity, and total dissolved solids are examples of physico-chemical indicators of ecosystem health.
The field of biomonitoring/bioassessment is devoted to using biological indicators, from molecular to ecosystem levels, for assessing ecosystem health. It relies on the sound understanding that resident biota (biological indicators) are able to provide an indication of ecosystem health, integrating the effects of chemical, physical, and biological stressors [6]. Commonly used biological indicators include macroinvertebrates, fish, and vegetation [20][21][22]. Approaches to biomonitoring for assessing ecosystem health include multivariate, multimetric, biotic indices, and the use of multiple biological traits [6]. Apart from physico-chemical analysis and biotic assessment, other aquatic ecosystem structural components often assessed include the habitat heterogeneity and complexity, hydrology and stream morphology, all of which may or may not be integrated to provide a holistic view of aquatic ecosystem health from a biophysical perspective.
Assessing aquatic ecosystem function is the second component of the traditional approach to ecosystem health research and management [23]. Ecosystem function, which includes material compartments, processes, and fluxes, is critical to maintaining and sustaining aquatic ecosystem health. Functional processes that are often assessed include nutrient cycling, organic matter processing and decomposition, productivity, biomass turnover, top-down and bottom-up controls, carbon and energy fluxes, and pools of materials [24]. As with structure-based assessment, indicators of ecosystem function have been developed; those most frequently used in ecosystem health assessments include food web dynamics, leaf litter breakdown and decomposition rates, ecosystem respiration, and analyses of functional traits and feeding groups [23]. Ecosystem structure and function are linked, with the former operating as the organising constraint for the latter [25]. The biophysical approach to assessing aquatic ecosystem health is widely used, and in some jurisdictions, e.g., the United States of America (USA), Europe, Australia, and South Africa, states or their equivalents are mandated through legislative provisions to monitor the biological, chemical, and physical conditions of aquatic ecosystems and, where necessary, to take steps to restore degraded systems to acceptable conditions. The biophysical approach supposes that aquatic ecosystems have inherent value in their own right, and that setting health criteria based on biotic and abiotic components would protect the ecosystem as well as assure the long-term supply of ecosystem services. While this is true to some extent, people may find it difficult to respect such criteria if their aspirations and desired ecosystem services are not considered in setting them, hence the imperative for an integrative approach.
Further, the biophysical approach does not go far enough to include the social-economic context inherently embedded in the concept of ecosystem health, because assurance of ecosystem services supply alone does not guarantee a fair, equitable, and just distribution of the benefits and costs arising from such services. Nevertheless, the biophysical approach is widely used for the assessment of ecosystem health in many countries, e.g., [26]. While the approach indicates the health of the structure and function of the ecosystem, which provide the necessary basis for the ecosystem services upon which humans depend, we argue that any approach that is not sufficiently integrative of the entire SES is unlikely to slow, halt, and/or reverse the current trajectory of aquatic ecosystem health deterioration. This is particularly true given the overriding influence of humans on ecosystem health in the Anthropocene [27]. An integrative approach should therefore integrate assessments of both the biophysical and social components of the SES, and of the ways in which these components interact. We have thus argued here for an SR framework for aquatic ecosystem health research and management that is integrative of the entire SES.
A Systemic-Relational Framework for Managing Aquatic Ecosystem Health
As we have already argued, inherent in the concept of aquatic ecosystem health is a clear recognition of the coupling of social and ecological systems, requiring that an SES view be made explicit in the science and practice of aquatic ecosystem health. Doing so requires a framework that recognises the centrality of values and that develops a systemically interrelated set of environmental ethical principles enabling work with diverse values [2]. The SR framework for ecosystem health management recognises the centrality of values and the overarching importance of ethics in achieving healthy systems. Within any given SES context, humans exert pressure on aquatic ecosystems, and we argue that the trajectories of this pressure are largely influenced by societal values (Figure 1). Key drivers of pressure on aquatic ecosystems include escalating human population growth, land use change, economic activities such as agriculture and industry, and consumerist lifestyles [28], all of which are partly driven by value systems.
Aquatic ecosystems subjected to pressure respond, and part of that response includes the supply of ecosystem services and disservices, components that are respectively beneficial and detrimental to society (Figure 1). For example, a stream receiving point-source pollution from a wastewater treatment plant may offer the service of waste purification and disposal, but as part of its broader response may become a breeding site for pathogenic bacteria and disease vectors harmful to society.
Thinking about ecosystem services and disservices is value-laden [29], as various societal constituencies prioritise and rank the relative importance of ecosystem (dis)services in decision-making and policy matters. For example, in some rural African communities [30], certain rivers, or parts thereof, are considered sacred because they are regarded as places where the ancestors manifest themselves. In such communities, people may act to protect the health of such rivers and to keep them clean because of the high priority accorded to the cultural services the river provides, yet the same river may be seen by a more distant urban dweller only as a sewage disposal pipe. While the river is valued by both constituencies, the values accorded to it differ in relation to the benefits derived from its ecosystem services. If the ways people relate to and derive value and benefits from ecosystems are not made explicit in aquatic ecosystem health research, management, and policy formulation, there is a danger of prioritising and ranking certain values over others in relation to ecosystem services without rigorous debate and negotiation. Thus, in seeking to address the management of ecosystem health from a holistic SES perspective, the SR framework regards societal values as central to the conception of ecosystem health. It is therefore critical to make explicit the value systems underpinning claims to ecosystem services in relation to the overarching value of maintaining the overall health of the SES. Debating and reconciling these values in relation to the entire SES is where the SR ethical framework holds significance for aquatic ecosystem health research and management.
Human interactions with nature do not only result in the supply of ecosystem services; they may also lead to negative consequences, whether intended or not. Such negative consequences have been termed "ecosystem disservices" [29,31,32]. While there is much debate in the literature regarding the appropriateness of the concept of ecosystem disservices and its implications for science and society [31], we use the concept here to highlight ethical considerations in achieving the health of the SES. We argue that if the ultimate value to be pursued is the overall health and functionality of the SES, as postulated by [2], then explicit attention needs to be paid not only to the benefits arising from human-environment interactions, but also to the costs arising from such interactions.
This is particularly critical because, over time, negative effects on any component of the SES, however tangential, may distort its overall systemic functionality. Further, from a social perspective, those who bear most of the costs of human-environment interactions are often the marginalised, tangential groups and the weak, who are barely visible and less able to adapt [33]. We thus argue that if SES functionality is to be sustained, then the benefits and costs arising from ecosystem (dis)services need to be treated and debated equally in policy and managerial matters, in a manner in which all components of the SES, their interactions, and their relationships are treated equitably and accorded equal moral and managerial regard. Such explicit consideration of ecosystem (dis)services is likely to draw the attention of managers and policy makers to the ethical implications of their actions. Our call here has significant implications for ecosystems research: (i) distinguishing between disservices arising from inherent ecological processes within ecosystems and those arising from human transgression of ecological boundaries; (ii) developing SES indicators for assessing and quantifying ecosystem (dis)services; (iii) developing methods and tools for valuing the costs arising from ecosystem (dis)services; (iv) developing communication strategies for ecosystem (dis)services; (v) recognising the context-specificity of ecosystem disservices, as what is considered a disservice in one context may not be in another; and (vi) integrating ecosystem (dis)services and biophysical structure and function within a holistic assessment framework for policy and management.
If the goal of achieving healthy ecosystems is to be realised, as captured in the relevant UN SDGs, then the ways in which the components of the SES interact, and the dynamic interactive processes within the constituencies of each component/subsystem, need to be fully accounted for in research and practice, and treated equitably, as far as practically possible, in policy and managerial matters. In the SR framework presented in Figure 1, the circular shapes within the social and ecological subsystems represent different constituencies, e.g., the haves versus the have-nots and urban versus rural dwellers for the social subsystem, and, e.g., taxonomic versus functional richness/diversity and water quality versus quantity for the ecological subsystem. Implicitly, these different constituencies are often treated inequitably in decision-making, represented by the different sizes of the circles, where a larger circle indicates that a constituency is accorded a higher value in decision, managerial, and policy matters. The SR framework therefore seeks to highlight the potential danger of such often inadvertent inequitable treatment and thus advocates equity for all components of the SES as a deliberate managerial strategy.
The SR framework extends equity beyond its conventional use, which is usually associated with social constituencies (e.g., equity between the rich and the poor, or gender equity), to include equity between social and ecological constituencies, e.g., equity in water allocation between constituencies of society and those of the aquatic ecosystems. Some progress is already being made in this direction through the implementation of instream flow requirements [34]. The equity advocated by the SR approach implies due regard for the rights of each of the SES components and their constituencies, without which achieving a healthy aquatic ecosystem within any SES context could be difficult. For example, a social grouping denied access to water services for whatever reason(s) may cause chaos, which may lead to overall SES dysfunctionality. Likewise, continuing pollution of the aquatic ecosystem indicates a lack of due regard for the ecological component of the SES, which may also lead to overall SES dysfunctionality over time.
Value Judgement Pervades All Aspects of the Use and Protection of Aquatic Ecosystems in Social-Ecological Systems
As already indicated, human activities exert pressure on aquatic ecosystems, altering their structure, function, and processes. The alteration of ecosystem integrity leads to a particular state of aquatic ecosystem health and associated ecosystem services (Figure 2). While the natural sciences can provide evidence for the magnitude, frequency, and nature of alteration to ecosystems, it is society that ultimately has to judge what constitutes an acceptable alteration and whether or not the resulting ecosystem health condition is sustainable in perpetuating vital biophysical structure, function, and processes and the supply of associated ecosystem services [35]. Biophysical indicators have been developed to determine when ecosystem health is in a 'good' or 'poor' state from an ecological perspective [18], but in the policy and decision arena, the distinction between 'good' and 'poor' is the domain of ethics in as much as societal and professional value-judgements are involved. When the health condition is deemed acceptable (a value-judgement) based upon the biophysical condition, and the benefits derived from the ecosystem have positive social-economic consequences, no intervention to slow or halt alteration may be needed (Figure 2). However, when the health condition is deemed unacceptable (a value-judgement) based upon either the biophysical condition or diminished ecosystem services supply, interventions such as policy formulation and the setting up of specific environmental programmes and environmental targets may be initiated.
Depending on the degree of alteration, the extent of ecosystem damage, and the spatial-temporal scales of the damage, the ecosystem may be restored so that it continues to maintain its biophysical processes and also to provide society with benefits. We have argued that an anthropocentric view, with exclusive focus on benefits derived from the ecosystems by humans, or a non-anthropocentric view with the idea of protecting nature for its intrinsic values, are not sufficient to achieve the health of aquatic ecosystems, in as much as they fail to pay attention to the systemic properties and dynamic interactions, relationships, and the emergent properties of the SES. We therefore argued that the SR approach to environmental ethics [2] could better be relied upon in determining when alteration is acceptable or not, and when interventions are necessary to bring the components of the SES, their constituencies, interactions, and properties into balance.
What Then Constitutes an Acceptable Aquatic Ecosystem Health Condition in Social-Ecological Systems?
There is no simple answer to this question, because of the inherent complexity of decisions that are value-laden. In South Africa, for example, a set of biophysical indicators and characteristics are used to assess the degree of deviation of the present state of a site, e.g., of a river from its predevelopment condition, or that condition which would be expected if human impacts/alterations were minimal [36]. Depending on the present state of the site, a recommendation can then be made to restore the health condition to a 'desired' future condition. In biophysical terms, the desired future condition is usually expressed between the Ecological Category A-D [36], where A is pristine/natural and D is poor or a largely modified condition. Depending on the recommended desired future condition, several management interventions, including policy formulation/alteration, ecological target setting, designing restoration programmes, and awareness raising, can be triggered.
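The assessment logic described above, in which a present ecological state is compared against a recommended desired future condition across Ecological Categories A-D, can be sketched as follows. The numeric score thresholds and the intervention list below are illustrative assumptions, not the official South African method of [36]:

```python
# Hypothetical sketch of the assessment logic described above: a biophysical
# condition score (0-100, 100 = pristine) is mapped to an Ecological Category
# (A = pristine/natural ... D = largely modified), and management interventions
# are recommended when the present state is worse than the desired future
# condition. All thresholds are invented for illustration.

CATEGORY_BOUNDS = [(90, "A"), (75, "B"), (60, "C"), (0, "D")]

def ecological_category(score: float) -> str:
    """Map a 0-100 biophysical condition score to a category A-D."""
    for lower, cat in CATEGORY_BOUNDS:
        if score >= lower:
            return cat
    return "D"

def recommend_interventions(present_score: float, desired_category: str) -> list:
    """Return interventions if the present state is worse than the desired
    future condition; category letters order from best ('A') to worst ('D')."""
    present = ecological_category(present_score)
    if present <= desired_category:  # 'A' <= 'B' alphabetically, i.e. as good or better
        return []                    # condition acceptable, no action needed
    return ["policy formulation", "ecological target setting",
            "restoration programme", "awareness raising"]

print(ecological_category(82))           # B
print(recommend_interventions(55, "B"))  # interventions triggered
```

The sketch makes the value-laden step visible: the science supplies `present_score`, but choosing `desired_category` is a societal judgement.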
Values, and the ethical contextualisation of these values in relation to aquatic ecosystems, often underpin recommendations for the desired future condition, but this is usually not made explicit in the scientific methods and approaches used in the biophysical assessment processes. For example, in South Africa, in recommending the desired future condition, the present ecological state (PES) and the ecological importance and sensitivity (EIS) are both taken into account [36]. The EIS refers to the importance of the particular aquatic ecosystem in terms of sustaining critical ecological and biodiversity elements and functions, and supplying ecosystem services, as well as the system's potential to retain its resilience [31]. Aquatic ecosystems considered to be of high EIS are often accorded high protection priority, whereas those with low EIS are accorded low protection priority [37]; in Europe, by contrast, authorities are mandated through the water framework directive (WFD) to restore all waterways to a good ecological condition.
Though not necessarily made explicit, the decision to assign one ecosystem a higher protection priority than another is a reflection of societal value judgements underpinned by various worldviews, as well as the level of risk society is willing to accept in maintaining a prescribed ecosystem health condition. For example, a society with a purely anthropocentric worldview and with a utilitarian and consumerist value system is likely to negotiate a threshold of acceptable ecosystem alteration, or ecosystem health condition, near one extreme end of the health continuum, provided vital ecosystem services are still supplied, notwithstanding the severe impact human development could be exerting on the biophysical condition (Figure 3). A practical example is the case of the four dams in the lower Snake River system in the United States of America [38]. The quest for social-economic development without careful consideration of environmental consequences led to the construction of the four dams on the river, a river vitally important for the annual salmonid migration and for the indigenous tribal American population [38]. The construction of the dams, coupled with industrialisation of the catchment, led to pollution, to the obstruction of salmonid migration, and to the eventual severe depletion of the salmonid population, until they were designated endangered. This example illustrates some of the implications of upholding a strongly anthropocentric worldview, which we argue is insufficient for upholding the overall health of the SES.
Likewise, a society with a strongly non-anthropocentric ethical position, with a value system of 'absolute respect' for nature, may allow for an acceptable limit of alteration to ecosystem integrity near the other end of the health continuum, allowing only minimal development, while ensuring that ecosystems supply only basic human needs (Figure 3). A non-anthropocentric position may drastically undermine claims to human social-economic development. For example, the indigenous tribal Americans within the catchment of the Columbia-Snake River system led a relatively simple life, interwoven with and respectful of nature, and importantly, of the seasonal migration of salmon upriver from the Pacific Ocean [39], and did everything possible to resist western forms of development that undermine the ecological integrity upon which their lives and those of the salmon depend [40,41]. The SR ethical approach argues for a considered, balanced position, recognising the inherent complexity, interconnectedness, and interdependence of the social and ecological subsystems. This position, in our view, aligns with the idea of sustainable development, which recognises that social-economic development must not undermine either ecological functioning, or the interactions, relationships, and properties of the SES. From the SR perspective, therefore, it is not just about the components, but about the whole SES, and the resultant relationships, emergence, and dynamic interactions. In this regard, the limit/threshold of acceptable pressure on, use, and exploitation of aquatic ecosystems vis-à-vis an acceptable ecosystem health condition would vary, depending on stakeholders' ethical standpoints. Thus, what is important is the criteria by which these value judgements are brought to be balanced and reconciled.
Figure 3. Ecosystem health continuum, adapted from [5].
From the conceptual connection between ethics, values, and aquatic ecosystem health, it is clear that defining an acceptable ecosystem health condition and the threshold of alteration that triggers management actions is not straightforward, since values have both spatial and temporal dimensions [42,43]. For example, differences in attitude and values towards aquatic ecosystems between countries can be stumbling blocks to negotiating political agreements for the integrative management of transboundary river systems, particularly if riparian countries have different and potentially irreconcilable priorities [43].
A Need for Holistic and Integrative Ecosystem Health Assessment Tools and Approaches
We have tried to argue that the concept of aquatic ecosystem health is sufficiently robust to integrate the biophysical and social-economic dimensions of social-ecological systems. Integrative assessment tools, methods, and approaches are therefore needed to indicate the conditions, information flow, and interactive processes between and within the components of the SES. Building on the ecosystem service cascade model [44], which depicts the flow of ecosystem services and benefits from the ecological to the social subsystem of the SES, we present a conceptual framework for thinking about and developing such integrative tools and approaches, ones that integrate the ethical dimensions of aquatic ecosystem health research and management. The cascade model was developed to illustrate the relationship between ecosystem services and the benefits derived from them, and the biophysical processes and structure that support them. Using the cascade and the driver, pressure, state, impact, response (DPSIR) framework as bases, it is possible to develop integrative tools that draw on all elements/components of the cascade to provide an integrative health assessment of the SES [45]. However, as indicated by [44], the cascade does not sufficiently make explicit the values and ethical dimensions of the complex SES interactions, feedback, uncertainty, and emergence. We argue that without making such value and ethical issues explicit, achieving the health of the overall SES as envisaged by the SR approach would be difficult, if not impossible. This is because values, which are often hidden, play significant roles in the way we behave towards nature and other people, and ethical principles and criteria are thus necessary to interrogate values when they come into conflict [46]. We thus expand the cascade model, indicating the centrality of values and the imperative of ethical principles for the sustainable management of aquatic ecosystem health.
As already argued, societal value systems influence the drivers of pressure on aquatic ecosystems, which alter the state/conditions of those systems, producing impacts that can be positive or negative. Society responds, often through measures that enhance the positive impacts while reducing or minimising the effects of the negative ones. The framework presented in Figure 4 makes explicit the flows of negative and positive consequences of pressure on aquatic ecosystems. From an ethical perspective, it is critical to pay attention to both dimensions so as to make explicit the flow of benefits and costs to societal constituencies. We take the positive impacts of ecosystem response as ecosystem services [47] and the negative ones as ecosystem disservices. Making both positive and negative flows visible enables resource managers and policy makers to think about the intended and unintended consequences of their actions, and the ethical implications of having losers and winners within the broader SES. We argue that the concepts of benefits and costs do not only apply to the social components, but to the entire SES. For example, alteration in the state of the aquatic ecosystem may create favourable habitats for certain organisms (beneficiaries of the impact, e.g., the preponderance of non-biting midges at a sewage outlet in a stream), while eliminating suitable habitats for others (the losers).
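The idea of making winners and losers within the SES explicit can be illustrated with a toy ledger of (dis) service flows per constituency. All constituency names and magnitudes below are invented for illustration, not taken from any assessment:

```python
# Illustrative sketch (invented data): tabulating ecosystem service (+) and
# disservice (-) flows per constituency makes net winners and losers in the
# SES visible, including ecological constituencies, not only social ones.

flows = [
    # (constituency, item, kind, magnitude)
    ("downstream farmers", "irrigation water", "service", +8),
    ("urban dwellers",     "hydropower",       "service", +6),
    ("aquatic biota",      "habitat loss",     "disservice", -7),
    ("riparian fishers",   "depleted fishery", "disservice", -5),
]

def net_position(flows):
    """Sum benefits (+) and costs (-) per constituency."""
    ledger = {}
    for who, _item, _kind, value in flows:
        ledger[who] = ledger.get(who, 0) + value
    return ledger

ledger = net_position(flows)
winners = [w for w, v in ledger.items() if v > 0]
losers = [w for w, v in ledger.items() if v < 0]
print(winners)  # constituencies with net benefits
print(losers)   # constituencies bearing net costs
```

Even this crude tabulation surfaces the ethical question raised in the text: whether those bearing the costs are the same constituencies enjoying the benefits.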
While the benefits derived from ecosystem services enhance human well-being, costs on the other hand diminish well-being and ecological integrity (Figure 4). Enhanced and/or diminished well-being may flow into separate societal constituencies (represented by the solid arrows) or the same constituencies (represented by the broken lines between societal constituency and costs/benefit).
In some instances, the societal constituencies receiving benefits, costs, or both interact, whereas in other cases interaction is minimal, e.g., where forests in one region of the world provide a carbon sink that ameliorates global climate change effects, benefiting people in distant places, or where water resources are exploited, e.g., for hydropower, in one region to benefit people in other regions. Both enhanced and diminished well-being manifest themselves in different forms of capital, such as economic capital (e.g., enhanced/diminished income), physical capital (e.g., improved/impaired infrastructure), and social capital (e.g., an improved sense of identity and place, or the loss thereof). Within the context of the UN SDGs, our framework, which makes explicit the ecosystem (dis) services flows as well as the associated benefits and costs to societal constituencies, can serve as a tool for analysing the implications of policies and managerial actions, e.g., with regard to those who stand to gain or lose from a particular policy or managerial position.
On the right side of Figure 4 are values, institutions, management, and governance. Values to a large extent define institutional, management, and governance norms and practices as well as priorities. The directions of flow of the benefits and costs accrued from ecosystem services and disservices with regard to different societal constituencies are largely influenced by the interactions between values and institutional, management, and governance norms, practices, and priorities. That is, the nature, magnitude, and frequency of the pressure exerted on the biophysical component, and the direction of the flows of benefits or costs from the biophysical component to society, are not value-neutral, but are outcomes of value-laden choices and decisions made primarily by people; this is indicated in the framework in Figure 4 by the broken arrows connecting values, institutions, management, and governance to the rest of the model.
The framework thus raises a range of ethical questions: (i) under what condition(s) is a component of the ecosystem considered a (dis) service, and by and for whom; (ii) how are costs and benefits distributed at both spatial (local versus regional versus global) and temporal (present versus future generations) scales; (iii) how does the flow of ecosystem services/disservices impact ecological health and the systemic-relationality inherent in the SES; (iv) which value of the ecosystem is prioritised and why, and what are the implications for value trade-offs and the potential conflict arising from value inequity; and (v) how does exerted pressure influence the biophysical condition, and, given the imperative to balance use and protection, are such pressure and the resulting conditions acceptable. It is not our intention to argue these issues further here, but to raise them as matters worthy of consideration in decisions around the flow and management of ecosystem (dis) services.
The framework further indicates that pressure exerted on ecosystems is an outcome of the complex interaction between values, institutions, management, and governance. For example, the decision to construct a dam (pressure) in a river could be underpinned by a utilitarian value of food production (through irrigation) or hydropower, for which institutional, management, and governance contexts that are conducive for its implementation (the dam project) may be put in place.
In order to provide for holistic health of the SES, integration of all aspects of the framework, from pressure, altering biophysical condition, to the flow of ecosystem services and disservices, benefits and costs (to the ecological and social components) as well as the interacting effects of values, institutions, management, and governance on the SES, is needed. In developing such integrative tools and approaches for assessing ecosystem health, attention needs to be paid to describing the winners and losers (both ecological and social) in the SES so that the ethical implications can be highlighted and taken forward in policy and managerial matters. Further, in evaluating aquatic ecosystem health from the SR perspective, values need to be clarified and assessed, using a clear set of ethical principles [2,29], to ascertain how values influence the use, evaluation, measurement, and public perception of ecosystem services and the biophysical structures and processes that support them.
Within the SES, if such an integrative approach is to be followed, then multiple indicators need to be integrated. Indicators are used in ecology to describe the condition of, or change in the state of, an environmentally relevant phenomenon [47]. Thus, following the framework presented in Figure 4, indicators of ecosystem structure, function, and services/disservices flow would need to be integrated [48] with those indicating benefits/costs, well-being, institutional values and norms, and the management and governance context. When indicating costs and benefits, it is important to note the overarching influence of the SES context, as what constitutes a benefit in one context may become a cost in another [49]. It is critical to note that within the SES, most indicators interact in complex and non-linear ways, giving rise to complex matrices of interactions [47]. As argued by [50], such integration should take into account the spatial-temporal dynamics/scales at which each component of the cascade operates, while ensuring that the linkages between the components of the cascade are fully accounted for. [50] have also argued for integrative approaches, stressing that ecosystem services assessments should integrate changes in the drivers, pressures, states, and responses of the biophysical components, as ecosystem services are outcomes of the pressure exerted on the biophysical component of the SES. Overall, we posit that efforts in aquatic ecosystem health research be channelled toward integrative methods and approaches capable of providing a holistic view of ecosystem health within any SES context. To be successful in developing such integrative approaches and methods, different disciplines and knowledge systems would be required, and collaboration at the research-practice interface would be critical.
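A minimal sketch of such indicator integration is given below, under the strong simplifying assumption of weighted linear aggregation, which deliberately ignores the non-linear indicator interactions and scale effects noted above. All indicator names, scores, and weights are invented:

```python
# Toy integration of ecological and social indicators into one SES health
# score. Scores are normalised to 0-1; weights are illustrative. A real
# assessment would need to handle non-linear interactions and the
# spatial-temporal scales at which each indicator operates.

indicators = {
    # name: (normalised score 0-1, weight, subsystem)
    "taxonomic richness":  (0.6, 0.25, "ecological"),
    "water quality":       (0.4, 0.25, "ecological"),
    "access to water":     (0.7, 0.25, "social"),
    "livelihood security": (0.5, 0.25, "social"),
}

def composite_health(indicators):
    """Weighted mean over all indicators."""
    total_w = sum(w for _s, w, _sub in indicators.values())
    return sum(s * w for s, w, _sub in indicators.values()) / total_w

def subsystem_health(indicators, subsystem):
    """Weighted mean restricted to one subsystem (ecological or social)."""
    subset = {k: v for k, v in indicators.items() if v[2] == subsystem}
    return composite_health(subset)

print(round(composite_health(indicators), 3))               # overall SES score
print(round(subsystem_health(indicators, "ecological"), 3)) # ecological only
```

Reporting the subsystem scores alongside the composite keeps the ecological and social constituencies visible rather than averaging them away, in line with the equity argument above.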
Interestingly, in recognition of the complexities inherent in managing natural resources, the fields of transdisciplinarity (TD) and translational ecology (TE) have evolved to draw attention to the imperative for collaborative research.
Transdisciplinarity and Translational Ecology
Transdisciplinarity emerged in the academic literature in response to calls to do science with and for society in addressing inherently complex, potentially intractable, and wicked problems [51]. Lange et al. define TD as a "reflexive, integrative, method-driven scientific principle aiming at the solution or transition of societal problems and concurrently of related scientific problems by differentiating and integrating knowledge from various scientific and societal bodies of knowledge" [52]. It recognises the imperative for cooperative and collaborative research, while drawing from multiple knowledge systems in addressing complex societal problems such as the deteriorating aquatic ecosystems health. Transdisciplinarity explicitly calls for knowledge co-production, recognising the centrality of the wider society in producing knowledge that can effect change. In the context of an SR ethical approach to aquatic ecosystem health research and management, it implies regarding all knowledge forms and systems as equally important, while giving careful consideration to critical values of fairness, equity, and sustainability in building the TD team and undertaking adaptive cross-cutting activities aimed at addressing the deteriorating ecosystem health. We argue that drawing on TD research principles and working across disciplines, would enable ecologists and other researchers interested in aquatic ecosystem health research to move towards developing integrative approaches and methods for holistic assessment of ecosystem health, and taking forward complex ethical issues into policy and practice. Indeed, our call for TD is not the first with regard to ecosystem health research. Earlier, [4] had made a similar call.
Translational ecology (TE) has emerged in recognition of the need for ecologists to connect with the end-users of their research earlier in the process, bridging research and action (practice) [53]. TE explicitly calls on ecologists to engage with other disciplines, particularly the social sciences, and to show long-term commitment and cooperation in undertaking integrative research. TE is intentional with regard to seeking collaboration with other disciplines, and in acknowledging shared responsibility for delivering research products that can inform and facilitate effective decision-making in complex contexts of natural resource management and conservation. TE has thus emerged in ecology as an approach for ecologists to conduct socially relevant research that is sufficiently integrative and collaborative to transform societal problems and bring about solutions in complex SES. TD and TE have a number of features in common, such as an emphasis on societally relevant research, co-production/development of knowledge, cooperative/collaborative research and decision-making, and societally relevant research outcomes. We believe that the combination of TD and TE would enable sufficiently integrative research outcomes that can shape our understanding of how best to move towards achieving healthy ecosystems in SES contexts, and this is particularly true if ecologists are to play a leading role in research that contributes to the realisation of the UN sustainable development goals.
Transformative Communication
If the public is to relate to aquatic ecosystems in a more humane, responsible, and respectful way, then there needs to be effective and transformative communication of the value of 'healthy' ecosystems to society. By transformative we imply communication that effects behavioural change, and social and mutual learning, in ways that contribute to sustaining the health of the SES.
The continuing degradation of aquatic ecosystem conditions, despite investment in research, policy, and management institutions globally, could be attributed at least partly to the perceived 'disconnect' between the human and the ecological subsystems. Communication about the inherent linkages between societal and ecological systems needs to be strengthened in the public and policy domains. It needs to be clear to society that human well-being is explicitly linked to ecological health. Thus, the rationale for protecting ecosystem health becomes systemic, and needs to be underpinned by a systemic-relational ethic that views both the ecological and social systems as coupled.
Furthermore, in communicating the value of protecting aquatic ecosystem health, emphasis also needs to be placed on equity between different social constituencies at the catchment and sub-catchment levels. The variety of aquatic ecosystem services, their benefits to different social constituencies (e.g., poor/wealthy, rural/urban dwellers, etc.), and a shared understanding of the multiple value systems that influence how different constituencies value ecosystem services, need to be stressed in the public and policy domains [54]. Equally important is the appreciation and awareness of the distribution of costs and benefits associated with access to aquatic ecosystem services. Often, access to the benefits of aquatic ecosystems by one constituency can lead to costs/burdens carried by another, and therefore communication should address the equitable distribution of costs and benefits, and explicitly identify the sets of principles for reconciling and balancing values if and when they come into conflict. In this way, communication makes clear to society the decisions regarding access to aquatic ecosystem services and the unintended consequences of disservices that may follow. From an academic perspective, it also implies that the traditional mode of disseminating research information through journals is not sufficient, as such mediums can be regarded as exclusive and elitist. If communication is to effect transformative behaviour and attitudes in ways that contribute to sustaining SES health, then it has to achieve social and mutual learning, and the empowerment of all interested and affected parties.
Conclusions
Ecosystem health-while it is a human construct-needs to be seen within the context of the SES. The SES may be understood as consisting of two major components, i.e., the biophysical and the social-economic. The management of the interrelationship between components of the SES needs to be done in an integrated, holistic manner, which is sustaining to both components and their relationships.
We must take care not to equate values and ethics; ethics, in an important sense, is a meta-values exercise. Ethics is about criteria for the ways in which one relates values-which are not necessarily compatible in all contexts-to each other. Nor must we equate ethics, or a specific set of environmental ethics, with a list of values. Different environmental ethical approaches, with emphases on different central principles-whether anthropocentric, non-anthropocentric, or relational-would relate values to each other in potentially different ways, but here we have presented our argument based on the recently developed SR approach to environmental ethics.
We have argued that ecosystem health needs to be conceptualised and managed in terms of an approach to the ecosystem as an integrated unit, in which the health of the biophysical and the social-economic aspects are mutually sustaining and interdependent. In our understanding, this calls for a systemic-relationally oriented environmental ethics, in which we move towards locating the central value in the overall SES itself, as a set of components in interrelationship, rather than in any specific component, such as the anthropocentric or the non-anthropocentric component. This implies taking the potentially difficult step-certainly from a policy and administrative perspective-of decentring the human component, which has hitherto been prioritised; instead we need to redirect our focus to the SES as an integrated whole, to see it as the unit of worth towards which decision-making, and developmental and preserving action, is directed.
Synthetic populations of protoplanetary disks: Impact of magnetic fields and radiative transfer
Context. Protostellar disks are the product of angular momentum conservation during protostellar collapse. Understanding their formation is crucial because they are the birthplace of planets, and their formation is also tightly related to star formation. Unfortunately, the initial properties of Class 0 disks and their evolution are still poorly constrained both theoretically and observationally. Aims. We aim to better understand the mechanisms that set the statistics of disk properties as well as to study their formation in massive protostellar clumps. We also want to provide the community with synthetic disk populations to better interpret young disk observations. Methods. We used the ramses code to model star and disk formation in massive protostellar clumps with magnetohydrodynamics, including the effect of ambipolar diffusion and radiative transfer as well as stellar radiative feedback. Those simulations, resolved down to the astronomical unit scale, have allowed us to investigate the formation of disk populations. Results. Magnetic fields play a crucial role in disk formation. A weaker initial field leads to larger and more massive disks and weakens the stellar radiative feedback by increasing fragmentation. We find that ambipolar diffusion impacts disk and star formation and leads to very different disk magnetic properties. The stellar radiative feedback also has a strong influence, increasing the temperature and reducing fragmentation. Comparing our disk populations with observations reveals that our models with a mass-to-flux ratio of 10 seem to better reproduce observed disk sizes. This also sheds light on a tension between models and observations for the disk masses. Conclusions
Introduction
Protostellar disks, often referred to as protoplanetary disks, are formed through the conservation of angular momentum during the protostellar collapse. New observational evidence suggests that planets, or at least the gas giants, could form early during the evolution of those disks. The mass content of Class II-III disks indeed seems insufficient to explain observed exoplanetary systems (Manara et al. 2018; Tychoniec et al. 2020). In addition, the sub-structures of young < 1 Myr Class II (e.g., in HL Tau, ALMA Partnership et al. 2015) and even < 0.5 Myr Class I (Segura-Cox et al. 2020) disks, in particular rings and gaps, could be indications of the presence of already formed giant planets. There are, of course, other theories for the formation of those structures (see the recent review by Bae et al. 2022), but the hypothesis of the presence of planets in gaps has recently been strengthened by kinematic evidence (Pinte et al. 2018, 2019). In contrast to older disks, Class 0-I disks could still have enough material to form planets. Unfortunately, the properties of these young disks remain very poorly constrained. They are deeply embedded in a dense protostellar envelope, which dominates the mass of protostellar objects during the whole Class 0 phase, and they are often spatially unresolved at the wavelengths at which they can be observed (Maury et al. 2019; Sheehan et al. 2022).
From a theoretical perspective, one must resort to large-scale simulations that self-consistently form disk populations.
Significant efforts toward this challenging modelling have been made by various teams in the past. For instance, Küffmeier et al. (2017), and later Küffmeier et al. (2019), investigated the impact of accretion from large giant molecular cloud scales on the properties of disks, but without focusing on the formation of a full disk population. This was done for the first time by Bate (2018), who investigated a full disk population forming in a massive protostellar clump. Subsequently, Elsender & Bate (2021) investigated the impact of metallicity on disk population formation in very similar calculations, initially presented by Bate (2019). They mainly concluded that disk radii decrease with decreasing metallicity. However, neither study accounted for the impact of the magnetic field.
Magnetic fields are, however, ubiquitous in observations of Young Stellar Objects (YSOs, e.g., Girart et al. 2006; Rao et al. 2009; Maury et al. 2018). Observations suggest that they may play a key role in shaping some key features of the star formation process, such as the development of accretion flows, the disk sizes and masses, and the occurrence of multiple stellar systems (Maury et al. 2018; Galametz et al. 2020; Cabedo et al. 2023). On the theory side, their role has been extensively investigated in the ideal (Price & Bate 2007; Mellon & Li 2008; Hennebelle & Fromang 2008; Hennebelle & Teyssier 2008; Joos et al. 2012) and non-ideal (Duffin & Pudritz 2009; Dapp & Basu 2010; Machida et al. 2011; Li et al. 2011; Dapp et al. 2012; Li et al. 2014; Tomida et al. 2015; Tsukamoto et al. 2015; Marchand et al. 2016; Masson et al. 2016; Vaytet et al. 2018; Wurster & Bate 2019; Zhao et al. 2020; Hennebelle et al. 2020b; Marchand et al. 2020; Zhao et al. 2021; Mignon-Risse et al. 2021b,a) MHD frameworks for isolated collapse calculations of low- and high-mass cores. Magnetic fields have been proven critical in shaping the disk through the regulation of angular momentum and in the launching of protostellar outflows. The importance of the large-scale (clump-scale) magnetic field was also pointed out in the zoom-in simulations of Küffmeier et al. (2017) and Küffmeier et al. (2019). In the context of 50 M ⊙ clumps, Wurster et al. (2019) investigated the effect of all three non-ideal MHD effects on disk formation and concluded that they mostly impacted the small scales and the magnetic properties of the disks, but not their size and mass. So far, only Lebreuilly et al.
(2021) investigated disk formation in the MHD context with ambipolar diffusion for massive clump calculations while systematically resolving the disk scales, and concluded that the clump-scale magnetic field indeed plays a major role in setting the initial statistical conditions of the disks. Here we continue this work by expanding the parameter space of the simulation suite with an overall higher numerical resolution. In this paper and its companion paper, Lebreuilly et al. (2023a), we investigate in detail the impact of the initial clump conditions, i.e., the magnetic field strength, the treatment of the RT, the protostellar feedback (accretion luminosity, jets), and the clump mass. In this work, we present six models and investigate the impact of the magnetic field and the RT modelling on the initial conditions of protostellar disks. This article is organised as follows. In Sect. 2, we briefly recall our methods, which are similar to those of Lebreuilly et al. (2021). In Sect. 3, we present in detail our fiducial model. This model will be our reference for comparison in this series of papers. In Sect. 4, we investigate the impact of the magnetic field and RT treatment on the initial conditions of our disk populations and their evolution. In Sect. 5, we describe the main caveats and prospects of our study, and a first comparison with observations is then presented. Finally, we present our conclusions in Sect. 6.
Dynamical equations
To accurately describe the relevant physics in the context of star and disk formation, we solve the dynamical equations of self-gravitating, radiation, non-ideal magnetohydrodynamics, where ρ and v, E, E_r and B are the gas density and velocity, the total energy, the radiative energy and the magnetic field. We also define the thermal pressure P_th, the gravitational potential ϕ, the radiative pressure P_r, the Rosseland and Planck opacities κ_R and κ_P, the radiative flux limiter λ (Minerbo 1978), the temperature T, the total luminosity source term S_⋆, the ambipolar resistivity η_AD and the heating term Λ_AD due to ambipolar diffusion. Finally, we define the gravitational constant G, the radiation constant a_R and the speed of light c.
To solve these equations, we use the adaptive mesh refinement (AMR) code ramses (Teyssier 2002; Fromang et al. 2006) with RT in the flux-limited diffusion (FLD) approximation (Commerçon et al. 2011, 2014), non-ideal MHD, more particularly ambipolar diffusion (Masson et al. 2012), and sink particles (Bleuler & Teyssier 2014). More details about the code and the modules used in this study can be found in the works mentioned above.
Initial conditions
Our clumps are initially uniform spheres of 500 − 1000 M ⊙, of temperature T_0 = 10 K and with an initial radius set by the thermal-to-gravitational energy ratio α, such that α = 5 k_B T_0 R_0 / (2 G M_0 μ_g m_H), with k_B being the Boltzmann constant, m_H the hydrogen atom mass and μ_g = 2.31 the mean molecular weight. Owing to our choice for the values of α, all our clumps have the same initial radius of ∼ 0.38 pc. The box size L_box is chosen to be four times larger, i.e., L_box = 1.53 pc. Outside of the clump the density is divided by 100. We set an initial turbulent velocity field at Mach 7 with a Kolmogorov power spectrum of k^−11/3 and random phases to mimic molecular cloud turbulence. We point out that, as explained in Lee & Hennebelle (2018), the initial choice of spectrum for the turbulence has little impact on the results because the initial conditions are quickly forgotten as the collapse proceeds. Modelling the turbulence more realistically would require starting the simulation from even larger (kpc) scales, which is well beyond the scope of the present work.
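As a rough consistency check of the quoted clump radius, the sketch below inverts the standard uniform-sphere definition of the thermal-to-gravitational energy ratio, α = 5 k_B T_0 R_0 / (2 G M_0 μ_g m_H), for a 1000 M ⊙, 10 K clump of radius 0.38 pc. The formula is the conventional definition for a uniform isothermal sphere, assumed here rather than taken from the paper:

```python
# Sketch (assumed uniform-sphere definition, SI units): invert
# alpha = 5 k_B T0 R0 / (2 G M0 mu_g m_H) for the stated clump.
k_B  = 1.380649e-23   # J/K
G    = 6.674e-11      # m^3 kg^-1 s^-2
m_H  = 1.6735e-27     # kg, hydrogen atom mass
PC   = 3.086e16       # m
MSUN = 1.989e30       # kg

def alpha_thermal(m0, t0, r0, mu_g=2.31):
    """Thermal-to-gravitational energy ratio of a uniform isothermal sphere."""
    return 5.0 * k_B * t0 * r0 / (2.0 * G * m0 * mu_g * m_H)

alpha = alpha_thermal(1000 * MSUN, 10.0, 0.38 * PC)
print(f"alpha ~ {alpha:.4f}")  # a small alpha: the clump is strongly bound
```

The resulting α of order 10^−2 confirms that these clumps are initially highly gravitationally unstable.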
The magnetic field strength is initialised according to the mass-to-flux over critical-mass-to-flux ratio μ, such that μ = (M_0/Φ)/(M/Φ)_c, with (M/Φ)_c = 0.53/(3π) (5/G)^1/2 (Mouschovias & Spitzer 1976).
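A hypothetical sketch of this initialisation, assuming the Mouschovias & Spitzer (1976) critical value (M/Φ)_c = 0.53/(3π) √(5/G) and a uniform field threading the clump cross-section (the inversion B = M / (μ (M/Φ)_c π R²) is an assumption about the setup, not taken from the paper):

```python
import math

# CGS units throughout (Gauss for the magnetic field).
G_CGS = 6.674e-8     # cm^3 g^-1 s^-2
MSUN  = 1.989e33     # g
PC    = 3.086e18     # cm

def b_init(mu, m_g, r_cm):
    """Uniform field strength for a target mass-to-flux ratio mu (assumed setup)."""
    m_phi_crit = 0.53 / (3.0 * math.pi) * math.sqrt(5.0 / G_CGS)
    return m_g / (mu * m_phi_crit * math.pi * r_cm ** 2)  # Gauss

b0 = b_init(10, 1000 * MSUN, 0.38 * PC)
print(f"B0 ~ {b0 * 1e6:.0f} microGauss")
```

For μ = 10 this gives a field of order 100 μG, a plausible clump-scale value.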
Sink particles
Sink particles, following the implementation of Bleuler & Teyssier (2014), are employed to mimic the behaviour of stars in our models. They are formed when the local density reaches the density threshold n_thre = 10^13 cm^−3. This value is chosen in accordance with the analytical estimate of Hennebelle et al. (2020b). Once a sink is formed, it automatically accretes, at each timestep, the material with a density above the threshold and within the sink accretion radius of 4∆x.
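The accretion step described above can be sketched as follows. This is a minimal illustration, not the ramses recipe: the names are assumptions, and for simplicity all mass in qualifying cells is accreted, whereas the actual implementation accretes only a fraction of it.

```python
import numpy as np

N_THRE = 1e13  # cm^-3, density threshold for sink formation/accretion

def accrete(sink_pos, cell_pos, cell_n, cell_mass, dx):
    """Select cells denser than N_THRE within 4*dx of the sink and
    return the mass they would transfer plus the selection mask."""
    r = np.linalg.norm(cell_pos - sink_pos, axis=1)
    mask = (cell_n > N_THRE) & (r < 4.0 * dx)
    return cell_mass[mask].sum(), mask
```

Calling this at each timestep for every sink mimics the threshold-plus-radius criterion of the text.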
Radiative transfer modelling
We examine two possible ways of modelling the RT. For all models except one (NMHD-BARO-M500), we include the RT in the FLD approximation, using the solver of Commerçon et al. (2011, 2014).
In this approach we consider sinks/stars as sources of luminosity, defined as the sum of the intrinsic and accretion luminosity. The former is computed using the evolutionary tracks of Kuiper & Yorke (2013), while the latter is defined as L_acc = f_acc G M_sink Ṁ_sink / R_⋆, where R_⋆ is the stellar radius, also extracted from the tracks of Kuiper & Yorke (2013), M_sink and Ṁ_sink are the sink mass and mass accretion rate, and 0 < f_acc < 1 is a dimensionless coefficient. f_acc corresponds to the fraction of gravitational energy converted into radiation; in this work, we explore two values, f_acc = 0.1 and f_acc = 0.5. We refer to Lebreuilly et al. (2021) for an explanation of how the luminosity source terms are implemented in the code.
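The accretion luminosity can be evaluated numerically, assuming the standard form L_acc = f_acc G M_sink Ṁ_sink / R_⋆. The example sink mass, accretion rate and stellar radius below are illustrative assumptions, not values from the paper:

```python
# SI constants; result converted to solar luminosities.
G    = 6.674e-11   # m^3 kg^-1 s^-2
MSUN = 1.989e30    # kg
RSUN = 6.957e8     # m
LSUN = 3.828e26    # W
YR   = 3.156e7     # s

def l_acc(f_acc, m_sink_msun, mdot_msun_yr, r_star_rsun):
    """Accretion luminosity in solar luminosities."""
    mdot = mdot_msun_yr * MSUN / YR
    return f_acc * G * m_sink_msun * MSUN * mdot / (r_star_rsun * RSUN) / LSUN

# e.g. a 0.5 Msun sink accreting 1e-5 Msun/yr onto a 2 Rsun star, f_acc = 0.1
val = l_acc(0.1, 0.5, 1e-5, 2.0)
print(f"L_acc ~ {val:.1f} Lsun")
```

Even with f_acc = 0.1, a typical protostellar accretion rate yields several solar luminosities, which is why this term dominates the heating of the inner disk.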
In the second approach (run NMHD-BARO-M500) we do not use the FLD approximation but instead assume a barotropic equation of state (EOS) to compute the temperature. This is of course an over-simplification, but it is interesting for two main reasons. First, barotropic EOS models have lower temperatures than FLD calculations (Commerçon et al. 2010), so they allow us to investigate the effect of temperature on disk formation and evolution. Second, because they are simpler than a full radiative transfer modelling, such EOS are still widely used by the community.
Refinement criterion
We use the AMR grid of ramses, which allows us to locally refine the grid according to the local Jeans length. More specifically, we use a modified Jeans length such that λ̃_J = λ_J if n < 10^9 cm^−3, and λ̃_J = min(λ_J, λ_J(T_iso)) otherwise (6). This modification is convenient for studying disk formation, especially in the presence of feedback, since it is independent of the temperature at T > T_iso ≡ 300 K in the dense and heated regions. In all our models, we impose at least 10 points per modified Jeans length to prevent artificial fragmentation (Truelove et al. 1997). The cell size is computed as a function of the refinement level ℓ as ∆x = L_box/2^ℓ. Our resolution always ranges from ∼ 2460 au (∼ 0.012 pc) in the coarsest cells of the simulation down to ∼ 1.2 au (∼ 5.8 × 10^−6 pc) in the finest cells.
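The quoted resolution range can be checked from the box size, assuming the usual AMR convention ∆x = L_box/2^ℓ. The specific levels 7 and 18 below are inferred from the quoted cell sizes, not stated in the text:

```python
AU_PER_PC = 206264.806
L_BOX_AU = 1.53 * AU_PER_PC  # box size ~ 3.16e5 au

def cell_size_au(level):
    """Cell size dx = L_box / 2^level (assumed AMR convention)."""
    return L_BOX_AU / 2 ** level

coarse = cell_size_au(7)   # ~2465 au, matching the quoted ~2460 au
fine = cell_size_au(18)    # ~1.2 au, matching the quoted finest cells
print(f"coarse ~ {coarse:.0f} au, fine ~ {fine:.2f} au")
```

Eleven levels of refinement thus span the three-and-a-half decades between clump and disk scales.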
Disk selection
The disks analyzed in the present study were selected using the same method as in Lebreuilly et al. (2021), but we slightly modified the pre-selection criteria of Joos et al. (2012). As a reminder, the Joos criteria are:
- n > 10^9 cm^−3, where n is the number density;
- v_ϕ > 2v_r and v_ϕ > 2v_z, where v_r, v_z and v_ϕ are the radial, vertical and azimuthal velocities, the rotation axis being the direction of the angular momentum in the sink vicinity;
- 1/2 ρ v_ϕ^2 > 2P_th, where ρ is the gas density and P_th is the thermal pressure.
In this work, we retain only the first two criteria. We have found that the last (energy) criterion arbitrarily removes the inner hot regions of the disks. Removing this criterion also allows a better comparison of models with different accretion luminosity efficiencies (and hence different temperatures).
Once all the disks of a model are selected, we analyze several of their internal properties (radius, mass, temperature, etc.). For any quantity A, we compute the volume average as ⟨A⟩ = Σ_j A_j ∆x_j^3 / Σ_j ∆x_j^3, where j runs over all the disk cells of size ∆x_j. The treatment of the temperature is slightly different, as we select only mid-plane cells to compute its averaged value. This allows a better estimate of the temperature in the hot regions of the disk. In the remainder of the manuscript, we drop the ⟨⟩ notation for averages as no confusion with the local value is possible. In addition, we estimate the disk radius as the median of the maximal extent in 50 equal-size azimuthal slices and the disk mass as the sum of the masses of all disk cells.
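The volume-weighted average and the azimuthal-slice radius estimate described above can be sketched as follows. This is an illustrative re-implementation under the stated definitions, not the analysis pipeline itself:

```python
import numpy as np

def volume_average(values, dx):
    """Volume-weighted average over disk cells, weights dx_j^3."""
    w = np.asarray(dx, dtype=float) ** 3
    return np.sum(np.asarray(values, dtype=float) * w) / np.sum(w)

def disk_radius(x, y, n_slices=50):
    """Median of the maximal cylindrical extent in equal azimuthal slices."""
    r = np.hypot(x, y)
    phi = np.mod(np.arctan2(y, x), 2.0 * np.pi)
    idx = np.minimum((phi / (2.0 * np.pi) * n_slices).astype(int), n_slices - 1)
    extents = [r[idx == k].max() for k in range(n_slices) if np.any(idx == k)]
    return float(np.median(extents))
```

Taking the median over slices makes the radius estimate robust against a single elongated spiral arm or streamer dominating the maximal extent.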
List of models
Our full list of models computed for this work is presented in Tab. 1. From left to right the table shows the model name, the initial clump mass, the thermal-to-gravitational energy ratio α, the
Presentation of the fiducial model
Our reference model NMHD-F01 is a 1000 M ⊙ clump with µ = 10, f_acc = 0.1 and a Mach number of 7. This run is essentially the same as the fiducial calculation of Lebreuilly et al. (2021), but with our new, improved refinement criterion.
Large scales and star formation
We begin the description of our fiducial run NMHD-F01 by briefly presenting its evolution at the global scale. The panels (a), (b) and (c) of Fig. 1 show column density snapshots at various evolutionary stages for this model. The sinks are represented by the star markers.
As expected, the gravoturbulent motions lead to the formation of a network of highly non-homogeneous filamentary structures, along which star formation mainly occurs (as in other similar studies, e.g., Lee & Hennebelle 2018; Bate 2018; Lebreuilly et al. 2021; Grudić et al. 2021; Lane et al. 2022). It is also very clear that the stars do not form in isolation here. Star formation is in fact concentrated around one compact hub in the bottom-left part of the clump. This effect, most certainly a consequence of the global collapse, which is off-centered because of the turbulence, was also observed in the models of Lebreuilly et al. (2021) and Hennebelle et al. (2022). The global collapse of the clump is quite noticeable and non-isotropic, as expected in the presence of turbulence, which explains the presence of a main star-forming filament. This filament, connected with the previously mentioned hub, is the second most active site of star formation in the clump.
We now focus on the main hub evolution. The panels (d), (e) and (f) of Fig. 1 show the column density of NMHD-F01 at the same SFE as the top panels, but centered around the hub and at a smaller scale (12.5% of the box). Even at those scales, stars clearly form in filamentary structures that are connected to the larger-scale network seen in the top panels of Fig. 1. These filaments are similar to the bridge structures observed in Küffmeier et al. (2019). They connect sinks with their neighbours and represent a shared reservoir of mass. They are relatively quiescent and typically survive a few ≃ 10 kyr. Quite clearly, a compact and highly interacting protostellar cluster is formed at the center of the hub. We point out that, although sinks can get quite close to each other in this hub, we chose never to merge the sinks in our models, since we do not resolve the stellar radii scales. This hub is a favoured place to form massive stars in the clump. In fact, the most massive stars formed in the model are part of this cluster.
Between SFE = 0.015 (t = 97.2 kyr) and SFE = 0.15 (t = 116.6 kyr), we observe a clear thickening of the filaments due to the radiative feedback of the stars, which heats up the gas and therefore increases its thermal support over time. This increase of thermal support significantly reduces fragmentation and sink formation, which essentially halts after a very efficient early phase (Hennebelle et al. 2022). Over the course of the simulation, integrated up to SFE = 0.15, about 90 sinks are formed, half of which are either single stars or primaries (according to the simplified definition of Lebreuilly et al. 2021).
We point out that this model has formed fewer stars than its lower-resolution counterpart (the nmhd model of Lebreuilly et al. 2021). Very interestingly, the overall higher resolution of NMHD-F01 allows the formation of one massive ∼ 15 M ⊙ star over the course of the simulation. This is more massive by a factor of a few than the stars obtained in lower-resolution runs (Hennebelle et al. 2022). We stress a clear correlation between sink masses and the mass of their surrounding envelope at the 1000 au scale, which indicates that the more massive stars form in the more massive environments (see also Klessen & Burkert 2000; Colman & Teyssier 2020).
For more extended dedicated descriptions (temperature, magnetic field and stellar mass spectrum) of the clump scales and star formation in very similar calculations, we refer the reader to Hennebelle et al. (2022) and references therein.
Disks and small scales
In the following, we describe further the formation of structures, their properties and their evolution at disk scales.
There is a clear variety of disks and a wide diversity of commonly observed small-scale features in the NMHD-F01 model and, more generally, in all of our models. We describe here the typical appearance and structures of our disks. As support for that description we show, in Fig. 2, edge-on or mid-plane density slices of 12 of our fiducial model disks at various times.
-Compact disks: first of all, we observe many compact disks (panels a, b, c, d, f, i, j, k and l). They in fact represent the majority of the disks in the model, as we show later in this section (half of the disks are smaller than ∼ 28 au at birth). As explained in Lebreuilly et al. (2021), they are a clear consequence of the regulation of the angular momentum by the magnetic braking during the protostellar collapse.
It is worth mentioning that disks are indeed expected, from observations, to be compact at the Class 0 and early Class I stages (Maury et al. 2019). The frequent occurrence of these < 50 au disks is compatible with the self-regulated scenario of Hennebelle et al. (2016), in which the interplay between magnetic braking and ambipolar diffusion mostly leads to the formation of compact disks.
-Sub-structures: spirals/arcs (panels g, h, k) are also often observed, particularly in the presence of multiple systems and/or when the disk is gravitationally unstable and fragmenting. The latter effect is however restricted to the most massive and young disks (while they are still cold), as it is quite efficiently suppressed by the stellar feedback. Noticeably, ring structures are not present in our models: often attributed to planet-disk interactions, they could in principle form from MHD instabilities in the disks (see the recent review by Lesur et al. 2022). The fact that we do not observe them could come from two reasons: either our resolution is still insufficient for them to occur in our disks, or the early conditions in the disks (hot disks, with a massive turbulent envelope) are not favourable for rings to form.
Fig. 2: collection of disks from run NMHD-F01; edge-on or mid-plane density slices. In addition to the density, for each disk we display the sink index, the time of the corresponding snapshot, the mass of the sink and of the disk, as well as the disk radius. Some disks, displayed at different times, can appear in several panels.
In general the disk sub-structures are quite faint unless they originate from multiplicity in the disk (e.g., panel h). This is most likely a consequence of the thermal support due to radiation, which stabilises the disk structure.
-Magnetised flows: a common consequence of the magnetic field interplay with the gas at the disk scale is the triggering of interchange instabilities in some cases (as revealed by the prominent loop seen in panel c). This instability, which can transport momentum away from the disks (Krasnopolsky et al. 2012), was also observed in the rezoomed models of Küffmeier et al. (2017) in ideal MHD. Our study confirms that they are not always suppressed by the diffusive effect of ambipolar diffusion. Quite noticeably, and as was also noted in Lebreuilly et al. (2021), magneto-centrifugal outflows and jets are however absent from the models. This is most likely due to a suppression by the turbulence for the former (in accordance with Mignon-Risse et al. 2021a) and a lack of resolution in the inner regions of the disks for the latter. In Paper II, we will investigate the impact of protostellar jets on the disk properties using the sub-grid modelling of Verliat et al. (2022).
-Non-axisymmetric envelopes: the protostellar envelopes around the disks are still massive by the end of the computation. Generally speaking, they are very diverse in structure and never spherically symmetric, as already pointed out by Küffmeier et al. (2017). From as far as a few ∼ 1000 au down to the disk scales, accretion streamers are very common around disks in all the simulations. These channels for high-density material accretion are typically (but not exclusively) connected to the disk mid-plane (e.g., panels a and b).
-Flybys: close interactions between stars (not orbiting each other) or between a star and dense clumpy gas are common during the clump evolution (Cuello et al. 2022, and references therein). Close flybys of two (or more) disks (panels i, j and k) happen several times in the model. In our simulations, they quite systematically lead to disk mergers, probably through the bridge structures (Küffmeier et al. 2019).
Flybys have been shown to be able to truncate disks and trigger spiral formation in idealized calculations (Cuello et al. 2019). It is however difficult to establish their role in complex clump simulations, which include many potential mechanisms for generating structures in the disks.
-Disks in columns: a very peculiar structure, with several occurrences at the early stages of the calculations only (in all the models), is the formation of several disks in a single filament/column (panel l). It is the consequence of the fragmentation of the clump at the filament scale. This is only possible when the filaments are still cold. Indeed, this behaviour is not observed later, when the stellar feedback heats up the gas and the thermal support precludes further fragmentation. The very short timescale and early disappearance of these structures (in less than 10 kyr after the first sink is formed) probably explain why they have not been observed.
Disk populations
In the following, we discuss the properties of the disk populations for all of our models. They are extracted for each model as explained in Sect. 2.6.
Fiducial model
The disk population of the NMHD-F01 model is described in detail in this section.
In Fig. 3 we show the cumulative distribution function (CDF) of several key disk properties for run NMHD-F01 at the output closest to their birth time, but also 10 and 20 kyr after. We show the CDF of the radius (a), the mass (b), the ratio between the disk mass and the stellar mass (hereafter disk-to-stellar mass ratio, panel c), the mid-plane temperature (d), the magnetic field (e) and, finally, the plasma beta β ≡ 8πP_th/|B|^2 (i.e., the ratio between the thermal pressure and the magnetic pressure, panel f). Before describing those quantities individually, we point out that the disk populations do not vary strongly over time in the statistical sense, except during the initial 10 kyr (hereafter the disk build-up phase). This does not mean, however, that disks are not individually evolving. As shown above, they are indeed non-linearly affected by interactions with the clump and other disks. In addition to the histograms, we show in Tab. 2 the mean, median (med.) and standard deviation (Stdev.) of these disk properties for NMHD-F01 at birth time and at ages of 10 and 20 kyr and more.
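The plasma beta defined above, β ≡ 8πP_th/|B|^2, can be evaluated for representative disk conditions. The density, temperature and field strength below are illustrative assumptions chosen within the ranges quoted in this section:

```python
import math

K_B = 1.380649e-16  # erg/K (CGS)

def plasma_beta(n_cm3, t_k, b_gauss):
    """beta = 8*pi*P_th / B^2 with an ideal-gas thermal pressure (CGS)."""
    p_th = n_cm3 * K_B * t_k
    return 8.0 * math.pi * p_th / b_gauss ** 2

# e.g. n = 1e12 cm^-3, T = 100 K, B = 0.1 G (illustrative values)
beta = plasma_beta(1e12, 100.0, 0.1)
print(f"beta ~ {beta:.0f}")
```

Such values land comfortably in the thermally dominated regime 1 < β < 100 reported for the non-ideal models.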
It is worth mentioning that about ∼ 25 disks are steadily detected in our fiducial model. This number, although lower than the findings of Lebreuilly et al. (2021), is consistent with the reduced fragmentation obtained with our improved refinement criterion. The overall disk-to-star number ratio is higher than in this previous study, as we now have ∼ 70% of the systems hosting a disk.
Sizes and masses
Size We first focus on the disk radii. This quantity is very interesting because it is perhaps the most reliable observable at all evolutionary stages. We can see, in the radius CDF, that the disks of NMHD-F01 are typically compact, half of them being smaller than 30 au. This is in good agreement with the observations (Maury et al. 2019; Sheehan et al. 2022) and is slightly lower than what we found in Lebreuilly et al. (2021). This was however expected, as we had speculated that a higher resolution was needed to model the smaller disks. An interesting aspect of the evolution of the disk radius is that the radius of the smallest disk in the sample shifts from less than 10 au at birth to about 25 au for older disks. At the same time, the median value shifts from ∼ 28 au to ∼ 47 au. However, from 10 to 20 kyr, the distribution does not evolve significantly. To summarize, there is an initial build-up phase in the first 10 kyr of the disks' lifetime, during which they evolve substantially from compact to more extended disks as they (and their host star) accrete material from the envelope (Hennebelle et al. 2020b). Since this timescale is short, it would unfortunately be almost impossible, for statistical reasons, to observe these disks. After this phase, the disk size does not change much with time, except in the case of external perturbations (e.g., flybys/mergers) or when the system is a multiple.
Mass It is of great importance to quantify accurately the disk mass, since the mass content of the disks gives valuable information about the budget that is available for planet formation. A useful quantity to keep in mind is the Minimum Mass Solar Nebula (MMSN, Hayashi 1981). This mass, of the order of 0.01 M ⊙ and revised downward in more recent studies (e.g., Desch 2007), gives the minimal content needed to form solar-like systems. In addition, the disk-to-stellar mass ratio provides an insight into the importance of self-gravity in the disk dynamics. Similarly to the disk radius distribution, there is a clear evolution of the disk mass and disk-to-stellar mass distributions during the build-up of the disks, i.e., over ∼ 10 kyr. During the early stages of their evolution, a significant fraction of the mass in the system still belongs to the disk component, which is comparable to, if not larger than, the mass of the stellar component. This is the stage during which the most massive disks can be gravitationally unstable, which could have consequences for early planet formation. After the bulk of the disk mass has been accreted by the protostar, i.e., after a few kyr, the disk typically represents between 1 and 10% of the system mass. After this main build-up event the disk masses are still typically larger than their initial value, as the disk gets new material from the envelope. At this stage, half of the disks have masses between 0.01 M ⊙ and slightly less than 0.1 M ⊙, and the other half can reach masses up to ∼ 0.3 M ⊙, which is still more than enough for planet formation according to the MMSN criterion. This criterion value is indeed close to, if not below, the low disk-mass limit of NMHD-F01 after the build-up phase; the mass of the disks in NMHD-F01 is thus most likely sufficient to form planetary systems similar to the solar system. Noteworthy, the disk masses that we report are in good agreement with those of the hydrodynamical model of Bate (2018). Noticeably, the disk mass quite clearly correlates
with the disk radius. This was of course expected, as large volumes with the same typical density contain more mass. In Fig. 4, we show this correlation for the NMHD-F01 model. Each disk is displayed every 1 kyr and the various markers represent the different evolutionary stages of the disks (from birth up to 40 kyr for the older disks). The correlation between disk mass and disk radius lies typically in between ∝ R_disk (plain line) and ∝ R_disk^2 (dotted line), and closer to the latter, which is expected for a perfectly symmetric disk. Figure 5 shows information analogous to Fig. 4 for the disk-to-stellar mass ratio vs the stellar mass. A clear correlation between the disk-to-stellar mass ratio and the sink mass is observed. It is slightly steeper than, albeit close to, an inverse linear relation. We point out that this correlation also means that the disk mass is only weakly dependent on the stellar mass. In addition, as can be seen, the disks are more massive than the stellar component almost exclusively in the presence of low-mass (below 0.1 M ⊙) and young (< 10 kyr) systems. This is at odds with the previous study of Bate (2018), who found a linear relationship between the disk mass and the stellar mass. We point out that, as discussed extensively in Hennebelle et al. (2020b) (see also Sect. 5.3), the disk mass depends on the recipe used for the sink particles. This might explain why we do not find the same correlation as Bate (2018).
Temperatures
We now turn our attention to the mid-plane temperature distribution. The disks of NMHD-F01 are typically warm, with a median temperature of about ∼ 100 K during the build-up phase and about ∼ 300 K at later stages. As shown in Hennebelle et al. (2020a) and later in Lee et al. (2021a), the temperature in the vicinity of a star can be controlled by its luminosity when the latter is sufficiently strong. In this context, we expect the disk temperature to scale as ∝ L_acc^{1/4}. Fig. 6 shows the same information as Fig. 4, but for the correlation between the disk mid-plane temperature and the sink accretion luminosity. Those two quantities clearly have a correlation that is close to T ∝ L_acc^{1/4} (solid black line). We thus conclude that the accretion luminosity is indeed the dominant factor controlling the disk temperature in the model. More generally, we have confirmed this behavior for all the models, except of course NMHD-BARO-M500, for which the temperature is prescribed by the barotropic EOS.
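To make the quarter-power scaling concrete, the sketch below evaluates the standard accretion-luminosity formula L_acc = f_acc G M* Ṁ / R* and the implied temperature ratio between two luminosities. The numerical inputs (stellar mass, accretion rate, accretion radius) are hypothetical placeholders meant only to illustrate the weak L_acc^{1/4} dependence, not values taken from the simulations.

```python
# Illustrative only: standard accretion-luminosity formula in SI units.
G, M_SUN, R_SUN, L_SUN, YR = 6.674e-11, 1.989e30, 6.957e8, 3.828e26, 3.156e7

def l_acc(f_acc, m_star_msun, mdot_msun_yr, r_star_rsun):
    """Accretion luminosity L_acc = f_acc * G * M * Mdot / R, in L_sun."""
    return (f_acc * G * (m_star_msun * M_SUN)
            * (mdot_msun_yr * M_SUN / YR) / (r_star_rsun * R_SUN)) / L_SUN

# Hypothetical protostar: 0.5 M_sun, 1e-5 M_sun/yr, 2 R_sun, f_acc = 0.1
l1 = l_acc(0.1, 0.5, 1e-5, 2.0)      # a few L_sun
l2 = 100.0 * l1                      # a luminosity 100x higher...
print(l1, (l2 / l1) ** 0.25)         # ...only heats the disk by ~3.2x
```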
It is worth mentioning that, despite the weak dependence of the temperature on the accretion luminosity (T ∝ L_acc^{1/4}), this still leads to a broad distribution of disk mid-plane temperatures (ranging from approximately 60 K to about 1000 K), since the accretion luminosity varies over four orders of magnitude. We have verified that the higher accretion rates, and hence accretion luminosities, correspond to the most massive stars of the system. As a consequence, the hotter disks are also those around the more massive stars in the model.
Magnetic fields
Finally, we bring our attention to the magnetic field strength and topology in the disks. First of all, half of the disks have a magnetic field below ∼ 0.03 G during the build-up phase and below ∼ 0.1 G afterwards. We observe a slight increase of the magnetic field strength during the build-up phase, which is probably due to an amplification by the still significant infalling motions that bend the field lines. The distribution of the magnetic field does not vary significantly afterwards. We also point out that none of the disks has a magnetic field larger than ∼ 0.3 G. These are the two main consequences of ambipolar diffusion. The imperfect coupling between the neutrals and the ions indeed leads to a diffusion of the magnetic field at high density, where it is not dragged anymore by the gas motions (see Masson et al. 2016; Mignon-Risse et al. 2021b,a; Commerçon et al. 2022, for similar observations in low- and high-mass isolated collapse calculations). A consequence of the low disk magnetisation is the negligible role played by the magnetic pressure inside the disk. The thermal pressure, enhanced by the stellar feedback from the accretion luminosity, is the main source of support (besides rotation) in the disk. We find that the magnetic pressure is typically about one order of magnitude lower than the thermal pressure and that a majority of disks have 1 < β < 100. As we will see later, this is not the case for the ideal MHD model. It is noteworthy that, as for the magnetic field distribution, the distribution of β does not evolve much over time except during the build-up phase.
In Fig. 7, which is similar to Fig. 4 but shows the disks' vertical magnetic field vs their radius, we clearly see an inverse correlation between the disk size and the vertical magnetic field strength, which is very close to the ∝ |B_z|^{−4/9} scaling (solid black line) predicted by the self-regulated scenario of Hennebelle et al. (2016). Despite being decoupled from the gas at the disk scales, the magnetic field does influence the disk size via the magnetic braking, and perhaps via interchange instabilities outside of the disk, i.e., in the envelope.
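As an illustration of what this scaling implies, the following sketch (ours, not from the paper's analysis) evaluates the relative disk size predicted by R_disk ∝ |B_z|^{−4/9} when the field strength is reduced by a factor of five, as in the weaker-field run discussed later:

```python
def size_ratio(field_ratio, exponent=-4.0 / 9.0):
    """Relative disk radius predicted by R_disk ∝ |B_z|^exponent."""
    return field_ratio ** exponent

# A field five times weaker gives disks roughly twice as large
print(size_ratio(1.0 / 5.0))   # ≈ 2.04
```

The result is close to the factor of ∼ 2 obtained with the inverse square-root scaling, since −4/9 ≈ −0.44 is near −1/2.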
Impact of the magnetic field
We have computed two additional models with a different magnetic field treatment and strength. The first run, IMHD-F01, is the same as NMHD-F01 but evolves the magnetic field in the ideal MHD framework. The second run, NMHD-F01-mu50, is the same as NMHD-F01 but with a mass-to-flux ratio µ = 50, hence an initial magnetic field five times lower than the fiducial value.
In Fig. 8, we show the evolution of the disk quantities as a function of the SFE for NMHD-F01, IMHD-F01 and NMHD-F01-mu50. We show the disk radius (panel a), mass (panel b), mid-plane temperature (panel c), magnetic field strength (panel d), plasma beta (panel e) and the number of formed objects (stars, primaries and disks; panel f). For panels a to e, the dotted lines represent the median of the distribution and the colored surfaces represent the first and third quartiles of the distribution. Before describing each panel in detail, we note that the disk properties are still relatively steady with time/SFE for all three models (except during the build-up phase).
Fragmentation: Let us first focus on the number of objects, i.e., panel (f) of Fig. 8. Quite clearly, the magnetic field actively stabilises the cloud against fragmentation in NMHD-F01. Indeed, NMHD-F01-mu50, which has an initially weaker magnetic field, is able to form more sinks (about 140, including about 60 primaries). Consequently, more disks are also formed in this run. Interestingly, almost all the systems host a disk in this model, contrary to NMHD-F01 for which the fraction is closer to 75%. This is consistent with what was observed in Lebreuilly et al. (2021), where we have shown that the model without magnetic field, i.e., with an infinite mass-to-flux ratio, forms more stars and more disks and has a higher fraction of disk-hosting stars. It is worth noting that star formation happens more homogeneously in the NMHD-F01-mu50 clump, which is visibly more fragmented than that of NMHD-F01. This indicates that fragmentation is suppressed at a relatively large (larger than the disk) scale in NMHD-F01. This can be seen in Fig. 10, which shows the column density and the stars of NMHD-F01-mu50 at SFE = 0.15 and can be directly compared with panel (c) of Fig. 1. Similar observations were shown in Hennebelle et al. (2022).
If we now focus on the model with ideal MHD, IMHD-F01, we see that it forms even fewer stars than NMHD-F01. This is a consequence of the flux-freezing approximation, which leads to an overall stronger magnetic pressure support in the ideal MHD calculation. We notice that the fraction of primaries, i.e., the fraction of stars with no neighbours within a 50 au radius, as well as the total number of primaries, is however slightly higher than in NMHD-F01 at the end of the calculation, because small-scale fragmentation is suppressed by the stellar feedback.
Disk size: As we can see, whereas there are no strong differences in radii between the ideal MHD and the ambipolar diffusion case (as was also shown in Lebreuilly et al. 2021), a change in the mass-to-flux ratio has an important impact. The typical disk size is ∼ 30–40 au in NMHD-F01 and IMHD-F01 and about twice as large in the case of a weaker initial magnetic field. In fact, disk sizes are actually slightly larger in IMHD-F01, which we explain later in this section. It is interesting to point out that this difference of a factor of ∼ 2 is perfectly consistent with the self-regulated scenario of Hennebelle et al. (2016), from which it is expected that the disk size scales inversely with the square root of the magnetic field strength. The present result is however a generalisation of what is proposed by Hennebelle et al. (2016) because, in the present case, the scaling also appears to be valid for the clump-scale magnetic field. As pointed out by Seifried et al. (2013) as well as Wurster et al. (2019), the resemblance in terms of disk size between the ideal MHD and ambipolar diffusion models could be due to turbulence. Indeed, turbulence likely diminishes the influence of magnetic braking by generally producing a misalignment between the angular momentum and the magnetic field. However, as we have shown above, the clear correlation between the magnetic field strength and the disk size does indicate that the field plays a crucial role in that regard. This is also supported by the fact that we obtain larger disks when considering a weaker magnetic field. We also emphasize that this similarity in disk radius between the two models is actually misleading, because the disk masses of IMHD-F01 are lower than those of NMHD-F01.
Disk mass: The disks of NMHD-F01-mu50 are more massive than those of NMHD-F01 and IMHD-F01 by a factor of about 2. This is not surprising, since the disk masses are correlated with their radii in our models. Conversely, IMHD-F01 disks are generally less massive than those of NMHD-F01. The efficient magnetic braking at high density, in the absence of ambipolar diffusion, leads to more radial motions even in the disk, which allows the star to accrete more material. This is a non-linear effect because more massive stars generate stronger feedback, which also reduces fragmentation (Commerçon et al. 2011). This aspect is interesting in the light of the similarity of the disk radii of NMHD-F01 and IMHD-F01. Essentially, even if their sizes are comparable, ideal MHD disks are typically less dense than the ones with ambipolar diffusion. The similarity in terms of disk size between ideal MHD and MHD with ambipolar diffusion is thus misleading and does not mean that magnetic braking is as efficient in the presence of ambipolar diffusion as it is without.

Fig. 8: Evolution of the disk properties for NMHD-F01, IMHD-F01 and NMHD-F01-mu50 against the SFE. The lines correspond to the median of the distribution and the coloured regions correspond to the area between the first and third quartiles of the distribution. From left to right, top to bottom: radius, mass, mid-plane temperature, magnetic field strength, plasma beta and number of objects.
Disk temperature: Disks are hotter in IMHD-F01 than they are in NMHD-F01, the disks of NMHD-F01-mu50 being the coolest. In IMHD-F01 fragmentation is suppressed, so the stars are able to accrete more material, which is further helped by a strong magnetic braking. Therefore, IMHD-F01 stars have a stronger accretion luminosity, which leads to hotter disks. Conversely, fragmentation is most efficient for NMHD-F01-mu50 and, on top of that, the disks are larger, which means that a large part of their material is far away from the star and therefore colder.

We see in panel (d) of Fig. 8 that, at all SFE, the typical disk magnetic field is stronger in IMHD-F01 than it is in NMHD-F01 and, unsurprisingly, it is the weakest in NMHD-F01-mu50. This explains quite clearly why the disks of NMHD-F01-mu50 are the largest in size. As we have shown earlier, the disk size roughly scales as |B_z|^{−4/9}. We see in both NMHD-F01-mu50 and NMHD-F01 a weak correlation between the disk radius and the poloidal fraction, which is not clear at all for the IMHD-F01 model. This could be due to the more efficient winding-up of the magnetic field lines for the more massive disks, which is able to generate a significant toroidal field despite the diffusive effect of the ion-neutral friction, as observed in isolated collapse calculations of massive cores (Mignon-Risse et al. 2021b; Commerçon et al. 2022). As we can see (Fig. 9, panel a), the bottom-right quadrant (small disks, strong poloidal fields) of the plot is dominated by the young 0 to 10 kyr disks, which did not have the time to wind up the field, while the top-left quadrant (large disks, weak poloidal fields) is dominated by the older disks. As said earlier, the previously described behaviour is not at all seen in the case of IMHD-F01; moreover, the poloidal fraction is generally lower in this model. This is a key difference that can be explained by the non-negligible impact of the ambipolar diffusion at the envelope scale for NMHD-F01. Without any diffusive effects, the magnetic field lines are already efficiently wound up even before disk formation in the collapsing cores, and therefore most disks already have a strong toroidal magnetic field at birth. Because of that, the correlation between the poloidal fraction and the disk size is not observed in the IMHD-F01 case. As explained earlier, the disk sizes of IMHD-F01 and NMHD-F01 are not strongly different. In fact, and perhaps counter-intuitively, we find that ideal MHD disks are, on average, slightly larger than those of NMHD-F01. As explained in Lebreuilly et al. (2021), this is most likely a consequence of the strong toroidal fields, which stabilise the large disks against fragmentation, and which are only formed in ideal MHD.

Fig. 9: Correlations between some disk/star properties for runs NMHD-F01, NMHD-F01-mu50 and IMHD-F01 (from left to right). Top: disk size vs ratio of poloidal over toroidal magnetic field. Bottom: plasma beta vs primary mass. For the NMHD-F01-mu50 and IMHD-F01 panels, the information of the NMHD-F01 panel is duplicated with grey markers.
Quite interestingly, β is similar in the disks of NMHD-F01 and NMHD-F01-mu50, meaning that they reach the same level of magnetisation with respect to the thermal pressure. We clearly see in Fig. 9 that the plasma beta decreases with the sink mass for the three models, but also that the typical value of beta is much lower for IMHD-F01 than it is for the two other models. The majority of IMHD-F01 disks are distinctly magnetically rather than thermally dominated, whereas the opposite happens for NMHD-F01 and NMHD-F01-mu50. Low disk magnetisation is yet another clear consequence of ambipolar diffusion (see Masson et al. 2016; Mignon-Risse et al. 2021b,a; Commerçon et al. 2022, for similar results in low- and high-mass isolated collapse calculations). In ideal MHD disks, the infall actively drags the field lines, which causes a dramatic increase of the magnetic field intensity. In the case of IMHD-F01, because there is no diffusion at the envelope scale, the magnetic field is already strongly enhanced at disk birth. The plasma β then does not evolve much over time and stays, on average, close to 1, as can be seen in panel (e) of Fig. 8. For the two other models (MHD with ambipolar diffusion), the magnetic field is, as explained before, mostly vertical at birth. We then observe a slow decrease of β with time as a toroidal component is generated by the disk rotation.
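For reference, plasma β is the ratio of thermal to magnetic pressure. The sketch below evaluates it in cgs units; the mid-plane density, temperature and field strength are hypothetical values chosen only to land in the 1 < β < 100 regime quoted above for the ambipolar diffusion disks, not values extracted from the runs.

```python
import math

K_B = 1.380649e-16                       # Boltzmann constant, erg/K (cgs)

def plasma_beta(n_cm3, temp_k, b_gauss):
    """beta = thermal pressure n*k_B*T over magnetic pressure B^2 / (8*pi)."""
    p_thermal = n_cm3 * K_B * temp_k
    p_magnetic = b_gauss ** 2 / (8.0 * math.pi)
    return p_thermal / p_magnetic

# Hypothetical warm, weakly magnetised mid-plane: thermally dominated
print(plasma_beta(1e11, 100.0, 0.03))    # beta of a few tens
```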
We conclude that, although the disk properties of the ideal MHD and ambipolar diffusion models present some similarities (radius, temperature), the differences in disk and stellar masses and in magnetic field properties show that treating ambipolar diffusion is crucial to better capture disk formation and evolution. We point out that knowing the initial configuration of the magnetic field is important for the onset and properties of MHD winds (see the review of Pascucci et al. 2022, and references therein). The strength of the magnetic field at the clump scale also appears to be an essential parameter that determines the properties of both the disks and star formation, through its impact on the stellar feedback and on the magnetic pressure that acts against fragmentation.
Impact of radiative transfer
In this section, we investigate the impact of the modelling of the temperature through the choice of the f_acc parameter and of the RT modelling (FLD, barotropic EOS). We have run the NMHD-F05 model, which is similar to NMHD-F01 but with f_acc = 0.5; we therefore expect a more significant impact of the radiative feedback in this model. We also computed two additional models to better understand the impact of the RT modelling, NMHD-F01-M500 and NMHD-BARO-M500. To be comparable, both models have been run with α = 0.016, an initial clump mass of 500 M⊙, a Mach number of 7 and a mass-to-flux ratio of 10. However, contrary to NMHD-F01-M500, which accounts for the RT with FLD and f_acc = 0.1, NMHD-BARO-M500 assumes a barotropic EOS. In a sense, NMHD-BARO-M500 gives an idea of what the evolution of the clump would be if no stellar feedback were included. We point out that this parameter choice also allows us to probe disk formation in less massive and less dense clumps with respect to NMHD-F01. In Fig. 11, we show the same information as in Fig. 8, but for NMHD-F01, NMHD-F05, NMHD-BARO-M500 and NMHD-F01-M500.
We first focus on the comparison between NMHD-F01 and NMHD-F05. If we look at panel (f), we see that fragmentation is even more suppressed with f_acc = 0.5. We note that the effect of f_acc is non-linear: suppressing fragmentation leads to more massive stars that are brighter and hotter, preventing fragmentation even more. The impact of f_acc is clear when looking at panel (c) of Fig. 8, which shows the mid-plane disk temperatures. The typical disk temperature quickly becomes higher by a factor of ∼ 2 in NMHD-F05 than in NMHD-F01. The difference in temperature between the two models does not vary much over time, which indicates that it is indeed caused by the accretion luminosity. While f_acc is five times higher in NMHD-F05 than in NMHD-F01, the typical accretion luminosity of the former is almost one order of magnitude higher than that of the latter. The additional factor of ∼ 2 in the accretion luminosity is mostly due to the reduced fragmentation in NMHD-F05, which leads to more massive stars.
Despite the difference in disk temperatures, other quantities, such as the disk radius, mass and magnetic field, are similar in NMHD-F05 and NMHD-F01. This hints that the thermal support does not strongly affect the formation mechanism of the disks (magnetic braking vs conservation of angular momentum) or their evolution (quasi-Keplerian rotation with a low viscosity, as in isolated collapse calculations; Hennebelle et al. 2020b; Lee et al. 2021a). We note however that the disks are typically thicker in NMHD-F05 than in NMHD-F01. This is not surprising, since the disk scale height is controlled by the thermal support of the disk.
We now focus on the two 500 M⊙ runs, NMHD-F01-M500 and NMHD-BARO-M500. By the end of the calculation, at an SFE = 0.11, the barotropic EOS calculation has formed 140 sinks, whereas only 106 have formed in NMHD-F01-M500. This is clearly an effect of the lower thermal support of NMHD-BARO-M500, where the temperature is close to 10 K at low density. It is also interesting to point out that, in the case of NMHD-BARO-M500, the maximum star mass is ∼ 2 M⊙, whereas it is around ∼ 10 M⊙ for NMHD-F01-M500. Despite being generally less massive (as disk fragmentation is more important in this model), the disks of NMHD-BARO-M500 are significantly more unstable due to their low thermal support. This can be seen in Fig. 13, which shows examples of fragmenting disks extracted from NMHD-BARO-M500. These disks are ubiquitous in the barotropic calculation, but rare for the other models with RT. This is an important point, since disk fragmentation could be favourable for planet formation, either through the streaming instability in the pressure bumps or through gravitational instability.
It is also worth mentioning that NMHD-F01-M500 and NMHD-BARO-M500 have a lower initial density than NMHD-F01. As a consequence, the sink accretion rates are generally lower and so is the accretion luminosity (see Sect. 5.2). While this has no consequences for NMHD-BARO-M500, which is computed with the barotropic EOS, the disks of NMHD-F01-M500 are impacted by this lower accretion rate and are colder than those of NMHD-F01. This confirms that controlling the accretion rate is crucial to predict the disk temperature and hence its fragmentation. As for the comparison between NMHD-F01 and NMHD-F05, it is interesting to point out that, temperature aside, the disks of NMHD-F01 and NMHD-F01-M500 are quite similar. In addition to showing that the temperature is apparently not a controlling factor for the disk size, mass and magnetisation (unless the disks are indeed very cold), this seems to indicate that the clump mass, or rather its initial density, is not one either.
We conclude that a precise modelling of the RT, including the impact of the accretion luminosity, seems to be crucial to constrain the disk temperatures and the clump fragmentation. In addition, unless very cold disks are somehow relevant (i.e., if the barotropic EOS later proves to be a good approximation), the choice of the accretion luminosity efficiency does not strongly impact the size and mass of the disks.
Comparison with observations
One of the primary goals of the Synthetic Populations of Protoplanetary Disks project is to provide models to compare simulations and observations of Class 0 disks statistically.
We present a first tentative comparison of our disk populations with observed ones. For that comparison, we use a survey of disks in the Orion cloud (VANDAM survey; Sheehan et al. 2022). In Fig. 14, we show the cumulative distributions of the disk radius and mass of the populations extracted from all our models (coloured lines) compared with the observed ones (black lines). For the observations, we display the raw data of the survey with dashed lines and a version re-scaled by a factor of 1/0.63 with solid lines. This re-scaling is done because the truncation radius used in the VANDAM survey (Sheehan et al. 2022) may be a better estimate of the radius containing 63% of the disk mass, while our estimate better approximates the total disk radius. This crude re-scaling gives a good idea of what the total radius would be in this survey. In our populations, the disks are sampled every 1 kyr to mimic a diversity in evolutionary stages and to enhance the statistics in our clumps. We only selected the disks after the build-up phase, i.e., after 10 kyr, to make sure that the distributions are in their steady phase, which is more likely to be observed. We also kept the populations of the different clumps separated to see the impact of the initial clump-scale properties and physical assumptions on the disks.
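The 1/0.63 factor can be motivated with a toy disk model. For a truncated power-law surface density Σ ∝ r^−1 (a common fitting choice; the profile and truncation values here are illustrative, not the survey's actual model), the radius enclosing 63% of the mass sits at 0.63 R_out, which is the origin of the re-scaling:

```python
import numpy as np

def r_frac(p, frac=0.63, r_in=1e-3, r_out=1.0):
    """Radius enclosing `frac` of the mass of a disk with Sigma ∝ r^-p,
    truncated at r_out (simple numerical integration of dM ∝ r*Sigma dr)."""
    r = np.linspace(r_in, r_out, 200_000)
    dm = 2.0 * np.pi * r * r ** (-p)
    mass = np.cumsum(dm)
    mass /= mass[-1]
    return r[np.searchsorted(mass, frac)]

print(r_frac(1.0))   # ≈ 0.63: for Sigma ∝ 1/r, r63 = 0.63 * R_out
```

Steeper or shallower profiles shift this fraction, which is one reason the re-scaling is only a crude correction.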
In terms of disk radii, there is a moderately good agreement between our models and the observations. We note that the agreement is best for the models (with ambipolar diffusion) with a stronger initial µ = 10 magnetic field. Conversely, the NMHD-F01-mu50 model, which has an initial µ = 50, consistently produces disks larger than those observed. As was noted by Bate (2018), the radius that contains 63% of the mass could be a better measure of the radius for comparison with observations when a truncated power law is assumed to fit the disks. Considering that this value is more likely the one measured in the observations actually makes the agreement with the µ = 10 models even better. We however point out that there is only a factor of ∼ 2 difference in disk radius across all the populations. In addition, it is important to stress the following point: disk observations are actually sensitive to the continuum flux of the dust and not to the gas mass. The conversion from this flux to a mass is not at all trivial, as we discuss below, and it is therefore not clear which of r_63 or r_disk, if any, is actually better probed by observations. This issue might be partly solved in Sheehan et al. (2022) though, as they performed a careful radiative transfer modelling to fit the observed disks. We will address this issue in an upcoming work, where we post-process our models to produce synthetic observations. This will allow us to compare our models to real observations, extracting the disks with the exact same methods.

Fig. 13: Collection of fragmenting disks from run NMHD-BARO-M500; mid-plane density slices. In addition to the density, for each disk we display the sink index, the time of the corresponding snapshot, the mass of the sink and of the disk, as well as the disk radius.
We also recall that we found good agreement with disk radii from the CALYPSO survey (Maury et al. 2019) in the previous models of Lebreuilly et al. (2021). At that time, the magnetically regulated models were also in better agreement with the observations. This is also consistent with the observational evidence showing that only magnetically regulated models of the evolution of solar-type protostellar cores, with mass-to-flux ratios µ < 6, could reproduce the disk properties of the B335 protostar (Maury et al. 2018).
There is a more significant tension between our models and the observations when it comes to the disk mass. We generally report more massive disks than those obtained in Sheehan et al. (2022). We point out that similar tensions between models and observations were previously reported by Bate (2018, Fig. 25) and Tobin et al. (2020, Fig. 9) for the disk masses. As explained earlier, the masses obtained with our models are in line with those reported by Bate (2018), despite the significant differences in numerical methods. They indeed report, as we do, typical disk masses between < 0.01 and 1 M⊙. We point out that the disks of Sheehan et al. (2022), from the Orion cloud, actually have lower masses than those of Perseus (Tychoniec et al. 2020) and Taurus (Sheehan & Eisner 2017). Considering that the disk mass depends on the environment might partly solve the problem, as pointed out by Elsender & Bate (2021, their Fig. 9). At this stage it is worth mentioning that the disk masses are poorly constrained both observationally and theoretically. Disk masses are likely controlled by the relatively unknown inner boundary condition, i.e., the star-disk interaction (see Sect. 5.3). On the observational side, the arbitrary choice of dust model and size distribution can lead to potentially large errors in the conversion between the flux from thermal dust emission measured at mm wavelengths and the disk mass. This issue, discussed in the recent review by Tsukamoto et al. (2022), could be significant, as the dust optical properties have been observed to be very different in protostellar environments than in the diffuse ISM (e.g., Galametz et al. 2019). In addition, the computation of the gas mass from the dust relies on a conversion that assumes a constant gas-to-dust ratio, which is usually chosen to be 100. This hypothesis could be wrong in both directions: in dust-rich disks (which can form if the dust decouples from the gas during the collapse; Lebreuilly et al. 2020), the gas-to-dust ratio could be lower than 100 and the disk mass would be overestimated, while in dust-poor disks, for example if some of the dust has already been converted into planetesimals, the gas-to-dust ratio would be larger than 100 and the disk mass would be underestimated. Unfortunately, molecular tracers might not lead to better estimates for the same reason, as they also rely on a conversion factor to get the H2 disk mass. The gas kinematics could provide us with dynamical estimates of the disk mass, assuming that the protostar mass is known (Reynolds et al. 2021; Lodato et al. 2023). Although these methods are challenging, they might be our best hope for a precise inference from the observational point of view.

Fig. 14: Comparison of the disk distributions of all our models (lines) with the observed ones in a sample of protostars in the Orion molecular cloud. A good agreement is found for the disk sizes but an important tension is observed when it comes to the disk mass. The observation uncertainties are not displayed for readability.
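To illustrate how sensitive the inferred mass is to these conversion choices, the sketch below applies the standard optically thin dust-emission estimate M_dust = F_ν d² / (κ_ν B_ν(T_dust)) and scales it by an assumed gas-to-dust ratio. Every numerical input (flux, distance, opacity, dust temperature) is a hypothetical placeholder, not a value from the survey.

```python
import math

H, C, K_B = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants

def planck_nu(nu_hz, temp_k):
    """Planck function B_nu(T) in cgs (erg s^-1 cm^-2 Hz^-1 sr^-1)."""
    x = H * nu_hz / (K_B * temp_k)
    return 2.0 * H * nu_hz**3 / C**2 / math.expm1(x)

def gas_mass_msun(flux_jy, dist_pc, nu_hz, kappa_cm2_g, t_dust_k, gas_to_dust):
    """Optically thin dust mass from a mm flux, scaled by a gas-to-dust ratio."""
    d_cm = dist_pc * 3.086e18
    f_cgs = flux_jy * 1e-23                  # 1 Jy = 1e-23 erg s^-1 cm^-2 Hz^-1
    m_dust = f_cgs * d_cm**2 / (kappa_cm2_g * planck_nu(nu_hz, t_dust_k))
    return gas_to_dust * m_dust / 1.989e33

# Hypothetical 0.1 Jy source at 400 pc, 230 GHz, kappa = 2.3 cm^2/g, T = 20 K
m100 = gas_mass_msun(0.1, 400.0, 2.3e11, 2.3, 20.0, 100)
m50 = gas_mass_msun(0.1, 400.0, 2.3e11, 2.3, 20.0, 50)
print(m100, m50)   # halving the gas-to-dust ratio halves the inferred mass
```

The inferred mass is linear in 1/κ_ν and in the gas-to-dust ratio, so factor-of-a-few uncertainties in either propagate directly into the mass.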
It is also important to point out that most of the observed protostellar disks, including Class 0 disks, are older than 50 kyr, while our oldest disks are 'only' about 40 kyr by the end of our simulations. Their properties could, in principle, evolve quite a lot through the Class 0 stage, leading to an apparent disagreement between the mass estimates from models and observations that is, in fact, only due to an age difference. However, as we explained earlier, the disk properties do not vary significantly in our models for disks older than ∼ 10 kyr and, as was shown in isolated calculations (Hennebelle et al. 2020b), this relatively steady state probably lasts for at least 100 kyr. This supports the idea that there might indeed be a fundamental disagreement between models and observations for the disk masses.
Finally, and quite surprisingly, NMHD-BARO-M500 seems to be the model that actually fits the observations best for both the mass and the radius. We stress, however, that this is likely a coincidence. Barotropic models are indeed not supported by the recent eDisk survey (Ohashi et al. 2023), which has shown that young disks are quite hot and do not show obvious sub-structures except in rare and evolved cases. This survey is thus in much better agreement with our RT models.
On the luminosity problem
The luminosity problem (Kenyon et al. 1990) is a long-standing issue in star formation. Observed YSO luminosities are below the values expected from steady-state protostellar accretion. This hints that either some of the accretion luminosity is not fully radiated away or the accretion is highly variable during protostar formation. Both of these possibilities can in principle be taken into account in the model through the efficiency factor f_acc, which is a sub-grid model for the conversion of accretion luminosity into radiation in our calculations.
The accretion luminosities of our models with f_acc = 0.1 are typically a few tens of L⊙, while they are about one order of magnitude higher in the case of NMHD-F05, which has f_acc = 0.5. Conversely, YSO observations seem to indicate lower luminosities, with typical values of the order of a few L⊙ during the Class 0 stage (Maury et al. 2011; Fischer et al. 2017).
As explained earlier, the accretion luminosity might not have a significant impact when it comes to the disk masses, sizes and magnetic field properties. However, constraining the accretion luminosity is of paramount importance for the disk temperature and for our understanding of planet formation in those disks. First, the thermal support brought by stellar irradiation can act against the formation of structures in the disks (Rice et al. 2011; see also the comparison between NMHD-BARO-M500 and NMHD-F01-M500). In addition, the position of the snow lines depends on the temperature profile of the disk, which is important since planetesimal formation is expected to be most efficient in their vicinity, where the gas and dust properties of the disks abruptly change (see Drazkowska et al. 2022, and references therein). The position of the snow lines also determines the composition of the material available to form planets (gas and solids) at different locations in the disk. For instance, the location of the H2O snow line is crucial to understanding under which conditions rocky planets form and how water is delivered to them. At the same time, in a relatively hot disk, some volatile species would never condense. Connected to that, a growing body of literature is studying how to link chemistry in disks to planet formation and to the composition of the exoplanets that we observe today (see for instance Öberg & Bergin 2021; Turrini et al. 2021; Pacetti et al. 2022). All of these works will considerably benefit from better constraints on f_acc.
At this stage it is important to emphasize that the real, effective value of f_acc, if constant at all, could in principle be even lower than 0.1. In this case, there must either be an efficient mechanism to convert the gravitational energy into something other than radiative energy (for example magnetic or internal convective energy), or most of the radiation should be lost through the outflow cavity. As we will discuss in Sect. 5.3, this issue could be tackled by models that resolve the star-disk connection down to the small scales, i.e., the stellar radii.
As previously mentioned, accretion in strong bursts could be invoked to solve the accretion luminosity problem (Offner & McKee 2011; Dunham & Vorobyov 2012; Meyer et al. 2022; Elbakyan et al. 2023). In fact, large variations of luminosity over timescales of years have been reported for some protostars (for example B335, Evans et al. 2023). This is important because with a strong and steady accretion rate the disk would be consistently kept warm, whereas short bursts of accretion would not have long-term consequences on the disk temperature, as the cooling timescale by the dust is very short (Hennebelle et al. 2020a) compared with the free-fall timescale. Figure 15 shows the mean accretion rate of the sinks as a function of time for NMHD-F01, NMHD-F05, NMHD-F01-mu50 and NMHD-F01-M500 (left) as well as the individual accretion rates of some sinks of NMHD-F01 (right). For all the models, there is a clear variability of the accretion rate over time. Accretion indeed occurs in short (but frequent) bursts of a few hundred years. During those bursts the accretion rate rises to around 10^-3 M_⊙ yr^-1, occasionally reaching up to 10^-2 M_⊙ yr^-1. However, the average accretion rates are generally high in NMHD-F01 and NMHD-F05 in spite of these strong bursts; for these models, we observed typical average values around 10^-5 to 10^-4 M_⊙ yr^-1. This explains why, despite having f_acc = 0.1, the accretion luminosity of our protostars is above typically observed values. Interestingly, the accretion rate is significantly lower in the case of NMHD-F01-M500, where it is typically around 10^-6 to 10^-5 M_⊙ yr^-1. This clump being less dense than the fiducial run, it is also significantly colder (because it cools faster) and fragments more efficiently. Similarly, the weaker magnetic field of NMHD-F01-mu50 leads to more fragmentation and lower accretion rates. We point out that episodic accretion is possibly, if not likely, not fully resolved in our models, both in space and time. With shorter and stronger bursts, we could expect a lower typical average accretion rate while still accreting enough mass to build the protostar.
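The accretion luminosity at play here follows the standard relation L_acc = f_acc G M_⋆ Ṁ / R_⋆. As a quick order-of-magnitude check (the stellar mass and radius below are illustrative assumptions, not values quoted in the text):

```python
# Accretion luminosity L_acc = f_acc * G * M_star * Mdot / R_star.
# f_acc = 0.1 and Mdot ~ 1e-5 Msun/yr are values discussed in the text;
# the stellar mass and radius are illustrative assumptions.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
R_SUN = 6.957e8        # m
L_SUN = 3.828e26       # W
YEAR = 3.156e7         # s

def accretion_luminosity(f_acc, m_star_msun, mdot_msun_yr, r_star_rsun):
    """Accretion luminosity in units of the solar luminosity."""
    m = m_star_msun * M_SUN
    mdot = mdot_msun_yr * M_SUN / YEAR
    r = r_star_rsun * R_SUN
    return f_acc * G * m * mdot / r / L_SUN

# f_acc = 0.1 and a typical average rate of 1e-5 Msun/yr,
# for an assumed 0.5 Msun protostar of 2 Rsun.
L = accretion_luminosity(0.1, 0.5, 1e-5, 2.0)
```

For these parameters the result is of order 10 L_⊙, illustrating why the accretion luminosity remains substantial despite the low efficiency f_acc = 0.1.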
It is not clear at this stage that there is a luminosity problem in these models. The typical accretion rate indeed depends on the initial conditions of the clump and, as a result, the accretion luminosity can vary by a few orders of magnitude between the various models. We suspect that the various clumps that we present in this study are representative of different star-forming regions. With that in mind, it is worth pointing out that NMHD-F01/NMHD-F05 are probably more representative of massive star-forming regions, whereas NMHD-F01-M500 probably better reproduces less compact nearby clumps. We indeed have a top-heavy IMF in the case of NMHD-F01, contrary to the other runs, as also shown in Hennebelle et al. (2022). An in-depth exploration of a larger parameter space in such simulations, and of distant clumps in observations (e.g., Elia et al. 2017; Motte et al. 2022), would be required to confirm that hypothesis.
The star-disk connection
A maximal resolution of ∼1 au in the disks is already considerable for simulation boxes of ∼1.5 pc. Unfortunately, this minimum cell size still does not allow us to resolve the disk-star connection. To achieve that goal, one should be able to resolve the stellar radii, i.e., to increase the resolution by at least two if not three orders of magnitude. This is, of course, still impossible in the context of disk formation simulations. It is even more challenging in our case because we need to integrate the disks for a few tens of thousands of years to obtain a steady disk population.
Because of that difficulty, we have to rely on the widely used sink particles (Bate et al. 1995; Federrath et al. 2010; Bleuler & Teyssier 2014), as they allow us to integrate the model for a much longer time with a somewhat realistic inner boundary condition for the disk. It is important to recall that, unfortunately, the choice of the sink parameters (accretion threshold n_thre, sink radii and f_acc) does affect the calculation. Hennebelle et al. (2020b) showed that the mass of the disk was particularly affected by the choice of n_thre because the density at the center of the disk quickly adjusts to this threshold. It is however worth mentioning that the choice of this parameter is not completely arbitrary: we have chosen n_thre based on the α-disk estimate of Hennebelle et al. (2020b). Fortunately, as they have shown, the disk radius is much less affected by the choice of n_thre, probably because it is rather controlled by the magnetic field (Hennebelle et al. 2016).
In addition, we have shown in Sect. 4.3 that the impact of f_acc on the disk size, mass and magnetic field is probably limited, provided that the accretion luminosity is not negligible (hence in all our models except NMHD-BARO-M500). At the same time, f_acc does affect both the disk temperature and fragmentation.
To better constrain these two essential parameters, we strongly encourage studies dedicated to understanding the star-disk connection through the modelling of the stellar scales (Vaytet et al. 2018; Bhandare et al. 2020; Ahmad et al. 2023) while integrating the models in time as far as possible (Hennebelle et al. 2020b). This would provide the community with the necessary sub-grid modelling (the star-disk connection) to better constrain the initial mass and temperature of protostellar disks.
Dust and planets
Recent studies indicate that dust evolution is not negligible during the protostellar collapse at disk-like densities (Guillet et al. 2020; Elbakyan et al. 2020; Tsukamoto et al. 2021; Bate 2022; Kawasaki et al. 2022; Lebreuilly et al. 2023b; Marchand et al. 2023). This, of course, has important implications for the coupling between the gas and the magnetic field by means of the magnetic resistivities, and probably also for the RT, because the opacities are dominated by the dust contribution.
In our collapse calculations, the dust size distribution is, as is often done, assumed to be a constant Mathis, Rumpl, Nordsieck (MRN) distribution (Mathis et al. 1977) and is only used to compute the resistivities and the opacity. Its evolution should, in principle, be taken into account self-consistently. This is, unfortunately, still very challenging in 3D simulations. In particular, in the context of large star-forming clumps the memory cost of simulating multiple dust grain sizes would be extremely high. Fortunately, recent developments of new methods based on the assumption of growth purely by turbulence (Marchand et al. 2021, 2023) or accurate dust growth solvers that require only a few dust bins (Lombart & Laibe 2021) are opening the way to account for dust in future 3D simulations.
It is also worth mentioning that sufficiently large grains can, in principle, dynamically decouple from the gas (Bate & Lorén-Aguilar 2017; Lebreuilly et al. 2019, 2020; Koga et al. 2022). This phenomenon can lead to local enhancement/depletion of the dust material, which would affect the dynamical back-reaction of the grains on the gas. In disks, this mechanism could also self-trigger the formation of sub-structures only in the dust (e.g., Dipierro et al. 2015; Cuello et al. 2019; Riols et al. 2020). Fully including the dust dynamics would also require taking dust growth into account. This is clearly beyond the scope of this investigation.
Last but not least, as our models do not follow dust evolution in the disks, we are unable to make predictions for planet formation and its implications. Again, this is well beyond the scope of the present study. It is however important to keep in mind that fully formed planets or even planetary embryos could perturb the disk evolution and trigger the formation of sub-structures (e.g., Dipierro et al. 2016; Bae et al. 2017). To the best of our knowledge, planet population synthesis methods (see the review by Benz et al. 2014) are only used in 1D disks; however, their employment could be a way to tackle the problem in future works.
Hall effect and Ohmic dissipation
Ambipolar diffusion is not the only non-ideal MHD effect with a potential impact on disk formation. While Ohmic dissipation is probably only dominant at very high densities (Marchand et al. 2016) and can thus be more safely neglected, the Hall resistivity might be comparable to, if not larger than, the ambipolar resistivity over a significant density range in protostellar envelopes, and perhaps also in the disks (Wurster 2021). For numerical reasons, we could not run our simulations with the Hall effect, but it is useful to recall its expected impact from our knowledge of isolated collapse calculations.
The Hall effect was indeed investigated in detail by several groups over the past decade (e.g. Li et al. 2011; Tsukamoto et al. 2015; Wurster et al. 2016; Marchand et al. 2018; Tsukamoto et al. 2017; Wurster & Bate 2019; Marchand et al. 2019; Wurster et al. 2019; Zhao et al. 2020, 2021; Wurster et al. 2021; Lee et al. 2021b). These works typically found that the Hall effect could either enhance or decrease the rotation of the cloud (and the disk) depending on the initial relative orientation between the magnetic field and the angular momentum. Consequently, the Hall effect is expected to produce a bi-modality in disk properties. In the case where the Hall effect enhances the rotation of the disk, thereby decreasing the effect of the magnetic braking, counter-rotating envelopes can typically be observed. It is worth noting that the Hall effect could also play a role in the fragmentation of the disks when it accelerates rotation, as was shown by Marchand et al. (2019).
It is not yet clear whether these effects are expected to play a strong role in the birth of disk populations obtained from star-forming clumps, because of the dispersion in the relative orientation of the magnetic field and the angular momentum. So far, only Wurster et al. (2019) investigated disks forming in star-forming clumps with all three non-ideal MHD effects, and they found no strong impact on the disk size. We point out that we also find a similar trend in our pure ambipolar diffusion case although, as noted above, ideal MHD disks do have lower masses than the ones obtained with ambipolar diffusion. In addition, our ideal MHD disks do not have the same magnetic properties as the ones with ambipolar diffusion. In any case, simulations of massive star-forming clumps including the Hall effect and resolving the disk scales would be very valuable for the community and should surely be performed in the coming years.
Conclusion
In this work, we have explored the formation of protostellar disk populations in massive protostellar clumps with various assumptions and initial conditions.We now recall the main findings and conclusions of this work.
- Disk populations are ubiquitous in these simulations. A disk is found around 70 to 90% of the stellar systems, depending on the clump initial conditions.
- Disks are born with a variety of sizes, masses, structures and nearby environments, reflecting their individual history in a highly interacting gravo-turbulent star-forming clump.
- We commonly find compact disks (in the presence of a strong magnetic field), non-axisymmetric envelopes (accretion streamers), sub-structures (spirals, arcs), magnetised flows (interchange instabilities), flybys and peculiar structures such as disks formed in a single column. However, we find no ring structures or protostellar outflows/jets.
- Accretion luminosity is the dominant source of heating in the disks and controls their temperature. The strength of the accretion luminosity depends on the clump properties. Clumps that fragment more efficiently also have lower accretion rates/luminosities, resulting in colder disks.
- The strength of the magnetic field at the clump scale is found to be a controlling factor for the disk size and the clump-scale fragmentation. A stronger magnetic field typically leads to smaller disks and reduced fragmentation. Because of that, the disks in clumps with a stronger initial magnetic field are also hotter.
- The accretion luminosity does not seem to be a controlling factor for the other disk properties (size, mass and magnetic field). However, we point out that these properties could change if the disks were much colder than expected.
- The disk sizes obtained from our models are in relatively good agreement with the observed protostellar disk sizes from millimeter surveys. Depending on the way the disk radius is measured, either the case µ = 10 or the case µ = 50 better fits the models. However, there is tension with some surveys concerning the masses. Future post-processing of the models with radiative transfer tools should clarify the comparison between models and observations.
- Some well-known properties of isolated collapse calculations still hold in the context of large-scale models. We confirm the important role of the magnetic field in shaping the disk masses and sizes, and its combined importance with the radiative transfer in controlling their temperature and fragmentation. In addition, we show that disks are weakly magnetized when we account for ambipolar diffusion, while ideal MHD disks are not. Similarly to high-mass star collapse calculations, we find that more massive disks generate stronger toroidal magnetic fields. Finally, we find that disks obtained in barotropic calculations fragment more easily than those of RT calculations. This confirms the interest of the isolated collapse approach to model protostellar disk formation, despite its inability to provide us with the statistics of disk populations.
In this work, we have shown how diverse the populations of protoplanetary disks can be at early stages and how they depend on their large-scale environment (magnetic field, radiation, cloud mass) as well as on the physical effects included (magnetic field with and without ambipolar diffusion, radiative transfer). We strongly encourage future works further exploring the influence of additional clump properties (turbulence, size, shape), as well as dedicated studies comparing such models with observed data through synthetic observations produced with elaborate radiative transfer codes.
Fig. 1: Evolution of the clump of run NMHD-F01 at various times (SFE = 0.0017, 0.015 and 0.15). (a,b,c) Full column density maps in the (x-y) plane. (d,e,f) Same, but centered around sink 1 (located in the hub) and with an extent of 12.5% of the box scale. Sinks are represented by star symbols.
Fig. 3: CDF of the disks at birth time, as well as 10 and 20 kyr later, for the NMHD-F01 model.
Fig. 4: Disk radius vs. disk mass for the NMHD-F01 model. Each disk is displayed once per kyr, and the different markers/colors represent the various evolutionary stages of the disks.
Fig. 5: Same as Fig. 4 but for the disk-to-stellar mass ratio vs. the primary mass. The horizontal dashed line represents a mass ratio of 1.
Disk magnetic field/plasma beta: Let us focus on the differences in magnetic field properties between the models. As a complement to panels (d) and (e) of Fig. 8, we show for the three models (left to right) the correlation of the disk size vs. the ratio between the vertical and azimuthal magnetic field (hereafter the poloidal fraction, top panels) and the plasma beta vs. the disk mass (bottom panels) in Fig. 9.
Fig. 10: Same as panel (c) of Fig. 1 but for the NMHD-F01-mu50 run. The clump is more fragmented as a result of the lower magnetic pressure support.
Fig. 15: Evolution of the stellar accretion rate as a function of time. (Left) Mean accretion rate for NMHD-F01, NMHD-F05, NMHD-F01-mu50 and NMHD-F01-M500 as a function of the time since the first star formed. (Right) Same, but for the individual accretion rates of a collection of sinks of NMHD-F01.
Table 1: Summary of the different simulations. From left to right: model name, initial clump mass, thermal-to-gravitational energy ratio α, mass-to-flux ratio µ (⋆ means ideal MHD), accretion luminosity efficiency f_acc (if applicable), RT modeling, final median sink mass, final SFE and corresponding time t_end.
Table 2: Mean, median (med.) and standard deviation (Stdev.) of the disk properties for NMHD-F01 at birth time and at ages of 10 and 20 kyr. (-N) stands for ×10^-N.
Sequential Imputation of Missing Spatio-Temporal Precipitation Data Using Random Forests
Meteorological records, including precipitation, commonly have missing values. Accurate imputation of missing precipitation values is challenging, however, because precipitation exhibits a high degree of spatial and temporal variability. Data-driven spatial interpolation of meteorological records is an increasingly popular approach in which missing values at a target station are imputed using synchronous data from reference stations. The success of spatial interpolation depends on whether precipitation records at the target station are strongly correlated with precipitation records at reference stations. However, the need for reference stations to have complete datasets implies that stations with incomplete records, even though strongly correlated with the target station, are excluded. To address this limitation, we develop a new sequential imputation algorithm for imputing missing values in spatio-temporal daily precipitation records. We demonstrate the benefits of sequential imputation by incorporating it within a spatial interpolation based on a Random Forest technique. Results show that for reliable imputation, having a few strongly correlated references is more effective than having a larger number of weakly correlated references. Further, we observe that sequential imputation becomes more beneficial as the number of stations with incomplete records increases. Overall, we present a new approach for imputing missing precipitation data which may also apply to other meteorological variables.
INTRODUCTION
Precipitation is an important component of the ecohydrological cycle and plays a crucial role in driving the Earth's climate. It serves as an input for various ecohydrological models to determine snowpack, infiltration, surface-water flow, groundwater recharge, and transport of chemicals, sediments, nutrients, and pesticides (Devi et al., 2015). Numerical modeling of surface flow typically requires a complete time series of precipitation along with other meteorological records (e.g., temperature, relative humidity, solar radiation) as inputs for simulations (Dwivedi et al., 2017, 2018; Hubbard et al., 2018, 2020; Zachara et al., 2020). However, meteorological records often have missing values for various reasons, such as equipment malfunction, network interruptions, and natural hazards (Varadharajan et al., 2019). Missing values need to be reconstructed or imputed accurately to ensure that estimates of statistical properties, such as mean and covariance, are consistent and unbiased (Schneider, 2001), because inaccurate estimates can hurt the accuracy of ecohydrological models. Reconstructing an incomplete daily precipitation time series is especially difficult since precipitation exhibits a high degree of spatial and temporal variability (Simolo et al., 2010).
Past efforts for imputing missing values of a precipitation time series fall under two broad categories: autoregression of univariate time series and spatial interpolation of precipitation records. Autoregressive methods are self-contained and impute missing values by using data from the same time series that is being filled. Simple applications could involve using a mean value of the time series, or using data from one or several days before and after the date of missing data (Acock and Pachepsky, 2000). More sophisticated versions of autoregressive approaches implement stochastic methods and machine learning (Box and Jenkins, 1976; Adhikari and Agrawal, 2013). To illustrate some recent studies, Gao et al. (2018) highlighted methods to explicitly model the autocorrelation and heteroscedasticity (or changing variance over time) of hydrological time series (such as precipitation, discharge, and groundwater levels). They proposed the use of autoregressive moving average models and autoregressive conditional heteroscedasticity models. Chuan et al. (2019) combined a probabilistic principal component analysis model and an expectation-maximization algorithm, which enabled them to obtain probabilistic estimates of missing precipitation values. Gorshenin et al. (2019) used a pattern-based methodology to classify dry and wet days, then filled in precipitation for wet days using machine learning approaches (such as k-nearest neighbors, expectation-maximization, support vector machines, and random forests). However, an overarching limitation of autoregressive methods is the need for the imputed variable to show a high temporal autocorrelation, which is not necessarily valid for precipitation (Simolo et al., 2010). Therefore, such methods have limited applicability when it comes to reconstructing a precipitation time series.
Spatial interpolation methods, on the other hand, impute missing values at the target station by taking weighted averages of synchronous data, i.e., data at the same time, from reference stations (typically neighboring stations). The success of these methods relies on the existence of strong correlations among precipitation patterns between the target and reference stations. The two most prominent approaches are inverse-distance weighting (Shepard, 1968) and normal-ratio methods (Paulhus and Kohler, 1952). Inverse-distance weighting assumes the weights to be inversely proportional to the distance from the target, while the normal-ratio method assumes the weights to be proportional to the ratio of average annual precipitation at the target and reference stations. Another prominent interpolation approach is based on kriging or Gaussian processes, which assigns weights by accounting for spatial correlations within data (Oliver and Webster, 2015). Teegavarapu and Chandramouli (2005) proposed several improvements to weighting methods and also introduced the coefficient of correlation weighting method, in which the weights are proportional to the coefficient of correlation with the target. Recent studies have proposed new weighting schemes using more sophisticated frameworks (e.g., Morales Martínez et al., 2019; Teegavarapu, 2020). In parallel, studies have also been conducted to account for various uncertainties in imputation. For example, Ramos-Calzado et al. (2008) proposed a weighting method to account for measurement uncertainties in a precipitation time series. Lo Presti et al. (2010) proposed a methodology to approximate each missing value by a distribution of values, where each value in the distribution is obtained via a univariate regression with each of the reference stations. Simolo et al. (2010) pointed out that weighting approaches have a tendency to overestimate the number of rainy days and to underestimate heavy precipitation events.
They addressed this issue by proposing a spatial interpolation procedure that systematically preserved the probability distribution, long-term statistics, and timing of precipitation events.
A critical review of the literature shows that, in general, spatial interpolation techniques have two fundamental shortcomings: (i) how to optimally select neighbors, i.e., reference stations, and (ii) how to assign weights to selected stations. While selecting reference stations is typically done using statistical correlation measures, assigning weights to selected stations is currently an ongoing area of research. The methods reviewed so far are based on the idea of specifying a functional form of the weighting relationships. The appropriate functional form may vary from one region to another depending on the prevalent patterns of precipitation as influenced by local topographic and convective effects. Using a functional form that is either inappropriate or too simple could distort the statistical properties of the datasets (such as mean and covariance). Some researchers have proposed to address these shortcomings by using Bayesian approaches (e.g., Yozgatligil et al., 2013;Chen et al., 2019;Jahan et al., 2019). These fall under the broad category of expectation-maximization and data augmentation algorithms, thus yielding a probability distribution for each missing value.
An alternative approach for imputing missing data is the application of data-driven or machine learning (ML) methods, which are becoming increasingly prominent for spatial interpolation. These methods do not need a functional form to be specified a priori and can learn a multi-variate relationship between the target station and reference stations using available datasets. Studies have found that the performance of ML methods tends to be superior to that of traditional weighting methods (e.g., Teegavarapu and Chandramouli, 2005; Hasanpour Kashani and Dinpashoh, 2012; Londhe et al., 2015). In addition, studies have been conducted to identify optimal architectures for ML-based methods (Coulibaly and Evora, 2007; Kim and Pachepsky, 2010). In this work, we use the Random Forests (RF) method. RF is an ensemble learning method, which reduces the associated bias and variance, making predictions less prone to overfitting. In addition, a recent study showed that RF-based imputation is generally robust, and its performance improves with increasing correlation between the target and references (Tang and Ishwaran, 2017).
Regardless of the imputation technique, an inherent limitation of spatial interpolation algorithms is the need for reference stations to have complete records during the time-period of interest. This limitation is critical for ML algorithms where incomplete records preclude data-driven learning of multi-variate relationships. The success of spatial interpolation, therefore, depends on whether precipitation at the target station is highly correlated with precipitation at stations with complete records. A station with an incomplete record is typically excluded from the analysis even though that station may have a high correlation with the target station. In this work, we hypothesize that stations with incomplete records contain information that can improve spatial interpolation if they are included in the analysis. We propose a new algorithm, namely sequential imputation, that leverages incomplete records to impute missing values. In this approach, stations that are imputed first are also included as reference stations for imputing subsequent stations. We implement this algorithm in the context of imputing missing daily values of precipitation and demonstrate its benefits by incorporating it in an RF-based spatial interpolation.
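The core idea of sequential imputation can be sketched as follows. This is a schematic illustration under our own simplifying assumptions: stations are ordered by their number of gaps, and a linear least-squares fit stands in for the RF regressor used in the paper.

```python
# Schematic sketch of sequential imputation (illustrative, not the authors'
# exact code). `data` is a (days x stations) array with NaNs marking missing
# values; complete-record stations have no NaNs. Once a station is imputed,
# it joins the pool of reference stations for subsequent stations.
import numpy as np

def sequential_impute(data):
    data = data.copy()
    n_missing = np.isnan(data).sum(axis=0)
    complete = set(np.where(n_missing == 0)[0])
    # Impute stations with the fewest gaps first (ordering is an assumption).
    for j in np.argsort(n_missing):
        if j in complete:
            continue
        refs = sorted(complete)               # grows as stations are filled
        mask = np.isnan(data[:, j])
        X_train, y_train = data[~mask][:, refs], data[~mask, j]
        # Linear stand-in for the RF regressor of the paper.
        coef, *_ = np.linalg.lstsq(
            np.c_[X_train, np.ones(len(X_train))], y_train, rcond=None)
        X_miss = np.c_[data[mask][:, refs], np.ones(mask.sum())]
        data[mask, j] = np.clip(X_miss @ coef, 0, None)  # no negative rain
        complete.add(j)                       # now usable as a reference
    return data
```

In the non-sequential variant, by contrast, `refs` would be fixed to the initially complete stations and imputed stations would never be added to the pool.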
In what follows, we start by describing our study area and data sources and follow this with a brief introduction to the Random Forests (RF) method. We then describe all our numerical experiments, starting with a baseline imputation that helps evaluate the performance of sequential imputation. This is followed by a description of the sequential imputation algorithm, along with an outline of different scenarios to evaluate sequential imputation. We compare the results of sequential imputation with a non-sequential imputation in which incomplete records are not leveraged for subsequent imputations. Finally, we discuss the implications of our results and provide some concluding thoughts.
Study Area and Data Sources
We conducted this study using data from the Upper Colorado Water Resource Region (UCWRR), which is one of 21 major water resource regions classified by the United States Geological Survey to divide and sub-divide the United States into successively smaller catchment areas. The UCWRR is the principal source of water in the southwestern United States and includes eight subregions, 60 sub-basins, 523 watersheds, and 3,179 sub-watersheds. Several agencies have active weather monitoring stations in UCWRR. For our study, we considered the weather stations maintained by the Natural Resources Conservation Service (NRCS). Figure 1 shows the spatial distribution of NRCS stations in UCWRR. Ninety-seven stations have complete records which primarily belong to the Snowpack Telemetry (SNOTEL) network. We considered data spanning the 10-year window from 2008 to 2017. Over this period, NRCS had 152 active stations in UCWRR which report daily precipitation data. For this study, our dataset is restricted to the 97 stations with complete records. We downloaded the data through the NRCS Interactive Map and Report Generator 1 (accessed Jan 16, 2020).
Spatial Interpolation Method: Random Forests (RF)
RF is an ML method based on an ensemble, or aggregation, of decision trees (Breiman, 2001). A decision tree is a flowchart-like structure that recursively partitions the input feature space into smaller subspaces (Figure 2). Recursion is carried out until the subspaces are small enough to fit simple linear models on them. In regression problems, the decision rules for partitioning are determined such that the mean-squared error between the tree output and the observed output is minimized. The RF model trains each decision tree on a different set of data points obtained by sampling the training data with replacement (or bootstrapping). Furthermore, each tree may also consider a different subset of input features selected randomly. The final output of the random forest is obtained by aggregating (or ensembling) the results of all decision trees. For regression problems, aggregation is done by taking the mean. Figure 2 shows a schematic of an RF regressor.
The ensemble nature of RF leads to several benefits (Breiman, 2001; Louppe, 2015). First, it makes RF less prone to overfitting, despite the susceptibility of individual trees to overfitting (Segal, 2004). For regression problems, overfitting refers to low values of mean-squared error on training data and high values of mean-squared error on test data. Second, it enables an evaluation of the relative importance of a variable (which, in this work, refers to a reference station) for predicting the output. This is typically done by determining how often a variable is used for partitioning the input feature space, across all trees. Third, the ensemble nature of RF makes it possible to not set aside a test set. Since the input for each decision tree is obtained by bootstrapping, the unsampled data can be used to estimate the generalization error. In addition, RF does not require extensive hyperparameter tuning compared to other ML approaches (Ahmad et al., 2017).
In this study, we implement RF using Python's scikit-learn module (Pedregosa et al., 2011). Precipitation data from reference stations acts as the input, and precipitation data at the target station is specified as the output. Unlike typical spatial interpolation approaches, we do not specify distances between the reference and target stations. Distances are static variables, and their influence on dynamic precipitation relationships is learned as a constant bias, regardless of whether they are explicitly specified or not.
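A minimal version of this setup with scikit-learn's RandomForestRegressor might look as follows; the synthetic data and hyperparameter values are illustrative, not the tuned configuration used in the study.

```python
# Spatial interpolation of a target station with a Random Forest:
# reference stations are the input features, the target station is the
# output. Data and hyperparameters below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic stand-in for 10 years of daily precipitation at 5 references.
X = rng.gamma(shape=0.3, scale=5.0, size=(3650, 5))
# Target strongly correlated with its references, as the method requires.
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0.0, 0.1, 3650)

# 80/20 split: the held-out 20% plays the role of "missing" data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
rf = RandomForestRegressor(n_estimators=100, oob_score=True, random_state=0)
rf.fit(X_tr, y_tr)

y_pred = rf.predict(X_te)              # imputed values for the test days
importances = rf.feature_importances_  # relative importance of references
```

Note that `oob_score=True` exposes the out-of-bag generalization estimate mentioned above (no separate test set strictly needed), and `feature_importances_` ranks the reference stations.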
Overview of Numerical Experiments
To investigate if stations with incomplete records contain information that can improve spatial interpolation, we designed three sets of numerical experiments: baseline, sequential, and non-sequential imputation. In baseline imputation, each station in our dataset is modeled using the remaining stations as reference stations. This represents an upper bound on the performance of sequential imputation when we have multiple stations with incomplete records. The baseline imputation provides statistics to help evaluate the performance of sequential imputation. In sequential imputation, a subset of stations in our dataset is marked as artificially incomplete. For each station in the artificially incomplete subset, 20% of the values are randomly marked as "missing." The missing values are imputed by leveraging other artificially incomplete stations in the subset, in addition to using stations outside the subset. Finally, in non-sequential imputation, the same artificially incomplete subset as sequential imputation is considered, and missing values are imputed using just the stations that are outside the subset. We describe the three sets of numerical experiments in detail in sections Numerical Experiments: Baseline Imputation and Numerical Experiments: Sequential and Non-sequential Imputation. Before describing each of these experiments, it would be instructive to discuss our performance criterion for evaluating imputation.
Evaluating Imputation: Nash-Sutcliffe Efficiency (NSE)

We evaluated the overall performance of imputation by computing the Nash-Sutcliffe Efficiency (NSE) on test data, given by

NSE = 1 − [Σ_{i=1}^{N} (y_i^o − y_i^m)^2] / [Σ_{i=1}^{N} (y_i^o − ȳ^o)^2],

where N is the size of the test set, y_i^o is the i-th observed value, y_i^m is the corresponding modeled value, and ȳ^o is the mean of all observed values in the test set.
The NSE is a normalized statistical measure that determines the relative magnitude of the residual variance (or noise) of a model when compared to the measured data variance. It is dimensionless and ranges from −∞ to 1. An NSE value equal to 1 implies that the modeled (in our case, imputed) values perfectly match the observations; an NSE value equal to 0 implies that the modeled values are only as good as the mean of observations; and a negative NSE value implies that the mean of observations is a better predictor than modeled values. Positive NSE values are desirable, and higher values imply greater accuracy of the (imputation) model.
Two other common statistical measures for evaluating the overall accuracy of prediction are Pearson's product-moment correlation coefficient R, and the Kolmogorov-Smirnov statistic. While the former evaluates the timing and shape of the modeled time series, the latter evaluates its cumulative distribution. Gupta et al. (2009) decomposed the NSE into three distinctive components representing the correlation, bias, and a measure of relative variability in the modeled and observed values. They showed that NSE relates to the ability of a model to reproduce the mean and variance of the hydrological observations, as well as the timing and shape of the time series. For these reasons, the use of NSE was preferred over other statistical measures to evaluate the accuracy of imputation.
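For concreteness, the NSE defined above can be computed with a few lines of NumPy:

```python
# Nash-Sutcliffe Efficiency: 1 minus the ratio of residual variance to
# the variance of the observations.
import numpy as np

def nse(y_obs, y_mod):
    y_obs = np.asarray(y_obs, dtype=float)
    y_mod = np.asarray(y_mod, dtype=float)
    residual = np.sum((y_obs - y_mod) ** 2)
    variance = np.sum((y_obs - y_obs.mean()) ** 2)
    return 1.0 - residual / variance
```

A perfect model gives NSE = 1, predicting the observed mean everywhere gives NSE = 0, and anything worse than the mean gives a negative value.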
We also evaluated the performance of sequential imputation for predicting dry events and extreme wet events. This is because spatial interpolation approaches tend to overpredict the number of dry events and underestimate the intensity of extreme wet events (Simolo et al., 2010;Teegavarapu, 2020). A common practice is to consider a day as a dry event if the daily precipitation does not exceed a threshold of 1 mm (Hertig et al., 2019). We considered a threshold of 2.54 mm since that is the resolution of our dataset. We considered a day as an extreme wet event if the daily precipitation exceeded the 95th percentile of the entire precipitation record for a given station (Zhai et al., 2005;Hertig et al., 2019). To evaluate prediction accuracy for dry events, we computed the percentage error, or the percentage of days that were correctly modeled as dry days. To evaluate prediction accuracy for extreme wet events, we computed NSE values exclusively for days that exceeded the 95th percentile of daily precipitation values; this enabled us to evaluate the predicted magnitude. In what follows, we use the acronym NSEE to denote NSE for extreme events.
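The two event-level criteria can be sketched as follows. The dry-day score here counts the fraction of observed dry days that the model misses, which is one reading of the percentage error described above; the function names and the toy data in the usage are ours:

```python
import numpy as np

# The paper uses 2.54 mm (the dataset resolution) as the dry-day cutoff;
# 1 mm is the more common convention in the literature.
DRY_THRESHOLD_MM = 2.54

def dry_day_error(observed, modeled):
    """Percentage of observed dry days that the model fails to reproduce."""
    observed, modeled = np.asarray(observed), np.asarray(modeled)
    dry = observed < DRY_THRESHOLD_MM
    correct = np.sum(modeled[dry] < DRY_THRESHOLD_MM)
    return 100.0 * (1.0 - correct / dry.sum())

def nsee(observed, modeled, percentile=95):
    """NSE computed only on days above the station's 95th-percentile precipitation."""
    observed, modeled = np.asarray(observed), np.asarray(modeled)
    extreme = observed > np.percentile(observed, percentile)
    o, m = observed[extreme], modeled[extreme]
    return 1.0 - np.sum((o - m) ** 2) / np.sum((o - o.mean()) ** 2)
```

Restricting NSE to the upper tail (NSEE) is what lets the magnitude of extreme wet events be scored separately from the bulk of the record.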
Numerical Experiments: Baseline Imputation
For our first set of numerical experiments, we conducted baseline imputations where each station in our dataset is modeled using the remaining stations as reference stations. Our dataset consists of 97 stations with complete records (as outlined in Figure 1 and Table 1). This set of numerical experiments is a test of the RF-based imputation method and provides an upper bound on the performance of the sequential imputation algorithm discussed in the section Sequential Imputation Algorithm. More importantly, it provides estimates of the variance for modeling each station, which will be used to evaluate the performance of the sequential imputation algorithm. Specifically, each station in our dataset was considered, in turn, to be a target station (or model output), with the rest of the stations acting as references (or input features). For each target station, 80% of the data were randomly selected for training, and the remaining 20% were used for testing. The test set effectively acted as missing data to be imputed. We conducted this exercise 15 times for each station. Prior to these runs, we also conducted an independent set of baseline runs to tune the hyperparameters of RF.
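The baseline procedure (repeated random 80/20 splits per target station, scoring each held-out 20% with NSE) can be sketched as below. For brevity the sketch fits an ordinary least-squares model in place of the paper's Random Forest, and the data are synthetic; the procedure itself (repeated splits yielding a µ_s and σ_s per station) is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for the precipitation matrix: 365 days x 4 reference stations,
# with a target station that is a noisy mix of the references.
refs = rng.gamma(shape=0.5, scale=5.0, size=(365, 4))
target = refs @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(0.0, 0.5, 365)

def nse(obs, mod):
    return 1.0 - np.sum((obs - mod) ** 2) / np.sum((obs - obs.mean()) ** 2)

def baseline_runs(X, y, n_runs=15, test_frac=0.2, seed=0):
    """Repeat the 80/20 split n_runs times and collect NSE on each held-out 20%.

    The paper fits a Random Forest at this step; least squares stands in here
    so the sketch needs only NumPy.
    """
    split_rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_runs):
        idx = split_rng.permutation(len(y))
        cut = int(len(y) * (1.0 - test_frac))
        tr, te = idx[:cut], idx[cut:]
        A_tr = np.c_[X[tr], np.ones(len(tr))]         # fit on the training 80%
        coef, *_ = np.linalg.lstsq(A_tr, y[tr], rcond=None)
        pred = np.c_[X[te], np.ones(len(te))] @ coef  # impute the "missing" 20%
        scores.append(nse(y[te], pred))
    return np.mean(scores), np.std(scores)            # mu_s and sigma_s

mu_s, sigma_s = baseline_runs(refs, target)
```

The pair (µ_s, σ_s) returned per station is exactly the statistic later used to decide whether a sequential-imputation gain is significant.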
Sequential Imputation Algorithm
ML-based spatial interpolation learns multi-variate relationships between the reference stations and the target station. Studies have noted that for imputation results to be reliable, data at reference stations should be strongly correlated to data at the target station (e.g., Teegavarapu and Chandramouli, 2005; Yozgatligil et al., 2013). However, ML-based spatial interpolation excludes stations that have incomplete records, even though they may be strongly correlated with the target station. Here, we develop a technique (i.e., sequential imputation) where stations that are imputed first are used as reference stations for imputing subsequent stations. In what follows, we refer to a station with a complete record as a "complete station," and a station with an incomplete record as an "incomplete station." The sequential imputation algorithm involves the following steps:

1. Initialize the set of reference stations with all complete stations.
2. Compute the correlation of each incomplete station with the reference stations.
3. Select the incomplete station with the highest aggregate correlation with the reference stations.
4. Impute the missing values of the selected station using the reference stations, and add the imputed station to the set of reference stations.
5. Repeat Steps 2-4 until all incomplete stations have been imputed.

In this study, correlation refers to Pearson's product-moment correlation coefficient, hereafter denoted by R. We chose this measure for its simplicity.
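The greedy loop described above — impute the incomplete station best correlated with the current references first, then promote it to a reference — can be sketched as follows. The station names, the NaN encoding of gaps, and the `impute_fn` callable (standing in for the paper's Random-Forest predictor) are all ours:

```python
import numpy as np

def aggregate_corr(y, refs, k=2):
    """S_k: sum of the k largest correlations with the current references.

    Correlations are computed on the rows where the target is observed.
    """
    mask = ~np.isnan(y)
    corrs = [np.corrcoef(y[mask], r[mask])[0, 1] for r in refs.T]
    return float(np.sum(sorted(corrs, reverse=True)[:k]))

def sequential_impute(incomplete, complete, impute_fn, k=2):
    """Impute stations one at a time, most-correlated first; each imputed
    station then joins the reference set for the remaining ones.

    `incomplete` is a dict {name: series-with-NaNs}; `impute_fn(y, refs)`
    returns the series with its NaNs filled.
    """
    refs = np.asarray(complete, dtype=float)
    order = []
    remaining = dict(incomplete)
    while remaining:
        # Step 3: rank the remaining stations by aggregate correlation S_k.
        best = max(remaining,
                   key=lambda s: aggregate_corr(remaining[s], refs, k))
        filled = impute_fn(remaining.pop(best), refs)
        refs = np.column_stack([refs, filled])  # imputed station becomes a reference
        order.append(best)
    return order, refs
```

Non-sequential imputation corresponds to skipping the `np.column_stack` promotion step, so every incomplete station is imputed from the original complete set only.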
Step 3 requires calculating an aggregate correlation of each incomplete station with the reference stations. This step assumes that the incomplete station having the highest aggregate correlation with the reference stations will have the most accurate imputation. We will verify this assumption in the Results section. To determine an appropriate aggregate correlation measure for Step 3, we implemented the following procedure:

i. Compute correlations of a target station with each of the reference stations.
ii. Sort the correlation values in descending order (highest to lowest).
iii. Calculate the cumulative sum of the sorted correlations. Denote each partial sum as S_i, where the subscript i refers to the first i sorted correlations; i varies from 1 to N, where N is the number of reference stations in the dataset.

Each S_i is an aggregate measure of correlation between a target station and the reference stations. For instance, S_2 refers to the sum of the first two sorted correlations, S_3 refers to the sum of the first three sorted correlations, and so on. We computed values of S_i for all 97 stations in our dataset and compared their values with the NSE determined from baseline imputations. The S_i having the highest correlation with NSE was picked to quantify aggregate correlation (for Step 3 of sequential imputation). For practical applications, the above procedure to determine an appropriate aggregate correlation may be implemented using non-sequential imputations. Note that other aggregate measures may be envisioned (e.g., mutual information, Spearman's correlation), but we sought to pick one that is relatively simple to keep our focus on the sequential imputation approach.
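The selection of i can be sketched as follows; `corr_matrix` (one row of reference correlations per station) and the vector of baseline NSE values are assumed inputs, and the function name is ours:

```python
import numpy as np

def pick_aggregate_measure(corr_matrix, baseline_nse, max_i=None):
    """Correlate each candidate S_i with baseline NSE across stations and
    return the i whose S_i tracks NSE best (1-based), plus all the scores.
    """
    corr_matrix = np.asarray(corr_matrix, dtype=float)
    n_refs = corr_matrix.shape[1]
    max_i = max_i or n_refs
    sorted_corrs = -np.sort(-corr_matrix, axis=1)   # descending per station
    partial_sums = np.cumsum(sorted_corrs, axis=1)  # S_1, S_2, ..., S_N per station
    scores = [float(np.corrcoef(partial_sums[:, i], baseline_nse)[0, 1])
              for i in range(max_i)]
    return int(np.argmax(scores)) + 1, scores
```

Running this over the baseline (or, in practice, non-sequential) results is what justified the paper's choice of S_2.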
Numerical Experiments: Sequential and Non-sequential Imputation
To investigate the benefits of sequential imputation, we divided our dataset of 97 complete stations into five (almost) evenly sized subsets and labeled them 1 through 5, as shown in Figure 3. The division into subsets was random. We then considered four different scenarios, each of which marked certain subsets as artificially incomplete. These are shown in Table 2. Precipitation records typically have missing values resulting from random mechanisms such as malfunctioning of equipment, network interruptions, and natural hazards. In other words, the probability that a precipitation value is missing does not depend on the value of precipitation itself. These random mechanisms also assume that the location or physiography of a weather station has no bearing on whether its record is complete or incomplete. This missing-at-random mechanism (Schafer and Graham, 2002) is reflected in our decision to create subsets randomly, and enables us to evaluate the sequential imputation approach in a more generic setting. Figures 4A-D show the division of our dataset into complete and artificially incomplete subsets for each of the scenarios listed in Table 2. Scenario 1 had 77 out of 97 records marked as artificially incomplete. Each subsequent scenario had fewer records marked as artificially incomplete, culminating with Scenario 4, which had only 19 such records. These scenarios were designed to investigate how the proportion of incomplete records affects imputation. We expected sequential imputation to be more beneficial as the proportion of incomplete records increased in the dataset. The stations belonging to the artificially incomplete subsets had 20% of their data marked as missing. Previous studies on imputation have considered two broad mechanisms for marking missing values.
One approach involves marking missing values randomly (e.g., Teegavarapu and Chandramouli, 2005; Kim and Pachepsky, 2010), while the other approach assumes that missing values form continuous gaps in time (e.g., Simolo et al., 2010; Yozgatligil et al., 2013). Since spatial interpolation assumes no temporal autocorrelation and is agnostic to the timestamp of the data, the mechanism for marking missing values is not relevant. For simplicity, we assumed that values were missing completely at random. The missing values were imputed using sequential and non-sequential imputations; comparing the two enabled us to highlight the benefits of sequential imputation. Specifically, we calculated the NSE corresponding to both sequential and non-sequential runs and computed the change (or increase) in NSE for each station s as follows:

Δ_NSE^s = NSE_seq^s − NSE_nonseq^s    (2)

To evaluate improvement in the prediction of extreme wet events, NSE in Equation 2 was replaced by NSEE. To evaluate improvement in the prediction of dry days, we computed the percentage error (i.e., the percentage of days that were correctly modeled as dry days) corresponding to both sequential and non-sequential runs. We then computed the change (or decrease) in percentage error (PE) as follows:

Δ_PE^s = PE_nonseq^s − PE_seq^s    (3)
Baseline Imputation
We performed baseline imputation to estimate statistics for evaluating the performance of the sequential imputation algorithm. Figures 5A-C show results of baseline imputations on missing data for all stations. Each station was modeled 15 times, with different splits of training and testing (missing) data, and the accuracy of each model for imputation was quantified by computing NSE on test data. This provided us with a distribution of NSE values (instead of just one value) for reconstructing each station, from which we estimated the mean µ and standard deviation σ of NSE for each station. For clarity, we denote the mean and standard deviation for a particular station s by µ_s and σ_s, respectively. Figure 5A compiles the µ_s for all the stations and shows them as a histogram. Approximately 95% of the stations have a mean NSE >0.5, and approximately two-thirds of the stations have a mean NSE >0.65. Figure 5B compiles the µ_s and σ_s for all stations and shows them as a scatter plot. We see that for each station, the NSE values have a small standard deviation relative to their mean. Figure 5C shows the geospatial distribution of µ_s. Figure 6 shows sample scatter plots of true and predicted precipitation on test data using baseline imputations. The dotted line shows the 45-degree line, which corresponds to a perfect match (i.e., NSE = 1) between true and predicted values. Note that our dataset has a resolution of 0.1 inch or 2.54 mm, which results in visible jumps in the abscissa (or "true values"). Subfigure (a) corresponds to a relatively high value of NSE (∼0.8), and subfigure (b) corresponds to a relatively low value of NSE (∼0.5). We see from these plots that for a high value of NSE, the relative scatter is smaller and closer to the dotted line.
Aggregate Correlation Between Target Incomplete Stations and Reference Stations
To identify an appropriate aggregate correlation measure for sequential imputation, we analyzed results of baseline imputations. Specifically, we computed values of S_i for all the target stations (i.e., S_i^s) and compared their values with the corresponding µ_s. Since strong correlations with reference stations lead to more accurate imputation, we expect S_i to be positively correlated with µ, regardless of the value of i. As defined in the section Sequential Imputation Algorithm, S_i for a target station is the sum of the first i sorted correlations with reference stations. For clarity, we denote by S_i^s the value of S_i for a particular target station s. Figure 7A shows a scatter plot of S_2^s and µ_s for all the stations in our dataset (as outlined in Figure 1 and Table 1). The correlation coefficient was 0.95. Similarly, we computed correlations between S_i^s and µ_s for all values of i [denoted as Corr(µ_s, S_i^s)], and plotted them in Figure 7B. These results show that the correlation between S_i^s and µ_s is higher for lower values of i. On the basis of Figure 7, we used S_2 as the similarity measure for sequential imputation. For practical applications, an appropriate similarity measure may be determined by analyzing results of non-sequential imputations.
Sequential Imputation
To implement the sequential imputation algorithm, the artificially incomplete subsets in each of the four scenarios were reconstructed using sequential and non-sequential imputation (see section Numerical Experiments: Sequential and Non-sequential Imputation). For a given station, sequential imputation was considered to have made a significant improvement if the corresponding Δ_NSE^s (i.e., the change in NSE for station s, computed using Equation 2) was greater than σ_s estimated from baseline runs. This was done to ensure that the change in NSE during sequential imputation could not be attributed to noise. Figures 8A-11A show the results of sequential imputation for Scenarios 1-4, respectively, with values of NSE for each station corresponding to sequential imputation. The values are plotted in the order of sequential imputation and are superimposed over the baseline values of NSE. The baseline NSE curve is centered at its mean, and its thickness represents the standard deviation (as shown in Figure 5B). The baseline curve provides an upper bound on the performance of the sequential imputation algorithm. Figures 8B-11B show the change in NSE for each increment in the sequence, when compared to a non-sequential imputation.
Results for the scenarios are summarized in Table 3. Figure 12 shows scatter plots of true and predicted precipitation on test data for a station that showed significant improvement during sequential imputation in Scenario 1. Subfigure (a) shows the scatter for non-sequential imputation, and subfigure (b) shows the scatter for sequential imputation. The dotted line shows the 45-degree line, which corresponds to a perfect match (i.e., NSE = 1) between true and predicted values. Recall that our dataset has a resolution of 0.1 inch or 2.54 mm, which results in visible jumps in the abscissa (or "true values"). Figures 13, 14 show the results of sequential imputation for predicting dry [subfigures (a)] and extreme wet [subfigures (b)] events for Scenarios 1, 2. The values are plotted in the order of sequential imputation and denote the change in PE or NSEE during sequential imputation when compared to a non-sequential imputation. The values are color-coded according to the results of Figures 8-11. The results for Scenarios 3, 4 are not shown for the sake of brevity. Figure 5A shows the mean NSE (µ_s) for all the stations as a histogram. As noted earlier, approximately 95% of the stations have µ_s >0.5, and approximately two-thirds of the stations have µ_s >0.65. Moriasi et al. (2007) reviewed over twenty studies related to watershed modeling and recommended that for a monthly time step, models can be judged as "satisfactory" if NSE is >0.5; a lower threshold was recommended for daily time steps. Therefore, our spatial interpolation technique for imputing missing values can be considered effective. The geospatial distribution of mean NSE in Figure 5C suggests that lower values of NSE tend to arise when there is a lower density of reference stations in close proximity.
This is because distant stations tend to experience precipitation patterns dissimilar to those of the target station, making them less likely to be reliable predictors of precipitation at the target station. This observation is why the inverse-distance weighting method is popular.

FIGURE 12 | Scatter plots of true and predicted precipitation on test data for a station that showed significant improvement during sequential imputation in Scenario 1: (A) non-sequential imputation, (B) sequential imputation. Jumps in true values are due to the coarse resolution (of 2.54 mm) of the dataset.
DISCUSSION
Although proximity of reference stations may be considered necessary for accurate imputation of precipitation values, it is not sufficient (e.g., Teegavarapu and Chandramouli, 2005). We show an example of this in Figure 15, which is a modified version of Figure 5C with an arrow marking a station. The marked station has a low NSE despite having reference stations that exist in close proximity. This is because the reference stations closest to it have significantly different values of elevation (for reference, the marked station has an elevation of 2,113 m, while the closest station has an elevation of 3,085 m). For accurate spatial interpolation at a target location, the reference stations should have physiographic similarity with the target. Factors influencing physiographic similarity are location, elevation, coastal proximity, topographic facet orientation, vertical atmospheric layer, topographic position, and orographic effectiveness of the terrain (Daly et al., 2008). Note that it is not known a priori how these different factors interact with each other and subsequently influence the physiographic properties of target and reference stations. Selecting reference stations based on predefined physiographic criteria may result in an unintentional exclusion of stations that have a high correlation with the target station. Overall, any predefined physiographic criterion will lack the flexibility in selecting stations and may not result in the best imputation performance. Figure 6 shows sample scatter plots of true and predicted precipitation on test data using baseline imputations. We see from these plots that for a high value of NSE, the relative scatter is smaller. In addition, we can also observe that even for a high value of NSE, there is a tendency to overpredict the number of dry days and underestimate the intensity of extreme wet events. For subfigure (a), the 95th percentile threshold is at 15.24 mm, and for subfigure (b), it is at 12.7 mm. 
Recall that we define events beyond the 95th percentile threshold as extreme wet events. Figures 8-11 demonstrate the benefits of sequential imputations when compared with non-sequential imputations.
In what follows, we will use the phrase "incomplete station" to refer to an artificially incomplete station. Figures 8-11 show that as the proportion of incomplete stations increases, there is a higher percentage of stations benefitting from sequential imputation.
NSE values that correspond to significant improvements (i.e., Δ_NSE^s > σ_s) tend to be higher than those that do not. A value of NSE that does not correspond to a significant improvement (i.e., Δ_NSE^s ≤ σ_s) implies that the previously imputed stations do not add extra information for spatial interpolation. This can be for two reasons: (i) the previously imputed stations are weakly correlated to the target station, or (ii) the previously imputed stations show strong correlations with the target station, but also show strong correlations with stations already in the complete subset. The second reason could apply if there is a cluster of stations that have similar physiography and experience similar precipitation patterns. Sequential imputation of stations in a cluster may not add new information if other stations in the cluster already have complete records. For instance, consider Scenario 4, where the proportion of incomplete stations is small and sequential imputation does not provide any benefits. Figure 4D shows that the incomplete stations in Scenario 4 are either isolated (and could be weakly correlated to other incomplete stations) or are part of a cluster with multiple complete records. Figures 3, 4 show that the stations in our dataset tend to form clusters; these figures help us understand why we observe a smaller percentage of stations benefitting from sequential imputation as the proportion of incomplete stations decreases. The clustering tendency implies that when there is a small subset of incomplete stations, there is a high probability that previously imputed stations do not add any extra information for spatial interpolation. Figure 12 shows scatter plots of true and predicted precipitation on test data for a station that showed significant improvement during sequential imputation in Scenario 1.
As noted for Figure 6 as well, these plots help visualize that as the NSE value increases during sequential imputation, the relative scatter decreases, demonstrating improved spatial interpolation. Figures 13, 14 demonstrate that the benefits of sequential imputation also carry over to predicting dry events and extreme events, despite the underlying limitations of spatial interpolation noted in the section Evaluating Imputation: Nash-Sutcliffe Efficiency (NSE). We observe a general trend that the improvements (i.e., values of Δ) tend to be higher for stations that correspond to significant overall improvements (i.e., Δ_NSE^s > σ_s), as discussed above.
Results for aggregate correlations (Figure 7B) show that the correlation between S_i (i.e., the partial sum of the first i sorted correlations) and NSE is high for lower values of i, and gets progressively weaker as i increases. This implies that for reliable imputation, having a few references that are strongly correlated is more important than having many references that are weakly correlated. This highlights why sequential imputation is a powerful technique, since leveraging even one incomplete station that is highly correlated to the target station can make a significant improvement. We illustrate this further in Figure 16, where we show values of S_2 for all stations at the time of sequential imputation in Scenarios 1 and 2. As expected, values of S_2 during sequential imputation are higher than those during non-sequential imputation, which is consistent with improved imputations.
It is important to note that stations imputed earlier during sequential imputation tend to have a higher NSE, indicating a more reliable imputation. NSE values tend to decrease along the imputation sequence. This is primarily a consequence of the order in which we pick stations for sequential imputation. Stations that are imputed earlier in the sequence have a higher aggregate correlation with reference datasets, implying that missing data would be modeled with greater accuracy. This can be verified by observing the trend of the baseline NSE curve in Figures 8A-11A, which also shows a reduction in NSE values along the imputation sequence. Stations that are imputed later in the sequence will tend to have a lower value of NSE because they have a lower baseline NSE to begin with; they could still exhibit significant improvements during sequential imputation when compared to non-sequential imputation (as shown in Figures 8B-10B).
Finally, we note that the performance of sequential imputation could be negatively impacted if the data gaps among stations occur synchronously. In particular, this could happen if a station earlier in the sequence was poorly imputed and has a high correlation with a station imputed later in the sequence. However, the proposed sequential approach can still be implemented, and this approach will outperform or equally match the non-sequential approach.
CONCLUSIONS
Spatial interpolation algorithms typically require reference stations that have complete records; therefore, stations with missing data or incomplete records are not used. This limitation is critical for machine learning algorithms where incomplete records preclude data-driven learning of multivariate relationships. In this study, we proposed a new algorithm, called the sequential imputation algorithm, for imputing missing time-series precipitation data. We hypothesized that stations with incomplete records contain information that can be used toward improving spatial interpolation. We confirmed this hypothesis by using the sequential imputation algorithm which was incorporated within a spatial interpolation method based on Random Forests.
We demonstrated the benefits of sequential imputation as compared to non-sequential imputation. Specifically, we showed that sequential imputation helps leverage other incomplete records for more reliable imputation. We observed that as the proportion of stations with incomplete records increases, there is a higher percentage of stations benefitting from sequential imputation. On the other hand, if the proportion of stations with incomplete records is small, there is a high probability that sequential imputation does not add any extra information for spatial interpolation. We also observed that the benefits of sequential imputation carry over to improved predictions of dry events and extreme events. Finally, results showed that for reliable imputation, having a few strongly correlated references is more important than having many references that are weakly correlated. This highlights why sequential imputation is a powerful technique, since including even one incomplete station that is highly correlated to the target station can make a significant improvement in imputation.
Although we demonstrated sequential imputation using Random Forests, it can be implemented using other ML-based and spatial interpolation methods found in the literature. Furthermore, we presented a new but generic algorithm for imputing missing records in daily precipitation time series that is potentially applicable to other meteorological variables as well.
DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. This data can be found here: https://www.wcc.nrcs.usda.gov.
AUTHOR CONTRIBUTIONS
UM and DD conceived and designed the study. UM acquired the data, developed the new algorithm, conducted all the numerical experiments, and analyzed the results. DD and JB provided input on methods and statistical analysis. BF provided input on data acquisition and time series analysis. DD helped analyze the results. SP and CS provided input on the conception of the study and were in charge of overall direction and planning. UM took the lead in writing the manuscript. All authors provided critical feedback and helped shape the research, analysis, and manuscript.
Hybrid Model: An Efficient Symmetric Multiprocessor Reference
Functional verification has become one of the main bottlenecks in the cost-effective design of embedded systems, particularly for symmetric multiprocessors. It is estimated that verification in its entirety accounts for up to 60% of design resources, including duration, computer resources, and total personnel. Simulation-based verification is a long-standing approach used to locate design errors in symmetric multiprocessor verification. The greatest challenge of simulation-based verification is the creation of the reference model of the symmetric multiprocessor. In this paper, we propose an efficient symmetric multiprocessor reference model, Hybrid Model, written with SystemC. SystemC can provide a high-level simulation environment and is faster than the traditional hardware description languages. Hybrid Model has been implemented in an efficient 32-bit symmetric multiprocessor verification. Experimental results show our proposed model is a fast, accurate, and efficient symmetric multiprocessor reference model and that it is able to help designers locate design errors easily and accurately.
Introduction
Recently, the symmetric multiprocessor (SMP) has become a leading trend in the development of advanced embedded systems. Meanwhile, with the rapid improvement of hardware manufacturing technologies and the help of computer-aided design (CAD) tools, SMP systems become more and more powerful and complex. As a result, the design verification of SMP systems takes up a large part of the total design period. The verification method directly determines the efficiency of SMP system verification and even the whole design cycle.
A variety of techniques have been deployed to efficiently and effectively detect design errors in SMP systems. These techniques can be divided into three categories: formal verification, simulation-based verification, and hardware emulation [1][2][3]. Various formal verification methodologies with the relevant environment setup have been proposed and used [4][5][6][7][8][9]. Formal verification, such as model checking and theorem proving, takes advantage of mathematical methods to judge whether the behavior of the design follows the rules instituted by designers. With the increasing size of system designs, the space needed by formal verification is beyond the ability of tools, and the process of formal verification is slow. As a result, formal verification is not appropriate for large-scale system verification, such as SMP system verification. Hardware emulation maps a gate-level model of the design onto a Field-Programmable Gate Array (FPGA) on the emulation system. It is much faster than simulation-based verification. The main disadvantage of hardware emulation is that it is difficult to debug when an error takes place. Simulation-based verification [10][11][12][13][14] is the most used method to verify the function of SMP systems. It generates instruction sequences that are then fed in parallel to the design under test (DUT) and its reference model. Any discrepancy between the two models indicates a design error. Simulation-based verification is able to locate errors easily and rapidly, and it is not limited by the size of the system. As a result, it is widely used in SMP system verification.
The major drawback of the mainstream simulation-based approach is the difficulty of creating an efficient reference model of the DUT in a short time. The success of simulation-based verification depends on the accuracy and the quality of the reference model in use. An efficient and accurate reference model is able to help designers locate errors easily and quickly. Many researchers have already proposed various reference models of the processor at presilicon. During simulation-based verification, most processors regard the simulator as the reference model. These simulators are normally obtained from an earlier stage in processor development, in which simulators are used for performance evaluation under benchmarks [15]. Some of these simulators cannot support SMP verification, such as SimpleScalar [16]. Some other simulators, such as MARSS [17] and PTLsim [18], can be implemented to verify SMP systems. However, these simulators are usually timing-accurate; it is time-consuming to use them as the reference model for design verification. In addition, the verification of these simulators themselves is often very complicated due to their architectural complexity [19]. As these models are usually timing-accurate, they are called timing-accurate models (TMs). The other type of reference model is the Instruction Set Simulator (ISS), which is function-accurate. An ISS only cares about the system function, and its architecture is simple. These simulators are relatively easy to verify due to their simpler architectures. This enables them to be used as reference models in the functional verification of single-core processors. However, as they have no ability to sequence the out-of-order load/store transactions among CPUs perfectly, they cannot be used to verify the SMP system efficiently. As these models are function-accurate, they are called function-accurate models (FMs). It is difficult to test the function of the SMP system by using the timing-accurate models and
function-accurate models efficiently. Such difficulties prompt us to create an efficient SMP reference model that is called Hybrid Model (HM). This model is simpler and faster than the timing-accurate model and more accurate than the function-accurate model. SystemC can be very effective in describing the system architecture and functionality to support high-level simulation. So SystemC can be used to obtain an efficient HM. When the reference model has been created, tests are fed in parallel to the DUT and its reference model to check design correctness.
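The parallel checking described here can be sketched as a lockstep loop. This is a Python stand-in purely for illustration (the paper's reference model is written in SystemC), and the `dut_step`/`ref_step` callables and the report fields are hypothetical:

```python
def lockstep_verify(program, dut_step, ref_step):
    """Run each instruction on the DUT and the reference model in lockstep;
    stop at the first discrepancy and report enough context to locate it."""
    for pc, instr in enumerate(program):
        dut_result = dut_step(instr)   # execution result from the DUT
        ref_result = ref_step(instr)   # simulation result from the reference model
        if dut_result != ref_result:
            # Any discrepancy between the two models indicates a design error.
            return {"pc": pc, "instr": instr,
                    "dut": dut_result, "ref": ref_result}
    return None  # no discrepancy: the test case passed

# Toy usage: a reference model that increments, and a DUT with a bug at operand 3.
ref_model = lambda x: x + 1
buggy_dut = lambda x: x + 1 if x != 3 else x + 2
report = lockstep_verify([1, 2, 3, 4], buggy_dut, ref_model)
```

Stopping at the first mismatch and returning the diverging instruction is what makes this style of verification convenient for localizing errors.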
In a simulation process, function coverage analysis is needed to check and show the quality of testing. It helps the verification team to check whether the function points that they want to simulate are covered during the testing phase. Sometimes, direct tests written by hand are needed, with the help of function coverage analysis, to cover the missing cases. The function coverage analysis is usually achieved from the RTL (Register Transfer Level) code and indicated by one signal or a set of signals. As the verification team is unfamiliar with the RTL code, it is difficult for them to observe the function points in the RTL code, especially if the signals needed by the function points do not exist in the RTL code and the verification team has to turn to the designers for help. It is then necessary for the designers to add these signals, which are useless to the system function. In this way, the function coverage analysis needs the interaction of the verification team and the designers, so it is error-prone. However, the verification team is familiar with the reference model that is created by them. So if they achieve the function coverage analysis from the reference model rather than from the RTL code, the function coverage result can be more accurate. And the direct tests are able to be written by the verification team more effectively.
The main contribution of our work is an efficient SMP reference model, written in SystemC. Acting as the SMP reference model, HM is simpler and faster than TM and more accurate than FM. The second contribution is a timing sequence called the Dependent Timing Sequence (DTS), which serves as the timing interface between the two models. The final contribution is that function coverage analysis can be obtained from HM, so the verification team can achieve a more accurate coverage result quickly and then write direct tests more effectively.
Hybrid Model
As shown in Figure 1, the Hybrid Model (HM) consists of a CPU Pipeline Model (CPM) and a Cache Coherence Model (CCM). A common SMP consists of CPU pipelines, Load Store Units (LSUs), caches, and the interconnection between CPUs. The interconnection is responsible for maintaining cache coherence between CPUs. The reference model of the CPU pipeline is the function-accurate CPM. As the interconnection, LSU, and cache are all involved in load/store transactions, together they are called the Load Store Module (LSM). The LSM is closely related to cache coherence, and its reference model is the timing-accurate CCM. CPM and CCM are connected through DTS. The whole SMP system can be verified efficiently through the cooperation of CPM and CCM.
In the validation process, when a test case is stressed on the SMP system and HM simultaneously, the SMP system executes and HM simulates the instructions in the test case one by one. For each instruction, the CPU pipeline executes it and its execution results are recorded. If the instruction is a load/store instruction, the CPU pipeline also sends it to the LSM, which executes it and produces its own execution results. In this way, the execution results of the whole SMP system are obtained. On the HM side, CPM first simulates the instruction and produces the simulation results of the CPU pipeline. If the instruction is a load/store instruction, CPM pipes its timing stream to CCM via DTS; the timing stream triggers CCM to simulate, and CCM produces the simulation results of the LSM. In this way, the simulation results of the whole SMP system are obtained. The tool then compares the execution results with the simulation results to check correctness. Once any discrepancy occurs, the tool stops the simulation immediately and collects information about the offending instruction, such as its execution and simulation results, for the verification team. These messages make it convenient for the verification team to locate errors.
CPU Pipeline Model.
CPM only cares about the function of the CPU pipeline rather than its timing information. As shown in Figure 2, the three important modules of CPM are the Loader, Decode, and Simulator. When CPM receives a test case to simulate, the Loader first fetches the instructions in the test case one by one from memory according to the program counter (PC). The Decode module is responsible for decoding and interpreting these instructions. The Simulator is implemented without a pipeline and simulates the instructions directly. Whether instructions in the SMP system are executed in order or out of order, they are retired one by one sequentially. As a result, the simulation results produced directly by the non-pipelined Simulator are the same as the execution results the processor obtains after passing through the complex CPU pipeline. Instructions that are not load/store transactions need not be sent to the LSM, as they are irrelevant to cache coherence; for these, all simulation results are produced by CPM and the simulation finishes after the register values are updated. Load/store transactions, in contrast, must not only go through the CPU pipeline but also be sent to the LSM. CPM pipes the timing stream of a load/store instruction to CCM via DTS once it has finished simulating the instruction, and the timing stream triggers CCM to simulate. The simulation of a load/store instruction is finished when CPM receives the response from CCM and the register values are updated. If an interrupt occurs in this process, CPM jumps to the interrupt handler. The simulation results of the CPU pipeline can be obtained rapidly and include key information about the SMP system, for example, the PC, the register values, and the state of the target processor. The tool compares these simulation results from CPM with the execution results from the DUT; any discrepancy indicates an error in the DUT. If no discrepancy occurs and the current instruction is not a load/store instruction, its simulation finishes successfully. If it is a load/store instruction, CPM sends its complete timing information to CCM via DTS. If an error occurs, the simulation stops at once and the simulation and execution results are reported directly to help the verification team locate and fix the error.
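The non-pipelined Simulator and the lockstep comparison described above can be sketched in plain C++ (the paper's implementation is in SystemC; the instruction format, register count, and function names below are all hypothetical simplifications):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical toy ISA: each instruction writes an immediate to a register.
struct Instr { int rd; uint32_t imm; bool is_load_store; };

struct CpmState {
    uint32_t pc = 0;
    uint32_t regs[4] = {0, 0, 0, 0};
};

// Non-pipelined Simulator: fetch (Loader), interpret (Decode), retire in
// program order. Because instructions retire one by one, the architectural
// state after each instruction matches what the pipelined DUT commits.
void cpm_step(CpmState& s, const std::vector<Instr>& prog) {
    const Instr& i = prog[s.pc];   // Loader: fetch by program counter
    s.regs[i.rd] = i.imm;          // Simulator: execute directly
    s.pc += 1;                     // retire, advance PC
}

// Lockstep check: compare CPM state against the DUT's committed state;
// any discrepancy indicates an error and stops the simulation.
bool results_match(const CpmState& cpm, const CpmState& dut) {
    if (cpm.pc != dut.pc) return false;
    for (int r = 0; r < 4; ++r)
        if (cpm.regs[r] != dut.regs[r]) return false;
    return true;
}
```

In the real flow the "DUT state" comes from the hardware's execution trace rather than a second copy of the model; the sketch only illustrates the compare step.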
Cache Coherence Model.
The other important part of HM is the Cache Coherence Model (CCM), which is timing-accurate. CCM is the reference model of the LSM. Being timing-accurate, CCM must care about the details of the LSM; however, only the details that affect the function points the verification team wants to exercise need to be modeled. The function points are defined manually by the verification team as a combination of the characteristics of the DUT and a series of events that must be verified. In practice, these events are identified by observing the signals and states of the DUT. After listing the events, the team serializes the closely related ones and outlines their features. Finally, events with close data relationships are placed in one process according to the serialized events and the data-structure relationships between them. These processes are implemented in SystemC and run in parallel, communicating with each other through FIFOs.
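The process-and-FIFO organization can be illustrated with a minimal stand-in in plain C++ (the real model uses SystemC processes and `sc_fifo` channels; `Event`, `producer`, and `consumer` here are hypothetical names):

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Closely related events are grouped into one "process", and processes
// communicate through FIFOs, as sc_fifo channels would in the real model.
struct Event { int addr; int data; };

// Producer process: serializes a group of related store events into the FIFO.
void producer(const std::vector<Event>& events, std::queue<Event>& fifo) {
    for (const Event& e : events) fifo.push(e);
}

// Consumer process: drains the FIFO in order, applying each event to a
// simple memory model. The FIFO preserves the order of related events.
void consumer(std::queue<Event>& fifo, std::vector<int>& mem) {
    while (!fifo.empty()) {
        Event e = fifo.front();
        fifo.pop();
        mem[e.addr] = e.data;
    }
}
```

The point of the grouping is exactly this ordering guarantee: events with close data relationships never race each other, because they flow through the same process and the same FIFO.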
Figure 3 shows the common block diagram of the interconnection, cache, and LSU of the SMP system. Load/store transactions are first held in the Request Buffer (RB) in the LSU. They are then sent to the cache to decide whether the cache lines they access are present. Next, they are sent to the appropriate buffers to wait for a chance to access the interconnection: store-miss transactions go to the Write Buffer (WB), load-miss transactions to the Load Buffer (LB), and store-hit transactions to the Store Queue (STQ). They are sent to the interconnection once they obtain permission. The Coherence Unit (COHU) maintains cache coherence between cores and handles transactions related to cache coherence; the address domains of these transactions are cacheable and shareable. Conversely, the Noncoherence Unit (NCOHU) deals with transactions unrelated to cache coherence, whose addresses fall in other domains. Because the LSM framework is complex, it would be difficult and time-consuming to build CCM identically to the hardware. Hardware structures that are unnecessary can be abstracted away, depending on their relationship to the function points the verification team wants to exercise: if abstracting a hardware structure has no effect on the function points or on accuracy, it can be removed from CCM. When the number of cores in the multiprocessor system changes, the designers modify the details of the interconnection according to the specification.
Because main memory has a low load/store speed, buffers are used in the NCOHU to hold load/store transactions unrelated to cache coherence. Software memory, however, is fast to access, so there is no need to create buffers for memory access in CCM. Similarly, more than one transaction may attempt to access the cache at once, whereas the cache is a one-port element, so the hardware needs buffers to hold the outstanding cache requests. CCM, in contrast, can accept and execute all such requests simultaneously, so no buffer is needed for cache requests in CCM. Abstracting these buffers away has no effect on the function and reduces the implementation time of CCM. Some hardware structures, however, cannot be abstracted: any discrepancy between the hardware and CCM there may cause fatal functional mistakes.
The interconnection usually works faster than the CPUs, so some of the transactions related to cache coherence must be held in the COHU. The COHU maintains the order of these transactions in order to achieve accurate execution results, and CCM has to handle these load/store transactions in the same way as the hardware to obtain the right simulation results. Figure 4 shows the different simulation results caused by different orders of store transactions. A certain cache line is present in both the CPU0 and CPU1 caches. At cycle A, CPU0 and CPU1 send store requests to the interconnection simultaneously. As shown in Figure 4(a), if the arbitration lets CPU0 execute its store before CPU1, the store transaction of CPU0 is accepted by the interconnection at cycle A, while the store transaction of CPU1 is not (indicated by the symbol *). The store transaction of CPU1 is then accepted at cycle B. At cycle C, the cache line in the CPU1 cache is invalidated by the interconnection and the state of CPU1's store transaction changes from store hit to store miss. At cycle D, the interconnection accepts the load transaction of CPU2, and the data CPU2 loads is 2. On the other hand, as shown in Figure 4(b), if the arbitration lets CPU1 execute its store before CPU0, the data CPU2 loads at cycle D would be 1. The data CPU2 gets depends entirely on the arbitration of the two store transactions of CPU0 and CPU1. Different execution orders lead to different results; hence, CCM has to be timing-accurate for these transactions to avoid errors.
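The order-dependence in the Figure 4 scenario can be demonstrated with a small C++ sketch, where a flat map stands in for the coherent memory system (a hypothetical simplification; `Store` and `observed_after` are illustrative names):

```cpp
#include <cassert>
#include <map>
#include <vector>

// CPU0 and CPU1 store different values to the same cache line; the value CPU2
// later loads depends only on the order chosen by the interconnection's
// arbitration.
struct Store { int cpu; int addr; int data; };

int observed_after(const std::vector<Store>& arbitrated_order, int load_addr) {
    std::map<int, int> mem;
    // Each accepted store invalidates the other cached copies, so a later
    // load observes the value written by the last store in arbitrated order.
    for (const Store& s : arbitrated_order) mem[s.addr] = s.data;
    return mem.count(load_addr) ? mem[load_addr] : 0;
}
```

This is why CCM must reproduce the hardware's arbitration cycle by cycle: a reference model that applies the two stores in the wrong order computes a legal but different memory state and would flag a false error.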
Figure 5 shows the block diagram of CCM. The function of its NCOHU is the same as that of the SMP, but the NCOHU of CCM contains no buffers. The COHU of CCM matches that of the SMP in both function and timing. No buffer is needed for the cache in CCM. When the number of cores in the multiprocessor system changes, the verification team modifies the details of CCM according to the hardware changes made by the designers. Hence, HM continues to perform well even when the number of cores grows into the hundreds.
Dependent Timing Sequence.
The Dependent Timing Sequence (DTS) is the timing interface between CPM and CCM. For every instruction, CPM simulation and CPU pipeline execution proceed simultaneously, and the tool continuously compares the simulation results with the execution results. If no error is found in the CPU pipeline and the current instruction is a load/store transaction, CPM delivers the timing information of this transaction to DTS. CPM knows all the timing information of the transaction except the cycle number that tells CCM when to begin its simulation; CPM obtains that from the execution results of the hardware. In this way, the complete timing sequence of the transaction is assembled and piped to DTS by CPM. DTS includes all the timing information CCM needs.
CCM then reads the timing information from DTS and begins its simulation. Figure 6 shows the timing information in a simulation process. Transaction type indicates the type of the transaction; transaction size indicates the number of bytes in the transaction; data is the data the CPU stores, with x indicating a load transaction; coherence indicates whether the transaction relates to cache coherence. As shown in Figure 6, at cycle number 21, CPU0 stores 1 into address 0x1fff fee8 and CPU1 stores 2 into the same address. If both store transactions are store hits, the situation is similar to the one shown in Figure 4.
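A DTS entry built from the fields described for Figure 6 might look as follows in C++ (the field names, the 4-byte size, and the helper function are illustrative assumptions, not taken from the paper):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Hypothetical layout for one DTS entry: cycle number, issuing CPU,
// transaction type, size, data, and coherence flag.
struct DtsEntry {
    uint64_t    cycle;     // when CCM should begin simulating this transaction
    int         cpu;       // issuing CPU
    std::string type;      // "load" or "store"
    int         size;      // byte amount of the transaction
    uint32_t    addr;      // target address
    uint32_t    data;      // stored data; unused ("x") for loads
    bool        coherent;  // whether the address domain is cacheable/shareable
};

// CPM knows everything about the transaction except the start cycle, which it
// copies from the DUT's execution trace before piping the entry into DTS.
DtsEntry make_dts_entry(uint64_t dut_cycle, int cpu, bool is_store, int size,
                        uint32_t addr, uint32_t data, bool coherent) {
    return {dut_cycle, cpu,
            is_store ? std::string("store") : std::string("load"),
            size, addr, is_store ? data : 0u, coherent};
}
```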
As different kinds of CCMs may need different timing information, the information in DTS should be adjusted to meet the timing requirements of CCM.
Function Coverage Analysis.
As HM is written by the verification team and includes only the relevant function points, the function coverage report can be obtained quickly. Moreover, the isolation between system design and verification achieved by the proposed function coverage analysis approach avoids many unnecessary errors in the function coverage report and makes the analysis more accurate.
Experimental Results
3.1. Verification Platform. We selected the CK810MP from Hangzhou C-SKY Microsystems Co., Ltd., to evaluate the feasibility of HM. As shown in Figure 7, the CK810MP system consists of several modified CK810 processors, an interconnection, and memory. CK810 is a high-performance 32-bit embedded processor based on the CSKY v2 instruction set; its LSU is modified to support cache coherence according to the specification. A number of CK810 processors are connected by a bus-based interconnection that is responsible for maintaining cache coherence and handling requests to memory. The data channel and instruction channel are separate to increase bandwidth. With the addition of memory, an efficient SMP, the CK810MP, is obtained. We carried out extensive experiments with a CK810 quad-processor system, as quad-processors are currently the mainstream of embedded systems such as mobile phones and personal computers; a quad-processor meets the performance requirements of most embedded applications and is a good tradeoff between performance and power. We chose SystemC as our programming language and created a timing-accurate model (TM), a function-accurate model (FM), and a Hybrid Model (HM) to act as reference models of the CK810 quad-processor. The FM, which is concerned only with design function and is easy to create, took a little more than 20 days to complete. The TM took almost 6 months, as it captures the majority of the details of the target CK810 quad-processor. The HM, which captures only part of those details, took almost a month. To compare our proposed model with state-of-the-art simulation models, we selected GEM5 [20], a popular open-source timing-accurate multiprocessor simulator, as an additional reference model of the target CK810 quad-processor. The GEM5 simulator supports a wide range of processor instruction set architectures (ISAs), such as Alpha, ARM, MIPS, PowerPC, and x86; however, it does not support the CSKY v2 instruction set. The CSKY v2 instruction set is much less complex than ARM, and similar instructions can be found in the ARM instruction set for most CSKY v2 instructions. Hence, we used CSKY-to-ARM instruction translation to make GEM5 support the CSKY v2 instruction set and act as a reference model of the CK810MP system. Figure 8 shows the verification platform of the CK810MP. DMA (Direct Memory Access) helps improve system performance, and the TLB (Translation Lookaside Buffer) translates virtual addresses to physical addresses. Each test was generated by a test generator based on random selection from more than 20 types of instructions supported by the CK810 core, such as math, logic, load, store, and jump. The generated tests were stressed on the CK810MP system and its four reference models, respectively, and function coverage analysis was performed to direct the verification effort. We obtained four comparison results by comparing the execution results of the CK810MP system with the simulation results of the four reference models, and errors in the CK810MP were discovered according to these comparisons.
Simulation Speed.
The test generator generated 4000 tests, each with 100 instructions, including the boot sequence used to initialize the CK810 core. In the first experiment, we compared the simulation speeds of the four models of the CK810MP. To obtain differentiated results, the 4000 tests were divided randomly into 10 test groups of various sizes, with the number of tests per group gradually increasing from the first group to the tenth. These test groups were then fed to the reference models of the CK810 quad-processor system to compare their simulation speeds. Figure 9 shows the average simulation time of the four reference models under these test groups. As shown in Figure 9, the simulation speeds of TM and GEM5 are similar and are the slowest among the four reference models, as both are timing-accurate. The simulation speed of FM is about 600 times that of TM and GEM5, making it the fastest of the four. The simulation speed of HM is about 30 times that of TM and GEM5. HM is slower than FM, but much faster than TM and GEM5.
We then focused on the functional model of the CPU pipeline in HM (denoted CP-FM) and the timing-accurate model of the CPU pipeline in TM (denoted CP-TM) to explain why HM has such a clear speed advantage over TM. The test groups were fed to CP-FM and CP-TM to compare their simulation speeds. Figure 10 shows the comparison: the simulation speed of CP-FM is about 720 times that of CP-TM. This means the speed advantage of HM comes from the functional model of the CPU pipeline.
Accuracy.
In the second experiment, we compared the accuracy of the four models, as indicated by the number of errors they found. The 4000 tests were divided randomly into 10 test groups of 400 tests each and stressed on the CK810MP system and its four reference models, respectively. Figure 11 shows the number of errors found per test group and the accumulated errors found by the four reference models. As shown in Figure 11, the abilities of TM and HM to find errors are similar and stronger than those of GEM5 and FM. The accumulated errors found by HM are about 1.5 times those found by GEM5 and about four times those found by FM; FM's ability to find errors is the weakest of the four. Because the GEM5 simulator was developed primarily to evaluate the performance of embedded systems, its details cannot match those of the CK810 quad-processor system exactly; therefore the accumulated errors found by GEM5 are far fewer than those found by TM and HM. As soon as the four reference models were written, they were put into operation in the CK810 quad-processor verification. At that point, however, these models were not yet the exact golden models defined by the specification, especially the TM. The CPU pipeline of the CK810 quad-processor is a complex dual-issue superscalar 10-stage pipeline, so some inconsistency between TM and the correct timing-accurate model was unavoidable at the beginning of simulation, and eliminating it took considerable time. Before TM becomes a correct timing-accurate model, it may produce wrong simulation results because of timing inconsistencies while the processor produces wrong execution results caused by a design error. If the wrong simulation results and the wrong execution results happen to match, TM wrongly concludes that the hardware is correct. Figure 12 shows a simple example, where the results of store-exclusive transactions are shown in brackets in red: Y indicates that the store-exclusive transaction succeeds, while N indicates that it fails. Figure 12(a) shows the correct execution of three exclusive transactions, consisting of one load-exclusive transaction and two store-exclusive transactions. The first store-exclusive transaction succeeds, because the exclusive transaction before it is a load-exclusive transaction to the same address; the second store-exclusive transaction fails. However, a design error in the CPU pipeline changes the address of the first store-exclusive transaction from address A to address C, as shown in Figure 12(b); as a result, the first store-exclusive transaction fails. Meanwhile, as shown in Figure 12(c), TM inverts the order of the two store-exclusive transactions, and both fail. TM therefore cannot find this design error of the CPU pipeline. HM, however, can discover it, because it simulates the three exclusive transactions in the right order and obtains the right simulation results as shown in Figure 12(a). Hence, HM found more design errors than TM during the first two test groups.
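The exclusive-access semantics behind Figure 12 can be sketched as a simple reservation monitor in C++ (a simplified single-monitor view for illustration, not the CK810's actual implementation):

```cpp
#include <cassert>
#include <cstdint>

// A store-exclusive succeeds only if the reservation set by the most recent
// load-exclusive targets the same address.
struct ExclusiveMonitor {
    bool     armed = false;
    uint32_t addr  = 0;

    void load_exclusive(uint32_t a) { armed = true; addr = a; }

    bool store_exclusive(uint32_t a) {
        bool ok = armed && addr == a;
        armed = false;  // any store-exclusive consumes the reservation
        return ok;
    }
};
```

Under these rules, the design error that retargets the first store-exclusive from address A to address C makes it fail, which is exactly the behavior a correctly ordered reference model will flag.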
As the simulation went on, all of the models were gradually corrected by the verification team into proper golden models. At this stage, if a timing error of the CPU pipeline does not influence the function of the CK810 quad-processor, TM can discover it but HM cannot. For example, the interval between the load-exclusive transaction and the first store-exclusive transaction in Figure 12 should be 10 cycles according to the plan; suppose the store-exclusive transaction executes 2 cycles early because of an inappropriate change to a request pointer, yet still succeeds. HM cannot discover this timing error, but TM can, because the interval between these two exclusive transactions is ten cycles in TM. As a result, TM found more design errors than HM during the last eight test groups.
Accordingly, the accumulated errors found by TM exceed those found by HM at the end of the simulation. However, the design errors that HM cannot discover have no effect on the function of the processor, and most of them can be discovered with the help of assertion checkers.
To compare the accuracy of the four reference models further, we analyzed the coverage of the function points we wanted to exercise. Figure 13 shows the coverage of function points for the four reference models. The interconnection, cache, and LSU have 253 function points. HM and TM cover essentially all of the function points, the GEM5 simulator covers part of them, and FM covers only a few.
Finally, we focused on CP-FM and CP-TM to compare their accuracy and to explain why HM has a clear speed advantage over TM while maintaining similar accuracy, using the test groups from Figure 11. Figure 14 shows the design errors found by the 10 test groups and the comparison of accumulated errors found by CP-FM and CP-TM. As shown in Figure 14, 60 to 70 percent of the design errors of the CK810 quad-processor are in the CPU pipeline, and the abilities of CP-FM and CP-TM to find errors are similar. The accumulated errors found by CP-TM are slightly more than those found by CP-FM, as CP-TM can find the timing errors of the CPU pipeline but CP-FM cannot; these errors, however, are not functional errors, and most of them can be discovered by assertion checkers. The experimental results in Figures 10 and 14 show that the function-accurate model of the CPU pipeline is much faster than the timing-accurate model while finding a similar number of accumulated errors. This means the advantages of HM come from the functional design of the CPU pipeline model.
Conclusion
An accurate and efficient symmetric multiprocessor reference model is proposed in this paper. Function coverage analysis can be obtained from it to help the verification team write direct tests more accurately. The reference model has been applied to the verification of a 32-bit symmetric multiprocessor. The experimental results show that the number of errors found by our proposed model is about 4 times that found by a function-accurate model, so our model performs better at finding errors. The simulation speed of our proposed model is about 30 times that of a timing-accurate model under the same conditions; compared with the timing-accurate model, our model is easier to create and faster, while their abilities to find errors are similar. The advantages of the proposed model come from the functional design of the CPU pipeline model. With its help, the verification team can locate design errors more quickly and verify the interconnection more efficiently, and the time for symmetric multiprocessor verification can be shortened significantly.
Figure 3: Block diagram of hardware. RB preserves load/store transactions and maintains their order; WB keeps store-miss transactions; LB preserves load-miss transactions; STQ keeps store-hit transactions. COHU maintains cache coherence between cores, and NCOHU deals with the transactions unrelated to cache coherence.
Figure 14: (a) Error number found by test groups; (b) accumulated errors found by the function-accurate model and timing-accurate model of the CPU pipeline.
"Computer Science",
"Engineering"
] |
Role of Nanotechnology in Cosmeceuticals: A Review of Recent Advances
Nanotechnology drives progress in research and development by increasing product efficacy through innovative solutions. To overcome certain drawbacks of traditional products, the application of nanotechnology is escalating in the area of cosmeceuticals. Cosmeceuticals are the fastest growing segment of the personal care industry, and their use has risen drastically over the years. Nanocosmeceuticals for skin, hair, nail, and lip care, targeting conditions such as wrinkles, photoaging, hyperpigmentation, dandruff, and hair damage, have come into widespread use. Novel nanocarriers like liposomes, niosomes, nanoemulsions, microemulsions, solid lipid nanoparticles, nanostructured lipid carriers, and nanospheres have replaced conventional delivery systems. These novel nanocarriers offer enhanced skin penetration, controlled and sustained drug release, higher stability, site-specific targeting, and high entrapment efficiency. However, nanotoxicological research has raised concern about the impact of the increased use of nanoparticles in cosmeceuticals, since nanoparticles may penetrate the skin and cause health hazards. This review highlights the various novel carriers used for the delivery of cosmeceuticals, their positive and negative aspects, marketed formulations, toxicity, and regulations of nanocosmeceuticals.
Introduction
Nanotechnology is regarded as one of the most promising technologies of the 21st century and is considered a great boon to the cosmetic industry. The term combines "technology" with the Greek prefix "nano," meaning dwarf; thus, nanotechnology is the science and technology used to develop or manipulate particles in the size range of 1 to 100 nm [1,2]. Since 1959, nanotechnology has emerged in fields such as engineering, physics, chemistry, biology, and materials science, and for roughly 40 years it has been applied to cosmetics, health products, and dermal preparations. Use of nanotechnology-like techniques has been recorded as early as 4000 BC among the Egyptians, Greeks, and Romans, in the preparation of hair dyes [3].
Raymond Reed, a founding member of the US Society of Cosmetic Chemists, coined the term "cosmeceuticals" in 1961. Cosmetics can be defined as products which enhance the appearance of the skin, intensify cleansing, and promote the beauty of the skin [4]. The use of cosmetics is attributed to the Egyptians around 4000 BC, and later the Greeks, Romans, Chinese, Japanese, and Americans began using cosmetics. In the late 19th century, women in western countries used cosmetics secretly, often prepared from household items; by the 20th century, cosmetics were used openly. In the 21st century, cosmetics are used enormously, and with the development of technology, innovative cosmetic formulations are being developed by incorporating the latest technologies [5,6].
Cosmeceuticals are cosmetic products which incorporate a biologically active ingredient with therapeutic benefit on the surface to which they are applied; they are marketed as cosmetics because they claim to enhance appearance [7]. Cosmeceuticals bridge the gap between pharmaceuticals and personal care products. Cosmeceutical products have measurable therapeutic efficacy on the skin; formulations have diversified from skin to body to hair, and they are used for the treatment of conditions such as hair damage, wrinkles, photoaging, skin dryness, dark spots, uneven complexion, and hyperpigmentation [8].
Cosmeceuticals are regarded as the fastest growing segment of the personal care industry, and the personal care market is increasing enormously [9]. Despite the enormous benefits of nanoparticles, little is known about their short-term and long-term health effects on the environment and organisms. Safety concerns have been raised owing to reported toxicity and the possible dangers of nanomaterials. The present article reviews the diverse classes of nanocarriers, such as liposomes, niosomes, solid lipid nanoparticles, nanostructured lipid carriers, and nanoemulsions, which are being used for the delivery of nanocosmeceuticals, along with marketed products and their positive and negative aspects.
Nanocosmeceuticals offer a number of advantages. They provide controlled release of active substances: drug release from carriers is governed by several factors, including physical or chemical interactions among the components, the composition of drug, polymer, and additives, their ratio, and the preparation method. They are used in hair care preparations, such as in the treatment of hair loss and to prevent hair from turning grey, for example, Identik Masque Floral Repair, Origem hair recycling shampoo, and Nirvel hair-loss control shampoo. Nanocosmeceuticals make fragrances last longer, for example, Allure Parfum and Allure Eau Parfum spray by Chanel. They make skin care formulations more effective and increase the efficacy of sunscreens by improving their UV protection. The very small particle size increases the surface area, which allows active transport of the active ingredients into the skin; occlusion enhances penetration and increases skin hydration. Nanocosmeceuticals have high entrapment efficiency and good sensorial properties and are more stable than conventional cosmetics. Most nanoparticles are suitable for delivery of both lipophilic and hydrophilic drugs. Nanomaterials are widely used in the preparation of antiwrinkle creams, moisturizing creams, skin whitening creams, hair repairing shampoos, conditioners, and hair serums [11,12]. Several positive aspects of nanocosmeceuticals are discussed in Figure 1 [13].
As with any technology, nanocosmeceuticals have negative as well as positive aspects. Some of the drawbacks are as follows. By producing large numbers of reactive oxygen species, nanoparticles may cause oxidative stress, inflammation, and damage to DNA, proteins, and membranes. Some ultrafine nanomaterials, such as carbon nanotubes, carbon-based fullerenes, TiO2, copper nanoparticles, and silver nanoparticles, may be toxic to human tissues and cells. The titanium dioxide found in sunscreens has been shown to damage DNA, RNA, and fats within cells. Regulatory agencies impose no stringent scrutiny for the approval and regulation of nanocosmeceuticals, and nanocosmeceuticals may be harmful to the environment as well. No clinical trials are required for their approval, raising concerns about toxicity after use [15,16]. Negative aspects of nanocosmeceuticals are discussed in Figure 2 [17].
Novel Nanocarriers for Cosmeceuticals
Nanocosmeceuticals are delivered using carrier technology, which offers an intelligent approach to the delivery of active ingredients. Various novel nanocarriers for the delivery of cosmeceuticals are depicted in Figure 3 [18,19].
Liposomes.
Liposomes are the most widely used carriers in cosmeceutical preparations. They are vesicular structures with an aqueous core enclosed by a hydrophobic lipid bilayer [20]. The main components of the liposome bilayer are phospholipids; these are GRAS (generally recognized as safe) ingredients, minimizing the risk of adverse effects [21]. Liposomes encapsulate the drug, protecting it from metabolic degradation, and release active ingredients in a controlled manner [22]. They are suitable for delivery of both hydrophobic and hydrophilic compounds. Their size varies from 20 nm to several micrometers, and they can have either multilamellar or unilamellar structure [23].
Antioxidants such as carotenoids, CoQ10, and lycopene and active components such as vitamins A, E, and K have been incorporated into liposomes to improve their physical and chemical stability when dispersed in water [24].
Phosphatidylcholine, the key component of liposomes, has been used in various skin care formulations such as moisturizing creams and in hair care products such as shampoos and conditioners because of its softening and conditioning properties. Owing to their biodegradable, nontoxic, and biocompatible nature, liposomes are used in a variety of cosmeceuticals to encapsulate the active moiety [25]. Vegetable phospholipids, including soya phospholipids, are widely used for topical applications in cosmetics and dermatology because of their high content of esterified essential fatty acids, their surface activity, and their ability to form liposomes; these phospholipids transport linoleic acid into the skin. Within a short period after the application of linoleic acid, the barrier function of the skin increases and water loss decreases [26,27]. In a clinical study, flexible liposomes were shown to reduce wrinkles, decrease efflorescence in acne treatment, and increase skin smoothness [28].
Liposomes are being developed for the delivery of fragrances, botanicals, and vitamins from anhydrous formulations, such as antiperspirants, body sprays, deodorants, and lipsticks. They are also being used in antiaging creams, deep moisturizing cream, sunscreen, beauty creams, and treatment of hair loss [29]. Several positive and negative aspects of liposomes are discussed in Figure 4 [30]. Various marketed formulations are given in Table 1 [31][32][33].
Niosomes.
Niosomes are vesicles with a bilayer structure formed by self-assembly of hydrated nonionic surfactants, with or without the incorporation of cholesterol or other lipids [34].
Niosomes can be multilamellar or unilamellar vesicles in which an aqueous solution of solute and lipophilic components is entirely enclosed by a membrane formed when the surfactant macromolecules organize as a bilayer [35]. Their size ranges from 100 nm to 2 μm in diameter: small unilamellar vesicles measure 0.025-0.05 μm, multilamellar vesicles >0.05 μm, and large unilamellar vesicles >0.10 μm [36]. Major niosome components used in their preparation include cholesterol and nonionic surfactants such as spans, tweens, brijs, alkyl amides, sorbitan esters, crown ethers, polyoxyethylene alkyl ethers, and steroid-linked surfactants [37].
Niosomes are suitable for delivery of both hydrophobic and hydrophilic compounds. As a novel drug delivery system, they can serve as vehicles for poorly absorbable drugs [38]. Encapsulation keeps the drug in the systemic circulation for a prolonged period and enhances penetration into the target tissue. Niosomes overcome problems associated with liposomes, such as instability, high price, and susceptibility to oxidation [39]. They are used in cosmetics and skin care applications because they enhance skin penetration of ingredients: they reversibly reduce the barrier resistance of the horny layer, allowing ingredients to reach the living tissues at a greater rate. They also increase the stability of entrapped ingredients and improve the bioavailability of poorly absorbed ones. Many factors affect niosome formation and influence shape and size, namely, the nature and structure of the surfactants, the nature of the encapsulated drug, membrane composition, and hydration temperature [40]. Specialized niosomes called proniosomes are nonionic-surfactant-based vesicles that are hydrated immediately before use to yield aqueous niosome dispersions; they are used to enhance drug delivery in addition to conventional niosomes [41,42]. Niosomes were first developed by L'Oreal in 1970 through research and development on synthetic liposomes, were patented by L'Oreal in 1987, and were commercialized under the Lancome brand. Various niosome cosmeceutical preparations are available on the market, including antiwrinkle creams, skin whitening and moisturizing creams, and hair repairing shampoos and conditioners [43]. Several advantages and disadvantages of niosomes are discussed in Figure 5 [44][45][46]. Various marketed products and uses are discussed in Table 2 [47][48][49].
Solid Lipid Nanoparticles.
An unconventional carrier system, the solid lipid nanoparticle (SLN), was developed at the beginning of the 1990s as an alternative to conventional lipoidal carriers such as emulsions and liposomes. Solid lipid nanoparticles range in size from 50 to 1000 nm [50].
They are composed of a single shell layer with an oily or lipoidal core. The drug is dispersed or dissolved in a solid core matrix of solid lipids or lipid mixtures, with the hydrophobic chains of phospholipids embedded in the fat matrix. SLNs are prepared from complex glyceride mixtures, purified triglycerides, and waxes; the liquid lipid is replaced by a solid lipid or a blend of solid lipids that is solid at body and room temperature and is stabilized by surfactants or polymers [51]. Lipophilic, hydrophilic, and poorly water-soluble active ingredients can be incorporated into SLNs, which consist of physiological and biocompatible lipids; using biocompatible compounds to prepare SLNs avoids toxicity problems [52]. The two principal preparation methods for SLNs are high-pressure homogenization and precipitation. Both controlled and sustained release of active ingredients are possible: an SLN with a drug-enriched core gives sustained release, whereas an SLN with a drug-enriched shell shows burst release [53,54].
SLNs are popular in cosmeceuticals and pharmaceuticals because they are composed of biodegradable, physiological lipids of low toxicity. Their small size ensures close contact with the stratum corneum, which increases the penetration of active ingredients through the skin [55]. SLNs are UV resistant and act as physical sunscreens on their own, so combining them with a molecular sunscreen can improve photoprotection while reducing side effects [56]. SLNs have been developed as carriers for 3,4,5-trimethoxybenzoylchitin and vitamin E sunscreens to enhance UV protection [57]. Their occlusive property can be used to increase skin hydration, that is, the water content of the skin [58]. Perfume formulations also use SLNs, since they delay the release of perfume over a longer period, and they are ideal for use in day creams as well [59,60].
SLNs have better stability against coalescence than liposomes because they are solid in nature; the reduced mobility of the active molecules prevents leakage from the carrier [61,62]. Benefits and drawbacks of SLNs are depicted in Figure 6 [50,[63][64][65]. Different marketed products and their uses are given in Table 3 [66,67].
Nanostructured Lipid Carriers (NLC).
Nanostructured lipid carriers are considered the second generation of lipid nanoparticles, developed to overcome the drawbacks associated with SLNs. NLC are prepared by blending solid lipids with spatially incompatible liquid lipids, yielding amorphous solids, at ratios preferably from 70:30 up to 99.9:0.1, that remain solid at body temperature [68,69]. Depending on formulation composition and production parameters, NLC are mainly of three structural types, namely, imperfect, amorphous, and multiple. The particle size ranges from 10 to 1000 nm [70].
NLC have attracted increasing scientific and commercial attention during the past few years because of their lower risk of systemic side effects. Compared with SLNs, NLC show higher drug-loading capacity for entrapped bioactive compounds because their distorted structure creates more space. Other limitations of SLNs, such as decreasing particle concentration and expulsion of the drug during storage, are also solved by the NLC formulation. NLC are formulated from biodegradable, physiological lipids of very low toxicity [71]. They have a modulated drug delivery profile, that is, a biphasic release pattern in which the drug is released initially in a burst followed by sustained release at a constant rate. They possess numerous advantageous features: increased skin hydration due to occlusive properties, small size ensuring close contact with the stratum corneum and thus increased drug penetration into the skin, stable drug incorporation during storage, and an enhanced UV protection system with reduced side effects [72].
In October 2005, the first products containing lipid nanoparticles, NanoRepair Q10 cream and NanoRepair Q10 serum (Dr. Rimpler GmbH, Germany), were introduced in the cosmetic market, offering increased skin penetration. Currently more than 30 cosmetic products containing NLC are available on the market [73,74]. Various positive aspects of NLC are depicted in Figure 7 [75,76]. A list of marketed products, manufacturers, and their uses is given in Table 4.
Nanoemulsions.
Nanoemulsions are dispersions in which an oil phase and a water phase are combined with a surfactant. Their structure can be manipulated through the method of preparation to give different types of products. Depending on composition, nanoemulsions can be oil-in-water, water-in-oil, or bicontinuous. Their droplet sizes range from 50 nm to 200 nm. The dispersed phase comprises small particles or droplets with very low oil/water interfacial tension [81]. They have a lipophilic core surrounded by a monomolecular layer of phospholipids, making them well suited to the delivery of lipophilic compounds. Problems such as sedimentation, coalescence, creaming, and flocculation, which affect macroemulsions, are not associated with nanoemulsions. Nanoemulsions are transparent or translucent and show low viscosity, high kinetic stability, high interfacial area, and high solubilization capacity [82].
Nanoemulsions are widely used as medium for the controlled delivery of various cosmeceuticals like deodorants, sunscreens, shampoos, lotions, nail enamels, conditioners, and hair serums [83].
In cosmetic formulations, nanoemulsions provide rapid penetration and active transport of active ingredients and hydrate the skin. Merits of nanoemulsions are shown in Figure 8 [84][85][86][87]. Various marketed product names, manufacturers, and uses are given in Table 5 [88-90].
2.6. Gold Nanoparticles. Nanogold, or gold nanoparticles, ranges in size from 5 nm to 400 nm. Interparticle interactions and the assembly of gold nanoparticles play an important role in determining their properties [91]. They occur in different shapes, such as nanospheres, nanoshells, nanoclusters, nanorods, nanostars, nanocubes, branched particles, and nanotriangles. The shape, size, dielectric properties, and environmental conditions of gold nanoparticles strongly affect their resonance frequency. The color of nanogold ranges from red to purple to blue and almost black upon aggregation [92]. Gold nanoparticles are inert, highly stable, biocompatible, and noncytotoxic. Nanogold is very stable in liquid or dried form, does not bleach after staining on membranes, and is available in conjugated and unconjugated forms [93]. Gold nanoparticles have high drug-loading capacity and, owing to their small size, large surface area, shape, and crystallinity, can easily travel into the target cell [94].
Gold nanoparticles have been studied as a valuable material in the cosmeceutical industry owing to their strong antifungal and antibacterial properties. They are used in a variety of cosmeceutical products, such as creams, lotions, face packs, deodorants, and antiaging creams. Cosmetic giants such as L'Oreal and L'Core Paris use gold nanoparticles to manufacture more effective creams and lotions [95]. The main properties of nanogold in beauty care include acceleration of blood circulation, anti-inflammatory and antiseptic activity, improved firmness and elasticity of the skin, delay of the aging process, and vitalization of skin metabolism [96]. Merits of gold nanoparticles are depicted in Figure 9 [97][98][99]. A list of marketed product names, manufacturers, and uses is given in Table 6 [100-104].
2.7. Nanospheres. Nanospheres are spherical particles with a core-shell structure, ranging in size from 10 to 200 nm in diameter. In nanospheres, the drug is entrapped, dissolved, attached, or encapsulated in the polymer matrix, which protects it from chemical and enzymatic degradation; the drug is physically and uniformly dispersed in the polymer matrix system. Nanospheres can be crystalline or amorphous [105]. This system has great potential, as it can convert poorly absorbed, labile, or poorly soluble biologically active substances into readily deliverable drugs. The core of nanospheres can enclose diverse enzymes, genes, and drugs [106].
Nanospheres can be divided into two categories: biodegradable and nonbiodegradable. Biodegradable nanospheres include gelatin, modified starch, albumin, and polylactic acid nanospheres, polylactic acid being the only polymer approved for this use.
In cosmetics, nanospheres are used in skin care products to deliver active ingredients into deep layer of the skin and deliver their beneficial effects to the affected area of the skin more precisely and efficiently. These microscopic fragments play a favorable role in protection against actinic aging. Use of nanospheres is increasing in the field of cosmetics especially in skin care products like antiwrinkle creams, moisturizing creams, and antiacne creams [107]. Pictorial presentation of favorable aspects of nanospheres is depicted in Figure 10 [14]. Marketed products name, manufacturers, and their uses are given in Table 7 [10].
Dendrimers.
The term "dendrimer" derives from two Greek words: "dendron," meaning tree, and "meros," meaning part. Dendrimers are highly branched, unimolecular, globular, micellar, multivalent nanostructures whose synthesis theoretically affords monodisperse compounds. A dendrimer is typically built from a core onto which one or more successive series of branches are grafted in an arborescent way, and it often adopts a spherical three-dimensional morphology [108]. The generation of a dendrimer is determined by the total number of series of branches: a dendrimer with one series of branches is first generation, one with two series is second generation, and so on. Dendrimers are extremely small, with diameters in the range of 2-20 nm [109]. Other properties such as monodispersity, polyvalence, and stability make them ideal carriers for drug delivery with precision and selectivity. Terminal groups can be modified to attach biologically active substances for targeting. Dendrimers provide controlled release from the inner core; drugs can be incorporated in the interior as well as attached on the surface [110]. Dendrimers are a new class of macromolecular architecture and are also being used as nanotechnology-based cosmeceuticals for applications in hair, skin, and nail care. They have utility in cosmetic products such as shampoos, sunscreens, hair-styling gels, and antiacne products [111]. Companies such as L'Oreal, The Dow Company, Wella, and Unilever hold several patents on the application of dendrimers in cosmeceuticals. Advantages of dendrimers are represented in Figure 11.
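Because each successive series of branches multiplies every terminal point, the number of surface groups of an ideal dendrimer grows geometrically with generation. The sketch below illustrates this count; the function name and the PAMAM-like default parameters (core multiplicity 4, branch multiplicity 2) are our own illustrative assumptions, not from the source.

```python
def terminal_groups(generation, core_multiplicity=4, branch_multiplicity=2):
    """Surface-group count of an ideal (defect-free) dendrimer.

    Every generation multiplies each existing branch tip by the
    branching multiplicity, so N = Nc * Nb**G (geometric growth).
    """
    return core_multiplicity * branch_multiplicity ** generation

# Generations 1-4 with the assumed PAMAM-like parameters:
print([terminal_groups(g) for g in range(1, 5)])  # [8, 16, 32, 64]
```

This geometric doubling is why higher-generation dendrimers present so many modifiable terminal groups for attaching active substances.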
Carbon Nanotubes.
In the field of nanotechnology, carbon nanotubes represent one of the most unique inventions. Carbon nanotubes (CNTs) can be described as rolled graphene with sp2 hybridization. They are seamless, cylindrical hollow fibers whose walls are formed by a hexagonal graphene lattice of carbon rolled at specific, discrete "chiral" angles. Individual carbon nanotubes align themselves naturally into "ropes" held together by pi-stacking. Their diameter ranges from 0.7 to 50 nm, with lengths of tens of microns [112,113]. Carbon nanotubes are extremely light in weight. They come in three types: single-walled, double-walled, and multiwalled CNTs. Single-walled CNTs are made of a single graphene sheet rolled upon itself, with a diameter of 1-2 nm; double-walled CNTs consist of two concentric carbon nanotubes; and multiwalled CNTs consist of multiple layers of graphene tubes with diameters ranging from 2 to 50 nm [114]. Major production methods for carbon nanotubes include arc discharge, laser ablation, chemical vapor deposition, flame synthesis, and the silane solution method [115]. Various patents on carbon nanoparticles have been filed in the cosmeceutical field, such as hair coloring and cosmetic compositions comprising carbon nanotubes and peptide-based carbon nanotube hair colorants and their use in hair colorant and cosmetic compositions [116,117].
Polymersomes.
Polymersomes are artificial vesicles that enclose a central aqueous cavity and are composed of self-assembled block copolymer amphiphiles. They have a hydrophilic inner core and a lipophilic bilayer, so they can carry both lipophilic and hydrophilic drugs, and the hydrophobic core provides a protein-friendly environment [118]. Polymersomes are biologically stable and highly versatile; their drug encapsulation and release capabilities can be modulated by using block copolymers that are biodegradable or stimuli-responsive. Their radius ranges from 50 nm to 5 μm or more [119]. Polymersomes can encapsulate and protect sensitive molecules, namely, drugs, proteins, peptides, enzymes, and DNA and RNA fragments. Synthetic block copolymers are generally used for their preparation; varying the composition and molecular weight of these polymers allows the preparation of polymersomes with different properties, responsiveness to stimuli, membrane thicknesses, and permeabilities [120,121]. Because of their thick and rigid bilayer, they offer more stability than liposomes [122]. Polymersomes are being investigated in the cosmeceutical industry, and various patents have been filed for their use, including one using polymersomes to improve skin elasticity and another using them for skin cell activation energy enhancement [123,124].
Cubosomes.
Cubosomes are advanced nanostructured particles: discrete, submicron, self-assembled liquid crystalline particles of surfactants combined with water in a proper ratio, which gives them unique properties. They form from self-assembled structures of aqueous lipid and surfactant systems mixed with water at a certain ratio [125]. Cubosomes are a bicontinuous cubic liquid phase enclosing two separate water regions divided by surfactant-controlled bilayers wrapped into a three-dimensional, periodic, minimal surface, forming a tightly packed structure [126]. They have a honeycombed (cavernous) structure and appear as slightly spherical dots, with sizes ranging from 10 to 500 nm in diameter. They can encapsulate hydrophilic, hydrophobic, and amphiphilic substances. Cubosomes have relatively simple preparation methods; they provide bioactive agents with controlled and targeted release, possess lipid biodegradability, and have a high internal surface area with different drug-loading modalities [127,128]. Cubosomes are an attractive choice for cosmeceuticals, and for this reason a number of cosmetic giants are investigating them. Various patents have been filed regarding their cosmetic applications.
Major Classes in Nanocosmeceuticals
Cosmeceuticals are regarded as the fastest growing segment of the personal care industry. A plethora of nanocosmeceuticals are incorporated into nail, hair, lip, and skin care. Major classes of nanocosmeceuticals are depicted in Figure 12 [48].
Skin Care.
Cosmeceutical skin care products improve skin texture and function by stimulating collagen growth and combating the harmful effects of free radicals. They keep the skin healthier by maintaining the keratin structure in good condition. In sunscreen products, zinc oxide and titanium dioxide nanoparticles are the most effective minerals: they protect the skin by penetrating into its deep layers and make the product less greasy, less smelly, and transparent [129]. SLNs, nanoemulsions, liposomes, and niosomes are extensively used in moisturizing formulations, as they form a thin film of humectants and retain moisture for a prolonged span. Marketed antiaging nanocosmeceutical products incorporating nanocapsules, liposomes, nanosomes, and nanospheres offer benefits such as collagen renewal, skin rejuvenation, and firming and lifting of the skin [130].
Hair Care.
Hair nanocosmeceutical products include shampoos, conditioning agents, hair growth stimulants, and coloring and styling products. The intrinsic properties and unique size of nanoparticles enable targeting of the hair follicle and shaft and delivery of an increased quantity of active ingredient. Nanoparticles incorporated in shampoos seal moisture within the cuticles by forming a protective film and optimizing residence time in contact with the scalp and hair follicles [131]. Conditioning nanocosmeceutical agents serve to impart softness, shine, silkiness, and gloss and to aid detangling. Novel carriers such as niosomes, microemulsions, nanoemulsions, nanospheres, and liposomes mainly repair damaged cuticles, restore texture and gloss, and make hair nongreasy, shiny, and less brittle [132].
Lip Care.
Nanocosmeceutical lip care products comprise lipsticks, lip balms, lip glosses, and lip volumizers. A variety of nanoparticles can be incorporated into lip gloss and lipstick to soften the lips by impeding transepidermal water loss [20], to prevent pigments from migrating off the lips, and to maintain color for a longer period. Lip volumizers containing liposomes increase lip volume, hydrate and outline the lips, and fill wrinkles in the lip contour [133].
Nail Care.
Nanocosmeceutical nail care products are superior to conventional products. Nail paints based on nanotechnology have merits such as improved toughness, fast drying, durability, chip resistance, and ease of application due to elasticity [134]. New strategies, such as incorporating silver and metal oxide nanoparticles with antifungal properties into nail paints, are used for the treatment of fungal infections of the toenails [135].
Toxicity of Nanoparticles Used in Cosmeceuticals
The number of workers and consumers exposed to nanoparticles is escalating because of the increasing production and application of the wide diversity of cosmeceutical products containing nanomaterials. Despite their huge potential benefit, little is known about their short-term and long-term health effects on the environment and on organisms. Health hazards, product functionality, and environmental concerns may impose constraints. Concerns have been raised about the possible dangers of skin penetration by nanomaterials after their application to the skin [136]. The toxicity of nanoparticles depends immensely on a variety of factors, such as surface properties, coating, structure, size, and ability to aggregate, and these factors can be altered and manipulated in the manufacturing process. Nanoparticles of poor solubility have been shown to cause cancer and can exhibit more pronounced toxicity [137]. A health hazard may arise from the surface area of nanoparticles compared with the same mass concentration of larger particles. Toxicity also depends on the chemical composition of the nanoparticles absorbed through the skin [138]. There is a relationship between particle size and toxicity: the smaller the nanoparticle, the greater the surface-area-to-volume ratio and hence the higher the chemical and biological reactivity. The hazard nanoparticles pose to humans depends on the degree of exposure and the route by which they access the body: inhalation, ingestion, and dermal routes are the possible routes of human exposure [139]. Routes of exposure to nanoparticles are given in Figure 13.
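The inverse relationship between particle size and surface-area-to-volume ratio follows directly from sphere geometry: SA/V = 6/d, so halving the diameter doubles the ratio. A minimal illustrative sketch (the function name is ours, not from the source):

```python
def sa_to_volume_ratio(diameter_nm):
    """Surface-area-to-volume ratio (per nm) of a sphere.

    SA/V = (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r = 6/d,
    so smaller particles expose far more surface per unit volume.
    """
    return 6.0 / diameter_nm

# A 5 nm particle has 100x the SA/V of a 500 nm particle.
for d in (500, 50, 5):
    print(f"d = {d:3d} nm -> SA/V = {sa_to_volume_ratio(d):.3f} per nm")
```

This hundredfold increase in exposed surface per unit volume is one quantitative reason equal mass doses of nanoscale and bulk material can behave so differently in toxicity studies.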
Inhalation.
According to the National Institute of Occupational Health and Safety, the most common route of exposure to airborne nanoparticles is inhalation. Consumers may inhale nanoparticles while using products such as perfumes, powders, and aerosols, and workers can be exposed during production. Evidence from animal studies suggests that the vast majority of inhaled nanoparticles enter the pulmonary tract, and some may travel via nasal nerves to the brain and gain access to other organs via the blood [140]. A silicon dioxide inhalation toxicity study suggests that particles of 1-5 nm produce a greater toxicological response than an equivalent dose of 10 nm particles. Experiments on carbon nanotubes have revealed that chronic exposure causes interstitial inflammation and epithelioid granulomatous lesions in the lungs. Some carbon-based fullerenes may oxidize cells or be hazardous when inhaled [141]. Pulmonary administration of ultrafine TiO2 particles resulted in more lung injury than fine TiO2 particles. Gold nanoparticles of 2, 40, and 100 nm administered by the intratracheal route were found in the liver and in macrophages. It has been demonstrated that exposure to 20 nm TiO2 even at low doses causes complete destruction of DNA, whereas 500 nm TiO2 has little ability to break DNA strands [142].
4.2. Ingestion. Nanomaterials may be ingested through unintentional or intentional hand-to-mouth transfer. Nanoparticles can also be ingested from cosmeceuticals applied to the lips or mouth, such as lipsticks, lip balms, and lip glosses [143].
According to studies, after ingestion nanomaterials rapidly pass out of the body, but some amount may be taken up and migrate to the organs. Studies on layers of pig skin show that certain nanomaterials can penetrate the skin layers within 24 hours of exposure [144]. When mice were orally administered zinc oxide nanoparticles of 20 nm and 120 nm at different doses, the spleen, heart, liver, bones, and pancreas became target organs. Copper nanoparticles are found in a variety of commercially available cosmeceuticals; mice exposed to copper nanoparticles exhibited toxicological effects and heavy injuries to internal organs [145]. Silver nanoparticles, long used for their antimicrobial activity in wound dressings and antimicrobial formulations, are now being used in cosmeceuticals such as soaps, face creams, and toothpaste. However, the silver concentration that is lethal to bacteria is the same concentration that is lethal to fibroblasts and keratinocytes [146]. Studies in rats show that exposure of rat neuronal cells to silver nanoparticles led to decreased cell size and irregular shape, and even low concentrations of silver nanoparticles drastically reduced mitochondrial function and viability in mouse germline stem cells. When mice ingested gold nanoparticles of 13.5 nm, significant decreases in RBCs, body weight, and spleen index were observed [147].
Dermal Routes.
Intercellular, transcellular, and transfollicular are the three pathways by which infiltration across the skin occurs. On dermal exposure, smaller particles (<10 nm) penetrate more easily and are more hazardous than larger ones (>30 nm). Nanoparticle penetration may also be affected by skin barrier alterations such as scrapes, wounds, and dermatitis [148]. Prolonged erythema, eschar formation, and oedema have been reported with nanoparticles smaller than 10 nm. Fullerenes are currently used in cosmeceuticals such as moisturizers and face creams, but their toxicity remains poorly understood. A report by Professor Robert F. identified that face creams incorporating fullerenes cause damage in the brain of fish and have toxic effects on human liver cells [149]. Some studies demonstrated that fullerene-based peptides can penetrate intact skin and that mechanical stress can ease their traversal into the dermis. Quantum dots delivered intradermally can penetrate regional lymph nodes and lymphatics. Studies have shown that engineered nanoparticles such as single- and multiwall carbon nanotubes, surface-coated quantum dots, and nanoscale titania can alter gene or protein expression and have lethal effects on epidermal keratinocytes and fibroblasts [150]. There are currently concerns regarding the health, safety, and environmental impact of titanium dioxide and zinc oxide nanoparticles in sunscreens. Their greater surface area, greater chemical reactivity, and smaller size increase the production of reactive oxygen species (ROS), including free radicals; free radical and ROS production is the primary mechanism of nanoparticle toxicity.
Titanium dioxide and zinc oxide generate ROS and free radicals when exposed to ultraviolet (UV) radiation, which can promote inflammation and oxidative stress and significantly damage membranes, proteins, RNA, DNA, and fats within cells [151]. Research on TiO2 nanoparticle toxicity demonstrated that when these nanoparticles were given subcutaneously to pregnant mice, they were transferred to the offspring, causing reduced sperm production in male offspring as well as brain damage. Cobalt-chromium nanoparticles can cross the skin barrier and damage human fibroblasts [152].
Global Scenario of Nanocosmeceuticals
Drugs are subjected to stringent scrutiny requirements imposed by the FDA for their approval, but there are no such requirements for cosmetics. Cosmeceuticals are products on the borderline between cosmetics and pharmaceuticals. The Federal Food, Drug and Cosmetic Act and the FDA do not recognize the term "cosmeceutical," so these products enjoy both aesthetic and functional benefits without crossing over into becoming over-the-counter drugs [153]. Many cosmeceuticals alter physiological processes in the skin, but manufacturers avoid holding clinical trials and making specific claims so as not to subject their products to the FDA's expensive and lengthy approval process. The cosmetic industry is thus facing new and unfamiliar challenges [154].
Some jurisdictions have created an extra category to accommodate cosmeceuticals or borderline products.
Japan. Products that fall between cosmetics and drugs are called "quasi-drugs." Ingredients must be preapproved before being included in quasi-drugs, and the products themselves require preapproval before being sold in the market [155].
Korea. Cosmeceuticals are classified as "functional cosmetics" by the Korea Food and Drug Administration (KFDA), which is responsible for the safety and evaluation of functional cosmetics [156].
Thailand. Based on the ingredients used, cosmeceuticals are classified as "controlled cosmetics." Controlled cosmetics containing controlled ingredients require notification to the Thai FDA before being marketed in Thailand.
New Zealand. Cosmeceuticals are accommodated in a category called "related products." Australia. In Australia, goods are categorized on the basis of product claims and composition; borderline products are classified as "therapeutic goods." Only approved ingredients may be used in the manufacture of these products, which must be registered on the Australian Register of Therapeutic Goods [157].
Canada. Cosmeceuticals are termed "dermo-cosmetics" in Canada. They are not recognized as an independent cosmetic category; instead, Canadian health authorities have created Category V to accommodate products falling under both cosmetics and drugs. These products face comparatively light regulatory requirements.
USA. There are three relevant categories in the US, namely cosmetics, drugs, and OTC drugs, and there is no legal definition of cosmeceuticals under the USFDA. Classification by the USFDA depends on the claims made for the product [158].
European Union. In the European Union, cosmetics are regulated under Cosmetic Directive 76/768/EEC. The EU has no category called cosmeceuticals, but it has stringent laws under which any claim made by a company must be supported by submitted proof. Under new EU regulation, manufacturers must list the nanoparticles contained in any product marketed within the European Union. The cosmetic regulation states that any nanomaterial ingredient must be clearly identified, with the word "nano" inserted in brackets after the ingredient name [159,160].
China. Cosmeceuticals are regarded as "cosmetics for special use." According to the China Food and Drug Administration (CFDA), all foreign cosmetic manufacturers must complete a safety and health quality test and obtain a hygiene permit before selling a product on the Chinese market. Special-use cosmetics must undergo safety and health quality testing, including microbiology, toxicology, chronic toxicity, and carcinogenicity tests, as well as safe-for-human-use trials. Imported cosmetics are classified into two categories, ordinary cosmetics and special-use cosmetics, each requiring a different type of license from the State Food and Drug Administration (SFDA). To market cosmetics, a hygiene license or record-keeping certificate must be obtained from the Health Administration Department of the State Council (SFDA) [161].
If the FDA finds a safety issue with any cosmetic or ingredient, including nanoparticles, it has the authority to prohibit the sale and manufacture of the product, along with other options such as banning ingredients, seizing unsafe products, issuing warning letters, and requiring mandatory warning labels, up to a worldwide ban of the product. The US Environmental Protection Agency (EPA) has issued a new research strategy to proactively examine the environmental and human-health impacts of the nanoparticles used in cosmetics, sunscreens, paints, and so on [162]. Under this agency, the focus is on research into seven types of manufactured nanomaterials: titanium dioxide, silver nanoparticles, nanotubes, cerium oxide, fullerenes, and zero-valent [163].
The Scientific Committee on Consumer Products (SCCP) has raised concerns over the topical use of insoluble nanoparticles in cosmetics for toxicity reasons. The Royal Society, the world's oldest scientific organization, has also questioned whether nanoparticles could enter the bloodstream, be taken up by cells, and exert effects [164]. It has likewise called for more research in this field to address the chronic effects which may arise from long-term use by people all over the globe [165].
Conclusion
Nanotechnology is considered one of the most promising and revolutionary fields. Over the past dozen years, nanotechnology has been widely used, to great benefit, in dermatology, cosmetics, and biomedical applications. Scientists have invented new technologies and novel delivery systems that are currently used in the manufacture of cosmeceuticals. With the increase in the use of cosmeceuticals, conventional delivery systems are being replaced by novel ones. The novel nanocarriers currently in use in various cosmeceuticals include liposomes, niosomes, NLCs, SLNs, gold nanoparticles, nanoemulsions, and nanosomes. These novel delivery systems have remarkable potential for controlled and targeted drug delivery, site specificity, better stability, biocompatibility, prolonged action, and higher drug-loading capacity. There is a lack of convincing evidence for claims of effectiveness, so industry is required to provide it. There are major controversies regarding the toxicity and safety of nanomaterials; various studies are being carried out to determine possible health hazards and toxicity, and meticulous studies of the safety profile of nanomaterials are required. Nanoproducts should be fabricated so as to improve both their value and the health of customers. Because clinical trials are not required for the approval of cosmeceuticals, manufacturers enjoy this benefit and avoid holding clinical trials and lengthy procedures. Lastly, stringent laws should be imposed on the regulation and safety of cosmeceuticals and the nanoparticles used in them.
Fatigue Increases Muscle Activations but Does Not Change Maximal Joint Angles during the Bar Dip
The purpose of this study was to profile and compare the bar dip’s kinematics and muscle activation patterns in non-fatigued and fatigued conditions. Fifteen healthy males completed one set of bar dips to exhaustion. Upper limb and trunk kinematics, using 3D motion capture, and muscle activation intensities of nine muscles, using surface electromyography, were recorded. The average kinematics and muscle activations of repetitions 2–4 were considered the non-fatigued condition, and the average of the final three repetitions was considered the fatigued condition. Paired t-tests were used to compare kinematics and muscle activation between conditions. Fatigue caused a significant increase in repetition duration (p < 0.001) and shifted the bottom position to a significantly earlier percentage of the repetition (p < 0.001). There were no significant changes in the peak joint angles measured. However, there were significant changes in body position at the top of the movement. Fatigue also caused an increase in peak activation amplitude in two agonist muscles (pectoralis major [p < 0.001], triceps brachii [p < 0.001]) and three stabilizer muscles. For practitioners prescribing the bar dip, fatigue did not cause drastic alterations in movement technique, and the exercise appears to target the pectoralis major and triceps brachii effectively.
Introduction
The dip is a popular bodyweight exercise frequently used to strengthen the muscles of the upper limb and trunk, and more specifically, the triceps brachii (TB) and pectoralis major (PM) [1,2]. There are many technique variations of the dip which can either decrease movement complexity (i.e., the bench dip) or increase movement complexity (i.e., the ring dip), which have recently been shown to exhibit differing neuromechanical profiles [3]. However, the most common variation appears to be the bar dip [4].
The bar dip, as depicted in Figure 1, has previously been prescribed to increase upper body push strength and muscular endurance [2,4,5]. More generally, dip variations are prescribed for "prehabilitation" and rehabilitation of upper body injury [6][7][8][9][10] and to increase upper extremity strength and power [11]. Despite its widespread popularity, there is little evidence justifying the use of the bar dip in such rehabilitation or performance programs.
When the bar dip is prescribed in the above contexts, it is common for a high volume of repetitions to be prescribed. For example, the maximal number of repetitions [5] and sets of 10-40+ repetitions [1,2] have been suggested in the literature for the purpose of increasing muscular endurance. These repetition schemes are likely to induce some level of fatigue, with a concomitant exercise-induced reduction in force or power [12]. However, no research has been conducted investigating the effects of fatigue on normal kinematics or muscle activation patterns when performing the bar dip.
The effects of fatigue have previously been investigated in other upper body push exercises such as the bench press [13][14][15] and push-up [16,17] and have been shown to significantly alter repetition characteristics such as increasing the duration of the upwards phase of these movements [13,14,18,19] and decreasing movement control [14]. These alterations in movement kinematics are often accompanied by changes in muscle activation strategies and coordination dynamics [12,20], with fatigue being characterized by an increase in amplitude and a decrease in spectral frequencies in an electromyography signal [21]. Some of these fatigue-induced changes may represent a practical concern for exercise professionals and exercisers when applied to the dip. Dips are completed some height above the ground and decreased movement control may increase fall risk, forcing the shoulder beyond the maximal range of motion, resulting in traumatic injury. Any potential increase in the risk of injury warrants investigation, particularly when considering that there are currently un-investigated practitioner concerns for a suspected high risk of injury to the shoulder when completing dip repetitions [22].
One more important potential fatigue-related control issue is the end-range shoulder extension observable at the bottom position when completing the dip. It has been suggested that there is an increased risk of injury to the anterior shoulder capsule and PM when in this position under load [22][23][24][25], especially when fatigued. This line of thinking is related to the perceived decreased neuromuscular and coordination control when fatigued which may put performers of the dip in an injurious position. However, these claims of fatigue-induced coordination dynamics change, and their relation to potential injury risk, have minimal supporting evidence, with much of these reports coming from case reports [23,25], anecdotal examples [26,27], and unsupported claims [24]. As a result, having a better understanding of the neuromechanical profile of the dip will provide practitioners with evidence to more appropriately prescribe this exercise within different athletic contexts and populations.
The purpose of this study was to profile and compare the kinematics and muscle activation patterns of the bar dip in two conditions: (i) non-fatigued and (ii) fatigued. It was hypothesized that fatigue would cause an increase in the muscles' electrical activity and that there would be significant changes in the peak joint angles experienced, particularly in the bottom position of the movement. An understanding of the neuromechanical profile of the dip will firstly improve the justification of its use in certain contexts and provide some rationale for the perceived injury risks associated with the exercise.
Participants
An a priori calculation found that a minimum sample size of 12 participants was needed to achieve a power of 0.8, with an alpha level of 0.05. Fifteen healthy males volunteered for this study (height = 172.05 ± 27.17 cm, weight = 87.19 ± 27.20 kg, age = 29.07 ± 6.57 years, and 3.79 ± 5.23 years of regular resistance training experience). All participants regularly incorporated bodyweight dips, or variations thereof, in their weekly structured strength and conditioning programs. Participants were excluded if they had any current or major injuries to the upper limb or trunk in the 12 months prior to participation. This study was approved by the institution's human research ethics committee (ECN-19-223). Participants completed an average of 24.93 ± 7.01 repetitions, which is comparable to collegiate lacrosse players who completed 25.17 ± 7.02 repetitions [5].
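The reported a priori figure (n = 12 for 80% power at alpha = 0.05) can be reproduced approximately from the noncentral t distribution. The sketch below is illustrative only: the paper does not report the assumed effect size, so the value dz = 0.9 used here is our assumption, and the function name is ours.

```python
import numpy as np
from scipy import stats

def paired_ttest_power(n, dz, alpha=0.05):
    """Two-sided paired t-test power for n pairs and effect size dz (Cohen's dz)."""
    df = n - 1
    nc = dz * np.sqrt(n)                      # noncentrality parameter
    tcrit = stats.t.ppf(1 - alpha / 2, df)    # two-sided critical value
    # power = P(|T| > tcrit) when T follows a noncentral t with noncentrality nc
    return (1 - stats.nct.cdf(tcrit, df, nc)) + stats.nct.cdf(-tcrit, df, nc)

# Find the smallest n reaching 80% power under the assumed effect size
n = 2
while paired_ttest_power(n, 0.9) < 0.8:
    n += 1
```

Under this assumed large effect size the loop terminates in the vicinity of the paper's n = 12; a smaller assumed effect would require more participants.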
Procedures
All dips were performed on a V-shaped dip bar (Iron Edge, St Kilda, VIC, Australia) that was mounted on a free-standing matrix rack (Iron Edge, St Kilda, VIC, Australia) and anchored to the laboratory floor. The laboratory temperature was set to 24 °C for all participants. The matrix rack was surrounded by fourteen 3D motion-capture cameras (Vicon, Oxford, UK), which sampled data at 200 Hz using a modified marker set of the University of Western Australia's full-body model [28,29].
Nine surface electromyography (sEMG) electrodes (Delsys Avanti Wireless, Natick, MA, USA) were placed on muscles of the participant's right upper limb and trunk. These were dry, passive electrodes with a fixed spacing of 10 mm. The skin was first prepared by shaving, abrading, and cleaning with an alcohol wipe before the electrodes were placed on standardized sites (Criswell, 2011). The sEMG data were sampled at 2000 Hz and synchronized with the kinematic data using Nexus data collection software version 2.4 (Nexus 2, Oxford, UK).
Data Collection
After signing an informed consent document, participants were familiarized with the equipment, laboratory, and testing protocols. Next, the retroreflective markers and sEMG electrodes were secured to the participant.
Following a standardized warm-up consisting of fifteen arm-swings, five rows, and ten push-ups [30], participants completed a maximal range of motion (ROM) test. This test was conducted with the dip bar at waist height and the participant's feet planted on the ground to support their weight. The participant was instructed to lower themselves as far as possible, as if they were completing a dip, and to hold the bottom position for three seconds before returning to a standing position. Three repetitions were performed with the peak shoulder extension angle achieved, as measured through the 3D motion capture system, used as an indicator of maximal passive shoulder extension ROM.
Finally, participants moved the dip bar to a self-selected height (typically the waist to shoulder height) and were asked to complete one set of bar dips until volitional exhaustion. The research staff provided verbal encouragement throughout. Research staff offered no coaching cues or technique instruction at any point throughout a participant's involvement, ensuring the technique and ROM used were entirely self-moderated.
Data Analysis
The vertical displacement of the mid-pelvis, which was determined geometrically from adjacent pelvis markers, was measured to determine dip depth and height, and to differentiate repetitions. The frame prior to the mid-pelvis lowering from its initial peak was deemed the starting position of each repetition. The participant then lowered their body through the downward phase to the bottom position, which was determined as the time point coinciding with the lowest mid-pelvis position. They then raised the body, through the upward phase, to the next peak in mid-pelvis height which was used to define the end position of one repetition. The final repetition was not counted if the participant did not fully extend their elbows at the end of the upward phase.
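The segmentation rule described above (peaks in mid-pelvis height bound each repetition; the minimum between consecutive peaks marks the bottom position) can be sketched in a few lines. This is not the authors' code: the function name and the assumed minimum peak separation are ours.

```python
import numpy as np
from scipy.signal import find_peaks

def segment_repetitions(pelvis_z, fs=200, min_separation_s=1.0):
    """Split a set of dips into repetitions using mid-pelvis vertical position.

    Repetition boundaries are the peaks in pelvis height (top positions);
    the bottom of each repetition is the minimum between consecutive peaks.
    """
    peaks, _ = find_peaks(pelvis_z, distance=int(min_separation_s * fs))
    reps = []
    for start, end in zip(peaks[:-1], peaks[1:]):
        bottom = start + int(np.argmin(pelvis_z[start:end]))
        reps.append({"start": start, "bottom": bottom, "end": end})
    return reps

# Synthetic check: three smooth "dips" at 0.25 Hz sampled at 200 Hz
t = np.arange(0, 12, 1 / 200)
z = 0.9 - 0.15 * np.cos(2 * np.pi * 0.25 * t)   # pelvis height in metres
reps = segment_repetitions(z)
```

On this synthetic trace the function finds two complete repetitions, with each bottom landing at the lowest pelvis height between two tops.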
Raw kinematic data were processed using a fourth-order low-pass Butterworth filter (fc = 6 Hz) before being time-normalized, represented in 0.5% increments of a total repetition. Next, the average at each 0.5% of the dip was calculated across the three repetitions in each dip condition (non-fatigued and fatigued). The kinematic variables of interest included the peak joint angle, and the angular joint position at the starting position, bottom position, and end positions for: extension, abduction, and external rotation of the shoulder, anterior thoracic lean (relative to vertical), and elbow flexion. The timing of when the peak joint angle occurred, represented as both a percentage of total repetition and as a percentage of repetition away from the bottom position, was also analyzed. All kinematic variables were compared between the two conditions.
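This processing chain (zero-lag Butterworth smoothing followed by resampling onto 0.5% increments of the repetition) could look like the sketch below; the function name and defaults are our assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def process_kinematics(angle_deg, fs=200, fc=6.0, n_points=201):
    """Low-pass filter a joint-angle trace and time-normalize it.

    butter(2, ...) applied forwards and backwards with filtfilt gives an
    effective fourth-order zero-lag filter; the result is resampled onto
    0, 0.5, ..., 100 % of the repetition (201 points).
    """
    b, a = butter(2, fc / (fs / 2))
    smoothed = filtfilt(b, a, angle_deg)
    t_old = np.linspace(0.0, 100.0, len(smoothed))
    t_new = np.linspace(0.0, 100.0, n_points)
    return np.interp(t_new, t_old, smoothed)
```

Averaging a condition's three repetitions then reduces to `np.mean` over the stacked 201-point traces, since every repetition shares the same normalized time base.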
The kinematic data were also used to determine the repetition characteristics such as: repetition duration, the rest period between repetitions, the relative timing of when the bottom position occurred (percentage of repetition), and the vertical range of the dip as measured by the overall vertical displacement of the mid-pelvis. All repetition characteristics were compared between conditions.
Raw sEMG data were smoothed and rectified using a root mean squared (RMS) algorithm, with a 0.4 s moving average. The RMS data were then normalized to the percentage of dip repetition and averaged across the three repetitions in each condition to match the processed kinematic data. For all muscles tested, the peak activation amplitude and the relative activation intensity at the point of maximal shoulder extension (normalized to a percentage of the peak amplitude that occurred during the non-fatigued condition, i.e., 100%) were determined and compared between conditions.
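A moving-RMS envelope with a 0.4 s window, plus normalization to the non-fatigued peak, might be implemented as in this sketch (helper names are our assumptions):

```python
import numpy as np

def rms_envelope(emg, fs=2000, window_s=0.4):
    """Smooth and rectify raw sEMG via a moving root-mean-square window."""
    win = int(window_s * fs)                     # 800 samples at 2000 Hz
    kernel = np.ones(win) / win
    # mean of the squared signal over the window, then square root
    return np.sqrt(np.convolve(np.square(emg), kernel, mode="same"))

def relative_intensity(envelope, non_fatigued_peak):
    """Express activation as a percentage of the non-fatigued peak (=100%)."""
    return 100.0 * envelope / non_fatigued_peak
```

A quick sanity check: for a constant-amplitude signal, the envelope equals that amplitude away from the window-length edge effects at either end.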
Statistical Analysis
All data were checked for normality using the Shapiro-Wilk test and the majority were found to be normally distributed. For normally distributed data, a paired t-test was used to investigate the differences in kinematic and muscle activation variables between the non-fatigued and fatigued conditions. For non-normally distributed data, a Wilcoxon signed-rank test (a non-parametric equivalent of the paired t-test) was used, and this is indicated in the presented results. Significance was set at an alpha level of 0.05, with Cohen's d being calculated to measure the effect size between conditions (d > 0.2 small effect, d > 0.5 medium effect, d > 0.8 large effect, d > 1.2 very large effect [31]). All statistics were calculated using SPSS version 25 (IBM, Armonk, NY, USA).
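The decision rule above (Shapiro-Wilk, then paired t-test or Wilcoxon signed-rank, plus Cohen's d) can be sketched as follows. The effect size here uses the SD of the paired differences; the paper does not state its exact d formula, so treat that choice as our assumption.

```python
import numpy as np
from scipy import stats

def compare_conditions(non_fatigued, fatigued, alpha=0.05):
    """Paired comparison: normality check, then the matching paired test."""
    a = np.asarray(non_fatigued, dtype=float)
    b = np.asarray(fatigued, dtype=float)
    diff = b - a
    # Shapiro-Wilk on the paired differences decides which test applies
    normal = stats.shapiro(diff).pvalue > alpha
    result = stats.ttest_rel(b, a) if normal else stats.wilcoxon(b, a)
    d = diff.mean() / diff.std(ddof=1)           # Cohen's d on the differences
    return {"normal": normal, "p": result.pvalue, "d": d}
```

For a clearly shifted paired sample this returns a small p-value and a large d regardless of which branch the normality check selects.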
Kinematics
Peak joint angles and the angular position of each joint in the starting position, bottom position, and end position are presented in Table 1. There were no significant differences in any peak joint angles between the two conditions (Table 1). Note: * indicates a significant difference (p < 0.05), § indicates a small effect size (d > 0.2), † indicates a medium effect size (d > 0.8). Peak external rotation and external rotation at the point of maximal shoulder extension were compared between conditions using the Wilcoxon signed-rank test as the data were not normally distributed; all other variables were normally distributed and were therefore compared using a paired t-test.
During the maximal ROM trial, participants had a maximal shoulder extension angle of 87.3 ± 9.36°. When displayed as a percentage of the maximal ROM trial, participants used 75.96 ± 10.71% and 76.30 ± 11.13% of their observed maximal passive shoulder extension ROM in the non-fatigued and fatigued conditions, respectively; these were not significantly different (p = 0.905, d = 0.03).
The peak joint angles occurred at a significantly different portion of the dip repetition for all joints (p < 0.001-0.004, d = 1.26-1.78) excluding abduction (p = 0.105, d = 0.71). However, when peak timing was normalized to the percentage of dip repetition relative to the bottom position, there was only one significant difference. This was peak elbow flexion, which occurred at 0.10 ± 0.67% after the bottom position in the non-fatigued condition and 1.29 ± 1.85% after the bottom position in the fatigued condition (p = 0.019, d = 0.86). Peak anterior thoracic lean, shoulder extension, and external rotation occurred at most ±2.47% away from the bottom position in both conditions and were not significantly different (p = 0.236-0.308, d = 0.34-0.48), whereas peak abduction occurred 10.97 ± 13.95% before the bottom position in the non-fatigued condition and 7.74 ± 20.70% before it in the fatigued condition, which was not significantly different (p = 0.666, d = 0.18).
Of the four failed repetitions not included in the above analysis, two participants exceeded their maximal passive shoulder extension ROM. Figure 2 illustrates the peak sEMG amplitudes for all muscles tested. Five muscles (PM, TB, SA, LT, and LD) increased their peak amplitude during the fatigued condition (p < 0.001-0.043, d = 0.28-1.56). Four muscles (AD, UT, IS, and BB) had similar peak amplitudes in both conditions (p = 0.088-0.699, d = 0.05-0.39). At the point of maximal shoulder extension, three muscles (PM, SA, and LT) significantly increased their relative activation intensity during the fatigued condition, whereas BB exhibited significantly decreased muscle activity during fatigue (see Table 2).
Discussion
Overall, the kinematic characteristics of the dip were not different between fatigued and non-fatigued conditions. Fatigue did, however, cause significant increases in some of the muscles tested as hypothesized. Furthermore, there were significant changes in the repetition characteristics including an increase in repetition duration, an increase in rest time between repetitions, and a shift in the bottom position to an earlier point within the total repetition.
Fatigue caused an increase in dip repetition duration, associated with a prolonged upward phase (when agonist muscles are acting concentrically) and the shift in the bottom position to an earlier portion of the repetition. Similar findings have been observed in many resistance exercises, such as the bench press [15,32], deadlift [33], and the squat [34], and can likely be explained by the phenomenon of the 'sticking' point. The sticking point can be broadly defined as the point, or region, of an exercise where there is a disproportionately large reduction in movement velocity and an increase in movement difficulty [35]. While many factors may influence the sticking point, such as a reduced mechanical advantage [36,37], the most significant influence present in the current study is likely the onset of fatigue. As the agonist muscles fatigue, there is a reduction in the ability of these muscles to control the movement throughout both the downward (eccentric) and upward (concentric) phases; this is seen with the earlier arrival of the bottom position, the prolonging of the upward phase, and thereby the time spent within the sticking point. This could be of practical importance as there may be an increased likelihood of technique failure and subsequent injury within the region of the sticking point [36,38].
However, as participants were able to maintain consistent peak joint angles in both conditions, this slight reduction in movement control and repetition characteristics does not appear to be detrimental when one set of bar dips is prescribed to volitional exhaustion. Despite these changes in repetition characteristics, a key finding of this investigation was that there were no significant differences in any peak joint angles, nor in the joint angles at the point of maximal shoulder extension, for any joint motion investigated. This is interesting because fatigue has previously been shown to reduce shoulder [20,39,40] and elbow [41] joint position sense, which is likely to be a controlling factor of dip depth. Following the fatigue of a single muscle group, healthy individuals are able to substantially alter the kinematic strategy of multi-jointed actions in order to maintain end-point accuracy [40]. In the current study, seemingly marginal kinematic changes occurred following the fatigue of multiple muscle groups. This indicates that participants were able to compensate for fatigue-induced deficits at multiple joints while maintaining a consistent technical standard. Due to this, it appears the increased concern of injury due to fatigue may not be warranted. While participants compensated at the bottom of the movement, some changes were observed at the top positions, although it should be noted that these positions do not place joints with typically vulnerable structures at compromised lengths for increased injury risk.
The vertical displacement of the mid-pelvis was significantly reduced once fatigued. As the peak joint angles were similar, this reduction in vertical displacement has largely resulted from a reduction in the peak height of the dip, as opposed to dip depth (i.e., the bottom position). This can be somewhat explained by an increase in shoulder extension in the starting and ending positions during the fatigued condition. As the technique used was self-moderated, it appears that participants may have subconsciously used elbow angle as an identifier of repetition completion regardless of the shoulder angle. However, while the shoulder angle at these two positions was significantly different between the two conditions, the average change was only 4.33°, which may not entirely explain the overall reduction in mid-pelvis displacement. Rather, other variables not included in this study's analysis may play contributing roles. For example, fatigue during the bench press has previously resulted in a reduction in bar height, despite no change in elbow angle at the top of the movement [14]. These researchers postulated that the height reduction was caused by a decrease in scapula protraction due to fatigue of the scapula protractors, i.e., SA. The scapula kinematics throughout a dip repetition warrants future investigation due to the association of altered scapula kinematics and injury risk [42,43]. Although not assessed, a reduction in scapula depression (an increase in scapula elevation) may have been present in the current study, supported by fatigue being indicated (via an increased peak activation amplitude) in all muscles investigated with possible roles in scapula depression, i.e., LT, LD, SA, and PM. Fatigue can be indicated by an increase in sEMG amplitude [12,21], likely due to an increase in motor unit recruitment. With regard to the peak activation amplitudes of the suspected agonists, some level of fatigue was present in the TB and PM but not the AD.
The dip is primarily prescribed to target TB and PM [1,2] rather than the AD; however, due to the AD's function in shoulder flexion [44], it is a likely agonist for the dip. The significant increases in TB and PM activation support the use of the bar dip in strength and conditioning programs when specifically targeting these muscles. In addition, the heightened activation of stabilizer muscles (SA, LT, and LD) indicates that the dip may be appropriate in rehabilitation contexts when increasing the strength of such muscles is required. However, further investigation is required within clinical populations.
A final finding of this investigation was that four participants failed their final repetition. Two of these participants exceeded their maximal ROM as measured by the maximal shoulder extension ROM test. It is important to recognize that this is an incidental finding and generalization to athletic populations should be done with caution. However, it is also important to acknowledge the risk of traumatic events occurring, such as falls, when completing supramaximal dip repetitions. Exercise professionals should consider the use of a spotter and lowering the height of the dip bar, in addition to considering the frequency with which the bar dip is prescribed to volitional exhaustion.
It is important to note the limitations in the practical applications of this investigation's findings, the first being that all participants completed the dip under self-moderated conditions, meaning that if any strict coaching cues or technique standards are enforced, the neuromechanical profile of the dip may be altered. The second limitation is that the data were analyzed only at specific key time-points within the overall dip repetition, rather than assessing the repetition in its entirety as other data analysis methods do (e.g., statistical parametric mapping). However, as this is the first systematic investigation into the bar dip, observing how experienced exercisers perform the dip at specific moments throughout will significantly improve the understanding and prescription of this exercise. Finally, caution should be used when inferring the results of the current study to beginner exercisers. The participants in this study were experienced and regularly incorporated the dip into their training and therefore may exhibit differing neuromechanical profiles to beginners. Due to the large ranges of motion used during the dip, it is possible that the sEMG electrodes recognized signals from neighboring muscles; this is of particular concern regarding IS.
Conclusions
The bar dip appears to be an effective exercise for activating the TB and PM. When prescribing the bar dip in practice, repetition schemes that induce fatigue may be beneficial in increasing the peak sEMG amplitudes of the agonist muscles (TB and PM) and some stabilizer muscles (SA, LT, and LD) whilst having minimal effect on the overall movement's kinematics. This indicates that bar dips to fatigue may be an effective training tool for strengthening the TB and PM, or the muscles of scapular depression. Given the lack of kinematic differences, heightened concern about the dip when conducted to fatigue may not be warranted when it is prescribed in a single set. However, there is an inherent risk of falls during a dip, which should be considered when prescribing the dip in multiple sets to fatigue.
Author Contributions: All authors were involved at all stages from conceptualization to reviewing and editing. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Southern Cross University (ECN-19-223).
Informed Consent Statement: Written informed consent was obtained from all participants involved in this study.
Data Availability Statement:
All data that support these findings are available from the corresponding author upon reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest.
Study of a Transmission Problem with Friction Law and Increasing Continuous Terms in a Thin Layer
The aim of this paper is to establish the asymptotic analysis of nonlinear boundary value problems. The non-stationary motion is governed by an elastic constitutive law. The contact is described by a version of Tresca's law of friction. A variational formulation of the model, in the form of a coupled system for the displacements and the nonlinear source terms, is derived. The existence of a unique weak solution of the model is established. We also give the problem in transposed form, and we demonstrate different estimates of the displacement and of the source term independently of the small parameter. The main corresponding convergence results are stated in the different theorems of the last section.
Introduction
This article is devoted to the study of the solution of a transmission problem in a non-stationary regime in a 3D thin layer with Tresca's friction law. More specifically, and for the ease of the reader, we give notations that specify our domain: we suppose that the nonhomogeneous body Ω^ε is composed of two homogeneous bodies Ω^ε_1 and Ω^ε_2 of R^3. Throughout this work, the index l indicates that a quantity is associated with the domain Ω^ε_l, l = 1, 2, where ε (0 < ε < 1) is the thickness, which becomes infinitely small and will tend to zero. Suppose also that the boundary L_l of the domain Ω^ε_l is partitioned into three disjoint measurable parts and belongs to C^1, where ω is a fixed region in the plane x' = (x_1, x_2) ∈ R^2. The upper surface Γ^ε_1 is defined by x_3 = εh(x'), and Γ^ε_2 is defined by x_3 = −εh(x'). Additionally, h is a bounded continuous function with 0 < h_* ≤ h(x') ≤ h^* for all (x', 0) ∈ ω, and Γ^ε_{L_l}, l = 1, 2, is a lateral boundary. For any function u^ε defined on Ω^ε, we designate by u^ε_1 = (u^ε_{1i})_{1≤i≤3} (resp. u^ε_2 = (u^ε_{2i})_{1≤i≤3}) its restriction to Ω^ε_1 (resp. to Ω^ε_2). During the last decades, many authors have studied contact problems with various laws of behavior as well as various friction conditions close to this study. In [1–3], the authors devoted their studies to the convergence of the solutions of the linearized elasticity system with different boundary conditions to generalized weak equations in the plane. In [4,5], the authors show the 3D–1D dimension reduction in anisotropic heterogeneous linearized elasticity; this work is devoted only to strong solutions, in the absence of a friction law. This type of study, governed by the different models of continuum mechanics in thin layers, is essentially based on the theory of variational inequalities, which represents a very natural generalization of the theory of boundary value problems and makes it possible to consider new models from many
areas of applied mathematics. The variational analysis, existence, uniqueness, and regularity results in the study of a new class of variational inequalities were proved in [6] (see also, e.g., [7–9] and references therein). In the case of linear thin elasticity and in a non-stationary regime, Benseridi et al., in [10,11], gave the asymptotic analysis of solutions for which the influence (or not) of heat on the model with friction did not increase the continuous terms. Several studies of the asymptotic convergence of Newtonian and non-Newtonian fluids are considered in [12–15], in which the authors have shown that the initial problems converge towards limit problems represented by weak forms (Reynolds equations). A significant number of researchers have devoted their work to the study of transmission problems in different functional spaces with several types of boundary conditions. For example, Manaa et al., in [16], proved the 3D–2D dimension reduction of an interface problem with a dissipative term in a dynamic regime. We would like readers to note that, in that study, the authors are interested in a very particular body that follows Hooke's law (an isotropic case of elastic materials). The asymptotic study of a transmission problem governed by an elastic body in a stationary regime with Tresca's friction was carried out in [17]. Another work analogous to the present study, but relating only to the existence and uniqueness of the weak solution of a frictionless contact problem between an elastic body and a rigid foundation, is given in [18]. Other recent works on contact problems are given in [19–23].
In this study, the objective is to extend our previous works [16,17]. The novelty of our study can be summarized in the following two major points. First, we take into account a generalized stress tensor, σ^ε_l = E_l e(u^ε_l), compared to what is given in [16], where E_l is a bounded symmetric positive definite fourth-order tensor that describes the elastic properties of the material and e(u^ε_l) is the linearized strain tensor. Second, we study the asymptotic behavior of the considered problem with Tresca friction and the presence of the nonlinear source terms in a non-stationary regime, compared to what is given in [17]. This choice creates different difficulties in the following sections of this study, especially in Theorems 5–7 and the uniqueness theorem. The asymptotic analysis is more difficult because, in general, the limit problem involves an equation that takes into account the anisotropy of the medium, and it is therefore important to identify the elastic components of (E^l_ijpq) that appear in the (2D) equation model. The remainder of our paper is organized as follows: Section 2 summarizes the description of the problem and the basic equations; moreover, we introduce some notations and preliminaries that will be used in other sections. Section 3 is reserved for the proof of the related weak formulation. We also give the problem in transposed form, and we establish some estimates of the displacement that do not depend on the parameter ε in Section 4. The corresponding main convergence results are stated in different theorems in Section 5.
The Domain and Notations
We denote by S^3 the space of second-order symmetric tensors on R^3; "·" and |·| denote the inner product and the Euclidean norm on R^3 and S^3, respectively. Throughout this article, i, j, p, q = 1, 2, 3, repeated indices imply summation, and an index that follows a comma represents the partial derivative with respect to the corresponding component of x.
Following the notations presented in the introduction, we denote by Ω^ε the domain under consideration. We assume that the boundary L_l is partitioned into three disjoint measurable parts and belongs to C^1. We also use the usual notation for the normal components and the tangential parts of vectors and tensors, respectively. For the displacement field, we use three Hilbert spaces, where H^1(Ω^ε_l)^3 is endowed with the inner product (·, ·)_{1,Ω^ε_l} and the associated norm ‖·‖_{1,Ω^ε_l}, and W^ε is endowed with the canonical inner product (·, ·)_{W^ε} and the associated norm ‖·‖_{W^ε}. For the stress, we use a real Hilbert space endowed with its inner product. Likewise, for the displacement variable, we use the real Hilbert space endowed with the inner product (e(w_1), e(w_2))_Q and the norm ‖·‖_H, where the deformation operator is e(u) = (e_ij(u)) with e_ij(u) = (u_{i,j} + u_{j,i})/2. We denote by Q_∞ the real Banach space (see [6]) endowed with its norm. Finally, for a real Banach space (X, ‖·‖_X), we use the usual notation for the spaces L^p(0, T; X), where 1 ≤ p ≤ ∞; we also denote by C(0, T; X) and C^1(0, T; X) the spaces of continuous and continuously differentiable functions on [0, T] with values in X.
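The "usual notation" for normal components and tangential parts referred to above is standard in contact mechanics (see, e.g., [6]); for a vector v and a stress tensor σ on a boundary with outward unit normal ν, it reads:

```latex
v_\nu = v \cdot \nu, \qquad
v_\tau = v - v_\nu \, \nu, \qquad
\sigma_\nu = (\sigma \nu) \cdot \nu, \qquad
\sigma_\tau = \sigma \nu - \sigma_\nu \, \nu .
```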
The Problem Statement and Weak Variational Formulation
We consider two bodies made of an elastic material that occupy the domains Ω^ε_l, with boundary L_l and a unit outward normal ν. For any displacement vector u^ε defined on Ω^ε, we use the notation introduced above. The stress–strain relation is expressed as σ^ε_{l,ij} = E^l_{ijpq} e_{pq}(u^ε_l), with i, j, p, q ∈ {1, 2, 3}, where the elasticity operator E_l is assumed to satisfy suitable boundedness, symmetry, and ellipticity conditions. Next, we adopt these assumptions: • On Γ^ε_l × ]0, T[, the upper surface is assumed to be fixed. • On Γ^ε_{L_l} × ]0, T[, the displacement is known and parallel to the ω-plane. • On ω × ]0, T[, we suppose that the normal velocity is bilateral. We also suppose that the Tresca friction law holds on the part ω × [0, T], with κ^ε being the friction coefficient. Solving the posed problem is equivalent to finding u^ε = (u^ε_1, u^ε_2) satisfying the constitutive law and the boundary conditions, together with the given initial conditions. Finally, the dissipative terms g_{li} : R → R, i = 1, 2, 3, l = 1, 2, are continuous increasing functions and satisfy the following hypotheses: g_l(0_3) = 0_3 for l = 1, 2;
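A sketch of the Tresca friction conditions on ω × ]0, T[, in one standard form (this is a textbook statement consistent with [6]; the zero prescribed tangential velocity is an assumption, as the exact right-hand sides belong to the omitted equations):

```latex
|\sigma^\varepsilon_\tau| \le \kappa^\varepsilon, \qquad
|\sigma^\varepsilon_\tau| < \kappa^\varepsilon \;\Rightarrow\; \dot u^\varepsilon_\tau = 0, \qquad
|\sigma^\varepsilon_\tau| = \kappa^\varepsilon \;\Rightarrow\; \exists\, \lambda \ge 0 :\; \dot u^\varepsilon_\tau = -\lambda\, \sigma^\varepsilon_\tau .
```

The friction bound κ^ε thus acts as a threshold: below it the contact sticks, and at the threshold the tangential velocity opposes the tangential stress.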
For all w_1, w_2 ∈ R, there exists a positive constant c_g, independent of w_1 and w_2, such that |g_l(w_1) − g_l(w_2)| ≤ c_g |w_1 − w_2|. For the given body forces f^ε_l, l = 1, 2, the classical model for the process is as follows.
Problem 1 (P^ε). If u^ε = (u^ε_1, u^ε_2) is a solution of the problem P^ε, then it is also a solution of the following variational problem (13). Remark 1. Using the previous properties and Korn's inequality (as in [6]), one easily checks that the bilinear form a(·, ·) is coercive and continuous, with a continuity constant M = max_{1≤i,j,p,q≤3} taken over the components of E^l. Using integration by parts on Ω^ε_1 and Ω^ε_2, then Green's formula, the results of Remark 1, and (7)–(12), we obtain the variational problem (13).
The existence and uniqueness of the weak solution to problem (13) are obtained in the following theorem. Theorem 2. If the stated assumptions are realized, there exists a unique solution. Proof. Since the functional J_ξ is not regular, we regularize it by J^ε_ξ. Next, we formulate the associated approximate problem (15). For the rest of the proof, we apply Galerkin's method as in [24,25], with hypotheses (H_1)–(H_5). We begin by showing that problem (15) admits a unique solution. In the last step, it is easy to verify that the limit of u^ε_ζ as ζ → 0 is a solution u^ε of (13).
The Problem in a Fixed Domain
In this section, we use the dilatation in the variable x_3 given by x_3 = εz; then, our problem will be defined on a domain Ω, which is independent of ε. So, for (x', x_3) in Ω^ε_l, l = 1, 2, we have (x', z) in Ω_l. To simplify the notation, everywhere in the sequel, α, β, γ, θ = 1, 2. According to this convention, when an index variable appears twice in a single term and is not otherwise defined, it implies summation of that term over all the values of the index. So, we define the corresponding functions on Ω_l. For the data of problems (3)–(12), it is assumed that they depend on ε as indicated, with f̂_l, κ̂, δ̂_l, and ĝ_l not depending on ε. We introduce the corresponding spaces, endowed, respectively, with their norms. Using the symmetry of E^l_ijpq, the variational problem (13) is reformulated on the fixed domain as problem (18), where ê(û^ε_l) = (ê_ij(û^ε_l)) is given by the corresponding relations. In the next section, we establish some estimates for the solutions to the variational problem (18).
Theorem 3. If the hypotheses of Theorem 2 hold, then there exists a positive constant C that does not depend on ε such that estimates (19)–(21) hold. Proof. Suppose that the problem P^ε_v admits a solution denoted by (u^ε_1, u^ε_2). For r ∈ [0, t], by integration, we obtain a first estimate. We use Korn's inequality and hypotheses (H_1)–(H_5); there exists a corresponding constant. On the other hand, we apply Young's inequality, for every η > 0, in (13). By integration of the last inequality between 0 and t, and using Poincaré's inequality [1], and, likewise, a second application of Poincaré's inequality, then substituting formulas (24)–(28) into (23), we find (29). By simple calculations of the change in scale with respect to the third component given by formula (17), and then multiplying (29) by ε, we obtain (30), where µ_* = min(µ_1, µ_2), µ^* = max(µ_1, µ_2), and B does not depend on ε. Using Gronwall's lemma, we obtain (19) and (20).
The proof of (21) is based on the techniques used in the proof of inequalities (19)–(20). Indeed, in a first step, we differentiate the associated approximate problem (15) with respect to t. Then, we choose (v_1, v_2) = (u̇^ε_{1ζ}(t), u̇^ε_{2ζ}(t)) in the expression found and, by applying hypotheses (1)–(3) on the dissipative terms g_l and Korn's inequality, we obtain the analogue of (30). Finally, Gronwall's lemma ensures the existence of a constant C that is independent of ε and satisfies (21). The proof of Theorem 3 is complete.
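For reference, the two classical inequalities invoked repeatedly in the estimates above are, in the forms commonly used for such a priori bounds (standard statements, not reproduced in the source):

```latex
\text{Young's inequality:}\quad
ab \le \eta\, a^2 + \frac{b^2}{4\eta}, \qquad a, b \in \mathbb{R},\; \eta > 0;
\\[4pt]
\text{Gronwall's lemma (integral form):}\quad
\varphi(t) \le C + K \int_0^t \varphi(s)\, ds \;\;\forall t \in [0, T]
\;\Longrightarrow\;
\varphi(t) \le C\, e^{K t}.
```

Young's inequality absorbs mixed terms into the coercive part for small η, and Gronwall's lemma then turns the resulting integral inequality into the uniform-in-ε bounds (19)–(21).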
Conclusions
The subject of this article falls within the framework of the study of a transmission problem with friction law and increasing continuous terms in a thin layer. To reach the desired goal, after the variational formulation of the problem, we use a change of scale and new unknowns to carry out the study on a domain that does not depend on ε. We then demonstrate different estimates of the displacement and of the source term independently of ε. Finally, by passing to the limit, we obtain the limit problem and the generalized weak equation of the problem considered.
Modeling of High Speed Free Space Optics System to Maintain Signal Integrity in Different Weather Conditions; System Level
Free space optics (FSO), also known as free space photonics, is a technology widely deployed in Local Area Networks (LAN), Metro Area Networks (MAN), and in inter- and intra-chip communications. However, satellite-to-satellite and other space uses of FSO require further consideration. Although FSO is highly beneficial due to its easy deployment, the high security of its narrow beam, and the market demand for 10 Gb/s+, some factors, especially rain, snow, and fog attenuation, cause signal integrity problems in FSO. To get better signal integrity in FSO, we need to consider all components while designing the system. In this paper, a comparative analysis has been performed on 10 Gb/s and 40 Gb/s FSO systems over 1 km. First, to select a suitable modulation technique, we compared NRZ and RZ modulation and obtained their spectra; NRZ modulation was found to be more bandwidth efficient. Signal integrity in the FSO system at 10 Gb/s was analyzed by eye diagrams, and the Q-factors of both APD and PIN photodetectors are presented in graphs. The same experiment was repeated at 40 Gb/s, and the bit error rates of both photodetectors are presented. Keywords—Free Space Optical; NRZ; RZ; PIN; APD; Photo Detector; BER; Q-factor
INTRODUCTION
Free space optics (FSO) is a transmission system which provides point-to-point, mesh, and point-to-multipoint communication by using lasers and photodiodes. It can be a good candidate for high-bandwidth future broadband and communication systems. Due to its low BER, high bandwidth, and easy installation, FSO is popular in the optical and wireless research community. One more advantage of FSO is its unlicensed spectrum (800–1700 nm) [1]. FSO is also a smart selection for inter-satellite communication, and due to its small terminals and low power, it has an advantage over microwave links [2]. The first laser link to handle commercial traffic was built in Japan by Nippon Electric Company (NEC) around 1970. FSO is also efficient and is being used for underwater communication, indoor wireless optical networks, intra-chip and board-to-board communication [3], and inter-satellite communication [2].
Besides this, FSO also has a number of challenges. One of those challenges is weather attenuation of the optical signal. Rain, snowfall, and fog are big challenges, and they need researchers' consideration to maintain signal integrity in FSO systems as the data rate and coverage distance increase. To keep signal integrity throughout the system, we need to consider a few factors by selecting the right blocks and components for modulation, receiver photodiodes, and noise mechanisms. In this paper, we design a simple point-to-point FSO system in OptiSystem 14.0 and check signal integrity with PIN and APD photodiodes in different weather conditions at 10 Gb/s and 40 Gb/s data rates over 1 km.
II. NRZ AND RZ MODULATION TECHNIQUES
Selecting the right modulation technique, which converts the electrical signal into a bit stream, is the first step in optical system design. Using INTERCONNECT (Lumerical) with a 25 Gb/s PRBS bit-rate generator, we compared NRZ (non-return-to-zero) and RZ (return-to-zero) modulation, as shown in Fig. 1. NRZ modulation does not have a rest state (Fig. 2a), while the signal drops to zero between each pulse in RZ (Fig. 2b). NRZ is more bandwidth efficient, as it requires only half the bandwidth of RZ (Fig. 3).
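The bandwidth claim can be checked analytically: a random binary stream with rectangular pulses of width T (NRZ) or T/2 (50%-duty RZ) has a baseband power spectrum following |sinc(fT)|² vs. |sinc(fT/2)|², so the first spectral null of RZ lies at twice the bit rate. A minimal numerical sketch (not the Lumerical INTERCONNECT setup used in the paper; only the 25 Gb/s rate is taken from the text):

```python
import numpy as np

B = 25e9     # bit rate (25 Gb/s, as in the INTERCONNECT comparison)
T = 1.0 / B  # bit slot

# Pulse-shape power spectra: NRZ fills the full bit slot, 50%-duty RZ half of it.
# np.sinc(x) = sin(pi*x)/(pi*x), so |sinc(f*T)|^2 has its first null at f = 1/T.
f = np.linspace(1e6, 3.0 * B, 60001)
psd_nrz = np.sinc(f * T) ** 2
psd_rz = np.sinc(f * T / 2) ** 2

# Locate the first spectral null (main-lobe edge) of each format.
null_nrz = f[np.argmin(psd_nrz[f < 1.5 * B])]
null_rz = f[np.argmin(psd_rz)]
print(null_nrz / B, null_rz / B)  # main lobe: ~1x bit rate (NRZ) vs ~2x (RZ)
```

Doubling the main-lobe width is exactly why RZ "requires twice the bandwidth" of NRZ at the same bit rate.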
III. ATTENUATION IN FREE SPACE OPTICS SYSTEM
As the FSO system uses free space as the transmission medium, weather conditions are one of the challenges that require consideration [4]. Even the clean state of the atmosphere cannot be referred to as free space, because nitrogen and oxygen are present. The attenuation caused to the signal in the atmosphere is called atmospheric attenuation [5]. Beer's law (1) is commonly used to relate atmospheric attenuation: P_R = P_T · e^(−γ(L)), where P_R is the received optical power, P_T is the optical power at the source, and γ(L) is the total attenuation over the link length L [5].
Rain attenuation is one of the causes of atmospheric attenuation in tropical regions. Rayleigh, Mie, and non-selective scattering are three different types of atmospheric scattering. The redirection of light, which leads to a reduction of the received light intensity, is called scattering [6]. Non-selective scattering happens when the rain drop size is larger than the wavelength [7]. Absorption occurs through the interaction between molecules and propagating photons in the atmosphere [8]. The visibility range is the distance traveled by a beam until its intensity drops to 5% of its original value [9].
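Beer's law can be applied directly once the weather attenuation is expressed in dB/km, as in Table 1. A small link-budget sketch (the 3 dB/km figure is an illustrative haze value, not a number taken from Table 1):

```python
def received_power_mw(p_tx_mw, atten_db_per_km, length_km):
    """Beer's-law link budget: P_R = P_T * 10^(-A/10),
    where A = atten_db_per_km * length_km is the total attenuation in dB."""
    total_db = atten_db_per_km * length_km
    return p_tx_mw * 10.0 ** (-total_db / 10.0)

# 1 mW source over the paper's 1 km link, illustrative 3 dB/km haze attenuation
p_r = received_power_mw(1.0, 3.0, 1.0)
print(round(p_r, 3))  # ~0.501 mW: a 3 dB loss halves the received power
```

This is the conversion behind the paper's practice of raising the input power (1–21 dB) as weather attenuation increases, to hold the received power, and hence the Q-factor, steady.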
IV. DETECTOR
A photodetector is a device which converts an optical signal to an electrical signal. The positive-intrinsic-negative (PIN) diode and the avalanche photodiode (APD) are the two photodetectors used in free space optics. The overview diagram of the system which we designed and used for the comparison of the APD and PIN photodiodes is shown in Fig. 4. Simulation was done at 10 Gb/s and 40 Gb/s over a distance of 1 km (1000 m). The NRZ modulation technique was used because its bandwidth efficiency is higher than that of RZ modulation, as compared in Fig. 2 and Fig. 3. Shot noise and thermal noise are the two noise mechanisms in a photodetector. In Fig. 5a we enabled shot noise in the photodetector, and in Fig. 5b thermal noise was enabled. Due to the high Q-factor, we enabled thermal noise in our system. The laser wavelength in the system was 1550 nm. The power of the system varies according to the different weather conditions (1–21 dB). For tropical areas, the attenuation for haze and rain can be calculated by considering the international visibility code referred from [10]. Table 1 shows the attenuation given to the system.
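The Q-factor read from the eye diagrams and the bit error rate reported for the 40 Gb/s runs are linked by the standard OOK estimate BER = ½·erfc(Q/√2) (a textbook relation, not stated in the paper):

```python
import math

def ber_from_q(q):
    """Standard OOK estimate: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# Q = 6 corresponds to the classic ~1e-9 BER benchmark for optical links
print(ber_from_q(6.0))  # ~1e-9
print(ber_from_q(7.0))  # roughly three orders of magnitude lower
```

This is why the higher Q-factor of the APD translates directly into a lower BER than the PIN detector under the same weather attenuation.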
V. PIN & APD PHOTODETECTOR Q-FACTOR ANALYSIS WITH 10 Gb/s
Figure 6 shows the system diagram for 10 Gb/s data over a 1 km free-space optical channel in OptiSystem 14.0. The attenuation for the different weather conditions shown in Table 1 was applied to the channel, and eye diagrams were used to obtain the Q-factor analysis. Fig. 7 shows the results from the APD photodetector and Fig. 8 those from the PIN photodetector. The Q-factors of both photodetectors are analyzed and plotted in the graph of Fig. 9. The APD photodetector showed a higher Q-factor compared to the PIN photodetector. As the attenuation increased, the input power of the system was also increased to maintain signal integrity. The overall power for all weather conditions is plotted in Fig. 10. Signal integrity and the performance of the APD and PIN photodetectors were evaluated in the FSO system. We concluded that the APD has better performance than the PIN photodetector, so an optical receiver with an APD photodetector provides better signal integrity than a PIN. As BER can be decreased by increasing the optical power, future experiments can address different NRZ modulation techniques to obtain better power integrity in FSO communication.
Wafer-Level Vacuum-Packaged Translatory MEMS Actuator with Large Stroke for NIR-FT Spectrometers
We present a wafer-level vacuum-packaged (WLVP) translatory micro-electro-mechanical system (MEMS) actuator developed for a compact near-infrared Fourier transform spectrometer (NIR-FTS) with 800–2500 nm spectral bandwidth and signal-to-noise ratio (SNR) > 1000 in the smaller bandwidth range (1200–2500 nm) for 1 s measuring time. Although monolithic, highly miniaturized MEMS NIR-FTSs exist today, we follow a classical optical FT instrumentation using a resonant MEMS mirror of 5 mm diameter with precise out-of-plane translatory oscillation for optical path-length modulation. The present concept features higher optical throughput and resolution than highly miniaturized MEMS NIR-FTS, as well as mechanical robustness and insensitivity to vibration and mechanical shock compared to conventional FTS mirror drives. The large-stroke MEMS design uses a fully symmetrical four-pantograph suspension, avoiding problems with tilting and parasitic modes. Due to significant gas damping, a permanent vacuum of ≤3.21 Pa is required. Therefore, an MEMS design with WLVP optimization for the NIR spectral range with minimized static and dynamic mirror deformation of ≤100 nm was developed. For hermetic sealing, glass-frit bonding at elevated process temperatures of 430–440 °C was used to ensure compatibility with a qualified MEMS process. Finally, a WLVP MEMS with a vacuum pressure of ≤0.15 Pa and Q ≥ 38,600 was realized, resulting in a stroke of 700 µm at 267 Hz for driving at 4 V in parametric resonance. The long-term stability of the 0.2 Pa interior vacuum was successfully tested using a Ne fine-leakage test and resulted in an estimated lifetime of >10 years. This meets the requirements of a compact NIR-FTS.
Introduction
Near-infrared spectroscopy (NIRS) with spectral wideband light sources is an extremely versatile method used for quality control in chemical production processes. As absorption bands in the near-infrared spectral range (800-2500 nm) are relatively weak, the method can be used for an analysis of a wide range of materials in very diverse states (gas, liquid, and solid, as well as heterogeneous multicomponent systems) [1]. NIRS is used, e.g., to determine water in plastics or for rapid, nondestructive moisture measurement in food quality control or for physical parameters such as viscosity or grain size. Practically, quantitative chemical analysis using NIRS relies on multivariate statistical analysis using a prediction model built from spectra of calibration samples for which the
Motivation of This Study
In this article, we present a wafer-level vacuum-packaged translatory MOEMS device, especially developed for a compact NIR-FT spectrometer operating in the NIR spectral range λ = 800–2500 nm. The MEMS-based NIR-FTS targets a spectral resolution ≤15 cm−1 and SNR > 1000 for a measuring time of 1 s. For fast optical path-length modulation, it requires a highly precise (tilt-free) out-of-plane MEMS translation with 350 µm amplitude (700 µm stroke) and only 80 nm dynamic mirror deformation. Although we developed a translatory (pantograph) MEMS with 500 µm scan amplitude for MIR-FTS [25], it is not suitable for an NIR-FTS with λmin = 800 nm due to the significantly too large dynamic mirror deformation of δp-p = 433 nm. Even for a reduced amplitude of 350 µm, this MEMS has a too large dynamic mirror deformation of δp-p = 303.1 nm (λmin/2.6), which exceeds the specified value of 80 nm by a factor of 3.8. Hence, no pantograph MEMS for an NIR-FTS exists so far (see Table 1). To achieve the specified amplitude of 350 µm, the translatory MEMS has to be encapsulated in an optical vacuum package as well. Due to the NIR spectral range, a glass window with broadband antireflective coating (BB-ARC) is applicable instead of the zinc selenide (ZnSe) used in [25]. Now, vacuum MEMS packaging can be performed at wafer level instead of the previous cost-intensive hybrid package.
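Two of the figures above can be cross-checked against textbook FTS relations (both formulas are standard assumptions on our part, not given explicitly in the text): the resolution limit ∆ν ≈ 1/OPD_max, with the optical path difference equal to twice the mirror travel for a reflective double pass, and the common λ/10 mirror-flatness criterion:

```python
stroke = 700e-6          # m, peak-to-peak mirror travel (350 um amplitude)
opd_max = 2.0 * stroke   # reflection doubles the optical path difference
res_cm = 1.0 / (opd_max * 100.0)  # dv ~ 1/OPD_max, expressed in cm^-1

lam_min = 800e-9         # m, shortest wavelength of the 800-2500 nm band
flatness = lam_min / 10.0          # lambda/10 criterion

print(round(res_cm, 1))            # ~7.1 cm^-1, within the <=15 cm^-1 target
print(round(flatness * 1e9))       # 80 nm, matching the specified deformation
print(round(lam_min / 303.1e-9, 1))  # ~2.6, reproducing the lambda_min/2.6 figure
```

The margin between ~7 cm⁻¹ and the ≤15 cm⁻¹ specification is consistent with real instruments, where apodization and non-ideal mirror motion roughly double the unapodized resolution limit.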
In this work, we contribute new results concerning the following three main topics: • NIR-FTS-specific design of a resonant large-stroke pantograph MEMS device with minimized dynamic mirror deformation of δ p-p = 80 nm at 350 µm scan amplitude, • development of a cost-effective optical (NIR) wafer-level vacuum package with process compatibility to the existing qualified MEMS scanner process AME75 of Fraunhofer IPMS, • detailed characterization of new translatory MEMS devices with WLVP, investigation and elimination of failure mechanisms of WLVP (e.g., influence of process temperature on mirror planarity), and long-term stability of the desired vacuum pressure for ≥10 years lifetime.
A broad variety of wafer-level vacuum-packaging technologies and wafer bond techniques exists for the hermetic sealing of smart MEMS sensors [39] or MOEMS devices, e.g., torsional MEMS mirrors [40–43]. Two examples of wafer-level MEMS vacuum packages are exemplarily shown in Figure 2. Typically, MEMS WLVPs have a closed sealing area with low topography for hermetic wafer bonding. Vertical feedthroughs are also often used for electrical signal transfer from the inner cavity of the WLVP.
Concerning the wafer-level packaging of micro scanning mirrors, we mention the state of the art described in [40–43]. Here, a micro-molded glass cap wafer with 400–900 µm cavity height is anodically bonded to the polished surface of an epi-polysilicon MEMS device [40,41]. The MEMS backside is hermetically sealed by eutectic Au bonding [44] using a 3 µm thick electroplated Au layer. Using getters, a vacuum pressure of 0.1 mbar (10 Pa) was achieved in [40]. In [42], a modified WLVP process for micro scanning mirrors was presented, which allows tilted glass windows at wafer level (instead of a parallel window). For hermetic sealing of the top glass wafer, glass-frit bonding was used instead of anodic bonding (enabling a free choice of glass material and avoiding high process voltages). In [42], an inner vacuum pressure of 0.1 Pa was reported using a titanium thin-film getter. Furthermore, a modular packaging concept for MOEMS was presented in [43], also allowing vacuum packages with integrated vertical electrical feedthroughs (through-glass vias, TGV).
The concept of the MEMS WLVP reported in this work (see Figure 1c) was determined by the following specifics and limitations of the qualified MEMS scanner process AME75 [45]: • AlSiCu, which is used for metal signal lines and bond islands (at the outer chip frame), • high topography (≥2 µm due to metal lines) within the areas needed for hermetic sealing, • ultrasonic Al wire bonding required for good electrical contact to AlSiCu, • use of filled isolation trenches for electrostatic comb drives (also adding surface topography), • CMOS compatibility of all in-line processes used for fabrication of the MEMS device wafers.
Due to the fixed MEMS process, the following consequences result for the NIR-WLVP: (i) no area with low topography exists on the MEMS surface (the sealing frame is crossed by metal lines, see Figure 1c), and (ii) vertical signal feedthroughs are not applicable due to the poor electrical contact to the AlSiCu surface. Hence, a hermetic bonding technique is required which can tolerate surface topographies of ≥2 µm. For this NIR-WLVP application, glass-frit bonding (which requires elevated process temperatures of 430-440 °C) was selected as the best sealing method [46][47][48][49][50][51][52] in order to remain compatible with the fixed MEMS scanner process (see Table 2 for a comparison of alternative hermetic bonding techniques). Earlier work on glass-frit-bonded WLVP of AME75-processed MEMS mirrors resulted in 500 Pa internal pressure [52] (without getter), whereas <1 Pa was required in this work. For similar MEMS mirrors, a significant degradation of mirror planarity was observed at elevated process temperatures: the radius of mirror curvature decreased below R < 2 m at temperatures above 350 °C [53] (equal to a mirror deformation of δpp > 1.56 µm assuming a 5 mm mirror diameter), which was not acceptable for this work. An initial concept for NIR-MEMS WLVP using glass-frit bonding and integration of a state-of-the-art Zr-based getter for long-term stabilization of the internal pressure below 1 Pa was presented in [54]. First results demonstrated a sufficient inner vacuum pressure of 0.25 Pa, but only for a single WLVP stack with a simplified set-up (without Au mirror coating, no BB-ARC). New experimental results using the complete WLVP set-up (with Au mirror coating and BB-ARC) showed that the initial NIR-WLVP concept [54] failed due to reliability issues, the main failure mechanism being initially unclear (candidate sources included inner outgassing and degradation of the Au coating).
In this article, we carefully investigated the influence of technological factors (e.g., elevated process temperature during glass-frit bonding) on the inner vacuum pressure, optical performance (e.g., static mirror deformation), and long-term stability of the MEMS WLVP. The degradation of Au mirror planarity at elevated process temperatures (up to 440 °C) was identified as the main failure mechanism. Potential failure sources for outgassing and degradation of the long-term stability of the inner vacuum pressure were also investigated. These reliability issues were eliminated within the modified WLVP process, e.g., by using an additional Al2O3 diffusion barrier layer to prevent thermally induced degradation of the Au mirror coating. Finally, a translatory MEMS WLVP with vacuum pressure ≤0.25 Pa and Q > 18,000 was realized, resulting in a stroke of 700 µm at 267 Hz when driven at 4 V in parametric resonance. The long-term stability of the 0.2 Pa inner vacuum was verified and estimated to be >10 years using a Ne fine-leakage test. A parasitic tilt of 20 arcsec was measured for selected MEMS devices over the full 700 µm scan. These results meet the requirements of a compact NIR-FTS with high SNR > 1000 and a spectral resolution of ∆ν ≤ 15 cm⁻¹ for a spectral bandwidth of 1200-2500 nm.
In this article, we show that glass-frit bonding can be used for a relatively simple and cost-effective WLVP process of optical MEMS, while avoiding degradation of optical performance even at high process temperatures of up to 440 • C. In addition to the NIR-FTS application, our experimental results and the observed failure mechanism of the glass-frit based WLVP process reported in this work could be helpful to a broader community using glass-frit bonding for low-cost WLVP of various smart MEMS sensors or MOEMS applications.
Materials and Methods
This translatory MEMS actuator with optical WLVP was developed specifically for fast optical path-length modulation in a compact NIR-FT spectrometer covering the NIR spectral region λ = 800-2500 nm. The general specifications of the NIR-FTS and the boundary conditions considered for the MEMS design are summarized in Table 2. Additional requirements follow from the MEMS processes used.
In this work, we reduced the scan amplitude of the 5 mm mirror aperture from 500 µm to 350 µm in comparison to the previous MEMS-based MIR-FTS [25]. With a stroke of 700 µm (equal to twice the amplitude), this MEMS provides a spectral resolution of ∆ν = 14.2 cm⁻¹ for similar FTS instrumentation (shown in Figure 1a). This relaxation is useful because the requirements on parasitic MEMS properties, e.g., parasitic tilt and static and dynamic mirror deformation, are significantly stricter at the smaller wavelength. This can be seen in comparison to the requirements of a conservative NIR-FTS design targeting a high spectral resolution of ∆ν = 8 cm⁻¹: a very small parasitic tilt angle of only 2″ would have to be guaranteed over an entire stroke of 1.25 mm, which would be highly risky. Moreover, a small mirror deformation of 80 nm = 1/10 λmin (p-p value) results from the minimal wavelength of 800 nm. Both (i) parasitic mirror tilt and (ii) static mirror deformation are the main challenges for the MEMS and WLVP process due to (i) geometrical tolerances of narrow spring geometries caused by the deep reactive ion etching (DRIE) process, and (ii) static mirror deformation resulting from thermal stress on the optical coating induced by high WLVP process temperatures of up to 430-440 °C, required for hermetic glass-frit bonding.
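As a rough consistency check (our sketch, not the authors' calculation): in an FTS, the spectral resolution is the reciprocal of the optical path difference (OPD). Taking the mechanical stroke directly as the OPD — the convention that reproduces the values quoted above, up to rounding — both the 700 µm stroke and the 1.25 mm stroke for the 8 cm⁻¹ target fall out of one line of arithmetic:

```python
# Sketch: FTS resolution vs. mirror stroke, assuming delta_nu = 1 / OPD
# and taking the mechanical stroke as the OPD (the convention that
# reproduces the values quoted in the text, up to rounding).

def resolution_cm1(stroke_um: float) -> float:
    """Spectral resolution in cm^-1 for a given stroke in micrometers."""
    opd_cm = stroke_um * 1e-4  # 1 um = 1e-4 cm
    return 1.0 / opd_cm

def stroke_um_for(target_cm1: float) -> float:
    """Stroke in micrometers required for a target resolution in cm^-1."""
    return 1e4 / target_cm1

print(round(resolution_cm1(700), 1))  # ~14.3 cm^-1 for the 700 um stroke
print(stroke_um_for(8.0))             # 1250.0 um (= 1.25 mm) for 8 cm^-1
```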
Translatory MEMS Design
During the MEMS design process, the following technological constraints were fixed by the MEMS scanner technology available at FhG-IPMS [45]:
• electrostatic resonant driving using vertical comb drives,
• use of a 75 µm thick SOI (silicon-on-insulator) layer of monocrystalline silicon,
• no additional stiffening structures at the mirror backside.
The main challenges for this MEMS are (i) to enable a tilt-free large stroke with (ii) reduced static or dynamic mirror deformation and (iii) high mechanical reliability. An early design approach for a translatory MEMS with 1.65 mm² aperture was based on a mirror suspension using two folded bending springs, resulting in a limited amplitude of 100 µm and significant dynamic mirror deformations caused by the mirror suspension itself [7]. Next, in [23], a 1 kHz translatory 3 mm mirror with two pantograph suspensions was tested to increase the scan amplitude to 300 µm. Here, the pantograph suspensions use torsional springs as deflectable elements instead of bending springs. This has the potential for larger deflection and, at the same time, reduced mechanical stresses coupled into the mirror plate. Unfortunately, due to superimposed parasitic torsional modes, only an amplitude of 140 µm could be measured for the two-pantograph MEMS device, which is not suitable for an FTS. The problem of mode separation was fixed in [25] for a translatory MEMS device with 500 Hz and a 5 mm diameter mirror using a fully symmetric mechanical design of four pantograph suspensions, enabling large strokes of up to 1.4 mm and avoiding problems with parasitic modes and tilting.
Pantograph Mirror Suspension for Large Stroke
The actual NIR-FTS-optimized translatory MEMS device also uses a point-symmetric configuration of four pantograph suspensions of a 5 mm mirror aperture to guarantee a tilt-free out-of-plane translation (see Figure 3b). One single pantograph consists of six torsional springs (see Figure 3a,c): two springs arranged on the same axis and connected by stiff levers (SEM photographs of mechanically pre-deflected MEMS samples are presented in Figure 4). Due to the orthogonal-anisotropic elastic moduli of monocrystalline silicon, a design with four pantographs instead of three was chosen to be more robust to fabrication tolerances and less sensitive to parasitic mirror tilt. Details of the three different spring axes are shown in Figures 3 and 5. In addition to the torsional springs, other mechanical structures are visible, which are used for limiting the maximal out-of-plane translation.
The pantograph geometry was optimized to achieve (i) a lower frequency of 250 Hz for the out-of-plane translation oscillation mode and (ii) reduced viscous gas damping and demands on vacuum pressure using pantograph levers compared to previous MEMS. In general, the final MEMS design was developed within an iterative design process using finite element analysis (FEA) simulations of single and coupled physical domains with ANSYS™ Multiphysics, as well as simulation of dynamic transients (e.g., voltage-dependent frequency-response curves of parametric resonance) using reduced-order models (ROMs).
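The authors' ROMs are not published; as an illustration only, the simplest reduced-order model of parametric resonance is a damped Mathieu oscillator, in which the comb drive modulates the effective stiffness at twice the mechanical resonance frequency. The modulation depth and step counts below are our illustrative assumptions, not model parameters from this work:

```python
import math

# Illustrative reduced-order model (our sketch, not the authors' ROM):
# a damped Mathieu oscillator z'' + (w0/Q) z' + w0^2 (1 + h cos(2 w0 t)) z = 0.
# Pumping the stiffness at twice the resonance frequency (parametric drive)
# makes a tiny initial deflection grow once h exceeds roughly 2/Q.

def simulate(h, Q=18000.0, f0=257.0, cycles=100, steps_per_cycle=200):
    """Return the peak |z| (m) reached over the simulated time span."""
    w0 = 2.0 * math.pi * f0
    dt = 1.0 / (f0 * steps_per_cycle)
    z, v = 1e-9, 0.0  # tiny initial displacement, zero velocity
    peak = abs(z)
    for i in range(cycles * steps_per_cycle):
        t = i * dt
        a = -(w0 / Q) * v - w0 ** 2 * (1.0 + h * math.cos(2.0 * w0 * t)) * z
        v += a * dt   # semi-implicit Euler: update velocity first,
        z += v * dt   # then position (stable for oscillators)
        peak = max(peak, abs(z))
    return peak

grow = simulate(h=0.05)   # pump well above the ~2/Q threshold
decay = simulate(h=0.0)   # no parametric pump
print(grow > 1e-8, decay < 2e-9)  # → True True
```

The same structure, with measured damping and a voltage-dependent modulation depth, is what a frequency-response sweep of the parametric drive would be built on.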
Reduction of Dynamic Mirror Deformation
To reduce the dynamic mirror deformation of the 75 µm thick silicon mirror plate to the NIR-FTS-specified value of δpp ≤ 80 nm = 1/10 λmin, the best design compromise was found by reducing the resonance frequency of the used translation mode to ~250 Hz. The surface topology of the dynamically deformed mirror plate, occurring for a harmonic mirror oscillation of 256.6 Hz at a maximal z-deflection of 350 µm, is shown in Figure 6a. A satisfactory dynamic mirror deformation of δpp = 84 nm = λmin/9.5 and δRMS = 24 nm was simulated. It should be noted that the mirror deformation could not be minimized to the specified limit solely by reducing the oscillation frequency. On the one hand, the MEMS device becomes mechanically fragile and sensitive to mechanical shocks (also crucial during MEMS fabrication).
On the other hand, a direct mechanical coupling of pantograph suspension and mirror plate significantly deforms the mirror in addition to the dynamic forces caused by inertia (see Figures 7a and 8a).
To avoid the additional deformation, the circular mirror aperture of 5 mm diameter is kept flexible by narrow radial springs within an outer ring-like support structure attached to the pantograph suspensions (see Figure 6b). The additional soft spring elements enable the mechanical decoupling of the mirror plate from pantograph suspension. This makes the dynamic deformation profile almost rotationally symmetric, like a free circular plate with translational oscillation in z.
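The benefit of lowering the resonance frequency can be made plausible with a simple scaling argument (our sketch, not the authors' FEA): the inertial load on a freely oscillating plate is q = ρ t (2πf)² A, and the deflection of a circular plate under a distributed load scales as q a⁴ / D with flexural rigidity D = E t³ / (12(1−ν²)), so at fixed amplitude the dynamic deformation scales with f². The material values below are generic silicon numbers; the geometry-dependent prefactor is omitted, so only ratios are meaningful:

```python
import math

# Scaling sketch (not the authors' FEA): dynamic deformation of a free
# circular plate oscillating in z scales as q * a^4 / D, with inertial
# load q = rho * t * (2*pi*f)^2 * A and rigidity D = E t^3 / (12 (1 - nu^2)).
# Generic silicon values; the geometry prefactor is omitted.

def deformation_scale(f_hz, amp_m, a_m=2.5e-3, t_m=75e-6,
                      rho=2330.0, E=169e9, nu=0.28):
    q = rho * t_m * (2 * math.pi * f_hz) ** 2 * amp_m  # inertial load, N/m^2
    D = E * t_m ** 3 / (12 * (1 - nu ** 2))            # flexural rigidity, N*m
    return q * a_m ** 4 / D                            # relative deflection

# Halving the frequency at fixed amplitude quarters the deformation:
ratio = deformation_scale(500, 350e-6) / deformation_scale(250, 350e-6)
print(round(ratio, 6))  # 4.0
```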
In the past, several options to reduce dynamic mirror deformation for large-stroke translatory MEMS were investigated (results not yet published). We have to point out that the 75 µm SOI thickness of the mirror plate was not changeable due to the MEMS process AME75. Three variants of large-stroke pantograph MEMS designs were tested: (i) an initial design with direct coupling of pantographs and mirror plate (see Figure 7a, [25]), (ii) a ring-like support structure with mechanical decoupling of the inner mirror (Figure 6b; the variant also used within this work), and (iii) a conceptual design for dynamic self-compensation of mirror deformation by means of additional outer inertial masses on a locally thinned mirror membrane (Figure 7b). The resulting dynamic mirror deformations of these variants are shown in Figure 8. For better comparison, all variants were simulated with 5 mm mirror diameter and identical operation point (MIR-FTS: 500 µm amplitude, 500 Hz, λmin = 2.5 µm).
Figure 6. Reduction of dynamic mirror deformation: (a) FEA results of the surface topology at 350 µm deflection; (b) SEM photograph of the outer ring-shaped support structure with mechanical decoupling springs.
In this comparison, the initial variant results in a poor mirror planarity of λmin/3.4 (see Figure 8a), whereas the variant with ring-shaped support reduces the mirror deformation by a factor of 1.7 (Figure 8b). A minimal dynamic deformation of only λmin/14.7 was simulated for the conceptual design with dynamic self-compensation (Figure 8c). On the other hand, the conceptual design results in higher technological complexity (local thinning of the mirror backside by deep reactive ion etching, DRIE), which also entails higher risks of asymmetries, tilting, and sensitivity to mechanical shock. Therefore, in this work we used the ring-shaped pantograph support structure and a 250 Hz resonance frequency, resulting in a dynamic mirror deformation of δpp = 84 nm = λmin/9.5, which is sufficient for the NIR-FTS application (Figure 6).
Modal Analysis
The FEA results of the linear modal analysis are summarized in Figure 9 for the first to 16th eigenmodes. A good separation of the used translation mode 1 at 257 Hz from the next higher parasitic (tilting) mode 2 at 1200 Hz is evident. Higher modes are only of practical relevance during the wire-bonding process or in mechanically rough environments, if they can be excited by high-frequency external vibrations. To avoid parasitic excitations, integer multiples of mode 1 must be avoided during the MEMS design, which is guaranteed for this translatory MEMS device.
Figure 9. Results of the FEA modal analysis: eigenmodes and their separation from the used translation mode 1 at 257 Hz.
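The mode-separation rule — parasitic eigenmodes must stay away from integer multiples of the 257 Hz drive mode — can be checked mechanically. The helper below is our sketch; only the two frequencies quoted in the text are used, and the tolerance band is an illustrative assumption, not a value from this work:

```python
# Sketch of the mode-separation rule: reject a design if any parasitic
# eigenmode lies close to an integer multiple of the drive mode.
# The 5% tolerance band is our illustrative assumption.

def separation_ok(f_drive, parasitic_modes, rel_tol=0.05):
    """True if no parasitic mode is within rel_tol*f_drive of an
    integer multiple of the drive mode frequency."""
    for f in parasitic_modes:
        n = round(f / f_drive)
        if n >= 1 and abs(f - n * f_drive) < rel_tol * f_drive:
            return False
    return True

# Mode 1 at 257 Hz, first parasitic (tilting) mode at 1200 Hz:
# 1200 / 257 = 4.67, safely between the 4th and 5th multiples.
print(separation_ok(257.0, [1200.0]))  # → True
```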
Electrostatic Comb Drives and Mechanical Reliability
This MEMS device is actuated by electrostatic comb drives, driven in parametric resonance [25,55]. Two comb drive variants were investigated: (i) basic comb drives attached to the ring-like support structure (see Figures 3b and 10a), and (ii) additional comb drives at the pantograph levers (Figures 4 and 10b) to increase the available frequency bandwidth at 350 µm amplitude. The results of this article are based on MEMS devices with the basic comb variant only (see Figure 10a), using four comb drives symmetrically arranged at the support ring. Using ROM simulations of viscous gas damping [56] with the model parameters adjusted to a reduced vacuum pressure, a driving voltage of 44.3 V was simulated to reach an amplitude of 350 µm at an assumed vacuum pressure of 10 Pa. This is slightly below the simulated electrostatic stability (pull-in) voltage of Upull-in = 46.5 V. Hence, a WLVP with <10 Pa inner vacuum pressure is required.
Concerning mechanical reliability, a mechanical stress of σ1 = 0.5 GPa at the maximal deflection of 350 µm and σ1,eq = 1.47 GPa at 2500× g equivalent shock acceleration were simulated using nonlinear FEA simulations. Both stress values are below the design limit of ≤1.5 GPa required for high mechanical reliability. To enhance the robustness to mechanical shocks, additional mechanical structures were designed, which are arranged parallel to the torsion springs (see Figure 5). In the event of mechanical contact due to a mechanical shock, they cause the springs to stiffen and limit the out-of-plane translation. This stiffening concept was first demonstrated in [24] (see Figure 4 on p. 6).
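The quoted operating margins are easy to tabulate; this sketch uses only the numbers given above:

```python
# Design-margin check using only the values quoted in the text:
# drive voltage vs. pull-in voltage, and simulated stresses vs. limit.

U_drive, U_pull_in = 44.3, 46.5          # V
sigma_deflect, sigma_shock = 0.5, 1.47   # GPa, at 350 um / 2500 g
sigma_limit = 1.5                        # GPa design limit

voltage_margin = (U_pull_in - U_drive) / U_pull_in
print(f"voltage margin: {voltage_margin:.1%}")  # ~4.7% below pull-in
print(sigma_deflect <= sigma_limit and sigma_shock <= sigma_limit)  # → True
```

The slim ~5% voltage margin is why the inner vacuum pressure (and hence the damping) must be kept below 10 Pa: higher damping would push the required drive voltage past pull-in.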
MEMS Device Wafer Fabrication
The MOEMS devices were fabricated by the qualified IPMS process AME75, developed for electrostatic comb-driven micro scanning mirrors (Figure 11). They use 6" BSOI (bonded silicon-on-insulator) substrates with a 75 µm thick, highly p-doped SOI device layer, a 1 µm buried oxide (BOX) layer, and a 400 µm handling layer. Details of the MEMS process can be found, e.g., in [45]. For this work, we have to point out the following specifics of the device wafer (DW) relevant for the MEMS WLVP:
• use of field isolation trenches (formed from DRIE-etched open trenches by thermal oxidation and refilling with polysilicon) to electrically isolate areas of different electrical potential within the same SOI layer, needed to define the comb drives (Figure 10),
• use of AlSiCu metal lines for electrical signal transmission from the bond islands at the outer chip frame to the inner comb drive actuator (no VIA (vertical interconnect access) exists),
• use of a thin protected aluminum layer as the standard optical coating,
• CMOS (complementary metal-oxide-semiconductor) compatibility of all inline processes due to restrictions caused by in-house CMOS processes for highly integrated micro mirror arrays [57].
The standard Al coating cannot meet the high reflectance of R > 95% required for the NIR-FTS. Therefore, we used Au for the reflective coating and a symmetric coating design for thermal compensation to guarantee a small static mirror deformation of ≤λmin/10 after the WLVP process. To preserve CMOS compatibility, identical Au coatings were deposited in a backend process on the front and rear faces of the silicon mirrors, using shadow masks for lateral patterning of the Au coatings.
Wafer-Level Vacuum Package of Optical MEMS
In the previous MIR-FTS development, a WLVP was not applicable due to (i) the need to use a ZnSe window, and (ii) the limitation to an open trench isolation [25], because a field trench isolation was not available for 75 µm SOI at the time. In addition to the ZnSe window, this open trench isolation (needed also around the bond islands located outside the vacuum cavity) prevented any hermetic vacuum sealing on the wafer level. In this work, the challenge for hermetic sealing of the WLVP results from the topography and roughness of the DW within the areas needed for the future bonding frames. These bonding areas are crossed by the metal lines required to contact the outer bond islands. Profilometer measurements showed a significant topography of ~1.8 µm (p-p) [52] at these DW locations. Due to this surface topography, most bonding methods which could potentially be used for vacuum packaging (see Table 2), such as metallic thermo-compression bonding, solid-liquid inter-diffusion (SLID), eutectic AuSi bonding [39,44], and anodic bonding [46], are not applicable to our WLVP process.
For this work, glass-frit bonding [47][48][49][50][51][52] is the most reliable bonding method, allowing hermetic sealing over surface topographies several µm high. This bonding approach allows realizing hermetic electrical interconnects using the existing metal lines without complex technological changes to the DW. Only the mask layout was adapted to form a closed tetra-ethyl-ortho-silicate (TEOS) oxide frame on the DW within the areas reserved for glass-frit bonding (see Figure 12a). First results on WLVP using glass-frit bonding for hermetic vacuum sealing of electrostatic resonant tilting MEMS scanners were reported in [52]. An inner vacuum pressure of 2-20 mbar (200-2000 Pa) was estimated without using a getter. This is insufficient for this translatory MEMS, which requires an internal pressure of at most 10 Pa. Hence, a thin-film getter [58][59][60] is needed for this WLVP; we used a highly efficient zirconium (Zr)-based thin-film getter from SAES Getters (SAES Getters S.p.A., Lainate (MI), Italy).
Concept of WLVP for NIR-FTS
The schematic set-up of the optical MEMS wafer-level vacuum package (WLVP) is shown in Figure 13a. The WLVP consists of four pre-fabricated substrates, sequentially bonded by three glass-frit wafer bonding processes to form the final WLVP stack with a total thickness of 3476 µm plus three times the thickness of the glass-frit bonding layer.
In addition to the 476 µm thick SOI device wafer (DW), containing all active (movable) MEMS structures, the WLVP stack consists of a 1000 µm thick top glass wafer (TGW, used as the optical window), a 1000 µm thick (100) Si top spacer wafer (TSW), and a 1000 µm thick (100) Si bottom spacer wafer (BW). A double-sided polished borosilicate-glass wafer with high NIR transmission is used as the top glass wafer (TGW). For technological simplification of the WLVP, an ordinary plane-parallel optical glass window was chosen instead of a wedged optical window. Parasitic etalon modulations of the measured NIR spectrum, arising from multiple reflections between the TGW interior surface and the mirror surface, were reduced by means of (i) a broadband antireflective coating (BB-ARC) deposited on the interior surface of the TGW, and (ii) a TSW of 1000 µm thickness, which always guarantees a minimum distance of ≥650 µm between the TGW interior surface and the oscillating mirror, resulting in an etalon-free spectral range slightly smaller than a spectral resolution of ∆ν = 8 cm⁻¹. The bottom wafer (BW), which finally hermetically seals the backside of the WLVP stack, contains a 600 µm deep cavity (TMAH (tetra-methyl-ammonium-hydroxide) etched) with a patterned Zr-based thin-film getter. For electrical characterization, the diced WLVP chips were glued and wire-bonded to a PCB (printed circuit board), using an additional 1 mm thick glass substrate on the backside of the WLVP for simplified handling.
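Two of the numbers above can be cross-checked with simple arithmetic (ours, using the quoted layer thicknesses and the standard FSR = 1/(2d) relation for an air-gap etalon): the four substrates sum to the stated 3476 µm, and the ≥650 µm mirror-to-window gap puts the etalon's free spectral range just below the 8 cm⁻¹ design resolution:

```python
# Cross-check of the WLVP stack numbers (our arithmetic):
# substrate thicknesses in micrometers, excluding the glass-frit layers.
tgw, tsw, dw, bw = 1000, 1000, 476, 1000
print(tgw + tsw + dw + bw)  # → 3476 um total, as stated

# Etalon free spectral range for an air gap d: FSR = 1 / (2 d).
d_cm = 650e-4            # minimum mirror-to-window distance, 650 um in cm
fsr = 1.0 / (2 * d_cm)   # cm^-1
print(round(fsr, 2))     # → 7.69 cm^-1, slightly below the 8 cm^-1 resolution
```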
Process-Flow for NIR-WLVP
The schematic process flow of the WLVP is shown in Figure 14. For fabrication of the top spacer wafer (TSW), a double-side polished, 1000 µm thick, (100)-oriented Si wafer was used. The free optical apertures were realized by TMAH wet etching simultaneously from both sides (using a SiO 2 hard mask). Then, a first, 20 µm thick glass-frit bond layer (Ferro FX-11 036) was screen-printed (patterned by the screen) onto the top spacer wafer. For all glass-frit bonding frames, a width of 490-550 µm was chosen to avoid defects during wafer dicing. After screen printing, the glass-frit layers were first dried at 120 °C, followed by a glazing process at 425 °C for hardening and out-gassing of the glass-frit frame before bonding (see Figure 15). This was followed by the first bonding process, joining the top glass wafer (TGW) and top spacer wafer (TSW) at 440 °C (equal to the glass temperature of the glass-frit layer). For simplicity, the bond interface is located on top of the BB-ARC (uniformly deposited on the inner side of the TGW). Here, the TGW and TSW were aligned flat to flat. Next, the second glass-frit bond layer was screen-printed on the opposite side of the top spacer wafer, using identical processes for drying and pre-baking. Then, the stack of TGW and TSW was glass-frit bonded to the front side of the device wafer (DW) using a slightly reduced bond temperature of 435 °C, in order not to affect the first bond interface.
The third glass-frit bond layer (used for final hermetic vacuum sealing of the WLVP) was screen-printed on top of the BW before the getter deposition process (see Figure 15b). This sequence was used in order to achieve a thermal decoupling of the getter material from the high process temperature needed for the glass-frit pre-bake. Otherwise, the Zr-based getter layer would be completely activated and already saturated in the ambient atmosphere, and would have no effect within the WLVP. The Zr-based thin-film getter was externally deposited, using the PageWafer® process of SAES Getters.
The final hermetic vacuum sealing process consists of two main steps: (i) pre-heating to 200 °C for 2 h before contacting the wafer stack, still under vacuum pumping at process pressure, in order to degas the glass-frit layer and avoid early getter activation in step 1, and (ii) final hermetic vacuum sealing of the WLVP stack at 430 °C and final bonding pressure. Here, the getter material is activated during heating, when the temperature increases to T > 300 °C. In this study, we also investigated the influence of the vacuum process pressure (using 0.1, 1, and 25 Pa) on the inner vacuum pressure of the WLVP.
The WLVP bonding processes were performed using a bond aligner SÜSS BA8 (SÜSS MicroTec SE, Garching, Germany) and an SÜSS SB8 wafer bonder (SÜSS MicroTec SE, Garching, Germany). The final wafer level vacuum package (WLVP) of translatory MEMS is shown exemplarily in Figure 16 before wafer dicing. The details of infrared and microscopic images show a good homogeneity of the bonding frame. Finally, the 3476 µm thick WLVP stack was separated by wafer dicing into individual chips. Moreover, the bond pads were opened for wire bonding using a three-step sawing process. For characterization, the MEMS WLVP chips are mounted and wire-bonded onto a dedicated PCB.
Basic MEMS Characteristics without WLVP
Initially, we studied the amplitude-frequency behavior as a function of vacuum pressure and driving voltage using MEMS devices without WLVP. This is a commonly used method to estimate the vacuum influence on MEMS performance, e.g., [40,61], helpful for determining the minimum vacuum requirements for further WLVP development. Therefore, the MEMS samples were placed into a small vacuum chamber of 67 cm 3 inner volume, using a vacuum turbo pump for external evacuation. The vacuum pressure inside the chamber could be varied from ambient pressure down to 0.1 Pa. The amplitude of the translatory oscillation was measured through an optical window using a Michelson interferometer set-up with a helium-neon (He-Ne) laser operating at 632.8 nm wavelength. The MEMS devices are driven in parametric resonance, electrically excited with a pulsed driving voltage of 50% duty cycle and a pulse frequency of twice the mechanical oscillation frequency. A frequency down-sweep is required to start the oscillation. Before this test, a stability (pull-in) voltage of U pull-in = 45 V was measured, limiting the maximal driving voltage. The dependence of the amplitude on pressure is exemplarily shown in Figure 17a for 12 V driving voltage.
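The parametric excitation described above can be illustrated with a small waveform model: a unipolar pulse train with 50% duty cycle whose pulse frequency is twice the mechanical oscillation frequency. The sketch below is our illustration; the numeric values are examples, not the paper's drive electronics:

```python
# Pulsed parametric drive: 50% duty cycle, pulse frequency = 2 * f_mech.
# Phase origin and voltage level are illustrative assumptions.

def drive_voltage(t: float, f_mech: float, v_drive: float) -> float:
    """Drive voltage at time t for a pulse train at twice f_mech."""
    phase = (t * 2.0 * f_mech) % 1.0  # position within one pulse period
    return v_drive if phase < 0.5 else 0.0

f_mech = 287.75  # example mechanical frequency (Hz) -> 575.5 Hz pulse frequency
T = 1.0 / f_mech

# Over one mechanical period, the drive is on exactly half the time and
# delivers two pulses (one per half-oscillation):
on_fraction = sum(
    1 for i in range(1000) if drive_voltage(i * T / 1000.0, f_mech, 12.0) > 0
) / 1000.0
print(on_fraction)  # 0.5
```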
It is obvious that a vacuum of below 4 Pa is needed to guarantee a stable parametric resonance oscillation at 575.5 Hz (pulse frequency) with 350 µm amplitude; this operation frequency is only 0.2 Hz above the unstable resonance point [45,55]. The voltage dependence of the parametric resonance curves is shown for two relevant vacuum levels: at 0.1 Pa (the target for this WLVP using a getter, see Figure 17b) and at 500 Pa (see Figure 17c, assuming a glass-frit bonded WLVP without getter, realized previously in [52]). To achieve the full amplitude of 350 µm, a driving voltage of 30 V is needed at 500 Pa, which could be significantly reduced in this study to 3 V at 0.1 Pa. It was observed for this translatory MEMS that the scan amplitude cannot be set independently by driving voltage and frequency (see Figure 17): all parametric resonance curves lie directly on top of one another.
They do not form a parametric family of curves (as is usually the case for torsional MEMS scanners [45]); i.e., there are no amplitude-frequency curves that can be displaced relative to each other via the driving voltage. Instead, an increase in driving voltage only causes an increase in the maximum scan amplitude, accompanied by a decreasing frequency of the (unstable) resonance point. This behavior (the lack of additional parametric response curves displaced by a parameterized driving voltage) is explained by (i) the small electrostatic actuation range in z of the comb drives (see Figure 10a) and (ii) the degressive spring characteristic [62].
Further experiments show a non-negligible influence of the volume of the encapsulated vacuum cavity on the resulting pressure dependency of the MEMS characteristics (see Figure 18a). We modified the experimental set-up (i) to determine realistic values of the minimum vacuum requirements for the development of the WLVP, and (ii) to also allow an indirect measurement of the internal vacuum pressure inside the WLVP using a pressure-calibrated MEMS characteristic as a reference. Therefore, we encapsulated the MEMS within an additional cavity (similar in size and volume to the real WLVP), both evacuated inside the external vacuum chamber (see Figure 18c). We needed two iterations of this modified set-up to eliminate size effects. For reference, the pressure-dependent MEMS characteristics were first verified using the Michelson interferometer set-up. Finally, the vacuum pressure dependencies of scan amplitude and Q factor were measured with a laser vibrometer (Polytec MSA 500). Here, Q factors were calculated from the freely damped MEMS oscillation (see Figure 18b). We have to point out that this method is only applicable for large oscillation amplitudes >10 µm. Using a calibration of the internal pressure in terms of the measured Q factor, the minimum requirements for the MEMS WLVP (defined as meeting the full 350 µm amplitude at ≤8 V) were determined as follows:
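The Q-factor extraction from the freely damped oscillation mentioned above can be sketched as a ring-down fit: the envelope decays as A(t) = A0·exp(−πft/Q), so a linear fit to the log-amplitude yields Q. The synthetic data and helper below are our illustration, not the vibrometer's actual processing:

```python
import math

# Q from a ring-down: ln A(t) is linear in t with slope -pi*f/Q.

def q_from_ringdown(times, amps, f_res):
    """Least-squares slope of ln(amplitude) vs time -> Q = -pi*f/slope."""
    n = len(times)
    mean_t = sum(times) / n
    logs = [math.log(a) for a in amps]
    mean_y = sum(logs) / n
    num = sum((t - mean_t) * (y - mean_y) for t, y in zip(times, logs))
    den = sum((t - mean_t) ** 2 for t in times)
    return -math.pi * f_res / (num / den)

# Synthetic ring-down: f = 287.75 Hz (example), Q = 18,000, 350 um start amplitude.
f, q_true, a0 = 287.75, 18000.0, 350.0
ts = [i * 0.05 for i in range(100)]  # 5 s of envelope samples
amps = [a0 * math.exp(-math.pi * f * t / q_true) for t in ts]
print(round(q_from_ringdown(ts, amps, f)))  # recovers 18000
```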
Characteristics of Initial MEMS WLVP Run
In the initial WLVP fabrication run, four wafer stacks, i.e., (i) two stacks without getter and gold coating and (ii) two stacks with zirconium (Zr)-based getter and gold coating, were vacuum-bonded at 25 Pa process pressure. Using the calibrated characteristic of Q vs. vacuum pressure, it was possible to measure the inner vacuum pressure within the cavity of the WLVP.
MEMS Characteristics with WLVP
For the two WLVP stacks without getter, insufficient internal vacuum pressures of 139 Pa (Q = 18.6) and 62 Pa (Q = 45.3) were measured, with Q factors below the critical values. On the other hand, for the WLVP stacks with getter, a sufficiently high Q factor of 18,000 (corresponding to 0.25 Pa) was achieved for WLVP stack 03. The second stack, 04, failed (p = 46.8 Pa and Q = 61.7) for a reason unknown at this stage.
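Inverting the calibrated Q-vs-pressure characteristic amounts to interpolating the calibration curve; in the molecular-flow damping regime Q varies roughly as 1/p, so log-log interpolation is a natural choice. The sketch below uses the (Q, pressure) pairs quoted above purely as example anchor points; the actual calibration curve of Figure 18 is much denser:

```python
import math

# Example (Q, pressure in Pa) anchor points taken from the values quoted above.
CAL = sorted([(18.6, 139.0), (45.3, 62.0), (61.7, 46.8), (18000.0, 0.25)])

def pressure_from_q(q: float) -> float:
    """Piecewise-linear interpolation in log-log space (extrapolates at ends)."""
    xs = [math.log(c[0]) for c in CAL]
    ys = [math.log(c[1]) for c in CAL]
    x = math.log(q)
    if x <= xs[0]:
        i = 0
    elif x >= xs[-1]:
        i = len(xs) - 2
    else:
        i = max(j for j in range(len(xs) - 1) if xs[j] <= x)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return math.exp(ys[i] + t * (ys[i + 1] - ys[i]))

print(f"{pressure_from_q(45.3):.1f} Pa")  # reproduces the 62.0 Pa anchor point
```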
The amplitude-frequency characteristic of a functional MEMS WLVP (a sample of stack 03, measured with the Michelson interferometer set-up) is exemplarily shown in Figure 19b. At 0.25 Pa inner vacuum pressure, a driving voltage of 4 V is needed for an amplitude of 350 µm. The long-term stability of the inner vacuum was initially verified for WLVP stack 03 to be >10 years [54] using an ultrafine Ne vacuum leakage test [63], with an estimated mean time to getter saturation of 6.34 years. The long-term stability of the inner vacuum of stack 03 was verified again experimentally after 32 months, resulting in a vacuum of 0.47 Pa and a new estimated (residual) mean time to getter saturation of 7.23 ± 1.54 years. These results clearly demonstrate the potential of the present WLVP strategy for a sufficiently long lifetime of >10 years.
Influence of Process Temperature on Optical Coating
During the initial WLVP process, the MEMS devices have to withstand two process steps at elevated temperatures of 435 and 440 °C due to hermetic glass-frit bonding. These high process temperatures can be critical for the static planarity of the Au-coated mirrors. To investigate the influence of process temperature on the optical mirror coating, the static mirror deformation was measured for WLVP stacks 03 and 04 before and after the WLVP process using white-light interferometry (WLI). Two different commercial Au coatings were used: Au variant 1 (10 nm Cr + 70 nm Au, identical for the front- and backside) for stack 04, and Au variant 2 (7 nm Cr + 70 nm Au, identical for the front and back surfaces) for stack 03. The WLI results for stack 04 are summarized in Figure 20; significant defects caused by the diffusion of Au into the silicon substrate are obvious. These Au diffusion defects are less frequent on the front surface but affected 100% of the samples on the backside (see Figure 20a). In consequence, a clearly too large static mirror curvature of 1/R = 0.50-0.85 m −1 (equal to δpp = 1.5-2.5 µm, compared to the small initial curvature of 1/R = 0.0014-0.0454 m −1 with δpp = 4.5-142 nm static mirror deformation) was measured with WLI for stack 04. The results of stack 03 with Au coatings of variant 2 showed fewer defects and a better performance, with a final curvature of 1/R = 0.13-0.19 m −1 (equal to δpp = 395-440 nm) after the WLVP process, in comparison to the initial values in the range 1/R = −0.10 to −0.14 m −1 (δpp = 309-440 nm).
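For reference, the curvature 1/R and the peak-to-valley deformation δpp quoted here are linked by the spherical-cap approximation δpp ≈ D²/(8R) for a mirror of diameter D. The value pairs above are consistent with D ≈ 5 mm; since the diameter is not stated in this excerpt, that figure is our inference:

```python
# Spherical-cap sag: delta_pp ~= D^2 / (8R). Mirror diameter D = 5 mm is an
# assumption inferred from the quoted curvature/deformation pairs.

def delta_pp_nm(curvature_per_m: float, d_mm: float = 5.0) -> float:
    """Peak-to-valley sag in nm for a curvature 1/R given in 1/m."""
    d_m = d_mm * 1e-3
    return abs(curvature_per_m) * d_m ** 2 / 8.0 * 1e9

print(f"{delta_pp_nm(0.50):.0f} nm")  # ~1.56 um, matching the quoted ~1.5 um
print(f"{delta_pp_nm(0.13):.0f} nm")  # ~406 nm, close to the quoted 395 nm
```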
Discussion on Improvements of MEMS WLVP
The significant degradation of mirror planarity was caused by diffusion of gold into the silicon mirror plate, resulting in higher mechanical stresses. It also disturbed the original symmetry of the mirror coating required for temperature compensation. The higher defect density on the rear mirror surface can be explained by its higher thermal load in the bonding tool. As a consequence, the chromium adhesion layer could not safely prevent the Au diffusion. In order to prevent Au diffusion, we tested an additional, diffusion-tight barrier layer, which was deposited on the bare silicon mirror surface before the Au coatings were deposited. A 40 nm thick barrier layer of Al 2 O 3 was homogenously deposited with atomic layer deposition (ALD) onto the entire DW. First, we simulated the influence of the process temperature on the (i) mirror planarity and (ii) the defect density of the Au coating after the WLVP process using MEMS dummy wafers (without electrical function). Two groups were tested (i) with and (ii) without Al 2 O 3 diffusion barrier layer. We also compared the previously tested variants of Au coatings of two different commercial suppliers.
For the experimental simulation of the WLVP process, the samples were exposed to the same temperature cycles and thermal budget as in the real process. In Figure 21 the results on mirror curvature are displayed before and after the simulated WLVP process as boxplots for all tested variants. It is obvious that the Au coating variant 2 with an ALD barrier layer achieved the smallest mirror curvature of 1/R = 0.02 ± 0.015 m −1 corresponding to a static mirror deformation of δ pp = 55.1 ± 27.9 nm. Without the ALD barrier layer, small Au diffusion defects on the backside of the wafer result in a larger curvature and a broader variation range. In all samples with the Al 2 O 3 diffusion barrier layer, no diffusion defects occurred after the simulated thermal process load.
The reason for the poor vacuum of the defective WLVP stack 04 (with p = 46.8 Pa, although a Zr-based getter was used) was suspected to originate in local defects inside the glass-frit bonding frame. After external getter deposition at the SAES Getters, local defects of cracks and spalling were observed (see Figure 22a, bond frame before hermetic vacuum sealing), which could probably increase the leakage rate. The receiving inspection at SAES also showed small cracks within the previously defect-free glass-frit bond frames. The hypothesis of an increased leakage rate for WLVP stack 04 due to cracks was checked by means of an ultrafine neon vacuum leakage test [63]. However, the test showed a low leakage rate identical to the good samples. In other words, the glass-frit bond frames were hermetically sealed. The cause of the error had to be inside the vacuum cavity. By means of a comparative test using residual gas analysis (RGA) at SAES Getters, a high pressure of 4.4 mbar (440 Pa) and an unusual residual gas atmosphere was measured, containing 97% hydrogen and methane (inside the cavity) only for tested WLVP samples of the affected stack 04. This indicates an internal source of outgassing. Microscopic investigations and correlation with further SEM inspections revealed local defects of opened voids within the filled isolation trenches (see Figure 22c). The microscopic inspections found that 36.4% of all device wafers of this study were affected by these defects, observed also in wafer stack 04 but not for stack 03. It is suspected that these voids contained polymer residuals from the DRIE passivation process. In the improved WLVP process, the 40 nm thick Al2O3 barrier layer, deposited on the entire DW by atomic layer deposition (ALD), may help to encapsulate these void defects. Afterward, failure (observed at stack 04) no longer occurred. 
To reduce the risks for the hermeticity of the WLVP, the final glass-frit bonding process used for hermetic vacuum sealing was changed. To avoid any cracks or spalling inside the final glass-frit bonding frame (caused during the external getter deposition), we decoupled the getter deposition on the BW from the final bonding frame: the final glass-frit bonding frame (used for hermetic vacuum sealing of the WLVP) was instead screen-printed on the backside of the DW. The thermal influence of this additional third glass-frit bonding process (affecting the mirror planarity) was tested and found to be tolerable.
Final Characteristics of Improved MEMS WLVP
Finally, four WLVP stacks were fabricated with the improved process (exchange of the third glass-frit layer to the DW backside, see Figure 23a), also using a reduced vacuum process pressure to enhance the effective getter capacity of the WLVP. Two samples each were vacuum-bonded at 5 Pa (#8, #9) and 0.1 Pa (#10, #11). The inner vacuum pressure was measured for individual chips with a laser scanning vibrometer (Polytec MSA500) using the calibration method shown in Figure 18. In comparison to the initial results (see Figure 19b: 25 Pa process pressure, resulting in 0.25 Pa inside the WLVP), we achieved a slightly reduced median vacuum pressure of 0.155 ± 0.019 Pa. The boxplot diagram of the inner vacuum pressure is shown in Figure 23b. We found only a negligible dependency on the vacuum process pressure, resulting in 0.170 ± 0.013 Pa inside the WLVP for 5 Pa and 0.114 ± 0.025 Pa for 0.1 Pa. Furthermore, higher Q factors were measured, typically in the range of Q = 38,600-48,500.
Micromachines 2020, 11, x FOR PEER REVIEW
Final Characteristics of Improved MEMS WLVP
Finally, four WLVP stacks were fabricated with the improved process (exchange of the third glass-frit layer to the DW backside, see Figure 23a), also using a reduced vacuum process pressure to enhance the effective getter capacity of the WLVP. Two samples each were vacuum-bonded at 5 Pa (#8, #9) and 0.1 Pa (#10, #11). The inner vacuum pressure was measured for individual chips with a laser scanning vibrometer (Polytec MSA500) using the calibration method shown in Figure 18. In comparison to the initial results (see Figure 19b: 25 Pa process pressure, resulting in 0.25 Pa of WLVP), we achieved a slightly reduced median vacuum pressure of 0.155 ± 0.019 Pa. The boxplot diagram of inner vacuum pressure is shown in Figure 23b. We found only a negligible dependency on vacuum process pressure, resulting in 0.170 ± 0.013 Pa inside the WLVP for 5 Pa and 0.114 ± 0.025 Pa for 0.1 Pa. Furthermore, higher Q factors were measured, typically in the range of Q = 38,600-48,500. The influence of a higher Q factor on the resulting frequency characteristic is shown exemplarily in Figure 24a for two samples of the initial (Q = 18,000) and final (Q = 46,598) WLVP run. The rise of the frequency response curves is practically identical. No steeper characteristic (as anticipated) is visible for the sample with the higher Q value. On the contrary, it is somewhat flatter due to the higher driving voltage of 8 V used (preferred in the meantime for FTS integration). Furthermore, Figure 24b shows a good reproducibility of frequency characteristics. Both diagrams indicate a robust WLVP process window for inner vacuum pressure and frequency characteristic. The thermal influence of the additional third glass-frit bonding process (used for final vacuum sealing of the WLVP on the backside of the DW, affecting the mirror planarity) was tested to be tolerable.
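The Q-factor-based determination of inner vacuum pressure relies on the calibration model of Figure 18, which is not reproduced in this text. As an illustration only, the following Python sketch assumes a power-law dependence Q = a·p^(-b) (a common assumption for gas damping in the molecular-flow regime) fitted to synthetic calibration points; the function names and all numbers are illustrative assumptions, not the paper's data.

```python
import numpy as np

def fit_q_pressure_calibration(pressures_pa, q_factors):
    """Fit a power-law calibration Q = a * p**(-b) by linear regression in log-log space.

    The paper's actual calibration model (Figure 18) is not reproduced here;
    the power law is an assumption typical for gas damping in the
    molecular-flow regime.
    """
    slope, intercept = np.polyfit(np.log(pressures_pa), np.log(q_factors), 1)
    return np.exp(intercept), -slope          # returns (a, b)

def pressure_from_q(q, a, b):
    """Invert the calibration: p = (a / Q)**(1 / b)."""
    return (a / q) ** (1.0 / b)

# Illustrative (synthetic, not measured) calibration points:
p_cal = np.array([0.1, 0.25, 1.0, 5.0, 25.0])
q_cal = 5000.0 * p_cal ** -0.9
a, b = fit_q_pressure_calibration(p_cal, q_cal)
q_meas = 5000.0 * 0.155 ** -0.9               # a chip measured at this Q ...
print(round(pressure_from_q(q_meas, a, b), 3))  # ... maps back to 0.155 Pa
```

Once such a calibration is fitted from reference chips at known pressures, each vibrometer-measured Q factor directly yields the inner vacuum pressure of a sealed chip.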
Furthermore, the time until complete getter saturation (after which the inner vacuum pressure starts to increase) was estimated using several time-shifted vibrometer-based control measurements of the internal pressure on individual chips of stack #10.
Therefore, a normalized conductance of the leakage channel into the inner WLVP cavity of 0.166 cm³ volume was calculated for stack #10 (Figure 23b). Using this value and the specification of the PageWafer® thin-film getter used, we calculated the time until complete getter saturation. This calculation assumes a temporally constant conductance of the leak channel and no internal sources of outgassing. A mean time to getter saturation of 7.91 ± 0.08 years was calculated for stack #10.
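The saturation-time estimate divides the getter's sorption capacity by the (assumed constant) leak throughput. A minimal sketch of this calculation follows; the leak rate and getter capacity below are hypothetical values chosen only to reproduce the reported order of magnitude, since the paper's measured conductance and the PageWafer getter specification are not given in this text.

```python
def time_to_getter_saturation(leak_rate_pa_l_per_s, getter_capacity_pa_l):
    """Estimate the time (in years) until the thin-film getter saturates.

    Assumes a temporally constant leak throughput (Pa*L/s) and no internal
    outgassing, as in the paper's estimate. Both inputs below are
    hypothetical values chosen only to illustrate the calculation.
    """
    seconds = getter_capacity_pa_l / leak_rate_pa_l_per_s
    return seconds / (365.25 * 24 * 3600)

# Hypothetical leak rate and getter sorption capacity:
print(round(time_to_getter_saturation(2.0e-9, 0.5), 1))  # -> 7.9 (years)
```

Repeating the pressure measurement at several points in time, as done for stack #10, pins down the leak throughput and thereby the remaining getter lifetime.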
The parasitic tilt angles in x and y, occurring during a translatory oscillation in z with 350 µm amplitude, were also measured for individual samples with laser scanning vibrometry (Polytec MSA 500). We observed parasitic tilt angles in the range of 20-80 arcsec, with an average of 60.2 ± 15.6 arcsec. The result for a sample with a small parasitic mirror tilt of ±20 arcsec is shown exemplarily in Figure 25a.
Figure 21. The saddle deformation arises from an incorrect alignment of the shadow masks used for the patterning of the front- and backside Au coatings.
Conclusions
In this article, we presented a wafer-level vacuum-packaged (WLVP) translatory MEMS actuator developed for a compact NIR-FT spectrometer with a high SNR >1000 in the spectral bandwidth of 1200-2500 nm. For this purpose, two objectives had to be solved that were not achievable with the current state of the art: (i) design of a large-stroke pantograph MEMS device with minimized dynamic mirror deformation of ≤80 nm and (ii) development of a glass-frit-based optical WLVP (to be compatible with the fixed MEMS process), avoiding degradation of the optical mirror quality at high process temperatures.
The large-stroke resonant MEMS design uses a fully symmetrical four-pantograph mirror suspension, avoiding problems with tilting and parasitic modes. To minimize dynamic mirror deformation, the best design compromise was found in (a) reduction of the resonance frequency of the out-of-plane translation to <270 Hz and (b) mechanical decoupling of the 5 mm mirror plate from the pantograph suspension using a ring-shaped support structure with additional radial decoupling springs. The use of additional stiffening structures at the mirror backside [64] was avoided to simplify the MEMS process. This MEMS design approach results in a sufficiently small dynamic mirror deformation of δpp = 84 nm = λmin/9.5 at 350 µm amplitude and 267 Hz out-of-plane translation, driven electrostatically in parametric resonance. Due to significant gas damping, the MEMS device has to be operated in vacuum. In a first step, we experimentally studied the influence of vacuum pressure and cavity size on the MEMS behavior. The minimum requirements of ≤3.21 Pa and Q = 1177 inside the WLVP cavity of 0.166 cm³ volume were determined by laser vibrometer experiments. From these experiments, we also developed an experimentally verified calibration model for the Q-factor-based determination of the inner vacuum pressure inside the MEMS WLVP.
The challenge for the hermetic sealing of this MEMS is the significant surface topology ≥2 µm caused by the AlSiCu metal lines crossing the sealing area. For the hermetic sealing of the NIR-WLVP, we selected glass-frit bonding as best suited and technologically compatible with our in-house MEMS scanner process AME75, resulting in lower development effort. In contrast to alternative bonding approaches (see Table 2), the highly ductile 25 µm thick glass-frit bond layer safely seals the WLVP hermetically and simultaneously forms embedded lateral signal feedthroughs to outer bond islands. On the other hand, glass-frit bonding requires high process temperatures of 430-440 °C, which the MEMS device has to withstand without compromising its optical performance. In comparison to eutectic alloy bonding (e.g., Au0.80Sn0.20 with a eutectic temperature of 280 °C [39]), glass-frit-based WLVP also requires a broader sealing frame, resulting in a larger chip size and higher costs. In our case, the 530 µm wide sealing frame is fully acceptable compared to the large overall MEMS chip size of 12.4 × 12.4 mm². The NIR-FTS-optimized WLVP was developed in two iterations to investigate failure modes and optimize reliability. For a high reflectance ≥95% in the NIR, we had to use Au for the optical coating. To enable thermal compensation of the stresses induced by the bonding processes, we used a symmetric coating design [65], depositing identical Au coatings on the front- and backside of the mirror. A small static mirror deformation of ≤100 nm was achieved after the WLVP process. To guarantee a long-term stable inner vacuum pressure of 0.2 Pa, we applied a Zr-based thin-film getter using the external PageWafer process from SAES Getters. In the initial WLVP run, we achieved 0.25 Pa inner vacuum pressure of the WLVP and a Q factor of 18,000 at 25 Pa process pressure, resulting in a driving voltage of only 4 V to meet the required amplitude of 350 µm.
After 32 months, the remaining mean time-to-getter saturation was estimated to be 7.2 ± 1.5 years, which demonstrates the potential of the WLVP for a sufficiently long lifetime >10 years.
On the other hand, several serious problems were observed for the original WLVP concept [54]. Initially, the 70 nm thick Au coatings with a 7-10 nm Cr adhesion layer were deposited directly onto the silicon mirror plate. After the WLVP process, significant mirror defects and unacceptably large mirror deformations caused by Au diffusion into silicon were observed. Via ALD deposition of an additional 40 nm thick Al2O3 diffusion barrier layer (homogeneously deposited on the entire MEMS device wafer prior to Au deposition), this problem could be completely eliminated. In addition to preventing the Au-Si reaction (which a Cr layer alone does not), the conformal Al2O3 layer is also advantageous for encapsulating inner sources of outgassing contained within the DW (e.g., voids of the filled trenches). In this work, another potential reliability problem for the final glass-frit bond was observed. In the initial WLVP run, the final glass-frit bond layer was deposited on the BW (which also carries the thin-film getter) before getter deposition using the PageWafer process from SAES Getters. After getter deposition, small cracks or spallings were observed inside the final glass-frit bonding frame. To avoid this potential risk for WLVP hermeticity, we finally switched the third glass-frit layer from the BW to the DW backside in order to decouple the final hermetic vacuum sealing from the getter deposition.
In the final WLVP process, the inner vacuum pressure was reduced to 0.15 ± 0.02 Pa and higher Q factors of 38,600-48,500 were measured due to a reduced process pressure of 0.1-5 Pa. However, it was shown that the process pressure has only a minor influence on the internal vacuum pressure. The achieved inner vacuum pressure is comparable to the state-of-the-art WLVP of MOEMS [42], but avoids the thermal degradation of mirror planarity observed for glass-frit-bonded micro scanners in [52,53]. Finally, static mirror deformations of ≤100 nm were measured for WLVP samples with symmetric Au coatings and additional 40 nm thick Al2O3 diffusion barrier layers. The mean time-to-getter saturation was estimated to be 7.9 ± 0.1 years. The residual dynamic tilt angle (which occurs during a full translational oscillation with 350 µm amplitude) was measured to be in the range of 20-80 arcsec. For NIR-FTS system integration, MEMS devices with small tilt must be selected to guarantee an SNR >1000 in the spectral bandwidth of 1200-2500 nm. Finally, we compared the performance of this NIR-FTS-specific pantograph MEMS device (350 µm amplitude, resonant operation at 267 Hz, dynamic mirror deformation of δpp = 84 nm) with the latest state of the art for MEMS-based NIR-FTS. In [64], a MEMS-based NIR-FTS with 7 nm spectral resolution was reported using a translatory MEMS with a 3 mm mirror diameter, driven electrostatically in resonance at 265.5 Hz, resulting in an amplitude of 125 µm under normal ambient conditions. At 125 µm amplitude, a parasitic tilt of 2.2/1000° and a mirror deformation of 100 nm were measured. Compared to [64], our pantograph MEMS has a higher optical throughput and a 2.8-fold higher amplitude. Hence, it has the potential for increased spectral resolution and higher SNR.
Summary and Outlook
Although monolithic, highly miniaturized MEMS-based NIR-FTSs exist today, we follow a classical optical FT instrumentation approach using a resonant MEMS with precise out-of-plane translatory oscillation of a 5 mm diameter mirror for optical path-length modulation. Our advantages are a higher optical throughput and resolution in comparison to highly miniaturized systems, as well as mechanical robustness and insensitivity to vibration and mechanical shock compared to conventional FTS mirror drives. The new vacuum WL-packaged translatory MEMS devices are very promising for compact FTSs, potentially allowing the replacement of expensive and complex conventional mirror drives. The versatility, high acquisition rate, and robustness of an MOEMS-based FTS make it ideal for process control and applications in harsh environments (e.g., surveillance of fast chemical reactions). This potentially enables a new family of compact FT analyzers for the NIR spectral region λ = 800-2500 nm with a spectral resolution of ≤15 cm−1, 500 scans/s, and SNR > 1000 within an acquisition time of 1 s (with co-addition of spectra). It should lead to a sensitive, reliable, and easy-to-use stand-alone NIR-FT spectrometer qualified for industrial applications in harsh environments, e.g., for ad hoc inspection of food quality or environmental parameters. The results of the final system integration into a miniaturized FT-NIR spectrometer (using selected MEMS devices with minimal parasitic tilt) will be published elsewhere. For further NIR-FTS systems, recent developments should also be considered [64,66-68].
Compared to alternative WLVPs, the glass-frit bonding process offers long-term hermeticity along with good sealing properties and tolerates the topography of the joining partners through the planarizing effect of the softened glass frit during the bonding process. In parallel, conductive tracks or metallic lead-throughs for device signaling can be covered within the softened glass, avoiding the need for complex and considerably more expensive vertical device signaling [48,49], which raises new questions related to hermeticity, device design, and compatibility. Additionally, glass-frit bonding offers process flexibility due to a wide variety of suitable substrates, including CMOS compatibility [50]. By application-related benchmarking of glass-frit bonding against alternative WLVP approaches, glass-frit bonding can offer a good compromise of challenges, advantages, and disadvantages with respect to the specific features of the final application [39], the required tooling, and the involved manufacturing processes. Additionally, glass-frit bonding still offers potential in terms of reducing the bonding temperature below the already available 380 °C through ongoing glass-frit material development, substituting lead-based with lead-free glass-frit materials, and using alternative or improved glass-frit deposition methods. In order to minimize outgassing behavior, the tuning of the thermal conditioning of the glass frit and the involved bonding procedures could be investigated, along with the reduction of the size and volume of the printed glass-frit bonding interface and its influence on the resulting hermeticity and reliability of the device.
"Physics",
"Engineering"
] |
Time series synchronization in cross-recurrence networks: uncovering a homomorphic law across diverse complex systems
Exploring the synchronicity between time series, especially the similar patterns during extreme events, has been a focal point of research in academia. This is due to the fact that such special dependence occurring between pairs of time series often plays a crucial role in triggering emergent behaviors in the underlying systems and is closely related to systemic risks. In this paper, we investigate the relationship between the synchronicity of time series and the corresponding topological properties of the cross-recurrence network (CRN). We discover a positive linear relationship between the probability of pairwise time series event synchronicity and the corresponding CRN’s clustering coefficient. We first provide theoretical proof, then demonstrate this relationship through simulation experiments by coupled map lattices. Finally, we empirically analyze three instances from financial systems, Earth’s ecological systems, and human interactive behavioral systems to validate that this regularity is a homomorphic law in different complex systems. The discovered regularity holds significant potential for applications in monitoring financial system risks, extreme weather events, and more.
Introduction
Time series data mining has long been a widely researched topic [1-4]. One of its subfields focuses on evaluating the synchronization and dependence between time series, assessing the degree to which a given time series is similar or correlated with another. Considering individual time series as elements in a system, the dependence between elements often leads to emergent behaviors in the system [5-8]. In particular, the dependence or synchronous patterns between the extreme values of time series deserve much more attention, as they are often associated with systemic risks [9-11]. Traditional methods for analyzing the relationship between two time series typically employ linear measures such as the Euclidean distance and the Pearson correlation coefficient. However, time series in real complex systems often exhibit nonlinear similarity and dependence, and the complex dynamical mechanisms underlying time series are difficult to capture solely from the values of a single observation. Therefore, there is a compelling demand for more comprehensive nonlinear measures for time series in order to unveil the synchronicity between pairwise time series, particularly during occurrences of extreme events. Traditional linear measures exhibit notable constraints in this regard, whereas techniques such as recurrence plots (RPs) and complex networks serve as potent nonlinear approaches for elucidating the intricate dynamical mechanisms inherent in time series. The former enables the extraction of underlying dynamical properties from short or non-stationary time series, while the latter facilitates the mapping of time series onto networks and the representation of original sequence attributes using network statistics.
Eckmann et al first introduced the concept of RPs, highlighting the recurrence of states as a fundamental attribute of every dynamical system [12]. Based on this attribute, the RP was developed as a two-dimensional visualization tool. Over the past two decades, the RP has evolved into a nonlinear method for describing complex dynamics [13]. An RP is a graphical representation of a binary symmetric matrix encoding the times at which two states are very close (i.e. neighbors in phase space). The core idea is to reconstruct the attractor from the time series using the delay embedding method [14]. If the metric distance between two points on the reconstructed attractor is less than or equal to a threshold, the state is considered to have recurred. All recurrences of trajectory points can be represented in a two-dimensional graph, with each point indicating whether the corresponding trajectory point has a recurrence. The RP provides abundant information about the underlying dynamical properties of the system. By analyzing the recurrence matrix, information about the system's dynamics can be extracted and quantified using techniques like recurrence quantification analysis (RQA) [15]. Additionally, RP-based techniques can analyze short and non-stationary data, making them highly applicable to studying real-world data [16]. Although there is no precise definition in academia for the length of so-called 'short' time series, recurrence-based methods have found extensive application in fields facing data acquisition challenges, such as geology, paleoclimatology, and physiology, among others. Research in these areas, employing simulation and empirical evidence, has demonstrated the effectiveness and robustness of recurrence-based methods when analyzing short time series with lengths greater than 100 but not exceeding 500 [12,15,17-19]. Therefore, this paper will also employ empirical studies using sample sequences within this range of lengths.
In recent years, significant progress has been made in developing methods for complex system analysis based on the RP. However, since the RP is typically used to analyze the recurrence patterns of a single time series, further extensions are required to analyze the similar patterns between pairwise time series. Marwan et al proposed the concept of the cross-recurrence plot (CRP) [13], which simultaneously embeds two time series into phase space and compares their dynamic behaviors [20]. The CRP displays all times when a state of one dynamical system occurs simultaneously in a second dynamical system, providing a two-dimensional cross-recurrence matrix. In other words, the CRP shows all the times when the phase space trajectory of the first system is roughly the same as the trajectory in the phase space of the second system. Similarly, quantification tools based on the CRP, such as cross-recurrence quantification analysis (CRQA), can measure how and to what extent two time series exhibit similar patterns. This analysis framework was initially developed and widely used in the natural sciences, such as heart rate variability, seismology, and chemical fluctuations, among other fields [13,20]. In psychology, it has found extensive applications in the field of motor control [21-23]. Richardson et al provided a broader context for the method in the domains of dynamical systems and psychology [24]. It has also been applied to capture dynamic patterns between individuals, for example, uncovering interaction behaviors during goal-oriented tasks [25] or conversations [26]. In these applications, the time series of interest can be factorial data describing body sway or eye movement states, as well as numerical data such as heart rate [21,26,27].
Due to their ability to capture both local and global properties, complex networks have become instrumental in understanding the complex relationships and information flow among different components in extended systems [16]. Complex networks have gained considerable popularity in analyzing complex, particularly spatially extended, systems [28,29]. Additionally, because the CRP provides an adjacency matrix, it can serve as a basis for constructing a complex network. Therefore, by utilizing the CRP as an intermediary, it becomes possible to map pairwise nonlinear time series onto a network, enabling the use of network analysis tools instead of traditional methods for time series analysis. Donner et al demonstrated the fundamental relationship between the topological properties of the recurrence network and the statistical properties of attractor densities in the underlying dynamical system [30]. This complements the existing RQA by incorporating network descriptions as new quantitative features of the dynamic complexity of time series. Recurrence networks and related statistical measurements have also become important tools for analyzing time series data [31]. However, to the best of our knowledge, statistical metrics of cross-recurrence networks (CRNs) have received little attention and have not been widely utilized to assist in the analysis of the original time series. As we are interested in capturing certain types of cross-correlation or dependence between pairwise time series, which are often not accurately captured by traditional linear analysis tools such as Pearson correlation coefficients, we construct CRNs based on the original sequences. We establish a connection between the statistical features of the CRN and the original time series, aiming to reveal the coupling relationships of the original sequences using statistical metrics derived from the established network.
Inspired by Wallot et al [27], we examine the statistical metrics of CRNs established from pairwise time series alongside the properties of the time series themselves. Through theoretical derivation, we discover a significant positive linear correlation between the clustering coefficient of the CRN and the synchronicity of the pairwise time series. To validate this regularity, we consider multicomponent dynamical systems and employ the coupled map lattices (CML) technique to simulate multidimensional time series with coupling relationships. By varying the coupling strength between the sequences, we find that there exists a strong linear relationship between the synchronicity of pairwise sequences and the clustering coefficient of their corresponding CRNs. The simulation results confirm our findings. Furthermore, we conduct empirical analyses in financial systems, Earth systems, and human interactive systems. We find strong universality of this regularity in time series from different complex systems, indicating it as a homomorphic law. Because recurrence networks have a relatively stable topological structure and are applicable to short time series, the regularity uncovered in this study reveals the relationship between statistical metrics of complex networks derived from time series mapping and event synchronicity. Thus, it holds significant practical implications for real-time monitoring of future financial risks, natural crises, or human behavior using CRNs, particularly when faced with limited data.
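The CML-based validation described above can be illustrated with a minimal Python sketch. The two symmetrically coupled logistic maps (a minimal stand-in for a coupled map lattice), the event-synchronicity measure (co-occurrence of upper-decile events), and the CRN construction (symmetrized cross-recurrence matrix as adjacency, global clustering coefficient/transitivity) are all simplifying assumptions of this sketch and may differ from the paper's exact definitions.

```python
import numpy as np

def coupled_logistic(n, c, seed=0):
    """Two symmetrically coupled logistic maps x' = (1-c)f(x) + c f(y), f(v) = 4v(1-v)."""
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(0.1, 0.9, 2)
    f = lambda v: 4.0 * v * (1.0 - v)
    xs, ys = [], []
    for _ in range(n + 100):                      # first 100 steps are transient
        x, y = (1 - c) * f(x) + c * f(y), (1 - c) * f(y) + c * f(x)
        xs.append(x); ys.append(y)
    return np.array(xs[100:]), np.array(ys[100:])

def cross_recurrence(x, y, eps):
    """CR_ij = 1 if |x_i - y_j| <= eps (unembedded case, m = 1)."""
    return (np.abs(x[:, None] - y[None, :]) <= eps).astype(float)

def global_clustering(cr):
    """Transitivity of the undirected network with the symmetrized CR as adjacency."""
    a = np.maximum(cr, cr.T)
    np.fill_diagonal(a, 0.0)
    a2 = a @ a
    triads = a2.sum() - np.trace(a2)              # ordered paths of length 2
    return (a2 @ a).trace() / triads if triads else 0.0

def event_sync(x, y, q=0.9):
    """Fraction of x's extreme events (above the q-quantile) co-occurring in y."""
    ex, ey = x > np.quantile(x, q), y > np.quantile(y, q)
    return (ex & ey).sum() / max(ex.sum(), 1)

x0, y0 = coupled_logistic(300, c=0.05)            # weak coupling
x1, y1 = coupled_logistic(300, c=0.45)            # strong coupling (synchronizing)
c0 = global_clustering(cross_recurrence(x0, y0, 0.1))
c1 = global_clustering(cross_recurrence(x1, y1, 0.1))
s0, s1 = event_sync(x0, y0), event_sync(x1, y1)
print(f"weak coupling:   sync={s0:.2f}  clustering={c0:.2f}")
print(f"strong coupling: sync={s1:.2f}  clustering={c1:.2f}")
```

With stronger coupling the two maps synchronize, so the event synchronicity rises, and the CRN clustering coefficient is expected to rise with it, in line with the positive relationship reported in the paper.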
The remaining sections are organized as follows: section 2 provides an introduction to the fundamental methods of RPs and CRPs, along with their corresponding statistical metrics. Additionally, it discusses the network metrics derived from the RP and CRP. In section 3, a theoretical derivation is presented to establish the mathematical relationship between the clustering coefficient of the CRN and the probability of synchronicity. Section 4 validates this relationship by employing the CML technique to simulate coupled multidimensional time series. Section 5 presents empirical evidence using real-world financial time series, rainfall time series, and eye-tracking sequences during human interactions. These empirical analyses aim to demonstrate the homomorphic nature of the discovered regularity across diverse complex systems. Finally, section 6 provides a summary and conclusion of the paper, highlighting the key findings and contributions.
Method
An RP is a graphical representation of the recurrent states of a dynamical system in its m-dimensional phase space. For all phase space vectors $\vec{x}_i$, a pairwise distance-based measurement is performed:

$$R_{i,j}(\varepsilon) = \Theta\left(\varepsilon - d(\vec{x}_i, \vec{x}_j)\right), \quad i,j = 1, \ldots, N, \qquad (1)$$

where $\Theta(\cdot)$ is the Heaviside function, $\varepsilon$ represents the threshold of closeness, and $d$ is the measure of closeness. Different measures can be used to quantify closeness, such as spatial distance, string metrics, or local rank orders [13,32]. In most cases, spatial distance is considered using the Euclidean distance, where $d(\vec{x}_i, \vec{x}_j) = \|\vec{x}_i - \vec{x}_j\|$. In the binary recurrence matrix $R$, if the distance $\|\vec{x}_i - \vec{x}_j\|$ is less than $\varepsilon$, the corresponding $R_{i,j}$ is set to 1. The phase space trajectories can be reconstructed from the time series $\{u_i\}_{i=1}^{N}$ using time-delay embedding [33]:

$$\vec{x}_i = (u_i, u_{i+\tau}, \ldots, u_{i+(m-1)\tau}), \qquad (2)$$

where $m$ represents the embedding dimension and $\tau$ represents the time delay, and $\{u_i\}_{i=1}^{N}$ represents the observed values of the time series variable of interest, derived from our collected data samples. Optimal values for $m$ and $\tau$ can be determined by calculating the average mutual information (AMI) function and the false nearest neighbors (FNN) function, ensuring the coverage of all free parameters and avoiding autocorrelation effects [34]. Specifically, FNN can be computed for different embedding dimensions, and the optimal embedding dimension is chosen as the first local minimum or the dimension corresponding to a smooth curve. Similarly, for the time delay, AMI is calculated for different time lags, and the optimal lag is selected as the first local minimum of AMI. Let us illustrate the meaning of equation (2) with a simple example. Suppose the series $\{\vec{x}_i\}_{i=1}^{N}$ represents the actual phase space vector series of a variable. Considering $\{u_i\}_{i=1}^{N}$ as observations from one dimension, unable to fully reflect the variable's true dynamical characteristics, we employ delaying and embedding of $\{u_i\}_{i=1}^{N}$ for phase space reconstruction. Assuming the optimal embedding dimension $m = 3$ and the optimal delay $\tau = 1$, we take the first-order lagged sequence of $\{u_i\}$, namely $\{u_{i+1}\}$, as observations for the second dimension, and subsequently the first-order lagged sequence of $\{u_{i+1}\}$, namely $\{u_{i+2}\}$, as observations for the third dimension. After this embedding process, $\{\vec{x}_i\}$ becomes a three-dimensional sequence, where each vector $\vec{x}_i$ is obtained from observations in three dimensions: $(u_i, u_{i+1}, u_{i+2})$ for $i = 1, \ldots, N-2$. Due to the lag operation, the number of points with observations in all three dimensions reduces from $N$ to $N - (m-1)\tau$. Thus, $d$ represents the distance between points $i$ and $j$ in this three-dimensional space. If the reconstruction of phase space is not performed, i.e. $m = 1$, $d$ signifies the absolute difference between $u_i$ and $u_j$ in the sequence. We use a simple schematic diagram in figure 1 to illustrate the above process. However, our study primarily focuses on identifying cross-recurrence patterns among pairwise time series variables in multivariate systems, necessitating equal lengths. Due to variations in the optimal embedding dimensions and delays for each time series, ensuring equal lengths for any pairwise sequences after phase space reconstruction is challenging. Therefore, for simplicity, we do not perform embedding of the original series $\{u_i\}_{i=1}^{N}$, defaulting to $m = 1$. Consequently, equation (2) reduces to $\vec{x}_i = u_i$ in our case, and the process shown in figure 1 is not involved. This practice aligns with common approaches in studying multivariate systems, where multivariate analysis itself offers a better description of a system's evolutionary features compared to univariate analysis [35].
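The time-delay embedding of equation (2) and the recurrence matrix of equation (1) can be sketched in a few lines of Python. This is an illustrative implementation (not the authors' code), using the paper's example parameters m = 3 and τ = 1 on a toy periodic signal.

```python
import numpy as np

def delay_embed(u, m, tau):
    """Time-delay embedding: x_i = (u_i, u_{i+tau}, ..., u_{i+(m-1)tau})."""
    n = len(u) - (m - 1) * tau                   # N - (m-1)*tau usable vectors
    return np.column_stack([u[k * tau : k * tau + n] for k in range(m)])

def recurrence_matrix(x, eps):
    """R_ij = Heaviside(eps - ||x_i - x_j||) with the Euclidean norm."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d <= eps).astype(int)

u = np.sin(np.linspace(0, 8 * np.pi, 200))       # toy periodic signal
x = delay_embed(u, m=3, tau=1)                   # the paper's example: m = 3, tau = 1
R = recurrence_matrix(x, eps=0.2)
print(x.shape, int(np.trace(R)))                 # (198, 3) 198 -- every state recurs with itself
```

As the text notes, the vector count drops from N = 200 to N - (m-1)·τ = 198, and with a spatial-distance criterion the resulting matrix is symmetric with a full main diagonal.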
Clearly, in an RP there is an evident main diagonal line representing the recurrence of each point with itself. If spatial distance is used as the criterion for recurrence, the RP is symmetric. Small-scale features in the RP can be observed through diagonal and vertical lines, and the morphology of these special lines reflects the dynamics of the system. Following a heuristic approach, Zbilut and Webber [15] introduced a quantitative description of the RP based on these line structures, known as RQA. RQA defines measures such as the diagonal line length, recurrence rate (RR), determinism (DET), average length of diagonal structures, and entropy to characterize the diagonal segments in the RP. Table 1 presents these measures and their definitions.
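The RR and DET measures can be computed directly from the binary recurrence matrix. A minimal sketch follows; note that the main diagonal (line of identity) is included here for simplicity, while RQA practice often excludes it.

```python
import numpy as np

def recurrence_rate(R):
    """RR: density of recurrence points in the N x N recurrence matrix."""
    return R.sum() / R.size

def determinism(R, lmin=2):
    """DET: fraction of recurrence points lying on diagonal lines of length >= lmin.

    The line of identity is included for simplicity; RQA practice often excludes it.
    """
    n = R.shape[0]
    on_lines = 0
    for k in range(-(n - 1), n):                 # scan every diagonal
        run = 0
        for v in list(np.diagonal(R, offset=k)) + [0]:   # sentinel ends the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    return on_lines / R.sum() if R.sum() else 0.0

R = np.eye(6, dtype=int)                          # toy RP: only the main diagonal recurs
print(round(recurrence_rate(R), 3), determinism(R))   # 0.167 1.0
```

For this toy matrix all six recurrence points lie on a single diagonal line of length 6, so DET = 1.0, while RR = 6/36 ≈ 0.167.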
The RP is extended to the CRP, which compares the dynamic behavior of two time series simultaneously embedded in phase space [13]. Specifically, for each point x_i (i = 1, . . ., N, x_i ∈ R^m) in the first trajectory and each point y_j (j = 1, . . ., N, y_j ∈ R^m) in the second trajectory, the distance d(x_i, y_j) is calculated, resulting in an N × N matrix indicating their closeness:

CR_{i,j} = Θ(ε − d(x_i, y_j)), i, j = 1, . . ., N,

where Θ(·) is the Heaviside step function and ε is the recurrence threshold. The CRP is the two-dimensional plot generated from the binary cross-recurrence matrix CR. In a CRP, long diagonal line structures reveal synchronization between the two time series in phase space. The corresponding RQA measures are redefined, where RR, DET, and mean diagonal line length (MEAN_DL) become functions of the distance from the main diagonal [13]. This indicates that the CRP can explore the dynamical similarity of the two sequences in the presence of time delays. For example, in the CRP the diagonal-wise RR focuses only on the recurrence along a given diagonal and is defined as:

RR_k = (1 / (N − |k|)) Σ_i CR_{i, i+k},

where k represents the time delay between the second trajectory y_j and the first trajectory x_i, and k can be positive or negative. In practical research, the optimal time delay k, which maximizes RR_k, indicates the delay at which the two sequences exhibit the highest similarity. The graphical properties of the RP and CRP reflect the dynamic patterns of the sequences. When transformed into complex networks, the adjacency matrices derived from the recurrence matrix R and the cross-recurrence matrix CR can further reveal the self-similarity and mutual-similarity patterns of the sequences. Various metrics commonly used to measure the topological structure of complex networks are listed in table 2.
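The cross-recurrence matrix and the diagonal-wise recurrence rate above can be sketched as follows for the m = 1 case (the threshold ε = 0.1, the series length, and the lag of 3 are our illustrative choices):

```python
import numpy as np

def cross_recurrence(x, y, eps):
    """Binary cross-recurrence matrix: CR_ij = 1 if |x_i - y_j| <= eps (m = 1 case)."""
    d = np.abs(x[:, None] - y[None, :])
    return (d <= eps).astype(int)

def diagonal_rr(cr, k):
    """Diagonal-wise recurrence rate RR_k: mean of CR along the diagonal at offset k."""
    return np.diagonal(cr, offset=k).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.roll(x, 3)                      # y reproduces x with a delay of 3 steps
cr = cross_recurrence(x, y, eps=0.1)
best = max(range(-10, 11), key=lambda k: diagonal_rr(cr, k))
```

Here the optimal lag recovered by maximizing RR_k matches the delay built into the second series, which is exactly how the optimal delay is identified in the text.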
These metrics provide insights into the properties of recurrence networks, capturing characteristics such as clustering, connectivity, average degree, average path length, and centrality measures. They help quantify the structure and importance of vertices within the recurrence network.

Table 1. Common RQA measures and their definitions.

Measurement | Definition | Explanation
The number of recurrence points (RP_N) | The number of recurrent points in the RP plot |
Recurrence rate (RR) | The proportion of recurrent points in the RP plot | It represents the ratio of recurrent points to all points, indicating the probability of specific recurrent patterns.
Determinism (DET) | The percentage of recurrent points forming diagonal lines in the RP plot, given a minimum diagonal line length threshold | It reflects the extent to which deterministic behavior dominates the system: random behavior leads to shorter diagonal lines, while deterministic behavior results in longer diagonal lines. The ratio of recurrent points forming diagonal structures to all recurrent points therefore serves as an indicator of the determinism of a system.
Mean diagonal line length (MEAN_DL) | The average length of the diagonal line structures in the RP plot | It reflects the average time over which two segments of the trajectory stay close to each other.

Donner et al [30] established a connection between the properties of recurrence networks and the phase space topology of dynamic systems represented by RPs. However, their work primarily focused on demonstrating the fundamental relationship between the statistical properties of the underlying dynamical system and the topological properties of the corresponding recurrence network. In contrast, this paper emphasizes the relationship between the statistical metrics of CRNs and the probability of pairwise time series experiencing synchronized 'events'. The objective is to utilize the network's topological structure to reveal the likelihood of synchronization risks or states occurring between two time series. This approach offers an alternative perspective for understanding and mitigating synchronization risks among multiple entities.
Theoretical derivation
In our study, we observe the common statistical metrics of the CRN, as listed in table 2, in relation to the proportion of synchronized states between pairwise time series. We find a deterministic linear relationship between the clustering coefficient and the occurrence of synchronized events (i.e. relatively extreme states). We will present further details and characteristics of the CRN in this section, and then derive the theoretical basis for the pattern we have discovered. To simplify the process of determining the threshold for recurrence when constructing the recurrence network, we transform the numerical time series into categorical time series based on the relative magnitudes of their observed values. Specifically, assuming equal probability of the occurrence of each state, by increasing percentile levels at fixed increments from 0% to 100%, we obtain n + 1 percentiles from the pseudo-sample distribution of the numerical time series of length N.
Consequently, this partition yields n ranges, where the sample values falling within these ranges are classified into n states. These n states can be ordered according to the relative sizes of their corresponding ranges. Hence, the final state space can be designated as (a_1, a_2, . . ., a_n) (1 ⩽ n ⩽ N), facilitating the identification of so-called extreme states thanks to its inherent orderability. The choice of n should be neither too small nor too large; if it is too small, the probability of occurrence of a_1 or a_n will be significantly higher than that of genuine extreme events. Conversely, if it is too large, each state will occur with lower frequency, resulting in very few edges in the CRN and making it challenging to extract useful information. Considering the application of recurrence-based methods to shorter time series, an empirical range for n is [10, 20]. Consider a two-dimensional time series {X_t, Y_t; t ⩾ 0}, where the components share the same state space (a_1, a_2, . . ., a_n) (1 ⩽ n ⩽ N). With a set of observation samples of length N, (x_1, y_1), . . ., (x_N, y_N), we can construct a CRN for these two time series variables. Specifically, we designate the N moments as network vertices, denoted as V = (1, 2, . . ., N). The cross-recurrence matrix R = {A_ij}_{N×N} is constructed based on whether the respective states at two moments, one from each time series, are the same or not, defined as follows:

A_ij = 1 if x_i and y_j are in the same state, and A_ij = 0 otherwise.

Clearly, this matrix is not symmetric; generally, A_ij ≠ A_ji. As defined above, the determination of states is done from the perspective of X_t, where the state of each moment in X_t is compared sequentially to that of every moment in Y_t. If the matrix is constructed from the perspective of Y_t, the comparison is done similarly, and the resulting cross-recurrence matrix is denoted as R* = {B_ji}_{N×N}, where:

B_ji = 1 if y_j and x_i are in the same state, and B_ji = 0 otherwise.

Hence, we have A_ij = B_ji, establishing the relationship between R and R* as transpose matrices, i.e., R^T = R*. This property is demonstrated graphically in figure 2.
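A sketch of this state-based construction (the discretization helper `to_states` is our own; it assigns equal-probability ordered states by rank, matching the percentile partition described above):

```python
import numpy as np

def to_states(u, n=10):
    """Discretize a series into n ordered states 1..n by equal-probability
    percentile bins (state n collects the largest values)."""
    ranks = np.argsort(np.argsort(u))      # rank of each observation, 0..N-1
    return ranks * n // len(u) + 1

def state_cross_recurrence(sx, sy):
    """A_ij = 1 iff state(x_i) == state(y_j) (radius 0 on categorical states)."""
    return (sx[:, None] == sy[None, :]).astype(int)

rng = np.random.default_rng(1)
x, y = rng.normal(size=200), rng.normal(size=200)
sx, sy = to_states(x), to_states(y)
A = state_cross_recurrence(sx, sy)         # R, from the perspective of X
B = state_cross_recurrence(sy, sx)         # R*, from the perspective of Y
```

By construction A and B are transposes of one another, which is the R^T = R* property stated above.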
When establishing the CRNs, since our primary interest lies in determining whether there is a connection between nodes rather than in the directionality or weight of these connections, we obtain symmetric adjacency matrices, R′ and R*′, based on R and R*. The CRNs are then constructed from these symmetric adjacency matrices. Consequently, the resulting network is undirected and unweighted. Moreover, this network includes self-loops, denoted as e_ii (i = 1, 2, . . ., N). Within the constructed CRN, we observed a particular property: the clustering coefficient of this network is positively correlated with the probability of synchronous states between the two time series, i.e. the synchronization probability. This relationship is expressed as:

C ∝ P(X = Y),

where C represents the global clustering coefficient of the CRN, and (X = Y) signifies the simultaneous occurrence of identical states in both time series. The proof of this relationship is as follows. Let v be any vertex in the CRN, and C_v be the local clustering coefficient of that vertex. Since the adjacency matrix corresponding to the CRN is symmetric, when analyzing the actual meaning of the network's edges e_ij (i, j = 1, 2, . . ., N), it is reasonable to consider either A_ij = 1 or A_ji = 1. Therefore, the definition formula for the local clustering coefficient can be written as:

C_v = (number of edges among the neighbors of v) / (k_v (k_v − 1) / 2),

where k_v is the degree of vertex v; given the recurrence meaning of the edges, this ratio equals the conditional probability that two moments share the same state given that both recur with v. This equation demonstrates that the local clustering coefficient of the CRN is equal to the conditional probability of synchronization between the two time series. In particular, when the states of the two-dimensional time series at different moments are mutually independent, we have:

C_v = P(X = Y).

Under this condition, the local clustering coefficient equals the unconditional synchronization probability of the two time series. However, assuming the independence of the states of the two-dimensional time series at different moments is a strong assumption, as it implies that this two-dimensional time series consists of independently and identically distributed (i.i.d.) sequences. In reality, when the auto-correlation and cross-correlation functions of the two time series decay exponentially, i.e. there is only short-term correlation within and between the time series, we can derive an approximate relationship: C_v ≈ P(X = Y).
It can be observed that the local clustering coefficient of any vertex in the network is determined by the same term. Consequently, the global clustering coefficient C can be obtained as the average of all local clustering coefficients: C = P(X = Y) holds if the strong independence assumption is met, and C ≈ P(X = Y) when the correlations are weak or decay rapidly over time. Under the strong independence assumption, if we are more concerned about the synchronization probability of extreme events, that is, the probability of the co-occurrence of the extreme states in the two series, the above equation can be further decomposed as:

C = P_0 + P_1,

where P_0 = Σ_{k=1}^{n−1} P(x_i = y_i = a_k) and P_1 = P(x_i = y_i = a_n). P_1 represents the probability of extreme-event synchronization, while P_0 denotes the probability of the synchronization of the other states. Clearly, while P_0 is held constant, the variation in P_1 is directly proportional to C. This derivation demonstrates two key points: first, the clustering coefficient of a CRN established between two time series equals the synchronization probability when the states of the two-dimensional time series at different moments are independent; second, the clustering coefficient is linearly correlated with the synchronization probability of extreme events, whether or not the strong independence assumption is satisfied. From a physical perspective, the synchronization of two time series is determined by the coupling strength between their respective attractors. In general, increasing the coupling strength enhances the synchronization between the time series, leading to a higher clustering coefficient in the CRN of the two time series. Although Chen et al [36] emphasized that the relationship between coupling strength and synchronization can be complex and nonlinear, and may depend on the specific characteristics of the coupled systems, the coupling strength remains a crucial variable in altering the synchronization between time series. In the next section, we will employ simulation experiments to further demonstrate these findings.
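The quantities in this derivation can be sketched numerically. The OR-symmetrization of R and the triangle-based clustering formula below reflect our reading of the construction above, and the coupled pair (x, y) is a synthetic example:

```python
import numpy as np

def to_states(u, n=10):
    """Equal-probability discretization of a series into ordered states 1..n."""
    ranks = np.argsort(np.argsort(u))
    return ranks * n // len(u) + 1

def crn_clustering(sx, sy):
    """Global clustering coefficient of the undirected CRN whose cross-recurrence
    matrix is A_ij = [state(x_i) == state(y_j)], symmetrized as (A or A^T)."""
    A = sx[:, None] == sy[None, :]
    adj = (A | A.T).astype(float)
    np.fill_diagonal(adj, 0.0)                 # drop self-loops for triangle counting
    deg = adj.sum(axis=1)
    tri = np.diagonal(adj @ adj @ adj) / 2.0   # triangles through each vertex
    pairs = deg * (deg - 1) / 2.0              # possible neighbor pairs
    cv = np.divide(tri, pairs, out=np.zeros_like(tri), where=pairs > 0)
    return cv.mean()                           # average of local coefficients

def sync_probability(sx, sy):
    """P(X = Y): fraction of moments at which both series share the same state."""
    return float(np.mean(sx == sy))

rng = np.random.default_rng(2)
x = rng.normal(size=300)
y = 0.7 * x + 0.3 * rng.normal(size=300)       # a coupled pair of series
C = crn_clustering(to_states(x), to_states(y))
P = sync_probability(to_states(x), to_states(y))
```

In the fully synchronized limit (identical state sequences) the CRN decomposes into disjoint cliques, so both C and P(X = Y) equal 1, consistent with the equivalence derived above.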
Simulation experiments
From a physical perspective, the synchronization of two time series can be understood as the effect of coupling between their respective attractors, manifested as dependence between the states of the time series. In this paper, we consider CMLs, which are simple spatiotemporal chaotic models widely used for modeling complex spatiotemporal dynamics. Generally, their form is given by:

x_{t+1}^[κ] = (1 − ζ) f(x_t^[κ]) + ζ h_t^[κ],

where x_t^[κ] can be understood as the value of the κth sequence at time t (κ = 1, 2, . . ., M, with M the dimension of the multivariate time series), and h_t^[κ] represents the interaction of the elements from the other time series at time t on the elements of the κth sequence. The first term on the right side represents the internal chaotic dynamics determined by the nonlinear mapping function f(x), while the second term represents the mutual coupling effect generated by the coupling parameter ζ (0 < ζ < 1). Generally, CMLs have three types of coupling: global (mean-field) coupling; local, direct-neighbor coupling, where h_t^[κ] depends only on the nearest neighbors; and intermediate-range coupling. In this experiment, following the approach of Lacasa et al [37] and Eroglu et al [35], we consider the M-dimensional time series as M points on a ring model. With this approach, the dynamic evolution of each point x^[κ] is determined by the internal chaotic evolution and the average coupling effect between neighboring points, a form of direct-neighbor coupling:

x_{t+1}^[κ] = (1 − ζ) f(x_t^[κ]) + (ζ/2) [f(x_t^[κ−1]) + f(x_t^[κ+1])].

This coupling mechanism ensures that the simulated sequences exhibit only short-range correlations, as each simulated value couples only with its neighbors. As ζ varies, the system is trapped in different attractors, leading to different degrees of synchronization and different dynamical phases. Although ζ does not directly measure the synchronization between time series, it acts as a coupling-strength parameter that alters it. Thus, we can observe the relationship between the synchronization probability of each pair of time series and the clustering coefficient of the corresponding CRN under different values of ζ. The classic logistic mapping function f(x) = 4x(1 − x) is adopted as the internal chaotic dynamics of the system, and the system is assumed to be in a five-dimensional phase space (M = 5). The coupling parameter ζ is set within the range [0, 0.4], with an increment of Δζ = 0.005, to investigate the effect of different coupling strengths. Due to the sensitivity of chaotic systems to initial conditions, small variations in the initial values can lead to significantly different system structures. To capture the prominent avalanche effect, we iterate the system 15 000 times and use the last 10 000 simulated values for analysis. Although ample previous research confirms the robustness of recurrence-based methods in the presence of noise [19, 31, 38], to demonstrate the stability of the identified homomorphic regularity across different signal-to-noise ratios (SNRs), we add varying levels of noise to the simulated series. During each experiment, we introduce white noise (Gaussian, with standard deviation 1 and mean 0) into the five simulated sequences of the CML system, setting SNRs of 2, 5, 10, and 20, representing 50%, 20%, 10%, and 5% noise addition, respectively. We then factorize each time series into ten states based on the relative magnitudes of its values. We establish CRNs for the updated series and compute the clustering coefficients and synchronization rates. With multiple sample results under the coupling parameter ζ, we perform a linear fit of the sample series of clustering coefficients against the synchronization probabilities, yielding parameter estimates (slope and intercept) for a first-order linear regression. Through 100 iterations of the aforementioned setup, we derive pseudo-sample distributions of slopes and intercepts for different random initial values. Based on these pseudo-distributions, we
establish 95% confidence intervals for the slope and intercept estimates. The criterion for assessing a strong linear correlation between CRN clustering coefficients and synchronization probabilities is whether a slope of 1 and an intercept of 0 fall within the respective confidence intervals. It is worth noting that under certain ζ values, the simulated sequences may collapse onto a few fixed values, leading to a reduced number of distinct states, possibly fewer than ten. In such cases, when computing the synchronization probability and constructing the CRN for any two time series, we adopt the smaller number of distinct states of the two time series as the reference. This ensures an equal number of distinct states in each series of a pair.
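A minimal sketch of the ring CML described above, assuming the standard diffusive nearest-neighbor form of the coupling term (the uniform initial conditions and the seed are our own choices):

```python
import numpy as np

def simulate_cml(M=5, zeta=0.2, steps=15_000, keep=10_000, seed=0):
    """Ring CML with logistic map f(x) = 4x(1-x) and direct-neighbor coupling:
    x_{t+1}[k] = (1 - zeta) f(x_t[k]) + (zeta/2) (f(x_t[k-1]) + f(x_t[k+1]))."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.1, 0.9, size=M)          # random initial condition
    f = lambda v: 4 * v * (1 - v)              # internal chaotic dynamics
    out = np.empty((steps, M))
    for t in range(steps):
        fx = f(x)
        # np.roll closes the ring: each point couples with its two neighbors
        x = (1 - zeta) * fx + 0.5 * zeta * (np.roll(fx, 1) + np.roll(fx, -1))
        out[t] = x
    return out[-keep:]                         # discard the transient

series = simulate_cml()                        # columns are the M coupled sequences
```

Because the update is a convex combination of logistic-map images, the trajectories remain in [0, 1]; sweeping `zeta` over [0, 0.4] reproduces the range of coupling strengths used in the experiment.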
Figure 3 illustrates the relationship between CRN clustering coefficients and synchronization probabilities at various SNRs, including a noise-free scenario for comparison. Remarkably, even at an SNR of 2, representing 50% noise addition, the robust linear relationship between CRN clustering coefficients and synchronization probabilities persists. The fitted slope tends closer to 1 when noise is minimal or absent. Meanwhile, figure 4 presents the pseudo-sample distributions of parameter estimates from the fitted linear models at different SNRs. As noise increases, the pseudo-sample distribution of slopes gradually skews right; however, a significant portion of slope estimates remains close to 1. Similarly, the distribution of the intercept does not exhibit systematic changes with the noise level. Table 3 presents the 95% confidence intervals for slope and intercept estimates across different scenarios, showing that a slope of 1 and an intercept of 0 fall within the corresponding intervals irrespective of the noise level. This observation suggests that the relationship between CRN clustering coefficients and synchronization probabilities is largely insensitive to the noise present. Therefore, we have valid reasons to believe that the observed regularity remains effective in empirical series with varying degrees of noise.
Empirical analysis
In the empirical study, we focus on verifying the homogeneity of the linear relationship revealed by equation (11) in different systems, as event synchronization is often a crucial factor in triggering systemic risks. We also examine the performance of the equivalence between the clustering coefficient of the CRN and the unconditional synchronization probability in actual data. We select price time series from the financial system, rainfall data from the Earth's ecosystem, and eye-tracking sequences from human interactive behavior as empirical objects. For each system, we establish CRNs for all pairwise time series and calculate their global clustering coefficients. Subsequently, we obtain the occurrence ratio of synchronous extreme events for each pair of sequences, aiming to verify whether there exists a significant positive linear correlation between the global clustering coefficient and the ratio of event synchronization.
Financial system

Data
The stock data from the two major trading markets in China, namely the Shanghai Stock Exchange and the Shenzhen Stock Exchange, are considered as empirical objects. We select constituent stocks from the Shanghai Composite Index (SH000001) and the Shenzhen Component Index (SZ399001) based on their market-capitalization weights, arranged in descending order. Our focus is on choosing stocks with complete trading data to ensure data integrity and reliability. Ultimately, we identify 56 stocks from the Shanghai Composite Index and 34 stocks from the Shenzhen Component Index that have complete trading data from 4 January 2015 to 30 October 2020. The data is divided into two parts: the training set (from 2015 to 2019) and the test set (2020). The data length of each time series is denoted as N, resulting in an N × N recurrence matrix when establishing the CRNs or CRPs. It is worth noting that a large N is not suitable for constructing CRNs or CRPs due to the computational complexity. Considering that financial returns often exhibit yearly cyclical changes, we use the daily return data from 2015 to 2019 to build CRNs for each year separately; the length of the sample data of each year falls within the range of 200-240. Subsequently, we calculate the ratio of daily coexceedances of paired stocks within a year to verify the robustness of the relationship between the topological structure of CRNs and the ratio of coexceedances of daily returns across different years.
Exceedances here refer to positive returns exceeding the 95th percentile and negative returns falling below the 5th percentile, and coexceedances specifically refer to exceedances in the paired stocks that occur at the same moment or at fixed lead-lag moments. Tables A1 and A2 present the specific stock codes for the selected stocks in the SH000001 and SZ399001, respectively.
Parameter setting
As described in section 3, establishing CRNs based on numerical data involves the selection of the radius parameter. However, determining a suitable fixed radius is challenging, and there is currently no reliable literature supporting a specific choice. In this context, our primary interest lies in the frequency of joint extreme returns between pairs of stocks, where extreme values are significantly positive and negative returns of large absolute value. We therefore discretize returns into ten equidistant states, ensuring a uniform distribution of returns across these states. By converting the numerical variable into a factor variable, recurrence is only considered to have occurred when the states are entirely consistent (radius = 0), thereby avoiding the need for optimal parameter selection. Consequently, CRNs are established based on factor variables. Since we consider the data of different constituent stocks as descriptions of different dimensions within the financial system, we do not perform the reconstruction of the phase space in the empirical analysis. Therefore, the embedding dimension is set to 1, and the time-lag value does not affect the results when the embedding dimension equals 1.
Optimal delay between time series
In cross-RQA (CRQA), we can construct CRPs for all possible lags between pairs of sequences and extract corresponding indicators, such as the diagonal-wise RR, to explore the maximum ratio of state recurrence between the two sequences at different time lags. This helps us investigate the delayed similarity patterns between pairs of stock return sequences within the same financial system. If there exists a time delay that maximizes the RR between two return sequences, we can reasonably assume that there are statistically significant differences in their reaction speeds to the same financial events in the market. This information can also guide us in establishing rules for identifying coexceedances, i.e. detecting exceedances in two sequences either with a time delay or simultaneously.
Based on the data from each year in the training set, with a maximum delay of 10 (k_max = 10), we construct all possible CRPs for any pair of factorized return series and observe the optimal delay that maximizes the diagonal-wise RR. The results show that the optimal delay for any pair of sequences is 0, indicating that the highest RR occurs when there is no lead-lag relationship in the pairwise daily return series. Taking the daily returns of sh601398 and sh601288 in the Shanghai market and sz000027 and sz000400 in the Shenzhen market in 2015 as examples, figure 5 shows the variation of the diagonal-wise recurrence rate RR_k over the range k = −10, . . ., 10. The RR_k curves for any other pair of sequences exhibit a pattern similar to that in figure 5, with RR_k reaching its highest relative value at k = 0. In other words, when there is no delay between paired stock return series, there are more occurrences of the same state on the same day. Based on this observation, when determining coexceedances, if exceedances occur in different assets on the same day, we consider it a case of coexceedance.
Relationship between CRP and CRN indicators and the ratio of coexceedances
In the absence of delay in the paired series, we construct the CRP and CRN separately and obtain the commonly analyzed indicators in tables 1 and 2. Taking the results of 2015 as an example, we show scatter plots for every pair of stocks in the Shanghai market of the CRP indicators versus the ratio of coexceedances in figure 6, and of the CRN indicators versus the ratio of coexceedances in figure 7. The corresponding plots for the Shenzhen market, figures A1 and A2, are shown in the appendix.
In figure 6, there is no clear linear or nonlinear relationship between the CRP indicators and the proportion of coexceedances. However, in figure 7, it can be observed that there is a positive linear relationship between the average shortest path in the complex network established from pairwise returns and the proportion of coexceedances between the returns of the two stocks. Additionally, there is also a positive linear relationship between the network's clustering coefficient and the ratio of coexceedances.
We validate these two relationships using each of the five years of training data separately. As shown in figure 8, the linear relationship between the average shortest path and the ratio of coexceedances in individual stocks' CRNs in the Shanghai Composite Index is not consistently significant, for example in 2016 and 2017. However, for each year's data, there is a clear positive linear relationship between the clustering coefficient of the stocks' CRNs and the ratio of coexceedances. Figure 9 illustrates the corresponding relationships in the Shenzhen Component Index. It can also be observed that the linear relationship between the average shortest path and the coexceedances is weaker than that between the clustering coefficient and the coexceedances. To further investigate these linear relationships, we conduct Pearson correlation tests and use the Wilcoxon rank-sum test to verify whether the clustering coefficient and the ratio of coexceedances follow the same distribution. The results are presented in table 4. As indicated in the table, for constituent stocks in the Shanghai market, the clustering coefficient of the CRN from 2015 to 2019 is significantly and positively correlated with the corresponding ratio of coexceedances; all the linear correlation coefficients for each year are greater than 0.7. For constituent stocks in the Shenzhen market, there is also a significant positive linear relationship between the CRN clustering coefficient and the ratio of coexceedances from 2015 to 2019, but the relationship is weaker than in the Shanghai market. The correlation coefficients between the average shortest path of the CRN and the ratio of coexceedances in 2017 and 2019 are lower than 0.5.
Based on these findings, we conclude that only the clustering coefficient of the CRN for pairwise daily returns shows a robust and strong positive linear relationship with the ratio of coexceedances. The Wilcoxon rank-sum test results suggest that, in both the Shanghai and Shenzhen markets and for all years, neither the clustering coefficient nor the average shortest path of individual stocks' CRNs follows the same distribution as the corresponding proportion of coexceedances.
As mentioned earlier, mapping time series onto complex networks allows us to discover not only correlations but also richer information and specific patterns. Besides extreme values in financial systems, the relationships between extreme values in time series of other complex systems have been of interest to researchers. To emphasize the universality of this pattern, in the following sections we conduct separate analyses on rainfall data from different regions in the climate system and on eye-tracking data from human interaction behavior.
Financial returns exhibit several stylized facts, one of which is long memory, characterized by long-range autocorrelation in the absolute values of returns. This characteristic may interfere with the equivalence of C and P(X = Y). Taking the data from the Shenzhen market as an example, we segment the return series into 10 states and calculate the synchronization probability of the pairwise state sequences. Then, using the clustering coefficient series obtained above, we plot the relationship between the clustering coefficient of the CRN and the synchronization probability of return states for each year from 2015 to 2019 in figure 10.
From figure 10, it is evident that the fitted curve does not align perfectly with the straight line of slope 1 and intercept 0. However, the deviation is small, and the plot still suggests a strong linear correlation between the CRN clustering coefficient and the synchronization probability. This indicates that the long memory of returns has minimal impact on the conclusion C ≈ P(X = Y). This could be attributed to the fact that each data segment spans just one year, comprising just over two hundred values, so the long memory of returns is not notably pronounced in such relatively short time series.
Earth's ecosystems
In this study, we obtain China's ground climate daily data set (V3.0) from the National Tibetan Plateau Data Center (https://data.tpdc.ac.cn/home). The original data provides daily observations from various weather stations. We first interpolate the original data onto a grid covering 500 × 500 cells over China, with each grid cell measuring 0.1231924 degrees of longitude × 0.0994549 degrees of latitude. By regional averaging, we calculate the daily average precipitation for 34 provinces in China for the period from 2015 to 2019. The length of the sample data of each year is about 365. The unit of precipitation is 0.1 mm, and the total length of each province's precipitation sequence is 1826 d. For the analysis, we consider extreme rainfall events as those exceeding the 95th percentile of daily precipitation in each year. The occurrence of extreme rainfall in two provinces on the same day is defined as synchronous rainfall. We calculate the probability of synchronous extreme rainfall events between pairs of provinces. Then, for each pair of provinces, we construct a CRN based on their respective rainfall sequences, following procedures and parameter choices similar to those described in section 5.1.2. The clustering coefficient of each recurrence network is calculated.
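A sketch of the event-synchronization computation (the normalization by the number of days on which either series is extreme is our assumption, as the paper's exact ratio is not spelled out here, and the gamma-distributed series are synthetic stand-ins for rainfall):

```python
import numpy as np

def extreme_mask(u, q=0.95):
    """Flag extreme events: values above the q-th percentile of the series."""
    return u > np.quantile(u, q)

def sync_event_rate(u, v, q=0.95):
    """Fraction of days on which both series exceed their own 95th percentile,
    relative to the days on which either does (a simple coexceedance ratio)."""
    a, b = extreme_mask(u, q), extreme_mask(v, q)
    either = (a | b).sum()
    return (a & b).sum() / either if either else 0.0

rng = np.random.default_rng(3)
rain_a = rng.gamma(2.0, 5.0, size=365)                      # one year of daily values
rain_b = 0.6 * rain_a + 0.4 * rng.gamma(2.0, 5.0, size=365) # a correlated neighbor
rate = sync_event_rate(rain_a, rain_b)
```

For identical series the rate is exactly 1, and for independent series it is close to 0, so the statistic spans the same [0, 1] range as the clustering coefficient it is compared against.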
Figure 11 presents the relationship between the clustering coefficient of the CRNs and the probability of synchronous extreme rainfall events between pairs of provinces from 2015 to 2019. It shows that this relationship is positively linear and exhibits a certain level of robustness. The results of the correlation tests in table 5 also confirm the statistical significance of this linear relationship. Similarly, even though these two variables show a linear correlation, the results of the rank-sum test indicate that they do not come from the same distribution.
We are also interested in investigating whether the equivalence between C and P(X = Y) approximately holds in rainfall data samples. Similarly, we partition the rainfall sequences into 10 states based on their respective percentiles and compute the synchronization probability between pairwise rainfall sequences. Figure 12 depicts the relationship between the clustering coefficient of the CRN and the synchronization rate for each year from 2015 to 2019. It can be seen that the fitted curve of C against P(X = Y) is closely aligned with the straight line of slope 1 and intercept 0, indicating a more obvious equivalence between C and P(X = Y) in this case. The differences between figures 10 and 12 also indicate that rainfall sequences exhibit stronger independence than return sequences.
Human interaction behavior system
Ho et al [39] conducted a study to investigate the intercorrelation between gaze and speech during social interaction games. Specifically, they organized 20 pairs of participants in small groups, where each pair played two social guessing games and their eye movements were tracked during the interaction. The length of each pair of sample sequences is around 100. Through cross-correlation analysis of the gaze and speech signals between the two participants, they found that speakers often end their turn by directly gazing at the listener, signaling the listener to respond, resulting in a lagged synchronization of the listener's speech state with the speaker's direct gaze behavior. In other words, during social interaction, the direct gaze state of participant A exhibits a certain level of synchrony with the speech state of participant B, even though the latter's speech state consistently lags behind the former's gaze state. We process the data from one of these social games and explore the relationship between the synchronization rate of one participant's gaze state with the other participant's speech state and the clustering coefficient of the corresponding CRN established from their respective behavioral sequences. In this experimental data, both the gaze and speech time series are binary sequences, where 1 represents the presence of gazing or speaking and 0 indicates looking away or the end of speech. Treating both gaze and speech as events, the event synchronization rate between pairs of participants during the interaction activities shows a positive linear relationship with the clustering coefficient of their respective CRNs, as depicted in figure 13. The p-value of the corresponding Pearson correlation test is 0.03 (<0.05), indicating that the linear relationship is significant at the 5% level.
Similarly, figure 14 illustrates the relationship between the CRN clustering coefficient and the state synchronization probability between pairs of participants in this scenario. The fitted curve of C and P(X = Y) has a slope approaching 1, and although the intercept is not exactly 0, the confidence interval of the fitted curve still contains the line with slope 1 and intercept 0. Therefore, this also indicates an approximate equivalence between C and P(X = Y).
In conclusion, there exists a significant positive linear relationship between the synchronization rate of events occurring in pairs of objects and the clustering coefficient of the CRNs established from their respective time series. This pattern is consistently observed across various complex systems, including but not limited to financial systems, Earth's ecological systems, and human interactive behaviors, demonstrating strong universality. Moreover, data from these three different systems all support the approximate equivalence between the clustering coefficient of the CRN and the unconditional synchronization probability, as long as the weak independence assumption is satisfied.
Discussion and conclusion
This paper focuses on the similarity patterns, or synchronicity, of extreme events occurring in pairs of time series. By leveraging cross-recurrence analysis to capture the complex dynamics of time series and the relative stability of network topology, we map two time series into a complex network based on cross-recurrence, aiming to relate the network's statistical metrics to the probability of synchronized events occurring in the two series. Our analysis reveals a positive linear correlation between the clustering coefficient of the CRN and the event synchronization rate of the two time series. We conduct simulation experiments with CMLs and observe that the synchronization probability and the clustering coefficient of the CRN tend to become approximately equivalent. We also conduct empirical analyses in financial systems, Earth's ecological systems, and human interactive systems, and find that this pattern is universally applicable across different complex systems. However, when dealing with limited data, there are inherent limitations in observing synchronized events between two time series. In contrast, the CRP is particularly suitable for non-stationary and short time series, with relatively stable statistical features of the CRN. This provides a practical scenario for the discovered regularity. For instance, in financial systems, systemic risk is often represented by index volatility. By identifying stocks that exhibit patterns similar to the index, i.e. with higher event synchronization rates, we may monitor or even predict systemic risk through such constituent stocks. When only short historical data are available and extremes are rarely observed, it becomes challenging to compute event synchronization rates to identify stocks highly synchronized with the index. However, due to the positive linear relationship between the clustering coefficient of the CRN and event synchronicity, we can achieve this goal by constructing CRNs for the short time series.
Similarly, this method can be applied to predict critical events such as extreme rainfall and earthquakes in specific regions. Therefore, the homomorphic regularity discovered in this study bears significant practical implications for revealing systemic risk. In the future, we plan to implement and expand the aforementioned applications based on this regularity.
Figure 1 .
Figure 1. Schematic diagram of delay and embedding.
Average diagonal line length: the average length of diagonal lines in the RP plot. Diagonal structures in the RP plot indicate segments of trajectories that are close to each other at different time points; the length of a diagonal line represents the duration of their closeness.
Entropy of diagonal line length distribution (ENT_DL): the Shannon entropy of the histogram of diagonal line lengths, reflecting the complexity of deterministic structures in the system.
Laminarity (LAM): the percentage of recurrent points forming vertical lines in the RP plot, given a minimum vertical line length threshold. It measures the extent of laminar behavior in the system by evaluating the ratio of recurrent points forming vertical structures to all recurrent points.
Trapping time (TT): the average length of vertical lines in the RP plot. Vertical line structures indicate segments of trajectories that stay close to specific points in another trajectory; the trapping time represents the duration of this closeness.
Figure 3 .
Figure 3. The relationship between synchronization probabilities and clustering coefficients of the cross-recurrence network in one of the simulation experiments (the initial values of the five simulated series in the CML system are 0.083, 0.4088, 0.5153, 0.3969, 0.2227 in this experiment) in the presence of noise.
Figure 4 .
Figure 4. Distribution of the results obtained from the univariate regression model examining the relationship between the synchronization probability of two sequences and the clustering coefficient of the cross-recurrence network in the presence of noise.The results are based on 100 simulation experiments.
Figure 5 .
Figure 5. Diagonal-recurrence plots of the pairwise daily return series.(a) shows the variation of the diagonal-recurrence values for the daily returns of sh601398 and sh601288 in 2015 with different time delays.(b) presents the variation of the diagonal-recurrence values for the daily return sequences of sz000027 and sz000400 in 2015 with different time delays.
Figure 6 .
Figure 6. Relationship between the CRP indicators and the ratio of coexceedances of the constituent stocks in the Shanghai stock market in 2015.
Figure 7 .
Figure 7. Relationship between the CRN indicators and the ratio of coexceedances of the constituent stocks in the Shanghai stock market in 2015.
Figure 8 .
Figure 8. Relationships between the CRN clustering coefficient, average shortest path, and the ratio of coexceedances in the Shanghai market for different years' data.
Figure 9 .
Figure 9. Relationships between the CRN clustering coefficient, average shortest path, and the ratio of coexceedances in the Shenzhen market for different years' data.
Figure 10 .
Figure 10. The relationship between synchronization probabilities and clustering coefficients of cross-recurrence networks in the Shenzhen market based on different years' data.
Figure 11 .
Figure 11.Relationship between the clustering coefficient of CRNs for paired provincial rainfall data and the probability of synchronous extreme rainfall events in different years.
Figure 12 .
Figure 12. The relationship between synchronization probabilities and clustering coefficients of cross-recurrence networks based on different years' rainfall data.
Figure 13 .
Figure 13. Relationship between the clustering coefficient of the CRNs based on gaze and speech sequences during dyadic interaction and the state synchronization rate.
Figure 14 .
Figure 14. The relationship between synchronization probabilities and clustering coefficients of cross-recurrence networks based on the data of the eye-movement experiment.
Figure A1 .
Figure A1. Relationship between the CRP indicators and the ratio of coexceedances of the constituent stocks in the Shenzhen stock market in 2015.
Figure A2 .
Figure A2. Relationship between the CRN indicators and the ratio of coexceedances of the constituent stocks in the Shenzhen stock market in 2015.
Table 1 .
Main qualification analysis metrics of recurrence plots.
Table 2 .
Main metrics of recurrence networks.
Table 3 .
The limits of 95% confidence intervals for slope and intercept estimates across different scenarios.
Table 4 .
Results of correlation tests between the CRN indicators and the proportion of coexceedances. Note: the values corresponding to the Pearson's correlation test are correlation coefficients, and the values corresponding to the Wilcoxon rank sum test are test statistics. Asterisks in parentheses indicate the significance level of the corresponding p-value: *, **, and *** represent statistical significance levels of 5%, 1%, and 0.1%, respectively.
Table 5 .
Results of correlation tests between the precipitation CRN clustering coefficients and the proportion of synchronized rainfall events. Note: the values corresponding to the Pearson's correlation test are correlation coefficients, and the values corresponding to the Wilcoxon rank sum test are test statistics. Asterisks in parentheses indicate the significance level of the corresponding p-value.
Table A1 .
List of selected stocks in the Shanghai stock market.
Table A2 .
List of selected stocks in the Shenzhen stock market.
"Mathematics",
"Environmental Science",
"Physics"
] |
Characterization of Electronic and Ionic Transport in Li1-xNi0.8Co0.15Al0.05O2 (NCA)
Despite the extensive commercial use of Li1-xNi0.8Co0.15Al0.05O2 (NCA) as the positive electrode in Li-ion batteries, and its long research history, its fundamental transport properties are poorly understood. These properties are crucial for designing high energy density and high power Li-ion batteries. Here, the transport properties of NCA are investigated using impedance spectroscopy and dc polarization and depolarization techniques. The electronic conductivity is found to increase with decreasing Li content from ∼10^-4 S cm^-1 to ∼10^-2 S cm^-1 over x = 0.0 to 0.6, while the lithium ion conductivity is at least five orders of magnitude lower for x = 0.0 to 0.75. A surprising result is that the lithium ionic diffusivity vs. x shows a v-shaped curve with a minimum at x = 0.5, while the unit cell parameters show the opposite trend. This suggests that cation ordering has greater influence on the composition dependence than the Li layer separation, unlike other layered oxides. From temperature-dependent measurements in electron-blocking cells, the activation energy for lithium ion conductivity (diffusivity) is found to be 1.25 eV (1.20 eV). Chemical diffusion during electrochemical use is limited by lithium transport, but is fast enough over the entire state-of-charge range to allow charge/discharge of micron-scale particles at practical C-rates.
Cathodes having high energy and power density, adequate safety, excellent cycle life, and low cost are crucial for Li-ion batteries that can enable the commercialization of electric transportation. 1 Towards this end, much research has previously focused on the development of the LiNi1-xCoxO2 (NC) 2-10 cathode due to its high capacity (∼275 mAh/g) and favorable operating cell voltage (4.3 V vs. Li/Li+), which is within the voltage stability window of current liquid electrolytes. This compound also has lower cost than LiCoO2; but despite extensive optimization, e.g., with respect to the Ni/Co ratio, 2-10 NC still suffers from poor structural stability during electrochemical cycling. 11 Significant efforts were subsequently focused on improving structural stability by doping with small amounts of electrochemically inactive elements such as Al and Mg. 12-17 One of the most promising compositions that emerged is LiNi0.8Co0.15Al0.05O2 (NCA), currently in widespread commercial use. This intercalation material exhibits solid solution behavior during the extraction of lithium 3,4,18 and is structurally stable upon cycling. 2 The majority of studies of NCA have focused on structural and electrochemical characterization. Surprisingly, there is limited, and conflicting, data on the basic transport properties of the NC/NCA family of compounds. 19-23 Cho et al. 19 and Montoro et al. 20 determined the chemical diffusivity of LiNi1-xCoxO2 using GITT measurements of composite cathodes (e.g., NC powder combined with polymer binder and carbon additive), and reported results varying over several orders of magnitude. They also reported that the chemical diffusivity of LixNi0.8Co0.2O2 is nearly independent of lithium content. In contrast, Montoro et al. 20 found a wider variation of lithium diffusivity with lithium content in the same LixNi0.8Co0.2O2 compound. Thus our objective in this work is to systematically characterize and interpret the transport properties of NCA. We use additive-free, single-phase sintered samples in which the extrinsic effects that may be present in composite electrodes are avoided. Using electron-blocking and ion-blocking cell configurations, respectively, and electrochemical impedance spectroscopy and dc polarization and depolarization techniques (see Table I), we deconvolute the electronic and ionic conductivities of NCA as a function of temperature and Li content.
Experimental
NCA powder of Li1-xNi0.8Co0.15Al0.05O2 composition was obtained from NEI Corporation Inc. (Somerset, NJ, USA). Compacted pellets were prepared from the powder by pressing at 340 MPa for 60 s, forming cylindrical samples 14 mm in diameter. The pellets were sintered at 850 °C for 12 h in ambient atmosphere, with heating and cooling rates of 5 °C/min. This procedure yielded samples of 96-98% relative density, sufficiently high that the measured conductivity represents the bulk value (being proportional to density at high sintered density).
Electrochemical delithiation.-The sintered pellets were polished to a thickness of 0.30 to 0.80 mm. One side of the polished pellets was coated with a thin layer of graphite to form good electrical contact with the metal current collectors in the cells. Delithiation was performed in a Swagelok-type electrochemical cell using lithium metal foil as the counter electrode, the NCA pellet as the working electrode, and a liquid electrolyte mixture containing 1 M LiPF6 in 1:1 by mole ethylene carbonate/diethyl carbonate (EC/DEC). A Celgard separator (Charlotte, NC, USA) was used to separate the electrodes. A charging current equivalent to a C/200 rate was applied using a Bio-Logic VMP3 (Claix, France). The current was applied continuously or intermittently (applied for one-hour intervals followed by a half-hour rest). After electrochemical delithiation to the desired compositions, the cells were disassembled and the pellets were washed with acetone and pure EC/DEC solvent, then heated at 120 °C in an inert atmosphere for at least 24 h in order to homogenize the lithium distribution. The pellets were again polished lightly on both sides to remove any surface lithium salt.
Electronic conductivity measurement.-The as-sintered lithiated and partially delithiated pellets were painted with silver paste on both surfaces, forming the cell configuration Ag|NCA|Ag. The pellets were subsequently heated at 120 °C overnight in order to remove the organic solvent. The Ag|NCA|Ag cells were placed in battery coin cell holders with stainless steel disks supporting both sides of the pellet. The direct current polarization technique (DC) as well as electrochemical impedance spectroscopy (EIS) were employed to measure the electrical conductivity of the samples using a Bio-Logic VMP3 (Claix, France) in the frequency range 200 kHz-0.5 Hz. The measurements were performed at temperatures from 25-100 °C using a VWR temperature controller. The sample temperature was measured by a thermocouple placed near the sample.

Table I. Summary of techniques and cell configurations used to elucidate electronic and ionic conductivity, and the ion diffusivity, as a function of temperature and/or Li-content. (Columns: Technique; Cell configuration; Probed transport properties as a function of temperature or Li-content.)
Ionic conductivity and diffusivity measurements.-The ionic conductivity and diffusivity of the fully lithiated, starting NCA was measured by direct current polarization (DC) as well as electrochemical impedance spectroscopy (EIS), over the frequency range 200 kHz-10 μHz at an AC amplitude of 10 mV, as a function of temperature. Doped polyethylene oxide (PEO) was used as an electron-blocking, lithium-conducting membrane. The measurements were performed in the symmetric cell configuration Li|PEO|NCA|PEO|Li in a Swagelok-type cell. The PEO membrane was fabricated by mixing PEO powder (from Scientific Polymer Products, Inc., Mw 4,000,000) and LiI (from Aldrich, 99.99%) in a 6:1 molar ratio in dry acetonitrile. Details of the preparation can be found elsewhere. 24 In order to measure ion diffusivity as a function of lithium content, dc polarization/depolarization measurements were made on thin sintered NCA pellets (0.26-0.30 mm thickness and 0.158-0.219 cm2 surface area) of known weight, using Swagelok-type cells. The same cell preparation procedure and components as described above for electrochemical delithiation were used. A charging current equivalent to a C/400 rate was applied for 25 h, after which the cell was relaxed at open circuit voltage (OCV) conditions for at least 75 h to reach the steady-state OCV. Lithium ionic diffusivity was derived from the voltage relaxation vs. time (depolarization process). Here we considered the diffusion length to be one half the sample thickness.
X-ray diffraction and Rietveld refinements.-Powder X-ray diffraction (PXD) measurements were performed in Bragg-Brentano reflection geometry using a Rigaku D/Max-B 185 mm radius goniometer X-ray powder diffractometer equipped with a 13 kW RU300 Cr-source rotating anode X-ray generator (Cr Kα radiation), a diffracted beam monochromator, and a scintillation point detector. All PXD data were collected from 26 to 120° 2θ with a step size of 0.02° and a step speed of 0.5°/min.
To investigate sample stability during exposure, a sample exposed to ambient air and one stored under inert (glove box) conditions were each analyzed by PXD of the exposed surface. No impurity phases were observed for the latter sample, while for the former sample a relatively large amount of Li 2 CO 3 was observed ( Figure 1). Therefore, all sample handling and measurements were performed under inert conditions.
To elucidate the change in unit cell parameters in Li1-xNi0.8Co0.15Al0.05O2 as a function of x, a series of samples was chemically delithiated by reaction with nitronium tetrafluoroborate (Sigma Aldrich) in dry acetonitrile (Alfa Aesar). The reaction mixture was kept in an argon-filled glove box for three days in order to complete the reaction. The partially delithiated NCA powder was separated from the solution by centrifugation (Eppendorf Centrifuge, Model 5804R (Hauppauge, NY, USA)). The powder was washed with additional acetonitrile to remove salt and other impurity phases. The final powder was dried in the glove box at 80 °C before PXD measurements.
Rietveld refinements were performed using the program FullProf. 25 The backgrounds were described by linear interpolation between selected points, while pseudo-Voigt profile functions were used to fit the diffraction peaks. In general, the unit cell parameters, sample displacement, profile parameters, and the overall temperature factors, Bov, were refined. The structural model for Li0.99Ni0.71Co0.15Al0.15O2 (space group R-3m, a = 2.86 Å and c = 14.199 Å) published by Guilmard et al. 26 was used as the starting model for the refinements of Li1-xNi0.8Co0.15Al0.05O2. The occupancies of Ni and Al were changed to match the material composition, while that of Li was changed to accommodate the relevant degree of delithiation. At high degrees of delithiation a second NCA phase was observed, which was satisfactorily modeled using a second R-3m Li1-xNi0.8Co0.15Al0.05O2 phase with slightly different cell parameters.
Results and Discussion
Electronic conductivity.-The impedance spectra of as-sintered lithiated NCA measured at selected temperatures on the symmetric cell configuration Ag|NCA|Ag are shown in Figure 2a. Nearly perfect semicircles are obtained in the temperature range 25-100 °C and in the frequency range 2 × 10^6 to 5 × 10^-2 Hz. The absence of a second semicircle indicates the absence of other resistive processes and also suggests minor or negligible ionic conductivity. Similar impedance spectra were observed for the partially delithiated samples. The impedance spectra were evaluated with the ideal equivalent circuit shown in Figure 2b. For temperature-dependent measurements, impedances were measured during both heating and cooling. The capacitance (C) values can be calculated from the fitting parameters Q and n according to C = (R^(1-n) Q)^(1/n), where Q is a constant phase element and n is essentially a measure of the degree of depression of an arc (here n is usually in the range 0.92-0.98, depending on the temperature and degree of delithiation).

Figure 1. PXD data of NCA samples handled under inert conditions (black curve) and exposed to ambient conditions for 48 h (red curve), respectively. Note that Li2CO3 is observed for the sample exposed to ambient conditions, while only peaks from NCA are observed for the sample handled under inert conditions.
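The capacitance formula quoted above, C = (R^(1-n) Q)^(1/n), can be evaluated directly from the fit parameters. The numerical values below are illustrative assumptions, not fit results from the paper; they merely show that an R, Q, n combination of this kind yields a bulk-like capacitance near 10^-10 F, consistent with the assignment in the text.

```python
def cpe_capacitance(R, Q, n):
    """Effective capacitance from constant-phase-element fit parameters,
    C = (R**(1-n) * Q)**(1/n), as quoted in the text (a Brug-type formula).
    R: resistance (ohm), Q: CPE magnitude, n: depression exponent (0 < n <= 1).
    """
    return (R ** (1.0 - n) * Q) ** (1.0 / n)

# Illustrative numbers only (not from the paper): a slightly depressed arc.
C = cpe_capacitance(R=1e4, Q=1.2e-10, n=0.96)
print(f"{C:.2e} F")  # a bulk-scale capacitance, as discussed in the text
```

Note that for n = 1 the formula reduces to C = Q, i.e. an ideal capacitor, which is a quick sanity check on any implementation.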
Derived capacitance values are ∼10^-10 F and thus confirm that the observed impedance responses originate from the bulk (grains) of the samples, i.e. not the grain boundaries. 27 The absence of any additional polarization process (i.e. a second semicircle) at low frequencies for all samples indicates that the conduction is predominantly due to electronic carriers. In order to substantiate this observation, DC polarization and depolarization measurements were performed for lithiated and partially delithiated NCA samples using the same cell configuration as for the impedance spectroscopy. Figure 2c is representative of a typical DC measurement on an NCA sample. During application of a constant voltage the current increases in a step-function manner to a stationary value under the applied load and, analogously, decays in a step-function manner on switching off the applied voltage. Such behavior is indicative of an electronically dominated conduction process, since in the case of significant contributions from ionic motion one expects a continuous increase of the current during polarization, limited by a steady-state situation, and a symmetrical decay of the current during depolarization with rate constants determined by the lithium diffusivity D^δ_Li. 28-30 The electronic conductivities of the lithiated and partially delithiated NCA are plotted in Figure 3a as a function of inverse temperature. The conductivities of the partially delithiated samples measured at a given temperature increase monotonically with increasing delithiation (Figure 3b, for measurements at 30 °C). Over the measured compositional range, the electrical conductivity shows thermally activated behavior. The activation energy, calculated using an Arrhenius law, varies from 0.22 to 0.14 eV (±0.04 eV), as shown in Figure 3a, and is consistent with the value reported by Saadoune and Delmas 22 for LixNi0.8Co0.2O2.
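Extracting an activation energy via an Arrhenius law, as described above, amounts to a linear fit of ln σ vs. 1/T. The sketch below is generic and uses synthetic data with an assumed Ea of 0.20 eV (chosen to lie in the reported 0.14-0.22 eV range); it is not the authors' analysis code.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(T_kelvin, sigma):
    """Activation energy (eV) from an Arrhenius law sigma = A * exp(-Ea / kT):
    Ea is -k_B times the slope of ln(sigma) vs 1/T."""
    slope, _ = np.polyfit(1.0 / np.asarray(T_kelvin), np.log(sigma), 1)
    return -slope * K_B

# Synthetic conductivity data with Ea = 0.20 eV (illustrative values only)
T = np.array([298.0, 318.0, 338.0, 358.0, 373.0])       # K
sigma = 1e2 * np.exp(-0.20 / (K_B * T))                  # S/cm
print(round(activation_energy(T, sigma), 3))             # → 0.2
```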
These are typical values for the migration process of a small polaron, generally observed in mixed-valence systems. 31 It is reported that Co is less prone to oxidize from the trivalent to the tetravalent state in the presence of Ni. 32 The electronic configurations of Co3+, Ni3+ and Ni4+ have the t2g orbitals filled in each case; as a result, electron delocalization is unlikely. The increase in electronic conductivity is associated with the presence of mixed Ni3+/Ni4+ valence states resulting from delithiation, which leads to hole formation in the narrow Ni4+/Ni3+ band. It is seen from Figures 3a and 3b that beyond 60% delithiation (i.e. x = 0.60) the sample exhibits a sharp rise in electronic conductivity. This may be due to oxidation of cobalt from the trivalent to the tetravalent state at this delithiation level. The degree of cobalt oxidation is evidently small, since a high degree of oxidation would lead to metallic behavior due to the presence of holes in the broad t2g band. We were not able to measure the electronic conductivity beyond this composition.

Ionic conductivity and diffusivity by AC impedance.-Results with electron-blocking cells must take into account the temperature-dependent ionic conductivity of the PEO blocking layer. Figure 4a shows the impedance spectra of lithiated NCA measured at 58 °C in the cell configuration Li|PEO|NCA|PEO|Li. In contrast to the ion-blocking cells above, these impedance spectra consist of two semicircles at high frequencies (inset of Figure 4a) followed by a Warburg response at low frequencies. The high-frequency semicircle represents the total resistance to electronic and ionic motion, including contributions from the bulk conductivity of PEO. The Warburg response is indicative of stoichiometric polarization owing to the blocking of electrons. In order to obtain the ionic conductivity and diffusivity, the impedance spectra were fitted with the equivalent circuit shown in Figure 4b.
Good agreement is obtained between the simulated and experimental data. Qualitatively similar impedance spectra were observed at other temperatures; however, at temperatures below 35 °C the frequency range is not sufficiently wide to reach the relaxation frequency. Hence, for the lower temperatures, the model fit was extrapolated to lower frequencies to reach the relaxation frequency. The ionic conductivity and diffusivity, respectively, are plotted against inverse temperature in Figures 4c and 4d, the diffusivity being obtained via the Nernst-Einstein relation. An Arrhenius relationship is obtained, with activation energies for ionic conductivity and diffusivity of 1.25 ± 0.2 and 1.20 ± 0.2 eV, respectively. This is to our knowledge the first measurement of these ion transport parameters for any NCA composition. The obtained activation energies are comparable to those for other lithium ion cathode compounds. 33,34 Given predominant electronic conductivity, the Li+ diffusivity D_Li should have the form D_Li ∝ σ_ion (x_ion/c_ion + x_eon/c_eon), where x_ion and x_eon refer to the contributions due to trapping of ionic and electronic carriers and c_ion, c_eon denote the ionic and electronic carrier concentrations. 35 It is discernible from Figures 4c and 4d that both the ionic conductivity and diffusivity display single-slope behavior, which indicates the absence of charge carrier trapping effects (x_ion and x_eon). That is, charge carrier association-dissociation is not prominent in the measured temperature range. In the intrinsic dilute-defect regime, c_ion ≪ 1, the ionic diffusivity should be higher than the ionic conductivity according to the above equation, as is observed in Figures 4c and 4d.
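The Nernst-Einstein conversion from ionic conductivity to diffusivity used for Figure 4d can be sketched as below. The carrier density and conductivity values are assumed purely for illustration (they are not measurements from the paper); the formula itself, D = σ k_B T / (c q²), is the standard dilute-limit relation.

```python
def nernst_einstein_diffusivity(sigma_ion, carrier_density, T):
    """Li+ diffusivity (cm^2/s) from ionic conductivity via Nernst-Einstein,
    D = sigma * k_B * T / (c * q^2).
    sigma_ion in S/cm, carrier_density in carriers/cm^3, T in kelvin."""
    K_B = 1.380649e-23   # J/K
    Q = 1.602176634e-19  # C (charge of Li+)
    return sigma_ion * K_B * T / (carrier_density * Q ** 2)

# Assumed illustrative numbers: a very low ionic conductivity (~1e-9 S/cm),
# a Li site density of ~2e22 cm^-3, evaluated at room temperature.
D = nernst_einstein_diffusivity(1e-9, 2e22, 300.0)
print(f"{D:.1e} cm^2/s")
```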
Ionic conductivity and diffusivity by steady-state polarization/depolarization.-DC polarization/depolarization measurements were also performed using electron-blocking cell arrangements over a range of temperatures in order to substantiate the results obtained from the impedance measurements. These measurements were not performed at low temperature due to the excessively long times required to reach steady state (also observed in AC impedance). Figure 5 shows the time dependence of the polarization voltage (galvanostatic mode). The voltage immediately jumps from zero to I·R_el·R_ion/(R_el + R_ion), where R_ion and R_el are the resistances due to Li-ion and electronic carriers. With increasing time, the partial current of the blocked electrons decreases and eventually vanishes. A steady state is then observed (the voltage being I·R_ion), during which the total current is carried only by the non-blocked ions. The relaxation time of the polarization process is τ^δ, which provides the chemical diffusion coefficient D^δ_Li via τ^δ = L²/(π² D^δ_Li). It should be noted that, owing to the internal concentration profiles, σ_ion and D^δ_Li are averaged over a Li composition range corresponding to a polarization voltage of 50 mV. The behavior during depolarization was analogous to that during polarization. Owing to the relatively high diffusion coefficients, a steady-state cell voltage was reached during polarization at ∼60 °C.
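The relation τ^δ = L²/(π² D^δ_Li) quoted above can be inverted to estimate the chemical diffusion coefficient from a measured relaxation time. The numbers below are illustrative assumptions, chosen only to match the pellet dimensions described in the Experimental section (L equal to one half of a ∼0.30 mm pellet), not the paper's actual measurements.

```python
import math

def chemical_diffusivity(L_cm, tau_s):
    """Chemical diffusion coefficient (cm^2/s) from the polarization
    relaxation time, D = L^2 / (pi^2 * tau), with L the diffusion length
    (one half the pellet thickness, per the text)."""
    return L_cm ** 2 / (math.pi ** 2 * tau_s)

L = 0.015    # cm: half of a 0.30 mm pellet (illustrative)
tau = 1.5e5  # s: an assumed relaxation time on the experiment's timescale
print(f"{chemical_diffusivity(L, tau):.1e} cm^2/s")
```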
At long times (t > τ^δ) the polarization voltage approaches its steady-state value exponentially, U(t) - U(∞) ∝ exp(-t/τ^δ) (Eq. 1). 28-30 Diffusivity and ionic conductivity data derived from the DC measurements are compared in Figures 4c and 4d with the AC data, and the two are in good agreement. Figure 6a shows the cell voltage vs. time during the stepwise galvanostatic titration of fixed amounts of lithium from the NCA sample. A lithium concentration gradient develops across the sample during titration. After each delithiation step, the cell is allowed to relax under OCV conditions, and the cell voltage slowly reaches steady state, corresponding to removal of the lithium concentration gradient. Lithium ion diffusivity data derived from the relaxation of the cell voltage, i.e. the depolarization process, using Eq. 1 are shown in Figure 6b. The depolarization cell voltage can be fitted very well with Eq. 1. However, at 50% delithiation (x = 0.50), a change in the slope of the depolarization curve was observed after ∼1.5 × 10^5 s (Figure 6a). The two slopes can be fitted separately by Eq. 1 and clearly show two regimes, as shown in Figure 7. The change in the depolarization rate may originate either from the formation of a second phase during delithiation, or from a change in the diffusion mechanism.
Lithium ion diffusivity from depolarization.-
To investigate this observation further, we prepared a series of chemically delithiated NCA powders and performed X-ray diffraction measurements, the results of which appear in Figure 8. For x = 0.0 and 0.1, only one NCA phase was observed (denoted NCA1), while for samples of x ≥ 0.25 a second NCA phase (denoted NCA2) was also observed. As the lithium content is decreased (x = 0.25 to 0.5), the fraction of NCA2 increases from 5 to 14 mol%, while a further decrease (x = 0.75) results in only a slight increase in the amount of NCA2, to 15 mol%. Thus the amount of NCA2 reaches a plateau value. The R-3m symmetry of NCA1 is preserved in NCA2, but the cell parameters of NCA2 are different and suggest a significantly different Li content in this phase. A notable feature for both phases is the appearance of a maximum in unit cell dimensions at x = 0.5, which suggests a change in cation ordering at that composition. Yoon et al. 38 have previously reported similar structural observations for NCA using in situ electrochemical cells, wherein after initial delithiation a second NCA phase appears that exhibits a maximum in the c-axis and a minimum in the a-axis with increasing x. However, there are significant differences between the present results and theirs: in the relative amounts of the two phases as a function of x, in the concentrations at which the maximum in c and minimum in a occur, and in the fact that in our case both phases exhibit the same variation in the c and a dimensions with x, whereas in their work one of the phases exhibits nearly invariant c and a. The room-temperature lithium ion diffusivities obtained from depolarization measurements are plotted in Figure 9 as a function of lithium content. Firstly, note that the ionic conductivity varies by about an order of magnitude with x, and is everywhere at least 10^5 times lower than the electronic conductivity (Figure 3b).
Secondly, the ionic diffusivity shows an inverse relationship to the unit cell parameters in Figure 8b, whereby the diffusivity decreases as the unit cell dimensions increase. The minimum in diffusivity at x = 0.5 corresponds to a maximum in unit cell dimensions at the same composition. This is surprising, and counter to the expectation that a c-axis expansion corresponding to an increase in the Li slab distance lowers the activation energy for migration and should therefore increase the Li diffusion coefficient. 36,37 Recalling that the depolarization data (Figure 6b) indicate a change in mechanism and/or rate-controlling phase around x = 0.5, we considered alternative explanations. We examined the diffraction data for evidence of changes in cation ordering. Rietveld refinement did not indicate significant mixing between Li and the transition metals, and no superstructure peaks indicative of ordering were detected; possible changes in cation ordering within the transition metal layers need to be investigated further. The dependence of ion diffusivity on Li vacancy concentration x is consistent with a defect chemical model in which, between x = 0 and x = 0.5, the diffusivity depends on the interstitial concentration and therefore decreases with increasing vacancy concentration through a Frenkel equilibrium. The increase in diffusivity for x > 0.5 is consistent with a dependence on vacancy concentration. However, changes in cation ordering that are more subtle than can be detected in the present diffraction data, and related changes in migration energy, may be the dominant influence.
We know of no other published diffusion data for NCA to compare with the present results. For NC, however, Montoro et al.21 reported diffusivity measurements on composite electrodes. The diffusivity data reported by Montoro ranged from ∼10^−10 to ∼5 × 10^−9 cm^2 s^−1 over the entire Li-compositional window. In contrast, Cho et al.20 reported an almost constant lithium diffusion coefficient of LixNi0.8Co0.2O2 as a function of lithium content. However, their data showed a much higher diffusivity (∼10^−8 cm^2 s^−1). We believe that extrinsic factors may be present in these previous results due to the nature of the samples, and that neither may represent the pure single-phase transport behavior. Table II compares the diffusivity data obtained in the present work with literature results.
Conclusions
Electronic and lithium ionic transport in NCA have been measured using ion- and electron-blocking cell configurations and both ac and dc techniques. NCA exhibits semiconducting behavior over the entire range of lithium concentrations measured (x = 0.0 to 0.7). Starting from the fully lithiated state, the electronic conductivity gradually increases with increasing delithiation, and rises more sharply beyond about 60% delithiation. We suggest that the latter increase is due to the onset of Co^3+/Co^4+ multivalency. Regarding lithium ionic conductivity (and diffusivity), in fully lithiated NCA the activation energy is 1.25 eV. No evidence for charge carrier association or dissociation was seen in the measured temperature range. However, the ion diffusivity as a function of lithium concentration shows V-shaped behavior with a minimum at about x = 0.5. The unit cell parameters show the opposite trend, with a maximum at x = 0.5. This behavior is counter to expectations for ion migration energy as a function of c-axis dimensions, and suggests more subtle cation ordering effects that remain to be resolved. From the magnitude of the electronic and ionic transport parameters across the measured range of Li concentration, it can also be concluded that chemical diffusion is always limited by lithium ion transport rather than electronic conductivity, and that bulk transport is rapid enough to allow charging and discharging of micron-size particles at practical C-rates regardless of state-of-charge.
"Materials Science"
] |
Moderating Effect of Institutions in the FDI-Growth Relationship in Developing Countries: A Case of Nigeria
This paper employed a good governance index as a proxy for institutional quality to examine its moderating effect on the FDI-growth relationship in Nigeria from 2006 to 2020. The ARDL bounds testing approach was employed as the technique of analysis to ascertain the direct impact of FDI on economic growth and the indirect impact through the moderating effect of institutional quality (good governance). The paper provides evidence of a long-term relationship between FDI and economic growth as well as a significant unconditional positive impact of FDI on economic growth. Regarding the interactive effect of institutional quality (good governance) on the FDI-growth effect, we find convincing evidence that institutional quality (good governance) alters the effect of FDI on economic growth favourably. Therefore, it is recommended that Nigeria strengthen its governance quality to benefit more from FDI and achieve better economic growth results.
1. Introduction
Economic growth is one of the essential benchmarks for every well-managed economy. Increased economic growth indicates increased economic development and welfare; as a result, governments are interested in finding strategies to improve the growth of the economy. Etale et al. (2016) defined economic growth as an "increase in an economy's productive capacity as a function of Gross Domestic Product (GDP) growth". Thus, economic growth is the expansion of commodities and services in a country that increases consumption. This condition could lead to a rise in labour demand and a rise in labour income.
The relationship between foreign direct investment (FDI) and economic growth is well-studied in the theoretical and empirical development economics literature. With the introduction of endogenous growth theories (Barro, 1991; Barro & Sala-i-Martin, 1995), a renewed interest in growth determinants and extensive research on externality-led growth made it more realistic to include FDI as one of the causes of long-run economic growth. The concept that an economy's openness enhances economic growth is universally acknowledged, regardless of whether the economy is developed or developing (Etale et al., 2016). Foreign direct investment (FDI) is critical for boosting international capital flows, and it has piqued the interest of many experts. FDI can boost the host country's export capacity, resulting in higher foreign exchange revenues for the developing country.
Although the structure of FDI inflows has evolved significantly over time, FDI remains a vital instrument for growth enhancement in the vast majority of countries. According to the Organization for Economic Cooperation and Development (OECD), countries with weaker economies view FDI as the sole means of growth and economic transformation. As a result, governments, especially in developing nations, are putting more emphasis on foreign capital. However, the FDI spillover effect on host countries does not occur instantaneously but rather depends on the host countries' absorptive capacity, which is determined by various factors such as the quality of the host country's institutions.
A country's institutional quality is mainly defined by the extent to which property rights are protected, the level to which rules and regulations are enforced fairly, and the level to which corruption exists (IMF, 2003). International development agencies and non-governmental organisations (NGOs) have been campaigning for "good governance" and institutional changes to boost institutional quality, improve the investment climate, and stimulate growth in developing nations. These measures are considered critical for economic success and attracting FDI. Despite the paucity of empirical data on the nature of the connection between institutional quality and FDI, implementation of these institutional changes appears to have improved where resources and other conditions permit. As a result, policy debates regarding the importance of institutional quality in assessing a country's competitiveness to attract FDI have intensified.
For decades, African countries have been mired in economic stagnation. Economic growth on the continent has been erratic and unsustainable, and even where it has occurred, it has been marred by constant macroeconomic uncertainty and financial crises. The economic performance of African countries has drawn significant attention in recent years, with superlative words such as 'tragedy', 'mediocre', and 'dismal' used to characterise the low rates of economic growth witnessed in these countries from the 1980s to date. Africa has been the only developing-world region to stagnate, and growth rates have been generally low: from 1961 to 2000, the average GDP per capita growth rate in SSA was 0.45 per cent, compared to 1.6 per cent in Latin America and the Caribbean (LAC), 2.3 per cent in South Asia (SA), and 4.9 per cent in East Asia and the Pacific (EAP). According to a United Nations report, African countries continue to have the world's highest poverty rates. In 2021, 490 million people in Africa, approximately 36% of the total population, were living in extreme poverty, an increase over the previous year's figure of 481 million. Although Sub-Saharan Africa hosts a higher share of FDI recipient countries than any other region in Africa, poverty is substantially worse there, and the region contains some of the world's poorest countries. Nigeria, Africa's largest economy and third-largest FDI recipient, has a 46 per cent poverty rate, with 90 million people living in extreme poverty out of a population of approximately 210 million. Between 2011 and 2020, Nigeria was one of the leading African destinations for FDI: it ranked second in terms of net inflow with $45.1 billion, behind only Egypt, which attracted $56.2 billion, and one spot ahead of South Africa, which received $41.3 billion.
Despite the inflow of foreign direct investment into Nigeria, the country's inability to attract the required level of investment to boost its economy has been a major challenge. Nigeria still suffers from a large prevalence of resource gaps due to the domestic financial system's inability to mobilise adequate resources (UNCTAD, 2020). Thus, the quest for economic growth through favourable foreign investment policies has not been readily actualised. Nigeria is still characterised by unimpressive macroeconomic performance: low per capita income, high unemployment rates, and a low level of economic transformation (Adeyeye et al., 2017). Over the years, Nigeria's annual GDP growth rate has maintained a downward trend. According to the World Bank (2020), between 2002 and 2020 Nigeria's annual GDP growth rate dropped from 15 per cent to approximately -1.7 per cent. This is in contrast to current research reporting that foreign direct investment contributes to economic growth and development. It also contradicts recent empirical evidence from China and other rapidly developing Asian nations, in which a reasonable level of growth has accompanied the inflow of FDI (Modou & Liu, 2017).
Irrespective of the laudable volume of research on FDI and economic growth, empirical evidence still shows that the relationship between FDI and economic growth remains conflicting and debatable. On the one hand, one stream of researchers asserts that foreign direct investment has a negative impact on economic growth (Carkovic & Levine, 2017; Bermejo & Werner, 2018; Cruz et al., 2019). They further argued that foreign direct investment creates income inequality, undermines national sovereignty and self-dependency, and repatriates capital from the economy to the home country; as a result, developing economies are denied the opportunity to grow. Thus, in agreement with dependency theory, the resultant effect on an FDI-dependent nation tends to be detrimental in the long run. This is in line with Sen (1998), who emphasised that multinationals may negatively impact the host country's research and development in order to maintain a technological advantage over local firms. He also emphasised the rise in royalty payments, which would have a negative effect on the balance of payments. It has further been argued that the host country may come to rely on multinationals' technologies, and that employees with a high level of education may leave the country because there are no opportunities for R&D in the host country. This view has been strongly challenged by another stream of researchers who opined that foreign direct investment has a positive impact on economic growth (Galaye Ndiaye & Helian Xu, 2016; Hasibul & John, 2017; Sokang, 2018); they further argued that foreign investment is key to solving the problems of low productivity and scarce local capital in developing economies through efficient exploitation and utilisation. On the other hand, another group of researchers (Ayub et al., 2019; Matsumoto, 2020) believe that the benefits of FDI depend on the absorptive capacity of the host country.
They thereby emphasised that the growth effect of foreign direct investment is induced by its interaction with other moderating factors in the host country, such as the host country's institutional, economic, political, social, and cultural state. This is consistent with Dunning's (2002) assertion that the institutional quality of the host country has become a vital driver considered by multinational corporations in attaining their efficiency-seeking goals, rather than purely market- and resource-seeking ones. Considering this underlying problem and the need to reconcile the discrepancies in previous findings, this study contributes by the broader inclusion of all the Worldwide Governance Indicators as a proxy for institutional quality to moderate the FDI-growth effect specifically for Nigeria.
The study is structured into five sections. Section one provides the study's background, which has already been discussed. The review of the literature is presented in section two, the adopted methodology of the study and its justification is covered in section three, section four presents the analysis of the various data collected, results and discussion of findings, and finally summary, conclusion and recommendations are discussed in section five.

2. Literature Review

Quite a number of both local and foreign empirical studies have been done on the relationship between foreign direct investment and economic growth. The general observation from these studies is that the results have been mixed depending on many factors, including sample periods, the methodology adopted, estimation techniques, measures of volatility adopted, and the countries considered (developed or developing). Some of these empirical studies are reviewed in this section. Nguyen et al. (2021) examined the relationship between FDI, trade, and economic growth in Albania. Annual time-series data were used in the study, as well as Johansen cointegration analysis and the error correction model. The findings revealed a long-term connection between foreign direct investment, trade, and economic growth. Similarly, Sapuan and Roly (2021) investigated the relationships between ICT adoption, foreign direct investment, and economic growth in ASEAN-8 countries. Panel regression analysis was used to test these relationships using data from 2003 to 2017. The findings revealed that ICT and FDI dissemination are significant and positively impact the ASEAN-8 countries' economic growth. However, Renzi (2021) conducted a study to determine the effects of foreign direct investment on South Sudan. According to the report, South Sudan has been unable to completely exploit FDI; the study also found that FDI has struggled to boost the country's economy and that poverty levels are still high.
Furthermore, despite modest increases in FDI, South Sudanese citizens' average standard of living remains poor, and the country is still embroiled in a long-running civil war that has claimed thousands of lives.
Opeyemi (2020) evaluated the effect of FDI and inflation on economic growth in five African countries from 1996 to 2018. The results showed that FDI has a positive effect on economic growth in all five countries. In the same vein, Gochero and Boopen (2020) used the autoregressive distributed lag (ARDL) method to investigate the impact of mining FDI on the Zimbabwean economy while adjusting for non-mining FDI and domestic investment. Using time-series data from 1988 to 2018, the results revealed that foreign direct investment in the mining sector has a significant positive relationship with the country's GDP over time. Another study, covering the period 1980-2014, researched how information and communication technology (ICT) mediates the impact of foreign direct investment (FDI) on economic growth dynamics in 25 Sub-Saharan African countries using GMM estimation techniques. According to the findings, internet and cell phone penetration significantly mediate FDI, resulting in overall positive net effects on all three economic growth dynamics. Moreover, for the period 1990-2018, Joshua et al. (2020) examined the impact of FDI on economic growth in 200 economies. Panel estimation techniques such as pooled ordinary least squares (POLS), dynamic panel estimation with fixed and random effects, and the generalised method of moments (GMM) were used in the analysis. The study discovered that FDI, debt stock, and official development assistance all foster growth in the countries studied. However, debt stock has a minor effect. Trade openness and exchange rates, on the other hand, had a mixed (positive and negative) effect on economic development.
Using panel GMM techniques, Baiashvili and Gattini (2020) investigated the effects of FDI inflows on growth in developed and developing economies and how they are mediated by income levels and the efficiency of the institutional environment, focusing on the relationship between FDI and country income levels across low-, middle-, and high-income countries. The study found that FDI benefits are not distributed uniformly and mechanically across countries. Furthermore, an inverted-U-shaped relationship between countries' income levels and the scale of the FDI effect on growth was discovered. Within country income groups, institutional factors positively mediate FDI, with countries with better-developed institutions relative to their income-group peers showing a positive impact of FDI on development. Another study used multiple regression methods to model the relationship between foreign direct investment and economic growth in a free market economy, finding that foreign direct investment's positive contribution to economic growth depends on its shared impact with domestic direct investment. Furthermore, the authors pointed out that the direct impact of FDI can be decreased as a result of the negative externalities associated with foreign investment, which include, among other things, the replacement of domestic investment and capital repatriation. Adegboye et al. (2020) evaluated the effect of institutional problems on FDI inflow and how it affects economic growth in sub-Saharan African (SSA) host countries. The research used combined data from 30 SSA countries between the years 2000 and 2018. A fixed and random effects regression model was used to estimate the effect of foreign capital on economic growth in the developing SSA sub-region of Africa, with considerations for the quality of institutions. The report confirmed that FDI is critical for economic growth in Africa's SSA subregion. Gherghina et al.
(2019) investigated the relationship between FDI inflows and economic growth, taking into account many institutional quality variables and the 2030 Sustainable Development Goals (SDGs). The empirical findings support a non-linear relationship between FDI and gross domestic product per capita by estimating panel data regression models for a sample of 11 Central and Eastern European countries from 2003 to 2016. In addition, control of corruption, government effectiveness, regulatory efficiency, the rule of law, and voice and transparency all influence growth positively, while political stability and the absence of violence/terrorism are not statistically significant. Also, using the autoregressive distributed lag (ARDL) bounds testing approach, Soylu (2019) investigated the effect of savings and foreign direct investment on economic growth in Poland from 1992 to 2016. Findings revealed that FDI has a positive impact on economic growth. Vásquez et al. (2019) examined the effect of economic openness and foreign direct investment on economic growth in eighteen Latin American countries from 1996 to 2014. Findings from the vector autoregressive model estimation revealed that FDI has a negative impact on growth in the selected countries. However, Dinh et al. (2019) analysed and provided additional and applicable quantitative data on the effect of foreign direct investment (FDI) on economic growth in developing countries in the lower-middle-income group in the period 2000-2014. Findings from the results indicate that foreign direct investment (FDI) tends to boost economic growth in the long run, while having a negative effect in the short run for the sample countries. From 1996 to 2015, Hayat (2019) investigated the role of institutional quality in economic growth, specifically the role it plays through the channel of foreign direct investments.
This paper examined the direct impact of institutional quality on economic growth and its indirect impact through enhancing FDI-induced economic growth, using a large dataset of 104 countries and the GMM estimation method on dynamic panel data. The paper shows that FDI inflows and institutional quality both lead to higher economic growth. FDI-led growth, however, was limited to low- and middle-income countries, and better institutional quality was also found to boost FDI-led economic growth in these countries. The paper also discovered that FDI has a negative impact on economic growth in high-income countries.
Kawaii (2018) examined the role of FDI in the economic growth of 62 selected countries from 1972 to 2016 using a panel analysis approach. Findings revealed that FDI has a positive and significant impact in determining the growth of these countries. This result contrasts with the earlier study conducted by Carkovic and Levine (2017), who also employed a panel data approach to examine the relationship between FDI and economic growth. Their findings revealed that FDI and its components do not exert a robust influence on economic growth. Katerina et al. (2017) employed Bayesian analysis to empirically analyse the relationship between foreign direct investment and the economic growth of the United States and European nations. Their findings, in conformity with Carkovic et al. (2017), reveal that FDI does not significantly impact the economic growth of these selected countries. Nguyen (2017) used annual time series data from 1986 to 2015 to investigate the short- and long-run effects of foreign direct investment (FDI) and exports on Vietnam's economic growth using ARDL and error correction models. The findings indicate that FDI has a substantial positive impact on Vietnam's economic growth in the long run, while exports have a negative impact. However, in the short run, FDI and exports have no significant impact on economic growth. Hayat (2017) also investigated the role of institutional quality in economic growth, especially as it relates to foreign direct investments. The paper examined the direct and indirect effects of institutional quality on economic growth through foreign direct investments using economic performance-related institutional quality indicators (both an aggregated variable of institutional quality and individual indicators). A dynamic panel dataset of 104 countries was estimated using the GMM estimation method.
In contrast to countries with lower institutional quality, FDI inflows induce faster economic growth in countries with higher institutional quality. Similarly, Jilenga and Helian (2017) investigated the effect of FDI on economic growth and the role of institutional efficiency. A sample of 36 countries from Sub-Saharan Africa was used from 2001 to 2015, and the estimation was executed using the Generalised Method of Moments (GMM) estimation technique. The empirical findings reveal that foreign direct investment has a significant negative impact on economic growth, while institutional efficiency has a positive impact on economic growth. When the interaction term between FDI and institutional quality is considered, empirical evidence shows that institutional quality increases the FDI spillover effect and thus matters for economic growth; the results of the GMM model show that good institutions are needed to mediate the effects of FDI on economic growth. Another study employed a modified growth model to examine the impact of FDI on economic growth in some randomly selected African economies from 1980 to 2013. The two estimation methods used were ordinary least squares (OLS) regression and the generalised method of moments (GMM). It found that, except for the Central African Republic, the FDI estimate in all selected countries was significantly positive for both OLS and GMM. However, despite the large and optimistic coefficients of FDI, the most important aspect of the coefficients was their extremely small magnitude, which indicated a limited or negligible effect of FDI on economic growth. In the same vein, Adedeji and Ahuru (2016) suggested in their findings that, while FDI tends to stimulate African development, it is not a critical factor in the growth process of Africa. Furthermore, the researchers claimed that the reception of global FDI by SSA was very unimpressive.
3. Methodology and Data

3.2 Model Specification
In order to examine the impact of FDI on the economic growth of Nigeria, the study augments the Mankiw, Romer and Weil (1992) growth model as follows:

lnYt = β0 + β1lnYt-1 + β2lnFDIt + β3lnGOVt + β4lnHCt + β5lnLABt + β6lnGCFCt + β7lnINFRt + β8lnTOPt + β9lnINFt + εt ………3.1

where Y, the dependent variable, is the GDP growth rate, which stands as a proxy for economic growth; Yt-1 is the lagged value of the GDP growth rate; FDI is foreign direct investment measured as a percentage of GDP; GOV is good governance, used as a proxy for institutional quality; HC is human capital development, measured as primary school enrolment; LAB is labour; GCFC is gross fixed capital formation, used as a proxy for domestic investment; INFR is infrastructural facilities; TOP is trade openness, measured as the sum of exports and imports as a percentage of GDP; and INF is inflation. β0 is the intercept, β1-β9 are the slopes of the explanatory variables, ln represents the natural logarithm of the variables, εt is the error term, and t denotes the time dimension.
There are several reasons for using the log-linear specification to estimate the coefficients of the variables. First, the relationship between these parameters is not linear. Second, in the log model, the coefficient value can be interpreted as a percentage change or elasticity rather than a unit change. Furthermore, we anticipate that FDI inflows, good governance, domestic investment, trade openness, human capital development, labour force, and infrastructure facilities will have a positive impact on economic growth in Nigeria, while inflation will have a negative impact.
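The elasticity interpretation of a log-log coefficient can be sketched with hypothetical numbers (not the study's data): if y = c·x^0.3, regressing ln(y) on ln(x) recovers the exponent 0.3 as the slope, meaning a 1% rise in x is associated with a 0.3% rise in y.

```python
import numpy as np

# Noise-free illustration with an assumed elasticity of 0.3 and scale 2.5.
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 100.0, size=200)
y = 2.5 * x ** 0.3

# In the log-log regression ln(y) = a + b*ln(x), the slope b is the elasticity.
lx, ly = np.log(x), np.log(y)
slope, intercept = np.polyfit(lx, ly, 1)
print(round(slope, 6), round(np.exp(intercept), 6))  # recovers elasticity and scale
```

With noise-free data the fit recovers the assumed elasticity exactly; with real data the slope is still read as a percentage effect, which is the point of the log-linear specification above.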
In order to determine the moderating effect of institutional quality on the FDI-growth relationship, we modify the baseline model (Eq. 3.1) to include the interaction between FDI and good governance quality, in order to test the hypothesis that the influence of FDI on the growth of the Nigerian economy is dependent on the level of governance quality. The second set of regressions, which includes the interactive term, can therefore be expressed as:

lnYt = β0 + β1lnYt-1 + β2lnFDIt + β3lnGOVt + β4(lnFDIt × lnGOVt) + β5lnHCt + β6lnLABt + β7lnGCFCt + β8lnINFRt + β9lnTOPt + β10lnINFt + εt ………3.2

We are interested in β2 and β4, which give details on the marginal effect of FDI on economic growth based on the quality of governance. A positive interaction coefficient (β4 > 0) would imply that governance quality enhances the positive effect of foreign direct investment on economic growth; likewise, a positive coefficient (β2 > 0) would imply that FDI has a direct positive effect on economic growth, and vice versa.
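The way the interactive term is read can be sketched as follows: differentiating Eq. 3.2 with respect to lnFDI gives a marginal effect of β2 + β4·lnGOV, so the FDI-growth effect can change sign at the governance level GOV* where β2 + β4·lnGOV* = 0. The coefficient values below are hypothetical placeholders for illustration, not estimates from this paper:

```python
# Hypothetical coefficients: a negative direct effect (b_fdi) and a
# positive interaction (b_int), as in the conditional-effect literature.
def fdi_marginal_effect(gov, b_fdi=-0.20, b_int=0.50):
    """Marginal effect of (log) FDI on (log) GDP at governance level gov."""
    return b_fdi + b_int * gov

# Governance level at which the FDI effect turns positive: -b_fdi / b_int.
threshold = -(-0.20) / 0.50
print(threshold)
print(fdi_marginal_effect(0.1))  # negative under weak governance
print(fdi_marginal_effect(0.8))  # positive under strong governance
```

This is why the paper reports both the unconditional FDI coefficient and the interaction term: together they determine whether, at Nigeria's observed governance level, the net effect of FDI on growth is positive.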
Regarding the estimation methods for time series data, the Augmented Dickey-Fuller (ADF) unit root test (Dickey & Fuller, 1979) will first be employed to ascertain the variables' stationarity condition. Thereafter, we estimate the optimal lag to be used in the study according to the lag selection criterion. Once the individual series' stationarity features and lag selection have been determined, linear combinations of the integrated series are assessed for cointegration. The cointegrating relationship between variables is commonly regarded as the variables' long-term equilibrium. The ARDL bounds testing technique will be used to perform the cointegration test and the regression analysis. Various cointegration techniques are available in the prior research, including Engle and Granger (1987), the Johansen (1988) cointegration test, and Banerjee et al. (1998). These cointegration techniques, however, have several drawbacks. For example, the Engle and Granger cointegration technique has two phases, and inaccuracy in one step might transfer over to the next, resulting in biased predictions (Ahmed et al., 2019). The cointegration technique developed by Johansen and Juselius (1990) requires a consistent order of integration, I(1), and a large sample size.
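The logic of the unit root check can be illustrated with a bare-bones Dickey-Fuller-style statistic on simulated series. This sketch omits augmentation lags and the MacKinnon critical values that a full ADF implementation (e.g. in an econometrics library) would provide; it only shows that the t-statistic on the lagged level separates stationary from unit-root series:

```python
import numpy as np

def df_stat(y):
    """t-statistic on rho in the regression dy_t = c + rho*y_{t-1} + e_t.

    Strongly negative values argue against a unit root. No augmentation
    lags and no proper critical values here -- a real analysis would use
    a full ADF test with MacKinnon critical values.
    """
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, _, _, _ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(42)
stationary = rng.normal(size=300)               # white noise: I(0)
random_walk = np.cumsum(rng.normal(size=300))   # unit root: I(1)
print(df_stat(stationary), df_stat(random_walk))
```

The stationary series yields a strongly negative statistic while the random walk does not, which is the pattern summarised for the study's variables in Table 1.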
Furthermore, the availability of several cointegration approaches sometimes leaves a user uncertain about selecting an appropriate method, because the results of cointegration testing might differ. The ARDL modelling approach was originally introduced by Pesaran and Shin (1999) and further extended by Pesaran et al. (2001). This approach is based on estimating an Unrestricted Error Correction Model (UECM), which enjoys several advantages over conventional cointegration techniques. The ARDL bounds test approach is preferred for this study since it gives reliable estimations for small sample sizes, as in this study's case, which spans from 1995 to 2020. Furthermore, the ARDL technique is not dependent on a consistent integration order and can be used as long as no variable is integrated at I(2) (Nathaniel, 2020). This approach employs a simple linear transformation to estimate both short-run and long-run dynamics at the same time, with the error correction term capturing the speed of convergence (Uzar & Eyuboglu, 2019). Furthermore, the ARDL bounds testing method is devoid of autocorrelation, and an optimal lag length selection eliminates the issue of endogeneity (Nepal et al., 2021).
The generalised ARDL (p, q) model is specified as:

Yt = γ0 + Σ(i=1..p) δiYt-i + Σ(i=0..q) βiXt-i + εt ………3.3

where Yt is a vector; the variables in Xt are allowed to be purely I(0) or I(1) or cointegrated; δi and βi are coefficients; γ0 is the constant; i = 1, …, k; p and q are optimal lag orders; and εt is a vector of error terms (unobservable zero-mean white noise, serially uncorrelated).

Journal of Economics and Sustainable Development www.iiste.org ISSN 2222-1700 (Paper) ISSN 2222-2855 (Online) Vol.12, No.22, 2021

To perform the bounds test for cointegration, the conditional ARDL (p, q1, q2, …, q9) model is estimated in error correction form:

ΔYt = b0 + Σ(i=1..p) b1iΔYt-i + Σ(i=0..q) b2iΔXt-i + b3Yt-1 + b4Xt-1 + εt ………3.4

with the hypotheses:

H0: b1i = b2i = b3i = … = 0 (where i = 1, 2, 3, …, 9), i.e. no cointegration
H1: b1i ≠ b2i ≠ b3i ≠ … ≠ 0

If there is no cointegration, the ARDL (p, q1, q2, …, q9) model is estimated without the lagged level terms (Eq. 3.5). If there is cointegration, the error correction model (ECM) representation is specified as:

ΔYt = b0 + Σ(i=1..p) b1iΔYt-i + Σ(i=0..q) b2iΔXt-i + λECTt-1 + εt ………3.6

where λ = (1 − Σδi) is the speed of adjustment, expected to carry a negative sign; ECTt-1 = (Yt-1 − ƟXt) is the error correction term; Ɵ = Σβi / (1 − Σδi) is the long-run parameter capturing the long-run relationship in the model; and the b coefficients are the short-run dynamic coefficients of the model's adjustment to long-run equilibrium.
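The bounds-test mechanics above can be sketched on simulated data: estimate the conditional error correction regression and F-test the joint significance of the lagged level terms. This is an illustrative sketch with one regressor (k = 1), not the study's estimation; the 5.73 figure used below is, to the best of our reading, the Pesaran et al. (2001) 5% upper I(1) bound for k = 1 with an unrestricted intercept:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a cointegrated pair: x is a random walk, y tracks 0.5*x.
n = 300
x = np.cumsum(rng.normal(size=n))
y = 0.5 * x + rng.normal(scale=0.3, size=n)

# Conditional ECM in the spirit of Eq. 3.4 (k = 1):
#   dy_t = c + b3*y_{t-1} + b4*x_{t-1} + d*dx_t + u_t
dy, dx = np.diff(y), np.diff(x)
X = np.column_stack([np.ones(n - 1), y[:-1], x[:-1], dx])

def rss(M, z):
    beta, *_ = np.linalg.lstsq(M, z, rcond=None)
    return np.sum((z - M @ beta) ** 2)

# F-test that the lagged-level coefficients are jointly zero (H0: no
# long-run relationship) against the model with the level terms dropped.
rss_u = rss(X, dy)
rss_r = rss(X[:, [0, 3]], dy)
F = ((rss_r - rss_u) / 2) / (rss_u / (X.shape[0] - X.shape[1]))
print(F > 5.73)  # comfortably exceeds the 5% upper bound: cointegration
```

An F-statistic above the upper bound rejects H0, which is exactly the condition under which the study proceeds to the ECM representation of Eq. 3.6.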
Data
The dataset used for this study is based on annual time series data spanning from 1996 to 2020, obtained from the World Development Indicators (WDI). The dependent variable is economic growth, proxied by the GDP growth rate, while the main independent variables are foreign direct investment and institutional quality (proxied by good governance). As described by Kaufmann et al. (2010), governance quality is composed of six distinct governance indicators: the rule of law, control of corruption, regulatory quality, government effectiveness, voice and accountability, and political stability. Based on these indicators, we create a composite governance index that summarises the six governance indicators into a comprehensive measure by employing Principal Component Analysis (PCA). According to Laura et al. (2016), PCA is a more suitable measure of governance since it aggregates the governance indicators and eliminates the issue of variable multicollinearity. Next to good governance, we include other explanatory variables: human capital development, labour force, trade openness, gross fixed capital formation, and inflation.
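The PCA step can be sketched as follows. This is a minimal numpy illustration on randomly generated placeholder scores, not the actual WGI data; the composite index is taken as the first principal component of the standardised indicators.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder scores for the six governance indicators
# (rows = years, columns = indicators); real data would come from WGI.
scores = rng.normal(size=(25, 6))

# Standardise each indicator, then take the first principal component
# of the correlation matrix as the composite governance index.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
loadings = eigvecs[:, -1]                 # PC1 loadings
index = z @ loadings                      # composite index, one value per year

share = eigvals[-1] / eigvals.sum()       # variance explained by PC1
print(f"PC1 explains {share:.1%} of total variance")
```

In practice one would also inspect the PC1 loadings to confirm that all six indicators enter the composite index with the expected signs.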
Data Analysis and Interpretation
Test for Stationarity
Although the ARDL method can estimate a cointegrating vector containing both I(1) and I(0) series, it is still necessary to rule out the likelihood that any of the series is I(2). The summary of the ADF unit root test, as presented in Table 1, revealed a mixed order of integration among the series. The stationarity property is established where the ADF statistic is less than the critical value (5%).
Moreover, the significant p-value at the 5% level of significance also confirms the stationary status of the series. While economic growth (GDP), foreign direct investment (FDI), human capital development (HC), the labour force (LAB), gross fixed capital formation (GFCF), and openness (OPN) attained stationarity after first differencing, I(1), good governance (GOV) and inflation (INF) attained stationarity at level, I(0). This mixed order of integration of the variables calls for the usage of the ARDL approach to cointegration. Therefore, the null hypothesis of the presence of a unit root can be rejected.
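The classification logic behind such unit root testing can be illustrated with a simplified Dickey-Fuller regression (no augmentation lags). This is an illustrative sketch, not the ADF implementation used in the study; the critical value is an approximate textbook figure.

```python
import numpy as np

def df_tstat(y):
    """t-statistic on rho in the Dickey-Fuller regression
    dy_t = alpha + rho * y_{t-1} + e_t (no augmentation lags)."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(2)
walk = np.cumsum(rng.normal(size=400))   # I(1): has a unit root
noise = rng.normal(size=400)             # I(0): stationary at level

crit_5pct = -2.87  # approximate 5% Dickey-Fuller critical value (constant)
for name, series in [("random walk", walk), ("white noise", noise)]:
    t = df_tstat(series)
    verdict = "stationary at level" if t < crit_5pct else "needs differencing"
    print(f"{name}: t = {t:.2f} -> {verdict}")
```

A series that fails the test at level but passes after one differencing would be classified I(1), exactly the mixed pattern reported in Table 1.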
Bounds Test Approach to Cointegration
Since the stationarity status has been confirmed using the ADF unit root test, we then employed the Autoregressive Distributed Lag (ARDL) bounds testing approach to examine the long-run relationship between FDI and economic growth within the period under study. The result of the bound test, as shown in Table 2, revealed that the F-statistic (6.805299) is greater than the upper and lower bounds at the 5 per cent level of significance. This implies that foreign direct investment and economic growth have a long-run relationship. Thus the null hypothesis of no cointegration between the variables is rejected.
Long-run Elasticities Based on the ARDL-ECM Model
Having confirmed that the variables are cointegrated, we estimate the long-run coefficients of the same equation and the associated ARDL error correction models. The ARDL model, however, necessitates prior information on, or estimation of, the ARDL lag ordering. An appropriate choice of the ARDL orders is adequate to compensate for residual serial correlation and the issue of endogenous regressors at the same time (Pesaran and Shin, 1997). The Akaike Information Criterion (AIC) or the Schwartz Bayesian Criterion (SBC) is used to determine the order of the distributed lag on the dependent variable and regressors. Monte Carlo evidence suggests that SBC is preferable to AIC because it is a parsimonious criterion that selects the shortest feasible lag length, whereas AIC selects the largest relevant lag length. SBC is used as the lag selection criterion in this study. Table 3 presents the summary of the long-run elasticities based on the ARDL-ECM model. The ARDL-ECM regression estimates, as presented in Table 3, reveal that the past value of the GDP growth rate has a coefficient of 0.341533; this implies that the past value of economic growth has a positive impact on the present value, such that a 1 per cent increase in the past value will lead to about a 0.34 per cent increase in the present value. The p-value of 0.0033 is also significant at the 5 per cent level of significance. Foreign direct investment has a positive and significant impact on economic growth, with a coefficient of 0.205510 and a p-value of 0.025. This implies that a percentage increase in FDI will lead to about a 0.21 per cent increase in economic growth. Moreover, good governance has a coefficient of 0.313711 with a statistically significant p-value. This means that a percentage increase in good governance will lead to about a 0.31 per cent increase in economic growth.
The interaction between FDI and institutional quality (good governance) has a positive coefficient of 0.39337 and a significant p-value of 0.0026. With institutional quality (good governance), an increase in FDI will lead to a 0.39 per cent increase in economic growth. This suggests that FDI tends to have a more positive and significant impact on economic growth when moderated by good governance. Furthermore, gross fixed capital formation and human capital development have a significant and positive impact on economic growth; a one per cent increase in each will lead to about a 0.15 per cent and 0.09 per cent increase in economic growth, respectively. Inflation has a negative coefficient of -0.718841 and a significant p-value of 0.0020, meaning that a percentage increase in inflation will amount to about a 0.72 per cent decrease in economic growth. Although the labour force and trade openness have positive coefficients of 0.089570 and 0.122517, respectively, the positive impact on economic growth is not significant since the p-values (0.0682 and 0.0978, respectively) are greater than the 5 per cent level of significance. The ECM term represents the rate at which the dynamic model adjusts to regain equilibrium after a disruption. The ECM coefficient is -0.57, which means that divergence from long-run equilibrium caused by a short-run shock is corrected at an adjustment speed of 57% per period. The constant C of the regression model is 0.216450; it is positive and statistically significant at the 5% level. The constant provides the value of economic growth when all the independent variables are simultaneously held at zero. The Adjusted R-squared, which is a more precise measure of goodness of fit, is 0.719921.
This implies that about 72 per cent of the variation in the economic growth of Nigeria over the period under study is explained by the explanatory variables in the model; the remaining 28 per cent can be attributed to other variables that influence economic growth but are not captured in the model. These variables are captured in the error term (ε). The F-statistic of the model is 4.164815 and is statistically significant at the 5% level since the p-value is 0.0011307; this indicates that the model is well specified and shows that the independent variables jointly have a significant influence on the dependent variable. The Durbin-Watson statistic of the model is 1.714028, which is close to 2. This indicates that the model is free from serial correlation; the current-period residuals of the model are not correlated with previous-period residuals.
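The reported adjustment speed can be translated into a half-life of deviations from equilibrium. A small sketch, using only the ECM coefficient of -0.57 quoted above:

```python
import math

# Speed-of-adjustment coefficient on the error correction term.
ect = -0.57

# Each year a fraction |ect| of any deviation from long-run equilibrium
# is corrected, so a shock decays geometrically as (1 + ect)**t.
remaining_after_one_year = 1 + ect
half_life = math.log(0.5) / math.log(1 + ect)

print(f"shock remaining after one year: {remaining_after_one_year:.0%}")
print(f"half-life of a deviation: {half_life:.2f} years")
```

On this reading, 43% of a shock survives the first year and half of any disequilibrium disappears in roughly 0.8 years, a fairly rapid return to the long-run path.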
Residual Diagnostics Tests
This section presents the post-estimation tests conducted to ascertain the reliability and validity of the estimates; they include the heteroskedasticity test and the serial correlation test. The results from the tests are shown in the tables below (Researcher's Computation using Eviews 10). Table 4 shows the outcome of the Breusch-Pagan-Godfrey test, which was used to determine the residual status of the variables. The p-value of 0.5304 exceeds 0.05 at the 5% level of significance, indicating no heteroskedasticity in the series. As a result, the null hypothesis that there is no heteroskedasticity in the residuals cannot be rejected. This implies that the model passes the heteroskedasticity test, demonstrating that the residuals have equal variance. Table 5 presents the result of the serial correlation test using the Breusch-Godfrey LM test for autocorrelation. The test reveals that the errors have zero mean and are serially uncorrelated, given that the p-value of the Chi-Square statistic exceeds the chosen level of significance (0.0959 > 0.05). In other words, there is no serial correlation in the residuals.
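The mechanics of a Breusch-Pagan-type LM test can be sketched as follows: regress the squared OLS residuals on the regressors and compare n·R² against a chi-square critical value. This is a simplified illustration on synthetic homoskedastic data, not the Eviews output reported in Table 4.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic regression with homoskedastic errors (hypothetical data).
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)

# Step 1: squared residuals from the main OLS regression.
b, *_ = np.linalg.lstsq(X, y, rcond=None)
u2 = (y - X @ b) ** 2

# Step 2: auxiliary regression of squared residuals on the regressors;
# LM = n * R^2 is asymptotically chi-square with (k - 1) = 2 df.
g, *_ = np.linalg.lstsq(X, u2, rcond=None)
ss_res = ((u2 - X @ g) ** 2).sum()
ss_tot = ((u2 - u2.mean()) ** 2).sum()
lm = n * (1 - ss_res / ss_tot)

print(f"LM statistic: {lm:.2f} (5% chi-square critical value, 2 df: 5.99)")
```

An LM statistic below the critical value, as with the homoskedastic data here, means the null of equal residual variance cannot be rejected, mirroring the Table 4 conclusion.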
Stability Test
To assess the stability of the coefficients, we applied the CUSUM and CUSUM of Squares (CUSUMSQ) tests to check the stability of the long-run parameters and the short-run movements of the ARDL error correction model.
Figure 1. Plot of CUSUM and CUSUMSQ (Stability Test)
Researcher's Computation using Eviews 10. Figure 1 indicates that the CUSUM and CUSUMSQ statistics lie well inside the 5% critical bounds, suggesting that the short-run and long-run coefficients of the ARDL error correction model are stable.
Conclusion and Recommendations
This study provides an empirical analysis of the moderating effect of institutional quality on Nigeria's FDI-growth relationship from 2006 to 2020. The study employed the good governance indicator as a proxy for institutional quality and the ARDL bound testing approach for the regression analysis. The following key findings are established. First, a long-run relationship was established between FDI and the economic growth of Nigeria within the period under study. Second, foreign direct investment has an unconditional positive impact on the economic growth of Nigeria. We also found a significant positive impact of institutional quality (good governance) on economic growth. Third, regarding the interactive effect of institutional quality on the FDI-growth nexus, we find convincing evidence that institutional quality (good governance) alters the effect of FDI on economic growth favourably. Overall, this study has established a net direct positive and significant effect of foreign direct investment on economic growth, and that this effect is enhanced by institutional quality (good governance). The main policy conclusion of our research is that Nigeria should strengthen its governance quality to benefit more from FDI and achieve better economic growth results. Furthermore, to reap the benefits of FDI, Nigeria must move beyond strengthening general governance to intensify the fight against corruption and build the rule of law by making its judicial system trustworthy in the eyes of the public.
Tuning of Catalytic Activity by Thermoelectric Materials for Carbon Dioxide Hydrogenation
An innovative use of a thermoelectric material (BiCuSeO) as a support and promoter of catalysis for CO2 hydrogenation is reported here. It is proposed that the capability of thermoelectric materials to shift the Fermi level and work function of a catalyst leads to an exponential increase of catalytic activity for catalyst particles deposited on its surface. Experimental results show that the CO2 conversion and CO selectivity are increased significantly by a thermoelectric Seebeck voltage. This suggests that the thermoelectric effect can not only increase the reaction rate but also shift the chemical equilibrium, changing the thermodynamically attainable conversion of CO2 in its hydrogenation reactions. It is also shown that this thermoelectric promotion of catalysis enables the BiCuSeO oxide itself to have a high catalytic activity for CO2 hydrogenation. The generic nature of the mechanism suggests the possibility that many catalytic chemical reactions can be tuned in situ to achieve much higher reaction rates, or proceed at lower temperatures, or attain better desired selectivity through changing the backside temperature of the thermoelectric support.
Introduction
Thermoelectric (TE) materials have recently attracted widespread research interest because they can convert a temperature difference directly into an electrical voltage via the Seebeck effect, S = −V/ΔT, where S is the Seebeck coefficient, V is the voltage between the two ends of the TE material, and ΔT the temperature difference. The performance of a TE material is ranked by its figure of merit ZT = S²σT/κ, where σ is the electrical conductivity and κ the thermal conductivity. Here, we report a promotional effect on the catalytic activity of both thin film and highly dispersed (nanoscale particle) metal catalysts, obtained by using TE materials as a catalyst support for CO2 hydrogenation. Furthermore, we show that this profound promotional effect on catalytic activity by the TE effect also enables the oxide TE material itself to possess high catalytic activity for CO2 hydrogenation.
The concentration of carbon dioxide in the atmosphere has risen from ≈280 ppm before the industrial revolution to ≈400 ppm in 2013 and is projected to be ≈500 ppm by 2050. [6] This contributes to the increase in global temperature and climate changes due to the "greenhouse effect." Hence there are extensive efforts to reduce CO2 emissions around the world. Generally speaking, there are three strategies to achieve this: (i) reducing CO2 production, (ii) storage, and (iii) usage. The first two options, which involve improving energy efficiency, switching to renewable energy, and CO2 capture and sequestration, have been the major focus in the past. The third strategy, i.e., using CO2 as a feedstock for making useful chemicals, is regarded as the most feasible and effective solution to our carbon conundrum. [7] CO2 hydrogenation may undergo two main processes: the first is the reverse water-gas shift (RWGS) reaction (Equation (1)), CO2 + H2 ⇌ CO + H2O, and the other leads to the formation of hydrocarbons (Equation (2)). For x = 1, y = 4, and z = 0 (i.e., the inlet gas ratio CO2/H2 = 1:4), Equation (2) reduces to the methanation reaction, CO2 + 4H2 → CH4 + 2H2O. The RWGS reaction is one of the most established reactions to convert CO2 into CO and H2O, from which conversion to liquid hydrocarbons via Fischer-Tropsch synthesis can be achieved.
Theoretical Consideration
NEMCA, which involves a reversible change of the catalytic properties of metal catalysts deposited on solid electrolytes, can be obtained by applying a small external electric current or voltage. NEMCA is due to the back spillover of ionic species from electrolytes to form a double layer at the catalyst surface, which leads to a change of the work function and chemisorption properties of the catalyst. [3,8] The effective change of surface work function leads to an exponential change of the chemical reaction rate, [9] i.e.,

r = r_0 exp(αΔφ/(k_b T)) (3)

where r is the new reaction rate, r_0 is the open-circuit reaction rate, k_b is the Boltzmann constant, α is an empirically determined constant, and Δφ is the change of work function due to the applied external voltage. Under certain conditions, Δφ is linearly proportional to the non-Ohmic drop of the external potential. [3,4,9] The basic idea of this work is to use a thermoelectric material to change the effective work function of catalyst particles to achieve a significant increase of the catalytic activity. First, consider the change of the Fermi level of the TE material BCSO when there is a temperature difference (Figure 1). BCSO is a p-type TE material: holes at the hot side diffuse into the cold side upon heating, forming an internal electrical field. Once equilibrium is reached, the Fermi level (also called the electrochemical potential) at the hot side, ε_F,h, is higher than that at the cold side, ε_F,c, and Δε_F = ε_F,h − ε_F,c = −eV, where −e is the charge of an electron and V is the Seebeck voltage.
As no external charges exist, the change of the work function at the surface is the inverse of the change of the Fermi level, i.e., Δφ = −Δε_F, so Δφ = eV at the hot surface T_h. If a metal particle is deposited on the TE material at the hot surface, its Fermi level ε_F,m must be the same as the Fermi level of the TE material at the surface, i.e., ε_F,m = ε_F,h (Figure 1). Therefore, Δε_F,m = Δε_F, and the change of work function Δφ_m = eV also holds for metal particles supported on the TE material.
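The exponential dependence of the rate on the work-function change (Equation (3)) can be evaluated numerically. A minimal sketch with illustrative numbers: the 0.1 eV shift and α = 1 are assumptions for illustration, not values from this work.

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def rate_enhancement(delta_phi_ev, alpha, temperature_k):
    """r/r0 from Equation (3): r = r0 * exp(alpha * delta_phi / (k_b * T))."""
    return math.exp(alpha * delta_phi_ev / (K_B_EV * temperature_k))

# Illustrative only: a 0.1 eV work-function change with alpha = 1 at 650 K.
print(f"r/r0 = {rate_enhancement(0.1, 1.0, 650.0):.2f}")
```

Even a modest 0.1 eV work-function shift multiplies the rate severalfold at this temperature, which is why the exponential form of Equation (3) matters.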
Applying the generalized dependence of catalytic rate on catalyst work function, Equation (3), we have

Ln(r/r_0) = −γeV/(k_b T) (4)

Here γ is a constant, to be determined by experiment. The introduction of a minus sign makes −eV the extra energy of an electron at the surface due to the Seebeck voltage V. Combining Equation (4) with the definition of the Seebeck coefficient S gives

Ln(r/r_0) = γeS(T_h − T_c)/(k_b T_h), at the hot side (5)

and the analogous expression, with T_c in place of T_h in the denominator, at the cold side (6). Equations (4)-(6) link catalytic activity with Seebeck voltage and temperature for a metallic catalyst supported on a TE material. Figure 2 shows the temperature dependence of (a) Seebeck coefficient and electrical conductivity, (b) thermal conductivity, and (c) power factor and dimensionless figure of merit ZT for the BCSO pellets after spark plasma sintering (SPS). The Seebeck coefficient was highest at room temperature with a value of 516 µV K−1, then decreased with increasing temperature, reaching 328 µV K−1 at 764 K. The electrical conductivity σ decreased with increasing temperature from room temperature to about 460 K, then increased with further increasing temperature, reaching its highest value of 18.8 S cm−1 at 764 K. The thermal conductivity κ was found to decrease continuously with temperature, being 0.84 W m−1 K−1 at 315 K and 0.42 W m−1 K−1 at 764 K; these values are very low even for TE materials. The highest power factor (S²σ) of 230 µW m−1 K−2 was obtained at ≈665 K. ZT values were found to increase with increasing temperature and reached 0.37 at ≈764 K. The Seebeck coefficient and electrical conductivity of the samples had lower values but similar trends with temperature as BCSO prepared using self-propagating high-temperature synthesis (SHS), [10] but higher than those of BCSO prepared using solid state reaction (SSR). [11] The thermal conductivity values were lower than both SHS and SSR prepared BCSOs.
[10,11] As a result, our BCSO showed ZT values (0.36 at 665 K and 0.37 at 764 K) similar to those of the SHS samples (0.33 at 675 K and 0.49 at 775 K), [10] but higher than those of the SSR samples (0.09 at 725 K and 0.15 at 775 K). [11]
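As a consistency check, the reported ZT can be recomputed from the individual transport properties quoted above for 764 K:

```python
# ZT = S^2 * sigma * T / kappa, using the BCSO values quoted for 764 K.
S = 328e-6      # Seebeck coefficient, V/K
sigma = 18.8e2  # electrical conductivity, S/m (18.8 S/cm)
kappa = 0.42    # thermal conductivity, W/(m K)
T = 764.0       # absolute temperature, K

power_factor = S**2 * sigma          # W/(m K^2)
zt = power_factor * T / kappa

print(f"ZT at {T:.0f} K = {zt:.2f}")  # reproduces the reported 0.37
```

The computed value agrees with the 0.37 read off Figure 2c, confirming that the quoted S, σ, and κ are mutually consistent.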
Thin Film and Nanoparticle Catalysts on BiCuSeO
The surface microstructures of the catalysts on BCSO were investigated using scanning electron microscopy (SEM). Pt(15), Pt(80), and Pt(NP) refer to nominal Pt film thicknesses of 15 nm and 80 nm, and to Pt deposited as nanoparticles, respectively. The surface of BCSO was relatively smooth. Many Pt particles (indicated by arrowheads) could be observed on the grains of Pt(80)/BCSO, and some Pt particles could be seen on the grains of Pt(15)/BCSO as well (indicated by arrowheads). Pt(NP)/BCSO had smaller grains and more voids. This was probably due to its lower sintering temperature (823 K), compared with the 923 K used for the other three samples. X-ray diffraction (XRD) patterns for all four samples are shown in Figure 3e, indicating that the BCSO in every sample was almost a single phase (PDF#45-0296) with the ZrSiCuAs structure. No second phase was observed in bare BCSO, but a small peak at 2θ = 27.2° suggested some Bi2O3 was present as a second phase in the other three samples. Also, the Pt(111) peak was apparent for Pt(80)/BCSO, but could not be seen in the XRD patterns of Pt(15)/BCSO and Pt(NP)/BCSO, indicating that the Pt particle sizes in the latter two samples were too small to be detected by XRD, probably less than 20 nm in diameter.
The Thermoelectric and Reduced Thermoelectric (RTE) Effect Conditions
The schematic diagram of the single-chamber reactor, which combines the TE effect with a catalytic chemical reaction, is shown in Figure 4a and Figure S1. The reaction chamber was placed on top of a hot plate to create a large temperature difference (≈200-300 K when T_h > 500 K) between the bottom floor of the chamber and the hot surface T_h of the sample (Figure 4b). A large temperature gradient in the chamber can induce strong convection along the vertical direction, which can bring in the reactants and remove the products quickly from the reaction surface at T_h. Figure 3a-d shows that the samples were not porous; hence, there was no pore-diffusion limitation. For these reasons, it was assumed that there was no mass transport limitation, and the intrinsic chemical reaction was the rate-limiting step for all the reactions investigated. Disc samples with a diameter of 20 mm and thickness of 2 mm were tested for catalytic activity, represented by the CO2 hydrogenation conversion X (%) at different temperatures (the CO2 conversion X is proportional to the CO2 reaction rate r if the backward reaction of Equation (1) is ignored; this will be discussed later). Catalytic activities are then compared between the TE and RTE conditions at the same front (hot) surface temperature. Under normal TE conditions, the backside of the disc was in contact with a water-cooled stainless steel cap (with a thin mica sheet in between for electrical insulation), so its temperature was never higher than 373 K. A large temperature gradient across the disc thickness was created when the front surface reached a high temperature. Under RTE conditions, the backside of the disc was not in contact with the cooled cap, so the temperature gradient across the disc thickness was much smaller. At a particular hot-plate temperature, after reaching thermal equilibrium, the bottom surface of the disc sample was stabilized at a temperature T_h, while the top surface was at a temperature T_c.
Hence, for the same sample at the same temperature T_h under TE and RTE conditions, the only difference was the top surface temperature T_c, which led to a different Seebeck voltage across the sample. The Seebeck voltage V between the surfaces at T_c and T_h was monitored continuously during the whole period of the experiment. In principle, the reaction could take place at the hot surface T_h, the cold surface T_c (nominal surface area 100π mm²), and the side wall of the disc (nominal surface area 40π mm²). From Tables S1 and S2 (Supporting Information), it can be seen that when T_h was below 403 K no CO2 conversion was obtained, and T_c was never higher than 331 K. Especially at high temperatures, T_h was much higher than T_c, and the temperature of the side wall was between T_c and T_h. For these reasons, we assume that for all of the samples the measured CO2 conversion was contributed by the hot surface T_h only, and that the contributions from the cold surface T_c and the side wall of the disc sample were negligible.
Higher Catalytic Activity at the Same Temperature under TE than RTE Conditions
The reaction products observed were only CO and CH4, with the vast majority (>90%) being CO (Figure 5a). Figure 5b shows the CO2 conversion as a function of temperature. For Pt(80)/BCSO, conversion was first detected at a much lower temperature under the TE conditions than the 553 K at which it was first measured under the RTE conditions (0.2%). It is plausible to assume that, relative to conditions without any TE effect, the promotional effect should be even higher. It is worth pointing out that similar experiments were repeated at least once and the results were reproducible (the same samples were used to oxidize ethylene to form CO2 and H2O, and a repeatable and similar thermoelectric promotional effect was observed; these are the subjects of a separate publication); this ruled out the possible explanation that the conversion difference between the TE and RTE conditions was due to catalyst particle aggregation at the surface. Another observation was that the BCSO TE sample, without any Pt catalyst, was also catalytically active for CO2 hydrogenation (Figure 5b). This may not be a total surprise, as BCSO is electrically conductive, and other conductive oxides have been found to be good catalysts. [12] Moreover, Cu and CuO catalysts are widely used for CO2 hydrogenation, [13] so the Cu-containing BCSO itself could have a low catalytic activity even without thermoelectric promotion. For bare BCSO, the first measured CO2 conversion (0.3%) was at 493 K under TE conditions, versus 633 K under RTE conditions (0.8%). As for Pt(80)/BCSO, at the same temperature T_h the CO2 conversion under TE conditions was much higher than under RTE conditions. At 698 K, the CO2 conversion was 20.8% under TE conditions compared to 3.4% under RTE conditions. Figure 5c plots Ln(X) against −eV/k_bT_h for all of the cases (for the p-type BCSO, V was negative and the term −eV was positive). A very good linear relationship existed between Ln(X) and −eV/k_bT_h for each case.
Promotion of CO 2 Conversion by Thermoelectric Effect
To further investigate the relationships between temperature, Seebeck voltage, and catalytic activity, the CO2 reduction reactions were studied for different samples under different inlet gas compositions, all under TE conditions. Figure 6a displays the measured thermoelectric voltage as a function of the temperature difference ΔT across the sample thickness for four samples, namely Pt(80)/BCSO, Pt(15)/BCSO, Pt(NP)/BCSO, and bare BCSO, at inlet gas ratios of CO2/H2 = 1:1 and 1:4. All four samples weighed 5.8 g. All of the samples had zero voltage when their bottom and top surfaces were at the same (room) temperature. The measured voltage for each sample increased linearly with the temperature difference. The linear gradient for Pt(80)/BCSO was 319 µV K−1 for ΔT < 200 K, and then decreased with increasing ΔT. The gradients for BCSO and Pt(15)/BCSO were similar and did not change with the inlet gas compositions. These are typical values for the Seebeck coefficient of BCSO. [2] Note that the Seebeck coefficient of 319 µV K−1 here is lower than the values reported in Figure 2a for the SPS-processed material. The main reason for this is that the BCSO used for the above catalysis experiments was not densified by SPS but by conventional sintering, so an inferior crystallinity and density were expected, which led to a smaller Seebeck coefficient. Another reason is that the Seebeck coefficient obtained through the linear gradient here is the average value over a large temperature range, while those in Figure 2a were obtained by changing the temperature over a smaller range (≈50 K over a 13 mm long sample), and, generally speaking, the Seebeck coefficient is temperature dependent. The gradient for Pt(NP)/BCSO was much lower, at about 136 µV K−1; again, it kept the same value when the inlet gas ratio was changed from 1:1 to 1:4.
This much lower Seebeck coefficient was due to the fact that this sample was sintered at a much lower temperature (823 K, compared to 923 K for the other BCSO samples), and there were still some second phases such as Bi2O3, as well as voids, in the sample (Figure 3d). These results demonstrate that the measured voltage was determined only by the intrinsic thermoelectric properties of the sample and the temperature difference, and was not affected by the gas compositions. Again, the reaction products were found to be CO and CH4, with the majority (>80%) being CO. A higher H2 concentration in the inlet gases led to lower CO selectivity. The temperature dependences of CO selectivity for six cases are shown in Figure 6b, while the other two, BCSO @ 1:1 TE and Pt(80)/BCSO @ 1:1 TE, are shown in Figure 5a. Generally speaking, at T > 600 K, the CO selectivity increased with temperature and voltage. Figure 6c shows the CO2 conversion as a function of the hot-surface temperature T_h for different samples at inlet gas ratios of CO2/H2 = 1:1 or 1:4. All of the samples showed a similar trend, i.e., the conversion increased with temperature. For the same sample, a higher H2 concentration led to higher CO2 conversion. Pt(80)/BCSO reached 48.4% conversion at 656 K, the highest among all of the samples, indicating that the Pt surface had the highest catalytic activity. Remarkably, even without any Pt catalyst, the TE material BCSO (CO2/H2 = 1:4) itself reached a conversion of 41.2% at 703 K.
Combining the results shown in Figures 5c and 6d, we can summarize the observed relationship as Equation (7):

Ln(X/X_0) = −γeV/(k_b T_h) (7)

Here, X_0 is the conversion when V equals zero, i.e., when T_c = T_h. For the p-type TE material BCSO, V at the T_h surface is negative, so −γeV is positive and the conversion at the hot side T_h can be much higher with a TE voltage than without; we call this thermoelectric promotion of catalysis (TEPOC), or thermoelectrocatalysis, as the TE material itself can be catalytically active. Take an experimental data point for Pt(80)/BCSO @ 1:4: T_h = 656 K, V = −86 mV, and γ = 7.15, so Ln(X/X_0) = −γeV/k_bT_h = 10.88, and X/X_0 = 53103. This means that at 656 K, the conversion with a Seebeck voltage of −86 mV was more than 53 thousand times higher than without a Seebeck voltage. Equation (4) can lead to Equation (7), and vice versa, if the conversion X is proportional to the reaction rate r. This requires the conversion X to be much lower than the thermal equilibrium conversion (TEC) of the reactions in Equations (1) and (2), so that the backward reactions are negligible. The TECs at 673 K for CO2 conversion in the RWGS reaction without methanation (to CO only) are about 22% and 42% for inlet gas ratios CO2/H2 = 1:1 and 1:4, respectively; with methanation, the corresponding values are about 23% and 80%, respectively. [14,15] Under RTE conditions, the conversion was very low; hence, the CO2 conversion on both Pt(80)/BCSO and BCSO was far from the TEC, so it is safe to assume that the backward water-gas shift reaction can be ignored and the CO2 conversion X was linearly proportional to the reaction rate r. So for the two cases under RTE conditions, the experimental results confirmed the prediction of Equation (4).
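The worked data point quoted above can be reproduced directly from Equation (7):

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

# Data point from the text: Pt(80)/BCSO at CO2/H2 = 1:4,
# T_h = 656 K, Seebeck voltage V = -86 mV, gamma = 7.15.
gamma, V, T_h = 7.15, -0.086, 656.0

ln_ratio = -gamma * V / (K_B_EV * T_h)  # Ln(X/X0) from Equation (7)
ratio = math.exp(ln_ratio)

print(f"Ln(X/X0) = {ln_ratio:.2f}")  # ~10.88, as quoted
print(f"X/X0 ratio = {ratio:.3g}")   # ~5.3e4-fold promotion
```

The arithmetic reproduces the quoted Ln(X/X_0) of 10.88 and a promotion factor of about 5.3 × 10⁴, consistent with the value of 53103 given in the text.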
The rate of chemical reactions usually follows the Arrhenius law, so r_0 = k_0 exp(−E_a/k_bT_h), where k_0 is a constant and E_a the activation energy of the reaction. Equations (4) and (7) suggest that the activation energy is effectively reduced by −γeV, i.e., E′_a = E_a + γeV, where E′_a is the new activation energy when there is a TE voltage V (a negative value in our case, as the reaction takes place at the hot side of a p-type TE material). Figures 5b and 6c show that at high temperatures around 673 K, when there was a high Seebeck voltage, the CO2 conversion reached the TEC (22.9% for Pt(80)/BCSO @ 1:1 TE at 673 K), fell just slightly below it (17.6% for BCSO @ 1:1 TE at 673 K, 37.2% for Pt(15)/BCSO @ 1:4 at 678 K, 36.3% for BCSO @ 1:4 at 683 K), or even rose above the TEC without methanation (48.4% for Pt(80)/BCSO @ 1:4 at 656 K). For comparison, the TEC values for CO2 conversion in RWGS reactions with and without methanation at CO2/H2 ratios of 1:1 and 1:4 at 673 K are also presented in Figure 6c. To the best of our knowledge, 48.4% is the highest reported CO2 conversion to CO (with 100% CO selectivity) at atmospheric pressure below 673 K with an inlet gas ratio CO2/H2 no larger than 4. [13,15,16] How can the CO2 conversion to CO exceed the TEC at 673 K? It can be seen from Figure 6b that at temperatures T_h > 678 K, the CO selectivity was >95% for all the samples. Notably, for Pt(80)/BCSO @ 1:4, the CO selectivity was 100% at 656 K. The CO selectivities observed at these temperatures were also much higher than the values predicted under the assumption of thermal equilibrium. [13,15,16] These results indicate that the Seebeck voltage promoted the conversion to CO and the forward reaction in Equation (1), and hence changed the TEC. This agrees with the observation that an electric field (via the NEMCA mechanism) shifted the chemical equilibrium, increased the RWGS reaction, and decreased the (backward) water-gas shift reaction.
[15] With the assistance of an electric voltage of 1.6 kV, CO2 conversion to CO on a Pt/La-ZrO2 catalyst reached 40.6% with an inlet gas ratio CO2/H2 = 1:1 at 648 K, much higher than the TEC of about 20% without an electric field at the same temperature. [15] Such a shift of the chemical equilibrium and TEC by an electrochemical energy of −eV may also be the reason why we observed the linear relationships in Figures 5c and 6d. Strictly speaking, if the conversion rate is close to the TEC, then the backward water-gas shift reaction cannot be ignored, the conversion X is not determined solely by the reaction rate, and Equation (4) cannot lead to Equation (7). Nevertheless, a very good linear relationship between Ln(X) and −eV/k_BT_h was observed for all the cases investigated. The most plausible explanation is that the Seebeck voltage V (or electrochemical energy −eV) shifted the reactions in Equation (1) toward the forward reaction, i.e., the RWGS, against the backward water-gas shift reaction. Hence, the achieved conversion rate was still far from the new chemical equilibrium, and Equation (7) can still be explained by Equation (4).
Discussion
Referring to Figure 6a,c,d, all the samples, whether bare BCSO, BCSO with a continuous Pt thin film (Pt(80)/BCSO), or BCSO with discontinuous Pt nanoparticles (Pt(15)/BCSO and Pt(NP)/BCSO), showed a similar dependence of the CO2 conversion on the temperature T_h and the Seebeck voltage V (Figures S4 and S5, Supporting Information). The four samples with similar Seebeck voltages at a given temperature, i.e., BCSO @ 1:4, Pt(15)/BCSO @ 1:1, Pt(15)/BCSO @ 1:4, and Pt(80)/BCSO @ 1:4, also had similar Ln(X) ∼ −eV/k_BT_h relationships. The samples Pt(NP)/BCSO @ 1:1 and 1:4 had the lowest Seebeck voltages and likewise shared a similar Ln(X) ∼ −eV/k_BT_h relationship. This suggests that the Seebeck voltage, not any specific surface property, was the most important factor in determining the catalytic activity. This also agrees with the observation that the CO2 conversion depends more strongly on the electric field than on the nature of the catalyst. [15] These results also agree with the observations in NEMCA of CO2 hydrogenation, in that a negative (reduced) potential increased the selectivity and reaction rate to CO, and a positive (increased) potential increased the selectivity and reaction rate to CH4. [13,17] From the above discussion, all the observed results can be explained by Equation (4), i.e., the change of work function leads to the promotion of catalytic activity. This mechanism, based on the change of work function through the in situ and controlled TE effect, suggests that TEPOC is an effective mechanism for any metallic catalyst, regardless of properties such as particle size or total amount of the metal. This is because, whatever the particle size or chemisorption property, the Fermi level of the metallic particles will be the same as that of the surface of the TE material supporting them.
The total amount of the metal particles, and indeed of any second-phase material, will affect the TE properties such as the Seebeck coefficient and electrical conductivity, as the whole system can be regarded as a TE composite: all of the samples, with or without Pt metal, are simply thermoelectric materials with different Seebeck coefficients. Of course, the metal particle surface and the TE surface may have different adsorption properties, which may lead to different catalytic properties.
Since the TE effect can be realized independently of chemical reactions, its modification of the catalytic activity can be applied in situ under operational conditions and controlled through the backside temperature, e.g., by changing from water cooling to liquid nitrogen cooling. For n-type TE materials, the Fermi level at the cold side is higher than at the hot side, but the relationship ε_F,h − ε_F,c = −eV is still valid, as is Δφ = eV, but V is now positive.
The significant promotional effect of the TE effect when there is a large Seebeck voltage can be understood from an energy point of view. −eV/k_BT_h can be regarded as the ratio between the extra electrochemical energy induced by the TE effect and the thermal energy of an electron at the reaction surface. At 300 K, the thermal energy k_BT is 25.9 meV. So 104 mV of Seebeck voltage gives an electron at the Fermi level 104 meV of extra electrochemical energy, equivalent to the thermal energy of an electron at about 1200 K, yet a 104 mV Seebeck voltage can be generated by a temperature difference of only 347 K across a TE material (such as BCSO) with an average Seebeck coefficient of 300 µV K−1. The TE effect is therefore a very efficient way to enhance the electrochemical energy of an electron at the reaction surface.
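The back-of-envelope comparison above can be reproduced directly. A small sketch; the 104 mV example and the 300 µV K−1 average Seebeck coefficient are the values quoted in the text, and the function names are illustrative.

```python
K_B_MEV = 8.617e-2  # Boltzmann constant in meV/K

def equivalent_electron_temperature(voltage_mV):
    """Temperature at which the thermal energy k_B*T equals
    the extra electrochemical energy e*V."""
    return voltage_mV / K_B_MEV

def delta_T_for_voltage(voltage_mV, seebeck_uV_per_K):
    """Temperature difference a TE material needs to generate
    the given Seebeck voltage."""
    return voltage_mV * 1000.0 / seebeck_uV_per_K

# 104 mV of Seebeck voltage (example from the text):
print(equivalent_electron_temperature(104))  # ~1200 K of thermal energy
print(delta_T_for_voltage(104, 300))         # ~347 K across the pellet
```

This makes the efficiency argument concrete: a modest 347 K temperature difference supplies electrochemical energy equivalent to heating an electron to roughly 1200 K.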
Considering Δφ = eV in TEPOC, note that Equation (4) is similar to the rate equation for NEMCA, [3,8,9] which is Ln(r/r_o) = α(Δφ − Δφ*)/k_BT, where r_o is the open-circuit reaction rate, α and Δφ* are empirically determined constants, and Δφ is the change of work function due to the applied external voltage. Under certain conditions, Δφ is linearly proportional to the non-Ohmic drop of the external potential, [3,8] so the rate Equation (4) for TEPOC looks exactly the same as the rate equation for NEMCA. However, there are a few important differences between NEMCA and TEPOC. (i) Neither an electrolyte nor an external voltage is needed in the TEPOC system, whereas for NEMCA an electrically insulating electrolyte layer is crucial, since otherwise a non-Ohmic drop of potential (or ionic current) cannot be established. In fact, the unusually low thermal conductivity of BCSO has been attributed to its negligible ionic conductivity, so the back spillover of ionic species in BCSO would have been negligible. [18] Also, we did not observe any change of reaction rate when an external voltage (positive or negative) was applied to the Pt(80)/BCSO or other samples. (ii) Unlike in NEMCA, the catalyst in TEPOC (e.g., Pt) does not need to be continuous, as TE materials are electrically conductive. Highly dispersed, discrete catalysts, including nanoparticle catalysts, can be promoted by TEPOC. (iii) The constant α in NEMCA is smaller than unity, but the values of the constant γ in TEPOC have been found to be larger than 1. The fact that γ > 1 in Equations (4)-(7) for TEPOC indicates that there is an amplification effect when the extra electrochemical energy eV is present during catalytic chemical reactions. The mechanism for this is not yet clear, but we speculate that it is related to the increase in the number of electrons available for the catalytic reaction with increasing temperature, as no change of electron density with temperature should mean γ = 1.
(iv) The TE material itself can be used as a catalyst when there is a large Seebeck voltage. (v) More importantly, the mechanism for the change of work function at the catalyst surface in TEPOC is different from that in NEMCA. In NEMCA, the external voltage induces the diffusion of ionic species, which form a double layer on the catalyst surface and produce a change of the work function. [3,8] Hence, the change of work function Δφ is an indirect consequence of the external voltage V, and the linear relationship between Δφ and V holds only under certain conditions and may be sample dependent. [19] In TEPOC, the relationship between Δφ and the Seebeck voltage V is linked directly through the change of the Fermi level, not through the formation of a double layer.
Conclusions
The thermoelectric oxide BiCuSeO has been produced using a facile solid-state reaction method with B2O3 as a flux agent in air. An innovative use of the thermoelectric material as a catalyst support and promoter has been proposed and investigated through CO2 hydrogenation to produce CO and CH4. A very high CO2 conversion of 48.4% to CO with 100% CO selectivity was obtained at atmospheric pressure at temperatures below 673 K with an inlet gas ratio CO2/H2 = 1:4.
It is proposed that the thermoelectric effect can change the Fermi level, and therefore the work function, of the electrons in catalyst particles supported on a thermoelectric material. This change of work function leads to an exponential increase of catalytic activity. It was indeed observed in experiments that the catalytic activity of metallic particles supported on the thermoelectric material, as represented by the CO2 conversion, was significantly promoted by a Seebeck voltage generated through a temperature difference across the thickness of the thermoelectric support. This thermoelectric promotion of catalysis also endowed BiCuSeO itself with high catalytic activity. The experimental results further confirmed a linear relationship between the logarithm of the catalytic activity and −eV/k_BT, which can be regarded as the ratio of the extra electrochemical energy (−eV) induced by the thermoelectric effect to the thermal energy (k_BT) of an electron. This extra electrochemical energy can also change the chemical equilibrium and the selectivity of the reaction.
The general nature of the mechanism suggests that thermoelectric promotion of catalysis could be a universal phenomenon.
Experimental Section
Thermoelectric Material Preparation: The TE material BCSO was synthesized by solid-state reaction using boron oxide (B 2 O 3 , Alfa Aesar, 99%) as a flux agent in air. During the flux synthesis, the melted B 2 O 3 served as a liquid-seal on the top of the crucible. The obtained product of each sample was then ground to a fine powder. The latter was densified at 150 MPa using a hydraulic press system to form a dense pellet of 20 mm in diameter and 2 mm in thickness. Then, the green pellet was sintered at 923 K for 10 h under an argon atmosphere. Further sintering by SPS was carried out before thermoelectric property measurements, using a HP D 25/1(FCT Systeme GmbH, Frankenblick, Germany).
Preparation of Catalysts: The film catalysts were deposited on BCSO by magnetron sputtering (Nordiko). The Pt films were prepared using pure Pt (99.99%) as the sputtering target. The Pt film thicknesses were ≈80 nm for 3 min and ≈15 nm for 20 s of sputtering time; the corresponding samples are named Pt(80)/BCSO and Pt(15)/BCSO, respectively. Another platinum nanoparticle sample (Pt(NP)/BCSO) was synthesized using an impregnation method. For this sample, the green pellet was calcined at 823 K for 2 h under an argon atmosphere and then reduced under 5% H2 in Ar at 773 K for 4 h.
The microstructural investigations were carried out using XRD (Siemens 5005) at 40 kV with a Cu Kα source and a scanning electron microscope (Philips, FEI XL30 SFEG).
Thermoelectric Property Measurements: The thermal diffusivity (D) was measured using the laser flash method (LFA-457, Netzsch, Germany) under a continuous argon flow. The total thermal conductivity was calculated as κ_total = D C_p ρ, where ρ is the mass density measured by the Archimedes method and the specific heat (C_p) was determined using a differential scanning calorimeter. The electrical conductivity and Seebeck coefficient were measured simultaneously (LSR-3/1100, Linseis) in a He atmosphere.
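The κ_total = D·C_p·ρ calculation reduces to a unit conversion from the quantities as measured (diffusivity in mm²/s, specific heat in J g⁻¹ K⁻¹, density in g cm⁻³). A minimal sketch; the numeric inputs below are chosen purely for illustration and are not measured values from this work.

```python
def total_thermal_conductivity(D_mm2_s, Cp_J_gK, rho_g_cm3):
    """kappa_total = D * Cp * rho, returned in W m^-1 K^-1.

    D   : thermal diffusivity in mm^2/s  (laser flash)
    Cp  : specific heat in J g^-1 K^-1   (DSC)
    rho : mass density in g cm^-3        (Archimedes method)
    """
    # mm^2/s -> m^2/s (1e-6), J/(g K) -> J/(kg K) (1e3), g/cm^3 -> kg/m^3 (1e3)
    return D_mm2_s * 1e-6 * Cp_J_gK * 1e3 * rho_g_cm3 * 1e3

# Illustrative (hypothetical) inputs for a low-kappa oxide:
print(total_thermal_conductivity(0.4, 0.25, 8.9))  # 0.89 W m^-1 K^-1
```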
The Reaction Chamber: Chemical reactions were performed in a single chamber reactor. A schematic diagram of the reactor can be seen in Figure 1. The cover plate was cooled with continuous running water. Gold wires (Agar Scientific, Ø 0.2 mm) were used as electrical contacts, and temperatures were measured with K-type thermocouples (Ø 0.25 mm, TC Direct) placed directly on the sample surfaces. The reaction chamber was placed directly onto a high temperature hot plate (HP99YX, Wenesco, Inc.) with a temperature controller. The Seebeck voltage was measured continuously (Figures S1 and S2, Supporting Information) between the bottom surface and the top electrode (Au) using a potentiostat-galvanostat (VersaStat 3F, Princeton Applied Research).
Catalytic Activity Measurement: The catalytic activity measurements of the different catalysts were carried out at atmospheric pressure in a continuous flow apparatus equipped with the stainless steel reactor (Figure 1b). The reactants and products were continuously monitored using online gas chromatography (GC8340, CE Instruments) and an online IR analyzer (G150 CO2, Gem Scientific) to quantify the concentrations of H2, CO, CH4, and CO2. To monitor the temperature, a K-type thermocouple was attached (and fixed using a high temperature tape) onto the catalyst surface for the T_h measurement. Another K-type thermocouple was placed in proximity to the top surface for the T_c measurement. The carbon mass balance for all of the experiments was found to be within 6%.
The catalyst activities were investigated with the composition of carbon dioxide and hydrogen at a ratios of CO 2 :H 2 = 1:1, and CO 2 :H 2 = 1:4. All samples were tested at an overall flow rate of 100 mL min −1 .
The conversion of CO2 and the selectivities of CO and CH4 were evaluated from the outlet carbon percentage values obtained by the gas analysis. H2O vapor was condensed before entering the GC to prevent deterioration of the GC column. The CO2 conversion X_CO2 and the selectivities of CO and CH4 were calculated as

X_CO2 = (y_CO + y_CH4) / (y_CO2 + y_CO + y_CH4) × 100%

S_CO = y_CO / (y_CO + y_CH4) × 100%

S_CH4 = y_CH4 / (y_CO + y_CH4) × 100%

where y_CO2, y_CO, and y_CH4 are the mol fractions of CO2, CO, and CH4 in the outlet, respectively. The CO2 reaction rate r_CO2 was obtained from the CO2 conversion and the volumetric flow rate f_v = 100 mL min−1 at the outlet of the reactor.
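Assuming the standard carbon-balance definitions of conversion and selectivity from outlet mol fractions (the usual forms for CO2 hydrogenation; the exact expressions were garbled in the source text), the quantities can be computed as follows. The example composition is hypothetical.

```python
def co2_conversion_and_selectivity(y_co2, y_co, y_ch4):
    """Carbon-balance CO2 conversion and CO/CH4 selectivity
    from outlet mol fractions (carbon-containing species only)."""
    carbon_out = y_co2 + y_co + y_ch4
    x_co2 = (y_co + y_ch4) / carbon_out   # fraction of CO2 converted
    s_co = y_co / (y_co + y_ch4)          # CO selectivity
    s_ch4 = y_ch4 / (y_co + y_ch4)        # CH4 selectivity
    return x_co2, s_co, s_ch4

# Hypothetical outlet composition on a carbon basis: 80% CO2, 19% CO, 1% CH4
x, s_co, s_ch4 = co2_conversion_and_selectivity(0.80, 0.19, 0.01)
print(f"X_CO2={x:.1%}, S_CO={s_co:.1%}, S_CH4={s_ch4:.1%}")
```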
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
Re-Evaluating the Current ANCA Screening Method: Multiplex Indirect Immunofluorescence Assay in ANCA Testing
Objective: The Indirect Immunofluorescent Test (IIFT) method is a crucial component of Anti-Neutrophil Cytoplasmic Antibody (ANCA) diagnostics, as it is the only method that can detect not only small vessel vasculitis related ANCA but also ANCAs in Chronic Inflammatory Bowel Disease (CIBD) and other disease conditions. The conventional IIFT method uses only ethanol fixed granulocytes for initial testing, which results in a lack of specificity. The purpose of this study is to outline an improvement in the screening and interpretation of ANCA by IIFT, using a 3-chip combination per incubation well. With the inclusion of an ethanol fixed granulocyte chip, a formalin fixed granulocyte chip, and a mixed cell chip containing both ethanol fixed granulocytes and HEp-2 cells, we aim to demonstrate a reduction in the number of false positives and false negatives, an increase in ANCA testing specificity without loss of sensitivity, and simplified reporting. Design/Method: 261 serum samples were obtained from the serum bank of Euroimmun Canada. Each sample was tested for both Anti-Nuclear Antibodies (ANA) and ANCA (3-chip combination) using IIFT, and was also tested using anti-MPO/PR3 ELISA. An ANCA profile (including six antigens) ELISA was used as well. All reagents were from Euroimmun. Results: Distinct ANCA patterns were identified according to our simplified scheme, including: ANCA (c- or p-ANCA), atypical ANCA (atypical c- or p-ANCA), and atypical inconclusive ANCA. The use of a 3-chip combination eliminated ANA interference and resulted in a significant reduction of 75.6% in the unsure “positive” ANCA results that were initially determined using ethanol fixed granulocytes only. The ethanol fixed granulocyte chip and formalin fixed granulocyte chip are important for pattern classification, while the mixed cell chip is crucial for differentiating true positive ANCA from ANA interference.
Noticeably, this IIFT 3-chip combination testing revealed 5% more true ANCA positive samples, which were ANCA negative when using ethanol fixed granulocytes alone as the initial screening. Conclusion: The use of a 3-chip combination multiplex IIFT approach is necessary for an accurate interpretation and analysis of patient serum in ANCA screening. This proposed inclusion of additional substrates in the IIFT ANCA diagnostic procedure has been shown to increase the specificity of the IIFT while maintaining a high sensitivity. The simplified reporting also makes standardization of ANCA results a reality. The ANCA IIFT method continues to be very useful in routine testing for different groups of diseases. DOI: 10.29011/IJCP-105.000005 Citation: Reinhart WE, Gallan Y, Godbout T, Ma D (2017) Re-Evaluating the Current ANCA Screening Method: Multiplex Indirect Immunofluorescence Assay in ANCA Testing. Int J Clin Pathol Diagn 2017: J105. Volume 2017; Issue 1
Introduction
Anti-Neutrophil Cytoplasmic Antibodies (ANCA) are a group of autoantibodies that react with various proteins within neutrophils. Screening patient serum for these antibodies holds clinical significance and gives insight into various autoimmune disorders, including ANCA-Associated Vasculitis (AAV), Chronic Inflammatory Bowel Disease (CIBD), autoimmune liver disease, collagenosis, and more [1-8]. There are three common ANCA patterns: Cytoplasmic ANCA (cANCA), Perinuclear ANCA (pANCA), and atypical ANCA [1,2,4,5,9,10]. cANCA shows fluorescence of the cytoplasm on ethanol fixed granulocytes, typically a result of autoantibodies that react with the major target antigen, Proteinase 3 (PR3). pANCA demonstrates fluorescence of the outer edge of ethanol fixed granulocyte cell nuclei, resulting from antibodies that react with various target antigens, the major antigen being Myeloperoxidase (MPO) [4,5,9]. Atypical ANCA includes patterns not covered by the cANCA and pANCA patterns [2,5,11,12]. The atypical pattern is not clearly described in the literature and varies between laboratories. However, the target antigens for the described atypical pattern seem to include elastase, lactoferrin, cathepsin 3, Bactericidal Permeability Increasing Protein (BPI), and other specificities [5,7,8,13]. Diagnostic testing for ANCA is conducted using the Indirect Immunofluorescent Test (IIFT) and monospecific antibody testing. Currently, the standard diagnostic algorithm for the testing and reporting of ANCA involves both the IIFT, using ethanol fixed granulocytes, and MPO and PR3 specific assays [2,14]. When patient sera are tested for ANCA, they are routinely tested using ethanol fixed granulocytes, displaying fluorescent patterns that illustrate the types of autoantibodies present in the sera and give insight into the specific autoimmune disease [15].
Monospecific ANCA testing is typically used to detect anti-PR3 and anti-MPO antibodies in AAV [1]. However, an additional ANCA profile can be run on patient samples to determine whether alternative antigen specific autoantibodies are present in other disease groups, including CIBD, autoimmune liver disease, and other systemic rheumatic diseases. Although monospecific assays are used in the diagnostic process of ANCA, the IIFT remains important since many antigens have yet to be identified. Oudkerk Pool M, Ellerbroek P, Ridwan B, Goldschmeding R, von Blomberg B, et al. [16] showed that combining both ELISA and IIFT in the screening algorithm increases the specificity and sensitivity of ANCA diagnostic testing as a whole. The IIFT is able to identify almost all ANCA positive sera, as it also detects the presence of antibodies against antigens other than MPO and PR3, which are the only two included in the standard ANCA monospecific test. A study by Lin M, Silvestrini R, Culican S, Campbell D, Fulcher D [17] demonstrated that many ANCA positive patient samples are negative for PR3 and MPO antigens; these would have been reported as false negatives if IIFT had not been used in addition to monospecific testing. The problem with the conventional IIFT is that the method is not standardized and lacks specificity as a result of using inadequate substrates. Due to the subjective nature of interpreting IIFT ANCA results, the test has a low specificity, resulting in a significant number of false positives. Studies have attributed false positives in IIFT testing for ANCA to the potential influence of ANA on ANCA results, as ANA patterns appear when using ethanol fixation [1,5,9,14]. Therefore, false positives can occur when ANA positive results are mistaken for ANCA positive results. Since ANA can interfere with ANCA results when strictly using ethanol fixation, research investigating additional fixations is crucial. Hagen E, Daha M, Hermans J, Andrassy K, Csernok E, et al.
[14] exemplified this requirement by establishing a sensitivity range of 81%-85% for various forms of AAV, and a specificity of 76% for diseased controls. This study used IIFT with only the ethanol fixation and demonstrated a significant number of false positives. Stone J, Talor M, Stebbing J, Uhlfelder M, Rose N, et al. [1] explored the validity of the IIFT for ANCA diagnostics, and included both ethanol fixed granulocytes and formalin fixed granulocytes. The inclusion of the formalin fixation allowed for an increase in specificity from 76%, as seen in the study by Hagen et al. [14], to 93% [1]. The inclusion of formalin fixed granulocytes has been shown in a number of studies to increase the specificity and result in more accurate ANCA testing. Additionally, ANA presence can interfere with ANCA results. A number of laboratories are including HEp-2 cells to differentiate ANA and ANCA, which could further reduce the number of false positives present in the current ANCA IIFT algorithm [18]. However, it is important to have a clear and standard procedure that combines findings from the literature. A multiplex approach is therefore the best choice for ANCA testing, including the IIFT. This study uses a multiplex substrate approach in the IIFT, developed by Euroimmun, to standardize the IIFT for ANCA testing in a way that is easy to comprehend and reduces the subjectivity of analysing results. We intend to illustrate the reduction of false positives with the inclusion of formalin fixed granulocytes and a mixture of HEp-2 cells with ethanol fixed granulocytes, alongside an ethanol fixed chip. This study is focused on quality diagnostics from a laboratory point of view.
Clinical Samples
All patient samples were obtained through the serum bank of Euroimmun Medical Diagnostics Canada. No patient records were required for the purposes of the present study. 261 serum samples were included in the study. Most samples were previously tested as ANA positive and 55 samples were previously tested as ANCA positive. All samples were re-tested for ANA using IIFT and tested for ANCA using IIFT and anti-PR3/anti-MPO ELISA. Additionally, an ANCA profile (ELISA) was completed for 101 samples.
IIFT -ANA
Each well on the slide consists of a HEp-2 epithelial cell chip (Euroimmun, Germany). Samples were diluted 1:80 for testing. All IIFT testing was done via the automated IF Sprinter (Euroimmun). The instrument automatically pipetted 30 µl of the positive control, negative control, and the diluted sample, respectively, onto each BIOCHIP well. After this step, the slides were incubated at room temperature for 30 minutes. After the incubation period, the IF Sprinter brought the slides to the washing station to complete the washing procedure using PBS Tween 20 (PBST). After washing, the IF Sprinter pipetted 25 µl of the conjugate (FITC-labelled goat anti-human IgG) onto each of the BIOCHIP wells, which were then incubated. The wells were then washed again as previously described. After the automated processing, each of the BIOCHIP slides was removed from the trays. Coverslips were placed on each slide with the embedding medium. The slides were then read using the EUROStar LED microscope (Euroimmun) and results were recorded.
IIFT -ANCA
Each 3-chip BIOCHIP well on the slide consists of ethanol fixed granulocytes, formalin fixed granulocytes, and a mixture of HEp-2 cells and ethanol fixed granulocytes (mixed cell chip) (Euroimmun, see Figure 1). Patient sera were diluted to 1:10. All testing for ANCA was processed via the automated IF Sprinter, following the same procedure explained previously for ANA testing.
Analysis of IIFT Results
For both ANA and ANCA, the pattern and intensity level were determined and recorded. Specifically, for ANA, the identified patterns were nuclear patterns (homogeneous, speckled, nuclear dots, centromeres, nucleolus, nuclear membrane, mitotic), cytoplasmic patterns (fine granular, coarse granular, droplets, and filamentous), and combinations of these. Through visual analysis under the EUROStar LED microscope, ANA intensity was categorized into 5 levels: 0 (negative), 1 (weak positive), 2 (positive), 3 (strong positive), 4 (very strong positive). With ANCA IIFT, the ethanol fixed granulocytes and formalin fixed granulocytes were used to determine the pattern. The mixed cell chip was used to check ANA interference and antibody specificity. In most cases, ANCA exists when the granulocytes in the mixed cell chip have greater fluorescence intensity than the HEp-2 cells, and this comparison can be used to differentiate ANA and ANCA coexistence from ANA interference. The ANCA patterns were analyzed and categorized as follows:
• ANCA (c- or p-ANCA), with a positive reaction on the ethanol fixed granulocyte chip, the formalin fixed granulocyte chip, and the granulocytes on the mixed cell chip.
• Atypical ANCA (atypical c- or atypical p-ANCA), with a positive reaction on the ethanol fixed granulocyte chip and the granulocytes on the mixed cell chip, and a negative reaction on the formalin fixed granulocyte chip.
• Atypical inconclusive ANCA, with a negative reaction on the ethanol fixed granulocyte chip, a positive reaction on the formalin fixed granulocyte chip, and a positive or negative reaction of the granulocytes on the mixed cell chip.
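The three-chip decision rules above can be sketched as a simple classifier. The function name and boolean interface are illustrative, not part of the published procedure; inputs are the positivity of the ethanol fixed granulocyte chip, the formalin fixed granulocyte chip, and the granulocytes on the mixed cell chip.

```python
def classify_anca(ethanol_pos, formalin_pos, mixed_granulocytes_pos):
    """Classify an ANCA IIFT result from the 3-chip readout,
    following the pattern categories described in the text."""
    if ethanol_pos and formalin_pos and mixed_granulocytes_pos:
        return "ANCA (c- or p-ANCA)"
    if ethanol_pos and not formalin_pos and mixed_granulocytes_pos:
        return "atypical ANCA"
    if not ethanol_pos and formalin_pos:
        # the mixed cell chip may be positive or negative here
        return "atypical inconclusive ANCA"
    return "negative"

print(classify_anca(True, True, True))    # ANCA (c- or p-ANCA)
print(classify_anca(True, False, True))   # atypical ANCA
print(classify_anca(False, True, False))  # atypical inconclusive ANCA
```

Note that ANA interference checks (comparing granulocyte and HEp-2 intensity on the mixed chip) would precede such a classification in practice.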
ANA intensity at sample dilution of 1:10 was categorized using the mixed cell chip into 5 distinct levels, as described for ANA intensity.
Monospecific Confirmation ELISA Testing for ANCA
All samples were tested using anti-MPO and anti-PR3 antibody ELISA kits (Euroimmun) and 101 samples were tested additionally for anti-elastase, anti-lactoferrin, anti-cathepsin 3 and anti-BPI, using ANCA profile ELISA kit (Euroimmun).
Anti-MPO and Anti-PR3 ELISA
All anti-MPO and anti-PR3 antibodies were tested using the fully automated Euroimmun Analyzer 1. In short, patient samples were diluted 1:201 in sample buffer and placed into wells. 100 µL of calibrators, positive controls, negative controls, and diluted samples were pipetted into the ELISA plate wells, respectively, and incubated for 30 minutes. The wells were then washed with wash buffer in the automated washing station. After washing, 100 µL of enzyme conjugate (peroxidase-labelled rabbit anti-human IgG) was added to each plate well and incubated for 30 minutes. The wells were then washed as previously described. After washing, 100 µL of substrate solution (TMB/H2O2) was pipetted into each well and incubated for 15 minutes. 100 µL of stop solution (0.5 M sulphuric acid) was then added to each microplate well. The complete plate was then read at a wavelength of 450 nm, with a reference wavelength of 620 nm.
ANCA Profile
101 of the samples were tested using ANCA profile, to determine alternative antigen specificity in addition to MPO and PR3, including: lactoferrin, elastase, cathepsin 3 and BPI. The ELISA testing for ANCA profile was completed using the Euroimmun Analyzer 1, following the same procedure explained previously for anti-MPO and anti-PR3 ELISA.
Analysis of ELISA and ANCA Profile
For the anti-MPO and anti-PR3 ELISA test, a quantitative value was calculated according to a standard calibrator curve. Results that are equal to or greater than 20.0 (RU/mL) are considered positive for either anti-MPO or anti-PR3 antibodies. For the ANCA profile ELISA test, semi-quantitative value was obtained using a cut off calibrator. Results that are equal to or above 1.0 (ratio) are positive for antibody presence.
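The cut-off rules above can be expressed as a small sketch. The function names are illustrative; the 20.0 RU/mL quantitative cut-off and the 1.0 semi-quantitative ratio cut-off are those stated in the text.

```python
def mpo_pr3_positive(value_ru_ml, cutoff=20.0):
    """Quantitative anti-MPO/anti-PR3 ELISA: values >= 20.0 RU/mL
    are considered positive."""
    return value_ru_ml >= cutoff

def profile_positive(ratio, cutoff=1.0):
    """Semi-quantitative ANCA profile ELISA: ratios >= 1.0
    are positive for antibody presence."""
    return ratio >= cutoff

print(mpo_pr3_positive(35.2))  # True  (above the 20.0 RU/mL cut-off)
print(profile_positive(0.8))   # False (below the 1.0 ratio cut-off)
```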
Statistics
A chi-square test and a sign-test were conducted for the statistical analysis. A p-value less than 0.05 indicated a significant difference.
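As an illustration of the sign test mentioned above, an exact two-sided version can be implemented with the standard library alone. This is a sketch of the generic test, not the study's own computation, and the paired counts in the example are hypothetical.

```python
from math import comb

def sign_test_p_value(n_pos, n_neg):
    """Exact two-sided sign test (ties dropped): probability under the
    null hypothesis (p = 0.5) of a split at least as extreme as observed."""
    n = n_pos + n_neg
    k = min(n_pos, n_neg)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical paired comparison: intensity higher at 1:10 in 20 of 23
# informative pairs (3 lower; ties excluded)
p = sign_test_p_value(20, 3)
print(p < 0.05)  # True: significant at the 0.05 level
```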
Categorizing the ANCA Patterns
Positive ANCA samples were identified and categorized into distinct ANCA patterns, including:
• ANCA (c- or p-ANCA).
• Atypical ANCA (atypical c- or atypical p-ANCA).
• Atypical inconclusive ANCA.
Identification of the specific pattern, pANCA or cANCA, of a sample depends on whether the ethanol fixed granulocyte chip is positive. A cANCA or pANCA pattern is present when the formalin fixed granulocyte chip, the ethanol fixed granulocyte chip, and the granulocytes on the mixed cell chip are positive, as seen in Figures 2a and 2b, respectively. A sample is defined as atypical pANCA or atypical cANCA when the ethanol fixed granulocyte chip is positive, the formalin fixed granulocyte chip is negative, and the granulocytes on the mixed cell chip are positive, as seen in Figure 2c (shown as atypical pANCA). There were 6 samples with an atypical cANCA-like pattern; however, the atypical cANCA pattern was excluded for these cases due to non-ANCA related cytoplasmic autoantibody interference, identified through the use of the HEp-2 and granulocyte mixed chip.
The last type of ANCA pattern is termed atypical inconclusive ANCA, which occurs when the ethanol fixed granulocyte chip is negative, the formalin fixed granulocyte chip is positive, and the granulocytes on the mixed cell chip are either negative (shown in the Figure) or positive (not shown in the Figure), as illustrated in Figure 2d.

ANA Fluorescence Intensity at a Sample Dilution of 1:10 vs. 1:80

As previously described, all 261 samples were tested using IIFT for ANA, using HEp-2 cells, at a 1:80 sample dilution. Additionally, all samples were tested for ANCA, using HEp-2 cells (mixed with ethanol fixed granulocytes), at a 1:10 dilution. The intensity levels of the HEp-2 cells at both dilutions were analyzed and recorded as described in the methods section. The intensity levels between the two test results were compared, showing a significantly higher intensity level at the 1:10 dilution than at 1:80. All statistical analyses were completed on the 200 samples that tested positive for ANA (see Table 1).
ANCA Results: Ethanol Fixed Granulocytes Alone vs. 3-Chip BIOCHIP
200 samples that tested positive for ANA were selected for the comparison of results obtained using only ethanol fixed granulocytes with those obtained using the 3-chip BIOCHIP. 135 of the ANA positive samples were determined to be unsure ANCA "positive" when based solely on the ethanol fixed granulocytes. With the additional analysis of the formalin fixed granulocyte chip and the mixed cell chip, only 33 samples were identified as true ANCA positives. With the 3-chip BIOCHIP, the number of false positives caused by ANA interference was significantly reduced, by 75.6% (see Figure 3).
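The 75.6% figure follows directly from the counts reported above (135 unsure "positives" on ethanol-only screening reduced to 33 confirmed true positives with the 3-chip BIOCHIP):

```python
def percent_reduction(before, after):
    """Relative reduction in counts, as a percentage."""
    return 100.0 * (before - after) / before

# Counts from the text: 135 unsure positives -> 33 true positives
print(f"{percent_reduction(135, 33):.1f}%")  # 75.6%
```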
ANCA Pattern Distribution
A total of 82 true positive ANCA samples were identified and an ANCA pattern distribution was produced as shown in Figure 4.
Atypical Inconclusive ANCA
This ANCA pattern is defined as a pattern that shows the ethanol fixed granulocytes as negative and the formalin fixed granulocytes as positive. Four of the 82 true positive ANCA samples were identified as atypical inconclusive ANCA. Monospecific confirmation showed these samples to have either anti-MPO or anti-PR3 antibodies (see Table 2).
ANCA Positive Samples with Multiple Antigen Specificities
Monospecific confirmation demonstrated that a total of 8 out of the 82 true positive ANCA samples had reaction to multiple antigens (see Table 3).
Simplifying ANCA Patterns
The complexity of ANCA pattern recognition described in previous consensus articles [2,11] is difficult to follow in a routine laboratory setting. Using the multiplex 3-chip approach, ANCA patterns can be simplified according to the positivity of different substrates used. In this study, 3 ANCA pattern categories were identified, as described in the results (see Figure 2). This multiplex approach allows for results to be confidently analyzed with certainty of classification, providing reliable and consistent results.
ANA Interference
ANA (both anti-nuclear and anti-cytoplasmic) interference often accounts for false ANCA positives when using ethanol fixed granulocytes as the only IIFT substrate [1,5,9,14]. Anti-nuclear antigens could contribute to the pANCA-like pattern, while anti-cytoplasmic antigens could cause the cANCA-like pattern [1,14]. When either of these situations occurs, the ANCA results are truly negative once the coexistence of ANA and ANCA is excluded. Therefore, it is important to evaluate the ANA interference to determine whether the ANCA-like patterns are caused by ANA alone or by the coexistence of ANA and ANCA. The use of the mixed cell chip (HEp-2 and granulocytes) is crucial to further aid in ANCA analysis, by excluding ANA interference and revealing the coexistence of ANA and ANCA.
This mixed cell chip is very helpful when the ethanol fixed and formalin fixed granulocytes cannot give a clear ANCA result. In the mixed chip, if the HEp-2 cells display fluorescence intensity stronger than, or equal to, that of the granulocytes, there is ANA interference, and the ANCA result is considered negative provided the formalin fixed granulocyte chip is negative as well. When the granulocytes in the mixed cell chip show greater fluorescence intensity than the HEp-2 cells, positive ANCA and the specific pattern can be recorded with the combined use of the other 2 chips. During routine testing, ANA results are based on a sample dilution of 1:80 (some laboratories use different screening dilutions), whereas ANCA routine testing uses a lower 1:10 dilution. Because these dilutions differ, the predetermined ANA results from routine testing at 1:80 cannot be used to exclude ANA interference; the significant difference between ANA results at sample dilutions of 1:10 and 1:80 can be seen in Table 1. It is therefore important to test patient sera on HEp-2 cells at the same dilution as the granulocytes to differentiate ANA interference in ANCA results.
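The chip-combination rules described above can be sketched as a small decision function. This is an illustrative simplification, not the study's protocol: the inputs (booleans for the two granulocyte chips, raw intensities for the mixed chip) and the function name are assumptions made for the sketch.

```python
def interpret_anca(ethanol_pos, formalin_pos,
                   mixed_hep2_intensity, mixed_granulocyte_intensity):
    """Sketch of the 3-chip interpretation rules described in the text.

    ethanol_pos / formalin_pos: results on the ethanol and formalin fixed
    granulocyte chips; the last two arguments are fluorescence intensities
    (arbitrary units) of the two cell types on the mixed HEp-2/granulocyte chip.
    """
    # HEp-2 staining at least as strong as the granulocytes -> ANA interference.
    ana_interference = mixed_hep2_intensity >= mixed_granulocyte_intensity

    if not ethanol_pos and formalin_pos:
        # Ethanol fixed negative but formalin fixed positive.
        return "atypical inconclusive ANCA"
    if ethanol_pos and ana_interference and not formalin_pos:
        # The ethanol-chip signal is attributed to ANA alone.
        return "ANCA negative (ANA interference)"
    if ethanol_pos and not ana_interference:
        # Granulocytes outshine HEp-2 cells: genuine ANCA reactivity.
        return "ANCA positive"
    return "indeterminate"

print(interpret_anca(True, False, 30, 90))  # prints: ANCA positive
```

A real laboratory workflow would of course also record the specific pattern (pANCA, cANCA, etc.) from the two fixation chips; the sketch only captures the positive/negative/interference logic.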
Ethanol Fixed Granulocytes Alone vs. 3-Chip BIOCHIP
The results demonstrate that, with the use of multiple substrates and fixations, the IIFT becomes a more accurate screening assessment. Conventional ANCA diagnosis of patient sera strictly includes the analysis of ethanol fixed granulocytes. Some laboratories may use separate slides containing formalin fixed granulocytes to confirm ANCA positivity, either after obtaining positive results from ethanol fixed granulocytes or in parallel. This additional step is time consuming and impractical. In a previous study, the specificity and sensitivity of the IIFT with only ethanol fixed granulocytes were reported as 76% and 81%-85%, respectively. The low specificity was attributed to a high number of false positive pANCA results in the disease control group, thought to be due to ANA interference [1,14,19,20]. ANA and atypical ANCA patterns appear when using alcohol fixation, which could explain why strictly testing sera on ethanol fixed granulocytes results in many false positives and misdiagnoses [9,21]. Baslund et al. [22] determined that 9% of the positive ANCA samples identified in their study were false positives when using ethanol fixation alone, which can be minimized by using additional substrates. In the present study (see Figure 3), a higher number of indeterminate ANCA positives were identified when a single substrate (ethanol fixed granulocytes only) was used for ANCA testing. These unsure ANCA results were due to ANA interference, since the majority of the samples used in this study were ANA positive. A small number of false ANCA negatives were also found when using the single substrate, owing to the absence of a formalin fixed granulocyte substrate (discussed under the atypical inconclusive ANCA).
The addition of a formalin fixed granulocyte chip and mixed cell chip resulted in the identification of a significant number of false positives attributed to ANA interference and ensured reliable ANCA results.
Atypical Inconclusive ANCA
A recent study by Lin et al. [17] identified the following distinct positive IIFT ANCA patterns with the use of ethanol and formalin fixations: pANCA, cANCA, atypical pANCA, and atypical cANCA. Furthermore, the study outlined a potential method for analysis and interpretation of ANCA results that is clear and concise. In the present study, an additional pattern was identified with the use of a 3-chip BIOCHIP: "Atypical Inconclusive ANCA".
The atypical inconclusive ANCA pattern is not described in the literature. Four of the 82 (5%, see Figure 4) ANCA positive samples identified in this study were categorized as atypical inconclusive ANCA. Interestingly, the monospecific confirmation of these 4 samples was either anti-MPO or anti-PR3 positive, without multiple specificities (see Table 2). This indicates that this pattern will be missed if ethanol fixed granulocytes are used alone for ANCA screening, resulting in false negatives. Using ethanol and formalin fixed granulocytes together increases the sensitivity of ANCA IIFT testing.
ANCA with Multiple Monospecificities
It is well documented that the major target antigen of the cANCA pattern is PR3, while MPO is the major target antigen of the pANCA pattern [4,5,9]. However, exceptions do exist. In this study, of the 82 ANCA positive samples, 3 of the pANCA samples demonstrated anti-PR3 positivity. Moreover, 8 of the positive ANCA samples exhibited 2 or 3 monospecific antibodies (see Table 3). This indicates that monospecific confirmation for ANCA should include more target antigens than just MPO and PR3; testing only anti-MPO and anti-PR3 cannot establish single monospecificity. ANCAs with multiple monospecificities may indicate different clinical relevance [23].
Conclusion
The use of a 3-chip BIOCHIP in the IIFT is necessary for an accurate interpretation and analysis of patient serum for ANCA screening. This proposed inclusion of additional substrates in the IIFT ANCA diagnostic procedure has been shown to increase the specificity of the IIFT while maintaining a high sensitivity. Figure 5 illustrates the proposed testing strategies for ANCA using this 3-chip BIOCHIP. In the present study, the IIFT ANCA test did not fail to detect any anti-MPO or anti-PR3 positive samples. However, due to the relatively small sample size, it is still cautiously recommended to confirm ANCA IIFT negative samples with monospecific assays until studies with larger sample sizes are done. The simplified reporting makes ANCA result standardization a reality. The ANCA IIFT method continues to be very useful in routine testing for different groups of diseases.
"Medicine",
"Biology"
] |
Heterogeneity-induced lane and band formation in self-driven particle systems
The collective motion of interacting self-driven particles describes many types of coordinated dynamics and self-organisation. Prominent examples are alignment or lane formation, which can be observed alongside other ordered structures and nonuniform patterns. In this article, we investigate the effects of different types of heterogeneity in a two-species self-driven particle system. We show that heterogeneity can generically initiate segregation in the motion and identify two heterogeneity mechanisms. Longitudinal lanes parallel to the direction of motion emerge when the heterogeneity lies statically in the agent characteristics (quenched disorder), while transverse bands orthogonal to the motion direction arise from dynamic heterogeneity in the interactions (annealed disorder). In both cases, non-linear transitions occur as the heterogeneity increases, from disordered states to ordered states with lane or band patterns. These generic features are observed for a first and a second order motion model and for different characteristic parameters related to particle speed and size. Simulation results show that the collective dynamics emerge in relatively short time intervals, persist in stationary states, and are partly robust against random perturbations.
The emergence of collective motions from individual behaviors is fundamental to authorities for the control of crowd and traffic dynamics and the development of intelligent transportation strategies. In self-driven particle systems, collective dynamics can result from heterogeneity effects in the microscopic behaviour of the particles, among other mechanisms such as inertia or delay. Pedestrian dynamics describe for instance lane formation in counter-flows or for pedestrians walking in the same direction but with different speeds 32,50 . Other examples are stripe, diagonal travelling band or chevron patterns for crossing flows 40,41 . In this article, we show by simulation that heterogeneity effects can generically initiate segregation and the spontaneous formation of lane or band patterns in two-species flows of polarised agents. Two heterogeneity mechanisms are identified: static heterogeneity in the agent characteristics and dynamic heterogeneity in the interactions. Static heterogeneity corresponds to quenched disorder in the terminology of solid state physics and random walks, while dynamic heterogeneity corresponds to annealed disorder (see 51,52 and references therein). Interestingly, lanes spontaneously occur when the heterogeneity relies statically on the agent features (quenched disorder), while bands emerge if the heterogeneity operates dynamically in the interactions (annealed disorder). The lane and band patterns are stable and persist in stationary states, although no alignment interaction rules are defined (explicitly or implicitly). The features are generically observed with different microscopic motion models, namely the first order collision-free speed model 53 and the inertial second order social force model 54 , and with different types of parameters related to agent speed or agent size. Lane and band patterns are also observed with different binary mixtures of interacting particles [55][56][57] , e.g.
oppositely charged colloids subject to, respectively, DC and AC external electric fields 58,59 . In the presented models, the heterogeneity comes from internal interaction mechanisms. Potential applications are mixed urban traffic flow and the modelling of the interactions between different types of road users.
Models.
We consider in the following two types of agents evolving on a torus. We denote by n = 1, . . . , N the agent's ID and by k_n = 1, 2 the agent's type. The agent's motion is given by a dynamic model F_p(X_n) that defines the agent speed, as in the collision-free model 53 , or the agent acceleration, as in the social force model 54 , according to local spatio-temporal variables X_n (e.g. the position and speed differences with the neighbours) and a set of parameters p (namely, desired speed, desired time gap, repulsion rate, agent size, and so on). We assume two different settings p_1 and p_2 for the parameters. Two types of heterogeneity are then considered.
1. Heterogeneity in the agent characteristics—We attribute the two parameter settings p_1 and p_2 statically to the two types of agents,

p_n = p_{k_n}. (1)

We aim here to model different types of agents (for instance pedestrians and bicycles) with specific characteristics in terms of desired speed, agent size, etc. This kind of heterogeneity is usually called quenched disorder in solid state physics. It refers to static heterogeneity features remaining constant (i.e. quenched) over time.

2. Heterogeneity in the interactions—We attribute the two parameter settings p_1 and p_2 dynamically according to the type of the closest agent in front. The parameter setting is p_1 if the agent in front is of the same type, while it is p_2 in case of interaction with another agent type,

p_n = p_1 if k̄(X_n) = k_n, and p_n = p_2 otherwise, (2)

with k̄(X_n) the type of the closest agent in front (see "Methods" for details). Such a mechanism may be realized in mixed urban traffic, where cyclists or electric scooter drivers adapt their behaviour, e.g. increasing the time gap or reducing the desired speed, when following a group of pedestrians. The heterogeneity features are here time-dependent. They are usually called annealed disorder in the literature of solid state physics 51,52 . In contrast to the model Eq. (1), for which the heterogeneity lies statically in agent characteristics, the model Eq. (2) induces a dynamic heterogeneity mechanism taking place in the interactions. See Fig. 1 for an illustrative example in one dimension.

Analysis. We qualitatively observe by simulation that the static heterogeneity model M 1 Eq. (1) initiates the formation of lanes in the system, while the dynamic heterogeneity model M 2 Eq. (2) allows the formation of bands (see Fig. 2 below). To classify the state of the system, we measure the agent's mean speed as well as order parameters for lane and band formation.
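The two parameter-assignment mechanisms (static by agent type, dynamic by the type of the agent in front) can be sketched in a few lines; the function names and the placeholder settings "p1"/"p2" are illustrative, not part of the paper's code:

```python
# Sketch of the two heterogeneity mechanisms for assigning parameter settings.
# types[n] in {1, 2} is agent n's type; front_types[n] is the type of the
# closest agent in front of n (which changes over time as agents move).

def assign_static(types, p1, p2):
    """Quenched disorder (model M1): the setting is fixed by the agent's type."""
    return [p1 if t == 1 else p2 for t in types]

def assign_dynamic(types, front_types, p1, p2):
    """Annealed disorder (model M2): the setting depends on whether the
    closest agent in front is of the same type, re-evaluated at each step."""
    return [p1 if t == f else p2 for t, f in zip(types, front_types)]

types = [1, 1, 2, 2]
print(assign_static(types, "p1", "p2"))                  # ['p1', 'p1', 'p2', 'p2']
print(assign_dynamic(types, [1, 2, 2, 1], "p1", "p2"))   # ['p1', 'p2', 'p1', 'p2']
```

The key difference is visible in the signatures: the static rule needs only the agent types, while the dynamic rule also needs the current neighbourhood configuration, which is why it injects time-dependent (annealed) disorder.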
The order parameter has been introduced to detect lanes in a colloidal suspension 60 and used in pedestrian dynamics 61 . We denote in the following (x_n, y_n) the positions of the agents n = 1, . . . , N. The order parameter for lane formation is

Φ_L = (1/N) Σ_n [(L_n − L̄_n) / (L_n + L̄_n)]².

Here L_n is the number of agents with the same type in front of agent n on a lane of width Δ > 0, while L̄_n is the number of agents with different types, card(A) being the operator counting the elements of an ensemble A. The order parameter Φ_L tends by construction to be close to one when the system describes lanes. Assuming a disordered state for which the agents are uniformly randomly distributed on a w × h rectangle, with h > Δ > 0 the system's height and w > 0 the system's width, the number L_n of agents with the same type is distributed according to the binomial model B(m, p), with m = N_{k_n} and p = Δ/h. Here N_{k_n} is the total number of agents with type k_n. The distribution of the number L̄_n of agents with different types can be deduced similarly.
For band formation, the order parameter is

Φ_B = (1/N) Σ_n [(B_n − B̄_n) / (B_n + B̄_n)]²,

where B_n and B̄_n count the agents of the same and different types within a transverse band whose width includes a term w/h, w and h being the width and height of the system. The distribution of the order parameters for lanes and bands is by construction the same in the case of random positions of the agents. Indeed, for disordered states, the number B_n of agents on the sides with the same type has a binomial distribution B(m, p) with m = N_{k_n} and p = Δw/(hw) = Δ/h as well. This makes the lane and band order parameters directly comparable.
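As an illustration, the following sketch evaluates a laning measure of the form Φ = (1/N) Σ_n ((L_n − L̄_n)/(L_n + L̄_n))² on random and on artificially laned configurations. It is a simplification: it counts all agents in a horizontal strip of width Δ around each agent rather than only those in front, so the values are indicative only.

```python
import random

def lane_order(positions, types, delta):
    """Illustrative lane order parameter: for each agent, count same-type
    and different-type agents whose y-coordinate lies within a horizontal
    strip of width delta around the agent, then average the squared
    normalised difference over all agents."""
    total = 0.0
    for (x, y), t in zip(positions, types):
        same = diff = 0
        for (x2, y2), t2 in zip(positions, types):
            if (x2, y2) == (x, y):
                continue  # skip the agent itself
            if abs(y2 - y) < delta / 2:
                if t2 == t:
                    same += 1
                else:
                    diff += 1
        if same + diff > 0:
            total += ((same - diff) / (same + diff)) ** 2
    return total / len(positions)

random.seed(0)
w, h, delta = 9.0, 5.0, 0.6
types = [1 if i % 2 else 2 for i in range(45)]
# Disordered configuration: uniform random positions, mixed types.
rand_pos = [(random.uniform(0, w), random.uniform(0, h)) for _ in range(45)]
# Laned configuration: type 1 in the lower half, type 2 in the upper half.
lane_pos = [(random.uniform(0, w),
             random.uniform(0, h / 2) if t == 1 else random.uniform(h / 2, h))
            for t in types]
print(lane_order(rand_pos, types, delta))  # small value (disordered baseline)
print(lane_order(lane_pos, types, delta))  # close to one (two lanes)
```

The random configuration yields a low baseline value, consistent with the binomial argument above, while the segregated configuration drives the parameter towards one.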
Simulation results
We carry out simulations of two-species flows on a 9 × 5 m rectangular domain with top-down and right-left periodic boundary conditions (torus). We simulate the evolution of N = 45 agents (density of 1 agent/m²) from random initial conditions using the first order collision-free (CF) pedestrian model 53 and, in the Supplementary Materials, the inertial social force (SF) model 54 . The desired directions of motion of all agents are polarised to the right. The heterogeneity in the two settings p_1 and p_2 is introduced by varying model parameters related to the speed (i.e. desired speed or time gap parameters) or to the size of the agents. We quantify the heterogeneity level in the two-species system using the index δ_s when we vary parameters related to agent speed, and the index δ_l when we vary parameters related to agent size. The definitions of the microscopic motion model and details on the setting of the model's parameters and heterogeneity indexes are provided in "Methods".
Preliminary experiment. We first present single simulation histories of the two-species system with the two heterogeneity models Eqs. (1) and (2). We simulate the evolution of the agents using the collision-free model 53 for given heterogeneity indexes δ s on the parameters related to the agent speed, namely the desired speed and the time gap parameters (see "Methods" for details on the setting of the model parameters). Successive snapshots of the system are presented in Fig. 2. The evolution of the system with the static heterogeneity model Eq. (1) is shown in the left panels while the evolution with the dynamic heterogeneity model Eq. (2) is displayed in the right panels. The bottom panels provide the evolution of the order parameters for lane and band formation. We observe fast formation of two lanes by agent type within the first heterogeneity model, while two bands emerge with the second model. The parameter settings are statically attributed to the agent type for the model defined by Eq. (1). Thus, the segregation also involves the parameter setting. In contrast, the parameter setting depends on the type of the agent in front for the model defined by Eq. (2). This results in four bands according to the parameter settings. Note that further simulations with larger systems may describe more lanes and bands with different sizes.
The order parameters converge after a transient phase to stationary values, polarised to one or zero, once lanes or bands have formed. The duration of the transient states is approximately 40 seconds of simulation. Note that the duration of the transient states varies from one simulation to another, but the system systematically converges to a stationary state with lanes or bands. Furthermore, lane and band formation in larger systems requires longer simulation times, especially for band formation (see the blue dotted curves in Fig. 2, bottom panel, for a 15 × 9 m system three times larger with 135 pedestrians). Similar performances are observed when using the social force model instead of the collision-free model (see Fig. S1 in the Supplementary Materials).
Here, the heterogeneity of the two parameter settings p_1 and p_2 and the corresponding index δ_s are relatively high. Reducing the heterogeneity index can result in a longer transient phase or even no formation of lanes and bands. We may expect that lanes and bands progressively emerge as the heterogeneity index increases. This is however not the case. As described in the next section, we observe in stationary states an abrupt phase transition from disordered states to ordered states with lanes or bands as the heterogeneity index increases.
Stationary performances. The preliminary experiment shows that lanes tend to emerge in the dynamics when the heterogeneity relies on agent characteristics (quenched disorder model M 1 Eq. (1)), while bands emerge when it operates in the interactions (annealed disorder model M 2 Eq. (2)). The results presented in Fig. 2 are obtained for given values of the heterogeneity index δ_s between the two parameter settings p_1 and p_2. The index is sufficiently high to rapidly observe the formation of lanes or bands. In this section, we analyse the performances by progressively increasing the heterogeneity indexes δ_s and δ_l. We repeated one thousand Monte-Carlo simulations from independent random initial configurations for the two heterogeneity models M 1 Eq. (1) and M 2 Eq. (2), varying the heterogeneity indexes δ_s and δ_l over twenty levels. The differences between the two parameter settings p_1 and p_2 are zero at the lowest heterogeneity level, while they are important at the highest level. The results are shown in Fig. 3 (model M 1, left panels; model M 2, right panels). An abrupt phase transition occurs as the heterogeneity index δ_s increases, from a disordered state for which the order parameters are close to 0.2 (dotted line in Fig. 3, top panels) to an ordered dynamics with lanes or bands for which the order parameters are polarised to zero or one. A critical heterogeneity index can be identified. The lane patterns allow the speed of the agents with faster characteristics to remain higher than the speed of the agents with slower features (Fig. 3, bottom left panel). This makes the agent speed on average close to the mean speed of a homogeneous flow (dotted line). In contrast, the band patterns in the model M 2 Eq. (2) correspond to gridlocks in which all the agents are slowed down (Fig. 3, bottom right panel). Similar performances occur when varying parameters related to agent size (see Fig. 4, right panels).
In contrast to heterogeneous models relying on agent speed, varying the agent size induces bi-dimensional steric effects, making the average speed in the presence of lanes less than the mean speed of a homogeneous flow (see Fig. 4, bottom left panel). Conversely, the mean speed can be higher than the homogeneous one in the presence of bands (see Fig. 4, bottom right panel). Indeed, varying the agent size acts in two dimensions, reducing or increasing the available space in the presence of lanes or bands. Similar performances occur when using the social force model instead of the collision-free model (compare Fig. 4 and Fig. S3 in the Supplementary Materials).
Transient states and perturbed systems. The simulations above describe stationary situations. Yet, it is interesting to observe the transient states of the system and the time required for the emergence of lanes or bands. In Fig. 5, we run simulations for different simulation times ranging from t_0 = 0 to t_0 = 3000 s before starting the measurements. The initial conditions are random. The lanes and bands spontaneously emerge during the first minute of simulation when the heterogeneity index δ_s is sufficiently high. Similar phase transitions to lane and band patterns occur for t_0 = 600, t_0 = 1200 and t_0 = 3000 s, suggesting that the dynamics can be considered stationary as soon as t ≥ 600 s. The simulation times required to obtain stationary performances fluctuate from one simulation to another. They also depend on the size of the system and the density level. Generally speaking, larger or denser systems require on average longer simulation times to reach a stationary state than smaller or less dense systems.
So far, the modelling approach is deterministic. Analysing whether the collective motion is robust against random noise may reveal unexpected behaviours. In Fig. 6, we present the order parameter for stochastic systems in which the agent speeds are subject to independent Brownian noises. Simulations are carried out for noise amplitudes σ = 0.1, 0.2 and 0.5 m/s. The noise monotonically perturbs the lane formation in the static heterogeneity model M 1 Eq. (1) (Fig. 6, left panel). No phase transition occurs for σ = 0.5 m/s. This phenomenon is well known in the literature as the freezing-by-heating effect 63 . On the contrary, introducing a low noise in the dynamics improves the band formation in the dynamic heterogeneity model M 2 Eq. (2), while the lane formation does not benefit from it (freezing-by-heating effect). Further simulation results show similar behaviours when the noise is introduced in the agent polarity (i.e. the desired direction) instead of the speed. Regardless of the order of the motion models (speed-based or acceleration-based) and the related parameters, basing the heterogeneity on agent characteristics or on the interactions initiates generic segregation and the formation of lanes and bands in the dynamics. Such results corroborate the universality of the lane formation observed in pedestrian counter-flows and oppositely charged colloids, among other binary mixtures of interacting particles 32,[56][57][58] . They open new perspectives for explaining the formation of bands. Further theoretical investigations remain necessary to rigorously characterise the phase transitions. A possibility is to analyse mean-field instability phenomena of discrete lattice representations of the model 41 . The presence of walls and obstacles and the role of the geometry of given facilities may also be of interest.
Preliminary simulation results show segregation effects of slower or bigger agents in the case of a bottleneck within the static heterogeneity model: these agents are expelled to the edges of the system and obstructed by the presence of walls. These simulation results require more attention, notably for elderly people, people with motor disabilities, or in the current context of social distancing.
Methods
The two agent motion models used in the simulations are the collision-free (CF) model 53 and the social force (SF) model 54 .
Collision-free model. In the collision-free model, the dynamics of an agent n with position x_n is given by the first order differential equation

ẋ_n = F^cf_{p_j}(X_n) = V(X_n, p_j) e(X_n, p_j) + σ ξ_n, (6)

composed of the scalar speed model

V(X_n, p_j) = max{0, min{V_j, (s(X_n) − ℓ_j)/T_j}},

here V_j ≥ 0 is the desired speed, T_j > 0 denotes the desired time gap, and ℓ_j ≥ 0 the agent size, the index j = 1, 2 representing the two parameter settings p_1 and p_2, and of the direction model, in which e_0 is the desired direction (polarity), U(x) = A exp((ℓ_j − x)/B), with parameters A = 5 and B = 0.1 m, is a repulsive potential with the neighbours, and C > 0 is a normalisation constant. A bi-dimensional white noise ξ_n (i.e. the time derivatives of two independent Wiener processes) with amplitude σ > 0 is used for the stochastic model in "Transient states and perturbed systems". The function s(X_n) = ‖x_n − x_{m_0(X_n)}‖ in the scalar speed model determines the minimal distance in front, m_0(X_n) being the closest agent in front of agent n. Note that in the definition of the dynamic heterogeneity type Eq. (2), the type of the closest agent in front is k̄(X_n) = k_{m_0(X_n)}. The simulations are carried out using an explicit Euler numerical scheme in deterministic cases, and an Euler-Maruyama scheme for the simulations including stochastic noise. The time step is δt = 0.01 s in both cases.
Setting of the parameters. The default values for the parameters p = (ℓ, V, T) of the CF model are based on the setting proposed in the literature 53 . Note that Δ = 0.6 m in the order parameters corresponds approximately to two times the size of a pedestrian. Starting from the default values, we vary the parameter settings p_1 = (ℓ_1, V_1, T_1) and p_2 = (ℓ_2, V_2, T_2) using the heterogeneity indexes.
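A minimal one-dimensional sketch of the scalar speed rule and one explicit (Euler-Maruyama when σ > 0) update follows. The parameter values and function names are illustrative assumptions; the full model is two-dimensional and also includes the direction model, omitted here for brevity.

```python
import math
import random

def speed(s_front, V_j, T_j, l_j):
    """Scalar speed rule of the collision-free model:
    V = max(0, min(V_j, (s - l_j) / T_j)), s being the spacing in front."""
    return max(0.0, min(V_j, (s_front - l_j) / T_j))

def euler_maruyama_step(x, s_front, params, dt=0.01, sigma=0.0):
    """One update of a 1D position with time step dt = 0.01 s.
    params = (l, V, T); sigma = 0 gives the deterministic explicit Euler step."""
    l_j, V_j, T_j = params
    noise = sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    return x + speed(s_front, V_j, T_j, l_j) * dt + noise

# Illustrative parameter setting (size l, desired speed V, time gap T).
p1 = (0.3, 1.2, 1.0)
x = euler_maruyama_step(0.0, s_front=2.0, params=p1)
print(round(x, 4))  # deterministic step: V = min(1.2, 1.7) = 1.2, so x = 0.012
```

Note how the `max(0, ...)` clamp prevents backward motion when the spacing in front is smaller than the agent size, which is what makes the scheme collision-free.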
"Physics"
] |
EXPERIENCES IN MANPOWER PLANNING FOR GEOMATICS
This paper addresses the issue of manpower planning in meeting the needs of national and international economies for trained geomatics professionals. Estimated statistics for the numbers of such personnel, and experience in assessing recruitment into the profession reveal considerable skills gaps, particularly in the mature economies of the developed world. In general, centralised manpower planning has little official role in western economies. However, informal surveys of shortfalls in supply of qualified graduates in many fields, including geomatics, are undertaken by professional organisations, educational establishments and consultancies. This paper examines examples of such manpower surveys and considers whether more effective manpower planning would ensure a more efficient geomatics industry in a nation, and what the nature of such an exercise should be.
INTRODUCTION
The disciplines of geomatics are in the middle of a revolution in scope, demands, and influence. Fundamental changes to the way in which the technology operates and is used are apparent, whilst the impact of geomatics on society is significantly different to any time in the previous long history of our subject. Further, scientific demands of, and input into, the subjects of surveying, mapping, cartography and remote sensing (all disciplines within the field of geomatics) require an ever-broadening mind-set among participants in geomatics activity.
In cartography, specifically, the nature of the field has changed: previously, it was part of a linear workflow-line which encompassed firstly geodesy, then imaging and data capture, before moving into representation and visualisation of the geospatial information, not exclusively derived from topographic observation, but certainly rooted in rigorous spatial frameworks. Today, there are many alternatives to each step in the flow-line, and indeed the flow-line is no longer linear: it can branch, backtrack, circumvent; steps can be followed in very different ways using different procedures, and can sometimes be skipped entirely. This variety, this availability of data, tools, procedures, and this freedom to select and apply them, is what now characterises the work of cartographers and must be reflected in the contemporary education of cartographers.
Further, such education and awareness-raising needs to be extended to those engaged in manpower planning and management of the labour market.
Previous manpower planning exercises have been based on outdated knowledge of what the geomatics industry needs from its workforce and what it can supply to clients and public service. Thus, the influence of studies of the subject of geomatics on human resource needs and the use of the labour market is also considered. The identification of 'critical and priority skills' is a typical outcome of such studies, and whilst such outcomes interact with educational syllabus development, they can also, importantly, feed into national manpower planning.
Educational developments for effective employment
The linkage between education and training, and employment in the cartographic and geomatics sector is as clear as it is in other fields of human activity. The technico-scientific nature of the discipline implies that the 'industry' is seeking qualified graduates and apprentices who have followed a focussed curriculum, whilst those engaged in education and training seek to certify that their offerings are appropriate and relevant, ensuring that industry needs are met. The dynamic nature of the field therefore means that educational provision, delivery and curriculum must be addressed alongside issues of employment forecasting, workforce demands and manpower planning.
The development and maintenance of a 'Body of Knowledge' can meet the educational needs of the former, assisting in the development of learning objectives for both specific training courses and any more general didactic provision in the discipline of cartography. Contemporary debates on, and updates for, curriculum developments associated with the Body of Knowledge, and other educational initiatives, are described in a number of recent and ongoing studies (e.g. Veenendal, 2014). The manpower planning aspect, however, is relatively neglected. This is despite the fact that the supply of a well-trained workforce is regarded as paramount by most progressive corporations and employers (industry, government and research bodies). Fairbairn (2013) concluded that a well-researched 'Body of Knowledge' can assist in determining the scope of education, but that significantly more effort needs to be directed towards manpower planning in the cartographic industry to ensure a meaningful focus for such educational provision.
Concepts of manpower planning
Manpower planning as a human activity can be characterised in two distinct ways. Firstly, the epitome of centralised planning, in Stalinist command economies, was typified by Eastern European states in the post-World War II period. Here, employment and labour ministries ensured that human resources were directed as needed by the demands of five year economic plans and long-term strategic forecasting of supply and demand. Such inflexible centralised allocation of resources can be contrasted with the free market approach, which relied on the 'unseen hand' of resource allocation, under which the laws of supply and demand, and market economics, would direct individuals, and by extension cohorts of workers, to sectors which had requirements for work to be done.
In reality, these two extreme models were implemented in somewhat modified form. In particular, attempts were made in free market economies to at least predict the employment patterns of the future. Brown, Green and Lauder (2001, p. ix) noted that: "In the post-war era, full employment could be achieved by the Keynesian expedient of manipulating demand ... This strategy met its nemesis in the early 1980s ... Thereafter, a subtle change occurred in political rhetoric from promises of full employment to full employability. Full employability signalled a shift from demand side policies to promote employment to supply-side policies which emphasised individuals' education and skills".
Thus, even for the fully free-market-oriented economies of the 1980s and 1990s, subject to a Thatcherite and Hayekian philosophy of economic management, there was a view that personal development, in the form of educational and training attainment, would allow individuals to find a place in the ruthless world of employment. This 'personal development' would allow a citizen to flourish in such developed economies, where planning of any type was regarded as unnecessary state intervention and the adjustment of labour markets was to be done by market forces, influenced only by personal ambition and development.
Manpower planning has been viewed by both command and 'free-enterprise' economies as a means of ensuring more effective, more productive and less wasteful use of human resources. When integrated with national economic, social and political philosophy, manpower planning is thus believed by some actually to strengthen market forces, whilst others believe it is an integral part of a socialist economy.
Practical approaches to manpower planning
In fact, even in the most open of free-market, enterprise-driven cultures, there have always been some attempts at least to monitor the labour market, if not to optimise its outcomes. Intervention becomes more important when a significant amount of state money is spent on the education and training elements of an individual's personal development, which (as has been seen) is a preferred method of developing employment opportunities in 'free-enterprise' cultures.
Thus, medical training is a burden on state finances even in the most liberal of economies: it uses up institutional resources which cannot be retrieved; its length takes resourceful and intelligent workers out of the national economy for a significant period; and its volume and quality have a direct impact on the (difficult to quantify) health of the citizen. It is therefore no surprise that manpower planning is practised in such a field.
Crothers (2003), for example, declared that "Manpower planning in New Zealand over the (previous) few decades … waxed and waned according to prevailing political ideologies; ... unfashionable for the last decade-and-a-half, it is confined to the health sector (where training is expensive and where manpower crises abound)".
Issues in manpower planning
So, how does manpower planning actually work? It is difficult to see beyond the so-called 'Delphi approach', which suggests that forecasting in fields such as technology, human activity and policy-making is most effective when a group of panellists, pooling their ideas, backgrounds and discipline expertise, detect trends, judge innovation and predict change. Unfortunately, the results of such an approach are mainly a combination of personal opinions and subjective views, often with little scientific basis as predictions.
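The mechanics of a Delphi exercise can be illustrated with a toy numerical model: panellists submit forecasts, see the group median, and revise their own estimates partway towards it over successive rounds. This is a minimal sketch under stated assumptions (a single numeric forecast per panellist and a fixed 'pull' towards the median); real Delphi studies use structured questionnaires and qualitative feedback, and the panel figures below are hypothetical.

```python
import statistics

def delphi_rounds(initial_forecasts, rounds=3, pull=0.5):
    """Toy Delphi model: after each round, every panellist revises
    their forecast partway toward the group median.

    initial_forecasts -- each panellist's starting estimate
    rounds            -- number of feedback rounds
    pull              -- how strongly feedback draws opinions together (0..1)
    """
    forecasts = list(initial_forecasts)
    for _ in range(rounds):
        median = statistics.median(forecasts)
        forecasts = [f + pull * (median - f) for f in forecasts]
    return statistics.median(forecasts), forecasts

# hypothetical panel forecasting annual demand for trained cartographers
consensus, final = delphi_rounds([500, 800, 1200, 2000, 650])
print(f"consensus: {consensus}, remaining spread: {max(final) - min(final):.1f}")
```

The spread of opinions shrinks each round while the median is preserved, which mirrors the criticism in the text: the panel converges, but the consensus remains an aggregate of subjective starting views rather than an independently validated prediction.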
Such exercises are normally done at a 'macro' scale, where a complete national picture is sought and the major focus is on an aggregate picture across very broadly defined areas of human employment. Quite often a government can abrogate any responsibility for the process of manpower planning (although it is often the only source of the aggregate-level data), and an individual company can take the view that its own internal manpower planning is good commercial practice. Such 'human resource planning' is a microcosm of the national exercise, but is often regarded as a sensible way for individual organisations to behave.
Thus, surveys can be undertaken internally by companies trying to optimise the use of their human resources. A similar approach can be adopted by educational establishments which have an interest in assessing the value of a course and qualification to a graduate (and future employee in industry).
MANPOWER PLANNING IN GEOMATICS
With the demise of centrally planned economic management in Eastern Europe, the level of manpower planning worldwide has decreased significantly, with most national governments seemingly reluctant to engage in such an intrusive, nation-wide, centralised exercise. For the past two decades, most industry sectors have had to undertake their own assessments of where their discipline and commercial activity stand in terms of the use of human resources.
Even within a broad-ranging discipline such as geomatics there have been remarkably few studies of how the industry is faring in terms of its strengths, weaknesses, opportunities and threats: the time for an overall SWOT analysis of geomatics as a human activity is long overdue. There are, however, some noteworthy exceptions. A striking calculation from Molenaar (2009) estimates that there are currently 2.5 to 3 million geomaticians employed worldwide: if each has a 40-year career, then 75,000 new recruits are needed each year; and if 10% of the total workforce needs some form of continuing educational provision in order to update, then a further 300,000 students can be expected. A significant proportion of these potential geomatics students will specialise in one of the large number of different (although overlapping) branches such as measurement science, remote sensing, cartography etc. This set of estimates was presented in order to demonstrate to the geomatics industry that education and training are vitally important to the future health of the profession, and to assist in making some estimates of the demand for and supply of geomaticians. There are few other such estimates, or studies with a focus on geomatics manpower planning.
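Molenaar's back-of-envelope estimate can be reproduced in a few lines. The inputs (workforce size, 40-year career, 10% annual continuing-education uptake) are exactly the figures quoted above; the function name and parameters are illustrative, not from the original report.

```python
def geomatics_demand(workforce, career_years=40, cpd_fraction=0.10):
    """Estimate annual recruitment and continuing-education demand.

    workforce    -- total employed geomaticians worldwide
    career_years -- assumed working life of one geomatician
    cpd_fraction -- share of the workforce needing updating courses each year
    """
    new_recruits = workforce / career_years   # steady-state replacement rate
    cpd_students = workforce * cpd_fraction   # annual continuing-education load
    return new_recruits, cpd_students

# Molenaar (2009): 2.5 to 3 million geomaticians employed worldwide
for total in (2_500_000, 3_000_000):
    recruits, cpd = geomatics_demand(total)
    print(f"workforce {total:,}: {recruits:,.0f} recruits/yr, {cpd:,.0f} CPD students/yr")
```

At the upper bound of 3 million this yields the 75,000 annual recruits and 300,000 continuing-education students cited in the text; the lower bound of 2.5 million would imply 62,500 recruits per year, so the quoted figure reflects the optimistic end of the range.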
Despite the resistance to state-led economic management in the USA, there are interesting and instructive examples of the quantitative assessment of labour patterns and market sectors in that country, along with efforts to optimise the personal development which, as we have seen, is the foundation of employability in free-enterprise cultures. The United States Department of Labor has a long-standing interest in reporting on and improving the American labour market. In 2010 it released its Geospatial Technology Competency Model (accessible through http://www.careeronestop.org/competencymodel/), aiming to give potential employees a fairly prescriptive overview of the skills required to successfully enter the geomatics sector.
Whilst not giving specific instruction about employment rates and numbers within sectors in geospatial technologies, the model is quite detailed in its coverage of the skills, background abilities and professional standing expected of entrants into the geospatial profession, and thus has an influence on the way in which the workforce in areas such as cartography and the other geospatial sciences can be directed and managed.
Further north, a decade ago, the Canadian government, through its Department of Human Resources and Skills Development, commissioned an extraordinarily comprehensive Human Resources Study on the Geomatics Sector of the Canadian Economy (CCLS et al., 2001). Prepared by learned and professional organisations in the Canadian geomatics arena, this report covered a wide range of issues, including a description of the disciplines within geomatics (including cartography), the nature of the industry (both inside Canada and also how well Canadian geomatics works overseas), and a thorough review of the technology currently used and how it would develop in the future. Two further extensive sections of the report describe education and training in geomatics in Canada, and the human resources profile. A survey of geomatics activities showed expected increases in activity in all sectors in the 2001-2006 period, but some, including photogrammetry, geodesy, cartography and land surveying, were expected to increase at a slower rate than areas such as navigation, GIS, decision support and consulting. The overall rate of growth led the survey to estimate a need for two thousand university graduates in geomatics by the year 2004, compared to an estimated supply of 950 at the time of the survey in 2001. In retrospect, of course, this was an underestimate and, particularly in cartography, a more vibrant sector would require an even greater increase in trained personnel during the first decade of this century: the survey was not able to foresee the extraordinary growth in several sub-disciplines of cartography now central to the everyday work of a geomatics professional.
Demand for experienced cartographic data handlers in internet cartography, web mapping, location-based services (LBS), spatial data infrastructures (SDI), sensor networks, and augmented reality continues to expand, exposing the continuing need for skills to meet market demands.
The final example of manpower planning and labour market analysis presented here is more contemporary, and does take into account the rapid developments of the geomatics disciplines. A 'Final Report' on manpower strategies published by the Botswana Training Authority in 2010 (BOTA, 2010) was commissioned to "forecast and identify a list of priority vocational skills and develop strategies to fast track priority skills development". Its comprehensive overview of the national economy, the health of market sectors, the basic methodology adopted for data collection and skills forecasting, and the list of critical and priority skills marks this report out as a document of considerable importance.
One particular outcome of interest is that, even in a nation with a population as low as Botswana's (just over 2 million in 2011), there is an estimated shortfall of over 3,000 educated and trained geomaticians by the year 2016 (Table 1). It is incumbent on those who are educating geomaticians to ensure that provision is made to match such needs, both in that country and throughout the world. It is also clear that some formal assessment of the labour market in this way is needed to ensure the continued vitality of the geomatics profession, and interest in the disciplines of geomatics as viable professions for substantial numbers of the national workforce to consider entering.
Professional organisations in geomatics, and large employers of geomaticians in industry and government, should be urged to produce national overviews of manpower requirements for our discipline, and to ensure that such overviews are translated, through government policy, into practice. The provision and take-up of educational programmes in geomatics depends on the perceived need for geomaticians within a national economy; and there is a corresponding demand in industry and government for a supply of trained and educated geomaticians to meet that need.
FURTHER ISSUES
A major aim of manpower planning is to determine the skills gaps which need to be filled to improve the efficiency of the national economy. Mismatches at a national level between the market supply of qualified personnel and the requirements of industry, commerce and government administration can be addressed in two ways: firstly, a country can rely on attracting a number of qualified immigrants; or, secondly, measures can be put in place to improve the employability and skills of the current student body and the existing workforce.
The major problem with the first of these is its negative effects on developing economies, and a reliance on expatriate communities of guest workers. The problems of a 'brain drain', whereby the most experienced, well-qualified and ambitious products of a country's educational system are attracted by high rewards overseas, have long been felt by populous countries such as Egypt and Pakistan.
The market for expatriate workers relies on a supply- and demand-adjusted set of rewards and remuneration which normally ameliorates the disturbance caused to the domestic life of those involved. But 'quality of life' factors can often have a confounding effect on the free market for labour, as workers may opt for a lifestyle and work-life balance which does not maximise their economic reward. Future prospects, promotion possibilities, support for training, and intangible factors such as job satisfaction also each affect the employment market and the workforce in unpredictable ways.
The market for staff can be significantly influenced by the prevailing balance between the private and public sectors within a national economy, and indeed within an industry sector.
Theoretically, the results of effective manpower planning will be more obviously felt within those industries and activities controlled by government policy, and when governments have control over the workforce employed by them. However, private companies may actually have a more responsive and flexible approach to employment, with an ability to respond more quickly to short-term variation in demands for labour.
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-6, 2014. ISPRS Technical Commission VI Symposium, 19-21 May 2014, Wuhan, China. This contribution has been peer-reviewed. doi:10.5194/isprsarchives-XL-6-25-2014
Adjustment of the supply of well-qualified staff to such short-term demands is affected by recruitment policies which can often prioritise project-by-project employment of staff over longer-term retention of key personnel. And the definition of 'key personnel' can vary, as the market in geomatics for technical staff may, or may not, be more vital than that for professional scientific staff.
A further constraint on the free working of a liberal labour economy is the existence of, and value placed upon, licensing and accreditation of professionals in a large number of disciplines. Geomatics is no exception, as there are a large number of professional and institutional bodies which aim to regulate and restrict the workforce and the patterns of activity within the field. Whilst such licensing may be written into law as a statutory requirement for the efficient and safe operation of the profession, such arrangements do have an effect on the numbers, quality and rewards of those who practise, and make accurate predictions of the supply and demand of personnel more difficult to manage.
Related to such constraints on supply and demand is the impact of educational and training facilities and opportunities on the labour market.
Effective manpower planning cannot be undertaken without reference to those institutions preparing workers for the profession, and those engaged in continuous professional development to ensure the continued capacity of the workforce to respond to the demands for its services.
In its traditional sense, manpower planning has been directed towards using the human resource as effectively as possible within a national economy.
In terms of measurable achievement, the most straightforward way to assess such effectiveness has been to quantify productivity using standard measures such as output, gross domestic product, and added-value statistics. Such metrics inevitably concentrate on the calculable results of actually producing 'something'. It is clearly much more difficult to quantify less tangible outputs such as research activity, public service, or national administration. As much geomatics activity falls into such categories, the ability to assess the impact of geomatics on the national economy is limited, and the consequent level of impact of manpower planning in this field is also difficult to ascertain.
CONFLATING MANPOWER STUDIES
One of the few studies which looked at many of these issues together was reported by Crothers, who studied the New Zealand economy in 2003. Table 2 (extracted from Crothers, 2003) shows that whilst the average salary of a geomatic engineer is relatively high, one reason may be that the average age of an employee in this sector is also well above average. Such figures have likely changed in the past 10 years, but anecdotal evidence suggests that geomatics is still perceived as an ageing profession, with younger entrants into the profession often categorising themselves as something other than a 'geomatician'. The study by Crothers also indicated that the level of qualification on entry was variable: in fact, he identified computer science and the land-based professions as having a bifurcated pattern of entry-level qualification, with some entrants holding advanced qualifications at PhD level employed alongside others with low levels of school-leaving achievement.
CONCLUSION
Demands on the geomatics industry worldwide are pressing: there are continuing global challenges which must be met by relying on the skills, experience and imagination of well-educated cartographers and geomaticians.
It is important to ensure that there are sufficient numbers of such educated and trained people able to work in these areas. How this aim is to be achieved is difficult to formulate: with the demise of command economies and their associated national economic planning, it seems that we must fall back on the more laissez-faire methods which concentrate on individual career planning and personal development. Such an approach has ramifications for education and training provision in geomatics, and efforts must be maintained to raise awareness in the general population, recruit and retain qualified staff, and promote career development.
Table 1: Extract from BOTA (2010) showing the estimated 'skills gap' in employment fields in Botswana by 2016
Table 2: Labour market statistics (relative entry-level qualification; number of employees; average annual income, in NZ dollars; average age) from a survey of New Zealand employment, 2002 (from Crothers, 2003)
Exosomes of endothelial progenitor cells repair injured vascular endothelial cells through the Bcl2/Bax/Caspase-3 pathway
The main objective of this study is to evaluate the influence of exosomes derived from endothelial progenitor cells (EPC-Exo) on neointimal formation induced by balloon injury in rats. Furthermore, the study aims to investigate the potential of EPC-Exo to promote proliferation, migration, and anti-apoptotic effects of vascular endothelial cells (VECs) in vitro. The underlying mechanisms responsible for these observed effects are also thoroughly explored and analyzed. Endothelial progenitor cells (EPCs) were isolated aseptically from Sprague–Dawley (SD) rats and cultured in complete medium. The cells were then identified using immunofluorescence and flow cytometry. The EPC-Exo were isolated, and their identities were confirmed by western blot, transmission electron microscopy, and nanoparticle analysis. The effects of EPC-Exo on rat carotid artery balloon injury (BI) were detected by hematoxylin and eosin (H&E) staining, ELISA, immunohistochemistry, immunofluorescence, western blot, and qPCR. LPS was used to establish an oxidative damage model of VECs. The mechanism by which EPC-Exo repair injured vascular endothelial cells was examined by measuring the proliferation, migration, and tube-formation function of VECs, together with actin cytoskeleton staining, TUNEL staining, immunofluorescence, western blot, and qPCR. In vivo, EPC-Exo exhibit inhibitory effects on neointima formation following carotid artery injury and reduce the levels of inflammatory factors, including TNF-α and IL-6. Additionally, EPC-Exo downregulate the expression of adhesion molecules on the injured vascular wall. Notably, EPC-Exo can adhere to the injured vascular area, promoting enhanced endothelial function and inhibiting vascular endothelial hyperplasia. Moreover, they regulate the expression of proteins and genes associated with apoptosis, including B-cell lymphoma-2 (Bcl2), Bcl2-associated x (Bax), and Caspase-3.
In vitro, experiments further confirmed that EPC-Exo treatment significantly enhances the proliferation, migration, and tube formation of VECs. Furthermore, EPC-Exo effectively attenuate lipopolysaccharides (LPS)-induced apoptosis of VECs and regulate the Bcl2/Bax/Caspase-3 signaling pathway. This study demonstrates that exosomes derived from EPCs have the ability to inhibit excessive carotid intimal hyperplasia after BI, promote the repair of endothelial cells in the area of intimal injury, and enhance endothelial function. The underlying mechanism involves the suppression of inflammation and anti-apoptotic effects. The fundamental mechanism for this anti-apoptotic effect involves the regulation of the Bcl2/Bax/Caspase-3 signaling pathway.
EPC-Exo could attach to the wall of a damaged carotid artery, improve endothelial function, and inhibit neointimal hyperplasia in rat carotid artery after balloon injury
We injected phosphate-buffered saline (PBS) and EPC-Exo into the rats after injury to assess attachment to the injured luminal surface in vivo. The exosome-group animals were injected with PKH26-labeled EPC-Exo (Exo-PKH26). The distribution of Exo-PKH26, with orange fluorescence, in injured carotid arteries was monitored on day 7 after injury. Compared to the balloon injury (BI) model group, Exo-PKH26 was detected in the vascular endothelium with less neointimal hyperplasia. These results showed that EPC-Exo could attach to the wall of a damaged carotid artery (Fig. 2a). Vascular morphology was studied through hematoxylin and eosin (H&E) staining to investigate the effects of EPC-Exo on the arterial wall after BI. The carotid intima in the sham group was in good condition, with no increase in neointimal thickness. However, compared to the sham group, the carotid intima in the BI model group was significantly thickened, with a narrower lumen area. Endothelin-1 (ET-1) is a VECs-synthesized polypeptide which can induce vascular contraction and increase mononuclear cell adhesion 18 . Serum ELISA revealed elevated levels of ET-1 in the BI model group, which were subsequently reversed by the injection of EPC-Exo (Fig. 2b). Furthermore, the H&E results revealed that, compared to the sham group, intimal thickness (IT) and the hyperplasia ratio of intima thickness (HRIT) substantially increased in the BI model group, but these effects were alleviated by EPC-Exo treatment (Fig. 2c). At the same time, EPC-Exo treatment upregulated endothelial nitric oxide synthase (eNOS), which was analyzed to evaluate endothelial function (Fig. 2d; Supplementary material Figure S2). Overall, these data showed that EPC-Exo could attach to the wall of damaged carotid arteries, thereby improving endothelial function and inhibiting neointimal hyperplasia in rats after carotid artery BI.
EPC-Exo alleviated inflammation and regulated the expression of apoptosis-related genes and proteins
Vascular cell adhesion molecule-1 (VCAM-1), a cell surface protein typically expressed by endothelial cells, plays a crucial role in regulating the adhesion and migration of white blood cells and is commonly employed as an indicator of inflammatory status 19 . Immunohistochemistry revealed VCAM-1-positive brown cells in both the vascular endothelium and the tunica media in the BI model group, but the levels of this adhesion molecule were significantly decreased in the EPC-Exo group (Fig. 3a). Serum ELISA revealed that interleukin-6 (IL-6) and tumor necrosis factor-α (TNF-α) levels were significantly increased in serum after rat carotid artery BI (Fig. 3b). Notably, compared to the BI model group, VCAM-1 expression and the concentrations of inflammatory cytokines, including IL-6 and TNF-α, were significantly lower after EPC-Exo treatment. Associated gene and protein expression were also evaluated to explore the underlying mechanisms of the therapeutic effects of EPC-Exo treatment. According to qPCR analysis, EPC-Exo treatment downregulated apoptosis-related genes, including the pro-apoptotic Bcl2-associated x (Bax) and Cleaved-caspase 3 genes (considered important markers of cell apoptosis), and increased anti-apoptotic B-cell lymphoma-2 (Bcl2) gene expression (Fig. 3c). Consistent with the qPCR detection, the WB assay revealed that EPC-Exo treatment affected the expression of apoptosis-related proteins (Fig. 3d; Supplementary material Figure S3).
EPC-Exo enhanced endothelial function and stimulated VECs proliferation, migration, and tube formation in vitro
The in vitro ability of EPC-Exo to repair endothelial damage is still being investigated. Here, lipopolysaccharides (LPS) were incubated with VECs to establish endothelial damage models for evaluating VECs function, proliferation, migration, and tube formation after exosome treatment. The actin cytoskeleton forms the network of fibers within eukaryotic cells, serving as a crucial structural element; it is a valuable tool for observing cell morphology and assessing the extent of cellular damage. Actin cytoskeleton staining revealed strong links between the cells, as well as smooth and consistent microfibers of filamentous actin (F-actin), in the control. However, the connections between cells were disrupted, and the microfibers fractured, in the LPS group. The actin cytoskeleton of VECs was largely restored after EPC-Exo treatment (Fig. 4a). The CCK-8 assay was used to determine the effect of EPC-Exo treatment on VECs proliferation, while the scratch and tube formation assays were performed to assess the VECs' migratory and tube-formation abilities. Compared to the control, LPS suppressed VECs proliferation, migration, and tube formation. However, these LPS-induced alterations were reversed by EPC-Exo therapy (Fig. 4b-d).
EPC-Exo treatment reduced the LPS-induced VECs apoptosis and effectively regulated the activation of the Bcl2/Bax/Caspase-3 signaling pathway
An IF assay of Cleaved-caspase 3, along with terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) staining, was used to detect apoptosis in VECs. Through TUNEL staining, we observed that the nuclei of LPS-induced VECs showed green fluorescence, indicating that the cells were undergoing apoptosis. Subsequently, the IF assay demonstrated a substantial increase in the fluorescence intensity of Cleaved-caspase 3 in the LPS group compared to the control. These findings showed that LPS could induce VECs apoptosis, whereas EPC-Exo treatment could lower the VECs apoptosis rate and Cleaved-caspase 3 protein expression (Fig. 5a,b). Additionally, WB and qPCR were used to detect the expression of related proteins and mRNA levels to explore the specific mechanism by which EPC-Exo inhibit VECs apoptosis. The WB and qPCR analyses revealed that EPC-Exo treatment downregulated the apoptosis-related proteins and mRNAs, including Bax and Cleaved-caspase 3, but increased Bcl2 expression (Fig. 5c,d; Supplementary material Figure S5). These findings were consistent with those detected in vivo and collectively suggest that EPC-Exo can inhibit VECs apoptosis via the Bcl2/Bax/Caspase-3 signaling pathway.
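The apoptosis rate read off TUNEL staining is conventionally the fraction of TUNEL-positive (green) nuclei among all counted nuclei per microscope field. The sketch below shows that calculation only; the counts and group labels are illustrative placeholders, not the study's data.

```python
def apoptosis_rate(tunel_positive, total_nuclei):
    """Percentage of TUNEL-positive nuclei in one counted field."""
    if total_nuclei == 0:
        raise ValueError("no nuclei counted in this field")
    return 100.0 * tunel_positive / total_nuclei

# illustrative per-field counts (positive nuclei, total nuclei) -- not measured data
fields = {
    "control": (6, 210),
    "LPS": (58, 195),
    "LPS + EPC-Exo": (19, 203),
}
for group, (pos, total) in fields.items():
    print(f"{group}: {apoptosis_rate(pos, total):.1f}% apoptotic")
```

In practice several fields per sample are counted and averaged, and the resulting rates are compared between groups with a statistical test rather than read directly.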
Discussion
Vascular endothelial damage is the primary etiological factor contributing to CVDs, necessitating a strategic focus on endothelial injury repair and disease treatment, especially in advanced cases 20 . Although contemporary medical research presents promising preventive interventions such as colchicine 21 , percutaneous coronary intervention (PCI) remains the principal technique for the treatment of coronary artery diseases (CADs) 2,22 . Despite the developments in CVDs treatment, persistent vascular restenosis after angioplasty remains a primary challenge in clinical practice due to its high recurrence rate. Therefore, it is imperative to explore novel therapeutic approaches to stabilize the efficacy of PCI treatment for CVDs and effectively manage post-PCI complications. In this regard, clinicians can proactively monitor the sequelae of PCI surgery and achieve greater stability in CVDs management by identifying and implementing innovative treatment modalities. Stem-cell therapy was previously believed to promote endothelial regeneration by replenishing and differentiating new cells 5 . However, subsequent research has revealed an alternative mechanism wherein EPCs primarily exert their vascular protective effects by secreting specialized factors, particularly exosomes 23 .
Exosomes are extracellular vesicles that have gained considerable attention in recent years and are commonly employed in CVDs treatment due to their potential anti-inflammatory, anti-apoptotic, and tissue regeneration-promoting actions [25][26][27] . Moreover, EPC-Exo have demonstrated remarkable efficacy in modulating inflammatory responses, enhancing survival rates, and minimizing acute lung damage. EPC-Exo can effectively enhance tissue perfusion when injected into the body and transplanted into the vasculature of ischemic tissues 28 . This remarkable therapeutic outcome underscores the potential of exosome-based treatment as a novel and promising cell-free vascular repair option. According to recent pharmacological research, the primary mechanisms underlying the recurrence of vascular stenosis post-PCI involve the proliferation and migration of smooth muscle cells (SMCs) close to the endothelial injury site 29 . Hence, most medications employed to mitigate restenosis are aimed at inhibiting SMC proliferation. These findings provided the impetus for developing drug-eluting stents (DES) 30 . Interestingly, EPC-Exo primarily repair and restore endothelial cells by promoting their proliferation and migration, facilitating repair of the vascular endothelium 31 . These findings have been established in various studies and reinforce the potential significance of EPC-Exo in vascular regeneration and therapeutic interventions.
In this study, EPCs were extracted from rats, and their purity was meticulously determined. After acquiring high-purity EPCs, the exosomes released by these cells were extracted and identified. A rat carotid artery BI model was constructed by inducing endothelial damage in rats after feeding them a high-fat diet for two weeks. The BI model group exhibited noticeable signs of vascular stenosis, disrupted endothelial cell structure, and vascular neointimal hyperplasia. Furthermore, EPC-Exo demonstrated remarkable therapeutic potential in this model. Specifically, these exosomes adhered to the damaged vascular endothelium, effectively enhancing endothelial function and inhibiting excessive endothelial hyperplasia, thereby impeding vascular restenosis.
Numerous factors influence vascular reendothelialization post-injury, among which post-PCI inflammation and endothelial cell apoptosis induced by various factors are the most critical. However, the apoptosis of injured cells and the underlying mechanisms involved in this process have not been comprehensively explored. Besides PCI-induced acute inflammation, direct endothelial cell injury and the release of inflammatory elements from the plaque also frequently trigger inflammatory responses 32 . Apoptosis is primarily defined as programmed cell death 33 , mainly characterized by chromatin condensation, cell constriction, and the formation of apoptotic bodies, and is regulated by various chemicals and proteins 34,35 . There are two main apoptotic pathways: intrinsic and extrinsic 36 . The Bcl2 family of proteins controls mitochondrial permeability to various proteins as well as mitochondrial outer membrane permeabilization (MOMP) 37 , playing a crucial role in regulating the intrinsic mitochondrial apoptotic pathway 38 . Although Bcl2 was initially identified as a cancer gene, its anti-apoptotic properties were found to enhance B-cell lymphoma proliferation 39 . However, the Bcl2 protein does not impair normal control of cell proliferation, but enhances cell survival by inhibiting programmed cell death. Meanwhile, Bax, a pro-apoptotic member of the Bcl2 family, increases mitochondrial membrane permeability, thereby releasing apoptotic factors into the cytoplasm and ultimately activating Caspase-3, which, in turn, leads to apoptosis 36 . These two protein-mediated apoptosis pathways constitute the intrinsic, mitochondria-mediated apoptosis pathway. The extrinsic apoptosis pathway is triggered after cell damage when death signals are received following stimulation by exogenous inflammatory cytokines, such as TNF-α. These cytokines interact with the membrane receptors TNFR and TLR4, triggering a series of reactions that eventually lead to cell apoptosis 36 . Overall, we deduced that
Caspase-3 is the executor of cell apoptosis, and that it is activated by the above-mentioned intrinsic and extrinsic apoptosis pathways to initiate cleavage, ultimately mediating apoptosis (Fig. 6).
In this study, the levels of adhesion molecules and inflammatory factors were also measured. Their concentrations increased after carotid artery BI, but this effect was reversed by EPC-Exo treatment. These findings suggest that EPC-Exo can mitigate the initiation of the inflammatory factor-mediated extrinsic apoptosis pathway. Furthermore, EPC-Exo were found to suppress the expression of genes and proteins associated with the intrinsic apoptosis pathway. In vitro experiments were conducted to examine the mechanisms underlying EPC-Exo therapy.
We treated VECs with LPS to establish an in vitro endothelial cell oxidative damage model. The LPS-induced damage disrupted cell connections and caused microfilament fractures. However, EPC-Exo treatment mitigated the LPS-induced damage, preserving actin cytoskeleton integrity. Additionally, EPC-Exo treatment enhanced VECs proliferation, migration, and tube formation. Furthermore, LPS increased the VECs apoptosis rate and upregulated apoptosis-related proteins. EPC-Exo treatment reversed these effects, lowering the apoptosis rate by down-regulating the pro-apoptotic proteins Bax and Caspase-3 and up-regulating Bcl2. According to the Bcl2/Bax/Caspase-3 protein and gene expression analyses, EPC-Exo treatment could suppress the signaling pathways involved in the intrinsic apoptosis cascade. Together, the in vitro and in vivo results suggest that EPC-Exo-induced inhibition of endothelial apoptosis is mediated through the intrinsic and extrinsic apoptosis pathways, which involve the Bcl2/Bax/Caspase-3 signaling pathway. In summary, combining in vitro and in vivo strategies, EPC-Exo could inhibit excessive carotid intimal hyperplasia after BI, promote the repair of endothelial cells in intimal injury, and enhance endothelial function. The underlying mechanism involves the suppression of inflammation and anti-apoptotic effects, the latter through regulation of the Bcl2/Bax/Caspase-3 signaling pathway. This study illuminates a mechanism for preventing vascular restenosis through apoptosis regulation, providing a novel perspective on EPC-Exo treatment in repairing vascular endothelial injuries by targeting VECs. Moreover, this study elucidates the mechanisms and therapeutic effects of exosomes, which not only enhances our understanding of their multifaceted roles in vascular biology but also offers an avenue for developing innovative and effective cell-free therapeutic approaches for
vascular regeneration. However, this study had some limitations. The exact components of exosomes responsible for the therapeutic effects, such as proteins, lipids, and nucleic acids, remain unknown. Furthermore, this study did not explore the specific mechanisms underlying the inhibition of the inflammatory response by exosomes. Moving forward, it is imperative to address these gaps by elucidating the specific substances through which exosomes exert their therapeutic effects and clarifying the mechanisms involved in the suppression of the inflammatory response.
Animal
All animal experiments were ethically approved by the Animal Experiment Center of Hunan University of Chinese Medicine (HNUCM) Ethics Committee (Approval number: LL2022091403). All experiments were performed in compliance with national and institutional laws, and data acquisition and reporting followed the ARRIVE guidelines. All animals were kept in the barrier system at the animal experiment center of HNUCM at controlled temperature and humidity. The rats were randomly categorized into different treatment groups. All surgeries and follow-up analyses were performed through a blinded intervention approach.

www.nature.com/scientificreports/

Cell isolation, culture, and exosome extraction

First, Sprague-Dawley (SD) rats weighing 100-120 g were euthanized by CO2 inhalation and immersed in 75% alcohol for 15 min for disinfection in preparation for EPCs extraction. Following the manufacturer's instructions, an EPCs isolation kit was used to extract EPCs, which were then cultured in rat bone marrow-derived endothelial progenitor cell complete medium in a 37 °C, 5% CO2 incubator. After cell passaging, the medium was changed to endothelial cell growth medium-2 (EGM-2). For exosome extraction, 70-80% confluent EPCs were washed with phosphate-buffered saline (PBS) and changed to fresh exosome-depleted culture medium. After culturing for 24 h, the medium was collected in centrifuge tubes. Exosomes were extracted from the medium through high-speed centrifugation combined with ultrafiltration. The supernatant was subjected to gradient centrifugation at 4 °C (500×g for 25 min, 3000×g for 15 min, and 12,000×g for 30 min) to remove cellular debris, apoptotic vesicles, and macrovesicles. The cleared, cell-free supernatant was then transferred into a 30 kD ultrafiltration tube and centrifuged at 3000×g for 25 min to collect exosomes. The resulting precipitate was further purified through centrifugation
for 90 min at 120,000×g. Thereafter, the precipitate was resuspended in 200 μL PBS and stored at −80 °C for further analysis.
Establishment of carotid BI model 15

Male SD rats [Specific Pathogen-Free (SPF) grade; weighing 300-350 g; aged 6-8 weeks] were purchased from Hunan SJA Laboratory Animal Co., Ltd. (Animal license number: SCXK (Xiang) 2019-0004). High-fat chow was purchased from Beijing Keao Xieli Feed Co., Ltd., and the formulation was prepared as reported in our previous study 41 . The animals were fed high-fat chow for two weeks. The experimental method for establishing the carotid BI model was described previously 42 . The rats were anesthetized with 2.5% pentobarbital and injected with penicillin three days post-surgery to prevent infection. The rats in the sham group had only their carotid arteries isolated. All rats with established carotid BI were randomized into the BI model group and the EPC-Exo group. The EPC-Exo group animals were injected with EPC-Exo (30 μg) 12 h post-surgery and on the third day after the operation, while the other groups were injected with an equal volume of PBS. The rats were euthanized by CO2 inhalation after 14 days. Half of each carotid artery was fixed in 4% paraformaldehyde and the other half was stored at −80 °C.
Delivery of exosomes
To monitor the internalization of EPC-Exo, the purified exosomes were labeled using an exosomal red fluorescent labelling dye (PKH26) kit. The EPC-Exo group animals were injected with PKH26-labelled EPC-Exo (30 μg) 12 h post-surgery and on the third day after the operation, while the other groups were injected with an equal volume of PBS (Fig. 7). Fluorescence imaging was used to assess whether EPC-Exo adhered to the injured carotid artery vessel wall after injection. Seven days after modeling, the rats in the model and EPC-Exo groups were euthanized by CO2 inhalation. The intact carotid arteries were extracted and fixed in paraformaldehyde. The fixed carotid arteries were then dehydrated, cleared, embedded in paraffin, and sectioned. Nuclei were counterstained with 4′,6-diamidino-2-phenylindole (DAPI). The sections were visualized and imaged under a fluorescence microscope, focusing on the intimal area of the carotid artery. ImageJ software was used to analyze the fluorescence intensity.
Flow cytometry
The EPCs were centrifuged twice at 1000 rpm to remove the residual medium and then resuspended in PBS. Subsequently, the cells were incubated with CD34 (1:100) and CD133 (1:100) antibodies on ice for 30 min. Following incubation, the cells were centrifuged and resuspended in 200 μl PBS, and the results were analyzed by DxP Athena™ flow cytometry (Cytek, US).
Transmission electron microscope
The exosome solution was first added dropwise onto a copper grid and incubated for 5 min at room temperature (RT) before blotting the excess solution with filter paper. Subsequently, 10 μL of 2.5% glutaraldehyde solution was added dropwise onto the grid for 10 min, and the grid was washed with PBS 1-2 times for 3 min each. The surface was then washed once with PBS and allowed to dry at RT. Finally, transmission electron microscopy was used to examine the morphology of the exosomes.
Exosome particle size analysis
To avoid clogging the injection needle, the exosomes were serially diluted to an appropriate concentration with PBS. After verifying the accuracy of the particle size and concentration analyzer against set standards, the sample was loaded onto the instrument to obtain the exosome particle size and concentration.
Hematoxylin and eosin (H&E) staining
To assess intimal hyperplasia after endovascular injury, sections were stained with H&E, focusing on the endovascular area. The fixed vessels were dehydrated, cleared, embedded, sectioned, and then stained with hematoxylin and eosin; the sections were dried, mounted, and photographed under the microscope. Three fields of view (40×) were randomly selected under the microscope, and the mean value was taken as the measurement. The areas enclosed by the internal and external elastic laminae and the lumen, together with their perimeters, were analyzed using Image Pro Plus 6.0 software. The intima thickness (IT), media thickness (MT), and hyperplasia ratio of intima thickness (HRIT) were calculated based on previous studies 41 as follows: IT = (perimeter of the internal elastic lamina − perimeter of the lumen)/2π; MT = (perimeter of the external elastic lamina − perimeter of the internal elastic lamina)/2π; HRIT = IT/(IT + MT) × 100%.
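As a minimal sketch (not part of the original protocol), the morphometric formulas above can be wrapped in a small helper; the perimeter values in the example are hypothetical:

```python
import math

def intima_metrics(p_iel, p_eel, p_lumen):
    """Vessel morphometry from the perimeter measurements described above.

    p_iel   -- perimeter of the internal elastic lamina
    p_eel   -- perimeter of the external elastic lamina
    p_lumen -- perimeter of the lumen
    (all in the same unit, e.g. micrometers)
    """
    it = (p_iel - p_lumen) / (2 * math.pi)   # intima thickness
    mt = (p_eel - p_iel) / (2 * math.pi)     # media thickness
    hrit = it / (it + mt) * 100              # hyperplasia ratio of intima thickness (%)
    return {"IT": it, "MT": mt, "HRIT": hrit}

# Hypothetical perimeters (arbitrary units): equal intima and media thickness
# should give HRIT = 50%.
m = intima_metrics(p_iel=900.0, p_eel=1200.0, p_lumen=600.0)
print(m["HRIT"])  # 50.0
```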
Immunofluorescence (IF)
To identify the EPCs, cultured EPCs were fixed in 4% paraformaldehyde for 30 min. After washing with PBS, the cells were blocked with 3% bovine serum albumin (BSA) for 60 min. The EPCs were then incubated with primary antibodies against CD34 (1:500) and CD133 (1:500) at 4 °C overnight. After incubation, the cells were washed with PBS and then stained for 60 min with a fluorescent secondary antibody (1:500) at RT in the dark. Thereafter, the nuclei were stained with DAPI, and the EPCs were examined by immunofluorescence (IF). EPCs take up Dil-LDL and bind fluorescein isothiocyanate (FITC)-conjugated UEA-1. Cultured EPCs were incubated in complete medium containing Dil-LDL in a 37 °C, 5% CO2 incubator for 4 h. The cells were then fixed in 4% paraformaldehyde for 20 min before adding PBS containing FITC-UEA-1 and allowing the mixture to settle at RT for 2 h. Subsequently, the EPCs were stained with DAPI for 6 min.
To investigate the expression of apoptosis-related proteins in the different groups of VECs, the VECs were fixed with 4% paraformaldehyde for 30 min for IF testing. Cultured VECs were blocked with 3% BSA for 60 min and then incubated with a Cleaved-caspase-3 primary antibody at 4 °C overnight. Thereafter, the VECs were washed and incubated with a FITC-conjugated secondary antibody for 1 h. Nuclei were stained with DAPI. All cells were mounted with an anti-fluorescence quenching mounting medium and photographed under a fluorescence microscope. ImageJ software was used to analyze the fluorescence intensity.
Immunohistochemistry
Paraffin sections were first dewaxed in water, and antigens were retrieved through dropwise addition of an antigen retrieval solution. Subsequently, the sections were incubated in a 3% hydrogen peroxide solution for 25 min at RT to block endogenous peroxidase, then blocked with a drop of 3% BSA at RT for 30 min. The sections were then incubated overnight at 4 °C in a wet box with primary antibody [VCAM-1 (1:100)]. After incubation, the sections were washed before freshly prepared diaminobenzidine (DAB) coloring solution was added dropwise onto the sections under the microscope for a controlled period. The hematoxylin-counterstained sections were mounted with neutral resin. Three fields of view (40×) were randomly selected under the microscope, and the mean value was taken as the measurement. Pictures were captured under a microscope after drying the sections. ImageJ software was used to analyze the positive area: Mean density = sum of integral optical density (IOD)/area of the measurement region.
Enzyme-linked immunosorbent assay (ELISA)
Rat serum was obtained, left to stand at 4 °C for two hours, and then centrifuged at 3000 rpm for 15 min. The serum was aliquoted and stored at −80 °C. According to the instructions of the ELISA kits, the levels of ET-1, IL-6, and TNF-α in rat serum were determined.
Cell treatment and proliferation
The cell counting kit-8 (CCK-8) assay was used to detect VECs viability. The VECs were seeded in 96-well plates at a density of 1 × 10^5 cells/ml. The VECs in the control group were treated with complete ordinary medium for 24 h, whereas cells in the other groups were treated with LPS (1 μg/ml) for 24 h, as outlined in previous research 43 , to establish an endothelial cell injury model. The cells were divided into three groups according to treatment: control (complete medium), LPS [complete medium containing LPS (1 μg/ml)], and LPS + EPC-Exo [complete medium containing LPS (1 μg/ml) plus EPC-Exo (10 μg/ml), the latter for 24 h] (Fig. 7). Subsequently, 10% CCK-8 solution was added, the plates were returned to the incubator for 1-2 h, and the absorbance values were measured at a wavelength of 420 nm with a microplate reader.
Scratch experiment
VECs were seeded into 6-well plates (2 × 10^5 cells/well). After the cells had adhered, a straight scratch was drawn vertically and evenly down the middle of each well with a pipette tip. The cells were washed with PBS to remove the detached cells; the control group was then given regular medium, while the other groups received normal medium containing LPS (1 μg/ml) for 24 h. The cells were incubated at 37 °C in a 5% CO2 incubator, and the same location was photographed at 0 h and 24 h. The scratches were observed using an Axiovert A1 inverted microscope (Zeiss, Germany), and the wound areas before and after injury were analyzed with ImageJ.
Tube formation assay
The Matrigel solution was allowed to thaw overnight at 4 °C, and the 96-well plates and 200 μl tips were pre-cooled at −20 °C. The thawed Matrigel solution was added to the 96-well plates on the following day and placed in the incubator for 30 min. Subsequently, 100 μl of cells (1 × 10^5 cells/ml) were added to the wells. The cell tube formations were observed using an Axiovert A1 inverted microscope (Zeiss, Germany), and three randomly selected fields of view were photographed and counted.
Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) staining
The cells were treated as previously described. Following VECs fixation, PBS containing 0.3% Triton X-100 was added, and the mixture was incubated for 5 min at RT. The cells were then washed twice with PBS. The TUNEL assay was performed using a TUNEL assay kit per the manufacturer's instructions: 50 μl of the TUNEL assay solution was added to the samples and incubated for 60 min at 37 °C. During incubation, an appropriate amount of water was added to the surrounding empty wells to keep the plate moist and minimize evaporation of the TUNEL solution. After washing, the nuclei were stained with DAPI for 6 min before mounting with an anti-fluorescence quenching mounting medium and observation under a fluorescence microscope.
Actin cytoskeleton staining
Actin cytoskeleton staining with phalloidin is commonly used to study the morphology and integrity of the cellular fiber network; the stained skeleton filaments exhibit red fluorescence. Cell treatment was performed as previously described. The VECs were fixed and blocked with a 3% BSA solution for 30 min. After washing with PBS three times, 250 μl phalloidin (1:500) was added and incubated for 1 h at RT. The cells were then stained with DAPI for 6 min. Subsequently, the cells were mounted with an anti-fluorescence quencher after washing three times with PBS and observed under a fluorescence microscope.
Fluorescent quantitative PCR (qPCR)
Following standard protocols, a total RNA extraction kit was used to extract total RNA from carotid tissues and VECs.
The RNA was reverse transcribed to synthesize cDNA, which was then amplified by PCR. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) served as the reference gene, and relative gene expression was calculated by the 2^−ΔΔCt method. Table 1 shows the primer sequences used.
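The 2^−ΔΔCt calculation can be sketched as follows; the function and the Ct values are illustrative, with GAPDH as the reference gene as in the text:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of a target gene by the 2^-delta-delta-Ct method.

    ct_target / ct_ref           -- Ct values in the treated sample
    ct_target_ctrl / ct_ref_ctrl -- Ct values in the control sample
    """
    delta_ct = ct_target - ct_ref                 # normalize to the reference gene
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = delta_ct - delta_ct_ctrl              # compare with the control
    return 2.0 ** (-dd_ct)                        # fold change vs. control

# A delta-delta-Ct of -1 corresponds to a two-fold up-regulation:
print(relative_expression(24.0, 18.0, 25.0, 18.0))  # 2.0
```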
Western blot (WB) analysis
Rat carotid arteries were transferred into a grinding tube and finely ground before lysing on ice for 30 min to extract the supernatant. Subsequently, Sodium Dodecyl Sulphate (SDS) was added to quantify and denature the proteins. Cell and exosome proteins were extracted as described above, without grinding. The proteins were then separated through polyacrylamide gel electrophoresis, transferred to a PVDF membrane, blocked with 10% milk for 1 h, and incubated overnight at 4 °C with primary antibodies against CD63 (1:2000), tumor susceptibility
Figure 1. Characteristics and functional validation of EPCs and EPC-Exo. (a) Growth morphology of EPCs. A: At day 0, the cells are transparent and round. B: EPCs grow into long spindle-shaped cells by day 7 (scale bar = 50 μm). C: EPCs grow in a paving-stone pattern. D: The shape of EPCs at passage 2 (scale bar = 100 μm). (b) Fluorescent identification of EPCs. A: Identification of cell-specific markers. B: Functional identification of the cells; EPCs appear double-positive and can be recognized as such (scale bar = 20 μm). (c) Flow cytometry of EPCs. (d) Particle size analysis and particle concentration of EPC-Exo. (e) Cup-shaped morphology of EPC-Exo (arrowhead) assessed by transmission electron microscopy (scale bar = 20 μm). (f) Representative western blot images showing the exosome protein markers.
Figure 2. EPC-Exo adhered to the wall of the damaged carotid artery, improved endothelial function, and inhibited neointimal hyperplasia in the rat carotid artery after balloon injury. (a) PKH26-labelled EPC-Exo with orange fluorescence in the injured endothelium, monitored on Day 7 after injury (arrowhead). In contrast, no fluorescence was observed in the BI model group. A: BI model on Day 7. B: BI + PKH26-labelled EPC-Exo on Day 7 (scale bar = 200 μm). (b) Quantitative level of endothelin-1 (ET-1) in serum. EPC-Exo decreased the expression of ET-1 in serum. (c) H&E staining showing endothelial hyperplasia and quantitative data on its thickness. EPC-Exo therapy substantially lowered endothelial hyperplasia as well as IT and HRIT values (scale bar = 100 μm). (d) Representative western blot images and quantitative data of eNOS expression in the carotid artery. β-actin served as the reference protein (***p < 0.001, **p < 0.01, *p < 0.05, n ≥ 3 per group). The data are displayed as the M ± SD.
Figure 3. EPC-Exo alleviated inflammation and regulated the expression of apoptosis-related genes and proteins. (a) The expression of adhesion molecules on the carotid artery wall in rats was determined by immunohistochemistry; representative images and quantitative data of VCAM-1 are shown. EPC-Exo reduced adhesion molecule expression (scale bar = 20 μm). (b) Quantitative levels of the inflammatory cytokines TNF-α and IL-6 in serum. EPC-Exo inhibited the content of inflammatory factors in serum. (c) qPCR detection of Bcl2, Bax, and Caspase 3 mRNA levels in the carotid artery. (d) Representative western blot images and quantitative data of Bcl2, Bax, and Cleaved-caspase 3 expression in the carotid artery are shown. β-actin served as the reference protein (***p < 0.001, **p < 0.01, *p < 0.05, n ≥ 3 per group). The data are displayed as the M ± SD.
Figure 4. EPC-Exo enhanced endothelial function and stimulated the proliferation, migration, and tube formation of VECs in vitro. (a) In actin cytoskeleton staining, the microfilaments formed by filamentous actin (F-actin) were more complete, and the cytoskeletal structure was distinct, in the EPC-Exo treatment group. (b) CCK-8 detection of VECs proliferation, indicating that EPC-Exo stimulated the proliferation of VECs. (c) The scratch experiment revealed that EPC-Exo therapy improved the capability of VECs to migrate after injury (scale bar = 100 μm). (d) In the tube formation assay, EPC-Exo increased the total tube length of VECs (scale bar = 50 μm) (***p < 0.001, **p < 0.01, *p < 0.05, n ≥ 3 per group). The data are displayed as the M ± SD.
Table 1. The primer sequences of target genes.
An entropic generalization of Caffarelli's contraction theorem via covariance inequalities
The optimal transport map between the standard Gaussian measure and an $\alpha$-strongly log-concave probability measure is $\alpha^{-1/2}$-Lipschitz, as first observed in a celebrated theorem of Caffarelli. In this paper, we apply two classical covariance inequalities (the Brascamp-Lieb and Cram\'er-Rao inequalities) to prove a sharp bound on the Lipschitz constant of the map that arises from entropically regularized optimal transport. In the limit as the regularization tends to zero, we obtain an elegant and short proof of Caffarelli's original result. We also extend Caffarelli's theorem to the setting in which the Hessians of the log-densities of the measures are bounded by arbitrary positive definite commuting matrices.
Introduction
In [Caf00], Caffarelli proved the following seminal result.
Here, ϕ_0 : R^d → R is a convex function, known as a Brenier potential. The optimal transport map ∇ϕ_0 : R^d → R^d pushes forward P to Q, in the sense that if X is a random variable with law P, then ∇ϕ_0(X) is a random variable with law Q. See Section 2.2 and the textbook [Vil03] for background on optimal transport.
Caffarelli's contraction theorem can be used to transfer functional inequalities, such as a Poincaré inequality, from the standard Gaussian measure on R d to other probability measures [BGL14].Towards this end, recent works have also constructed and studied alternative Lipschitz transport maps (e.g.[KM12, MS21, MS22, Nee22]), but still the properties of the original optimal transport map remain of fundamental interest, with many questions unresolved [Val07,CFJ17].
Indeed, besides the application to functional inequalities, the structural properties of optimal transport maps play a fundamental role in theoretical and methodological advances in optimal transport, such as the control of the curvature of the Wasserstein space through the notion of extendible geodesics [LPRS19,ACLGP20], the stability of Wasserstein barycenters [CMRS20], and the statistical estimation of optimal transport maps [HR21].
In applied domains, however, the inauspicious computational and statistical burden of solving the original optimal transport problem has instead led practitioners to consider entropically regularized optimal transport, as pioneered in [Cut13].In addition to its practical merits, entropic optimal transport enjoys a rich mathematical theory, rooted in its connection to the classical Schrödinger bridge problem [Léo14], which has led to powerful applications to high-dimensional probability [Led18,FGP20,GLRT20].As such, it is natural to study the properties of the entropic analogue of the optimal transport map.
In this paper, we prove a generalization of Caffarelli's contraction theorem to the setting of entropic optimal transport.Namely, we study the Hessian of the entropic Brenier potential (see Section 2.3), which admits a representation as a covariance matrix (Lemma 1).By applying two well-known inequalities for covariance matrices (the Brascamp-Lieb inequality and the Cramér-Rao inequality), we quickly deduce a sharp upper bound on the operator norm of the Hessian which holds for any value ε > 0 of the regularization parameter.
As a byproduct of our analysis, by sending ε ց 0 and appealing to recent convergence results for the entropic Brenier potentials [NW21], we obtain the shortest proof of Caffarelli's contraction theorem to date.Notably, our argument allows us to sidestep the regularity of the optimal transport map, which is a key obstacle in Caffarelli's original proof.
Recently, in [FGP20], Fathi, Gozlan, and Prod'homme gave a proof of Caffarelli's theorem using a surprising equivalence between Theorem 1 and a statement about Wasserstein projections, which was discovered through the theory of weak optimal transport [GJ20].In order to verify the latter, their proof also used ideas from entropic optimal transport.In comparison, we note that our argument is more direct and also allows us to handle the case of non-zero regularization (ε > 0).
To further demonstrate the applicability of our technique, in Section 4 we prove a generalization of Caffarelli's result: if ∇²V ⪯ A^{-1} and ∇²W ⪰ B^{-1}, where A and B are arbitrary commuting positive definite matrices, then the Hessian of the Brenier potential from P to Q is pointwise upper bounded (in the PSD ordering) by A^{-1/2}B^{1/2}. This result implies a remarkable extremal property of optimal transport maps between Gaussian measures, namely: the optimal transport map from N(0, A) to N(0, B) maximizes the Hessian of the Brenier potential at any point among all possible measures P and Q satisfying our assumptions. To the best of our knowledge, this result is new.
Assumptions
We study probability measures P and Q on R d satisfying the following mild regularity assumptions.
Assumption 1 (Regularity conditions).We henceforth refer to the source measure as P and the target measure as Q.We say that (P, Q) satisfies our regularity conditions if: 1. P has full support on R d and Q is supported on a convex subset of R d .Let Ω Q denote the interior of the support of Q, so that Ω Q is a convex open set.
2. P and Q admit positive Lebesgue densities on R^d and Ω_Q respectively, which can therefore be written exp(−V) and exp(−W) for functions V, W : R^d → R ∪ {∞}. We abuse notation and identify the measures with their densities, thus writing P = exp(−V) and Q = exp(−W).
3. We assume that V and W are twice continuously differentiable on R d and Ω Q respectively.Some of these assumptions can be eventually relaxed, but they suffice for the purposes of this work.Throughout the rest of the paper and for the sake of simplicity, these regularity assumptions are assumed to hold for the probability measures under consideration.
Optimal transport without regularization
Let P and Q be probability measures with finite second moment. The optimal transport problem is the optimization problem

    minimize_{π ∈ Π(P,Q)}  ∫ ½‖x − y‖² dπ(x, y),   (1)

where Π(P, Q) is the set of joint probability measures with marginals P and Q. The following fundamental result characterizes the optimal solution to (1).
Theorem 2 (Brenier's theorem).Suppose that P admits a density with respect to Lebesgue measure.
Then, there exists a proper, convex, lower semicontinuous function ϕ_0 : R^d → R ∪ {∞} such that the optimal transport plan in (1) can be written π_0 = (id, ∇ϕ_0)♯P. The function ϕ_0 is called the Brenier potential, and the mapping ∇ϕ_0 is called the optimal transport map from P to Q. Moreover, the optimal transport map ∇ϕ_0 is unique up to P-almost everywhere equality. The Brenier potential ϕ_0 is obtained as the solution to the dual problem

    minimize_{ϕ ∈ Γ_0}  ∫ ϕ dP + ∫ ϕ* dQ,   (2)

where ϕ* is the convex conjugate of ϕ, and Γ_0 is the set of proper, convex, lower semicontinuous functions on R^d.
We refer to [Vil03] for further background.
For a regularization parameter ε > 0, the entropic optimal transport problem is

    minimize_{π ∈ Π(P,Q)}  ∫ ½‖x − y‖² dπ(x, y) + ε KL(π ‖ P ⊗ Q),   (3)

where KL denotes the Kullback-Leibler divergence.

Theorem 3 (Entropic optimal transport). Let P and Q be probability measures on R^d and fix ε > 0. Then there exists a unique solution π_ε ∈ Π(P, Q) to (3). Moreover, π_ε has the form

    dπ_ε(x, y) = exp((f_ε(x) + g_ε(y) − ½‖x − y‖²)/ε) dP(x) dQ(y),

where (f_ε, g_ε) are maximizers for the dual problem

    sup_{f, g}  ∫ f dP + ∫ g dQ − ε ∫ exp((f(x) + g(y) − ½‖x − y‖²)/ε) dP(x) dQ(y) + ε.

The constraint that π_ε has marginals P and Q implies the following dual optimality conditions for (f_ε, g_ε) (see [MNW19, NW21]):

    f_ε(x) = −ε log ∫ exp((g_ε(y) − ½‖x − y‖²)/ε) dQ(y),
    g_ε(y) = −ε log ∫ exp((f_ε(x) − ½‖x − y‖²)/ε) dP(x).

In particular, f_ε and g_ε are smooth. In this work, it is more convenient to work with the entropic Brenier potentials, defined as

    ϕ_ε(x) := ½‖x‖² − f_ε(x),   ψ_ε(y) := ½‖y‖² − g_ε(y).

Since (f_ε, g_ε) are only unique up to adding a constant to f_ε and subtracting the same constant from g_ε, we fix the normalization convention ∫ f_ε dP = ∫ g_ε dQ. Under this condition, it was shown in [NW21] that we have convergence to the Brenier potential ϕ_ε → ϕ_0 as ε ց 0.
Adopting this new notation, with P = exp(−V) and Q = exp(−W), we can rewrite the entropic optimal plan as

    dπ_ε(x, y) ∝ exp((⟨x, y⟩ − ϕ_ε(x) − ψ_ε(y))/ε − V(x) − W(y)) dx dy.

The entropic Brenier potentials were first introduced to develop a computationally tractable estimator of the optimal transport map ∇ϕ_0 [SDF + 18, PNW21, PCNW22]. Indeed, this is motivated by the following observation, which acts as an entropic version of Brenier's theorem. Write π_ε^{Y|X=x} for the conditional distribution of Y given X = x for (X, Y) ∼ π_ε, and similarly define π_ε^{X|Y=y}. For clarity of exposition, we abuse notation and abbreviate π_ε^{Y|X=x} by π_ε^x and π_ε^{X|Y=y} by π_ε^y when there is no danger of confusion.
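On discrete measures, the dual optimality conditions become a pair of softmin equations that can be solved by alternating (Sinkhorn) iteration; this standard numerical scheme [Cut13] is not part of the present argument, but the following NumPy/SciPy sketch illustrates how (f_ε, g_ε) and π_ε are computed in practice on two hypothetical 1-D point clouds:

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_potentials(x, p, y, q, eps, n_iter=2000):
    """Solve discrete entropic OT by alternating the dual optimality conditions:
    f(x_i) = -eps * log sum_j q_j exp((g(y_j) - c(x_i, y_j)) / eps), and symmetrically."""
    C = 0.5 * (x[:, None] - y[None, :]) ** 2          # quadratic cost c(x, y)
    f, g = np.zeros_like(x), np.zeros_like(y)
    for _ in range(n_iter):
        f = -eps * logsumexp((g[None, :] - C) / eps, b=q[None, :], axis=1)
        g = -eps * logsumexp((f[:, None] - C) / eps, b=p[:, None], axis=0)
    # Entropic plan: dpi(x, y) = exp((f + g - c)/eps) dP dQ
    pi = p[:, None] * q[None, :] * np.exp((f[:, None] + g[None, :] - C) / eps)
    return f, g, pi

rng = np.random.default_rng(0)
n = 40
x, y = np.sort(rng.normal(size=n)), np.sort(rng.normal(loc=0.5, size=n))
p = q = np.full(n, 1.0 / n)
f, g, pi = sinkhorn_potentials(x, p, y, q, eps=1.0)
print(np.abs(pi.sum(axis=1) - p).max())  # marginal constraint error, near 0
```

The final g-update enforces the second marginal exactly, and after enough iterations the first marginal constraint is met to numerical precision.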
Lemma 1. It holds that

    ∇ϕ_ε(x) = E_{π_ε^x}[Y],   ∇ψ_ε(y) = E_{π_ε^y}[X].

In particular, both ϕ_ε and ψ_ε are convex. Moreover, under our regularity conditions,

    ∇²ϕ_ε(x) = ε^{-1} Cov_{π_ε^x}(Y),   ∇²ψ_ε(y) = ε^{-1} Cov_{π_ε^y}(X).
Covariance inequalities
In our proofs, we make use of the following key inequalities.
Lemma 2. Let P = exp(−V ) be a probability measure on R d and assume that V is twice continuously differentiable on the interior of its domain.Then, the following hold.
1. (Brascamp-Lieb inequality) If in addition we assume that P is strictly log-concave, then it holds that

    Cov_{X∼P}(X) ⪯ E_P[(∇²V)^{-1}].

2. (Cramér-Rao inequality) It holds that

    Cov_{X∼P}(X) ⪰ (E_P[∇²V])^{-1}.

The Brascamp-Lieb inequality is classical, and we refer readers to [BL00, BGL14, CE17] for several proofs. To make our exposition more self-contained, we provide a proof of the Cramér-Rao inequality in the appendix.
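In one dimension these two inequalities sandwich the variance: (E_P[V''])^{-1} ≤ Var_P(X) ≤ E_P[(V'')^{-1}]. The following quadrature sketch checks this numerically for the strictly log-concave potential V(x) = x²/2 + x⁴/4, a choice made purely for illustration:

```python
import numpy as np

# Grid quadrature for P ∝ exp(-V) with V(x) = x^2/2 + x^4/4, so V''(x) = 1 + 3x^2 > 0.
x = np.linspace(-6.0, 6.0, 20001)
dx = x[1] - x[0]
w = np.exp(-(x**2 / 2 + x**4 / 4))
w /= w.sum() * dx                       # normalized density of P

mean = (x * w).sum() * dx
var = ((x - mean) ** 2 * w).sum() * dx  # Cov_P(X) in one dimension
Vpp = 1.0 + 3.0 * x**2

bl = (w / Vpp).sum() * dx               # Brascamp-Lieb bound: E_P[(V'')^{-1}]
cr = 1.0 / ((w * Vpp).sum() * dx)       # Cramer-Rao bound: (E_P[V''])^{-1}

print(cr <= var <= bl)  # True: the variance is sandwiched between the two bounds
```

Both inequalities hold with equality exactly in the Gaussian case; for this non-Gaussian potential the sandwich is strict.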
Main theorem
We now state and prove our main theorem.
Theorem 4. 1. Suppose that (P, Q) satisfy our regularity assumptions, as well as

    ∇²V ⪯ β I on R^d  and  ∇²W ⪰ α I on Ω_Q,  for some α, β > 0.

Then, for every ε > 0 and all x ∈ R^d, the Hessian of the entropic Brenier potential satisfies

    ∇²ϕ_ε(x) ⪯ (√(ε²β²/4 + β/α) − εβ/2) I.

2. Suppose that (Q, P) satisfy our regularity assumptions, as well as

    ∇²W ⪯ β I on R^d  and  ∇²V ⪰ α I on Ω_P,  for some α, β > 0.

Then, for every ε > 0 and all x ∈ Ω_P := int(supp(P)), the Hessian of the entropic Brenier potential satisfies

    ∇²ϕ_ε(x) ⪰ (√(ε²α²/4 + α/β) − εα/2) I.

Observe that as ε ց 0, we formally expect the following bounds on the Brenier potential:

    √(α/β) I ⪯ ∇²ϕ_0 ⪯ √(β/α) I

(each under the corresponding set of assumptions). In particular, this recovers Caffarelli's contraction theorem (Theorem 1). We make this intuition rigorous below by appealing to convergence results for the entropic potentials as the regularization parameter ε tends to zero.
Proof of Theorem 4. Upper bound. Fix x ∈ R^d. Recall from Lemma 1 that

    ∇²ϕ_ε(x) = ε^{-1} Cov_{π_ε^x}(Y),

and that π_ε^x has density proportional to y ↦ exp((⟨x, y⟩ − ψ_ε(y))/ε − W(y)), whose negative log-density has Hessian ε^{-1} ∇²ψ_ε + ∇²W. By an application of the Brascamp-Lieb inequality, this results in the upper bound

    ∇²ϕ_ε(x) ⪯ ε^{-1} E_{π_ε^x}[(ε^{-1} ∇²ψ_ε + ∇²W)^{-1}] ⪯ ε^{-1} (ε^{-1} inf_{Ω_Q} λ_min(∇²ψ_ε) + α)^{-1} I,

where in the last inequality we also used the lower bound on the spectrum of ∇²W. Next, using Lemma 1 and the Cramér-Rao inequality (Lemma 2), we obtain the lower bound

    ∇²ψ_ε(y) = ε^{-1} Cov_{π_ε^y}(X) ⪰ ε^{-1} (E_{π_ε^y}[ε^{-1} ∇²ϕ_ε + ∇²V])^{-1} ⪰ (sup_{R^d} ‖∇²ϕ_ε‖_op + εβ)^{-1} I,

where we used the upper bound on the spectrum of ∇²V. Now, define the quantity

    S_ε := sup_{x ∈ R^d} ‖∇²ϕ_ε(x)‖_op.

Combining these inequalities, we have shown

    ∇²ϕ_ε(x) ⪯ ε^{-1} (ε^{-1} (S_ε + εβ)^{-1} + α)^{-1} I = (S_ε + εβ)/(1 + εα (S_ε + εβ)) I.

Taking the supremum over x ∈ R^d,

    S_ε ≤ (S_ε + εβ)/(1 + εα (S_ε + εβ)),  equivalently  α S_ε² + εαβ S_ε − β ≤ 0.

Solving the inequality yields

    S_ε ≤ √(ε²β²/4 + β/α) − εβ/2.

Lower bound. The lower bound argument is symmetric, but we give the details for completeness. Using Lemma 1 and the Cramér-Rao inequality (Lemma 2),

    ∇²ϕ_ε(x) = ε^{-1} Cov_{π_ε^x}(Y) ⪰ ε^{-1} (E_{π_ε^x}[ε^{-1} ∇²ψ_ε + ∇²W])^{-1}.

Applying Lemma 1 and the Brascamp-Lieb inequality (Lemma 2),

    ∇²ψ_ε(y) = ε^{-1} Cov_{π_ε^y}(X) ⪯ ε^{-1} E_{π_ε^y}[(ε^{-1} ∇²ϕ_ε + ∇²V)^{-1}] ⪯ (ℓ_ε + εα)^{-1} I.

Combining the two inequalities and setting

    ℓ_ε := inf_{x ∈ Ω_P} λ_min(∇²ϕ_ε(x)),

we deduce that

    ℓ_ε ≥ (ℓ_ε + εα)/(1 + εβ (ℓ_ε + εα)),  equivalently  β ℓ_ε² + εαβ ℓ_ε − α ≥ 0.

On the other hand, from Lemma 1, we know that ℓ_ε ≥ 0. Solving the inequality then yields

    ℓ_ε ≥ √(ε²α²/4 + α/β) − εα/2. ∎

Next, we rigorously deduce Caffarelli's contraction theorem from Theorem 4.
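As a numerical sanity check on the algebra, assume the upper bound takes the closed form s(ε) = √(ε²β²/4 + β/α) − εβ/2, i.e. the positive root of α s² + εαβ s − β = 0; it then tends to √(β/α) as ε ց 0, matching the Caffarelli limit. The values α = 2, β = 3 below are arbitrary:

```python
import math

def entropic_upper_bound(eps, alpha, beta):
    # Positive root of alpha * s^2 + eps * alpha * beta * s - beta = 0.
    return math.sqrt(eps**2 * beta**2 / 4 + beta / alpha) - eps * beta / 2

alpha, beta = 2.0, 3.0
for eps in (1.0, 0.1, 1e-3):
    s = entropic_upper_bound(eps, alpha, beta)
    residual = alpha * s**2 + eps * alpha * beta * s - beta   # ~0 at the root
    print(eps, s, residual)

# The epsilon -> 0 limit recovers the Lipschitz constant sqrt(beta/alpha):
print(entropic_upper_bound(0.0, alpha, beta) == math.sqrt(beta / alpha))  # True
```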
Remark 1. Our main theorem provides both upper and lower bounds for ∇²ϕ_ε. In the case when ε = 0, the lower bound follows from the upper bound. Indeed, if ϕ₀ is the Brenier potential for the optimal transport from P to Q, then the convex conjugate ϕ₀* is the Brenier potential for the optimal transport from Q to P. By applying Caffarelli's contraction theorem to ϕ₀* and appealing to convex duality, this yields a lower bound on ∇²ϕ₀. However, we are not aware of a method of deducing the lower bound from the upper bound for positive values of ε.
Remark 2. In Appendix B, by inspecting the Gaussian case, we show that Theorem 4 is sharp for every ε > 0.
An inspection of the proof of the upper bound in Theorem 4 reveals the following more general pair of inequalities.
Proposition 1. Let (P, Q) be probability measures satisfying our regularity conditions. Then, for all x ∈ R^d and y ∈ Ω_Q, the stated pair of inequalities holds. In the next section, we use these inequalities to prove a generalization of Caffarelli's theorem.
A generalization to commuting positive definite matrices
In the next result, we replace the main assumptions of Caffarelli's contraction theorem by the condition (11), where A and B are commuting positive definite matrices. Recall that the Hessian of the Brenier potential between the Gaussian distributions N(0, A) and N(0, B) is the matrix A^{-1/2}B^{1/2}. In light of this observation, the following theorem is sharp for every pair of commuting positive definite matrices (A, B), and shows that the Brenier potential between Gaussians achieves the largest possible Hessian among all source and target measures obeying the constraint (11).
Theorem 5. Let (P, Q) satisfy our regularity conditions as well as the condition (11). Then, the Hessian of the Brenier potential satisfies the uniform bound ∇²ϕ₀(x) ⪯ A^{-1/2}B^{1/2} for all x ∈ R^d. As in Theorem 4, the proof technique also yields a lower bound on ∇²ϕ₀ under appropriate assumptions. We omit this result because it is straightforward.
Proof of Theorem 5. In light of Theorem 4, C_ε is well-defined and finite; equivalently, C_ε can be written as a supremum. Let (x, e) achieve this supremum. (If the supremum is not attained, then the rest of the proof goes through with minor modifications.) Using our assumptions and Proposition 1, we obtain an inequality. From our assumptions and Theorem 4, we know that the spectrum of M_ε := A^{-1/2}B^{1/2} + C_ε I is bounded away from zero and infinity as ε ց 0, which justifies the Taylor expansion. Hence, lim_{ε ց 0} C_ε = 0 (otherwise (C_ε)_{ε>0} would have a strictly positive cluster point, which would contradict the above inequality for small enough ε > 0). By combining this fact with convergence of the entropic Brenier potentials as in the proof of Theorem 1, we deduce the theorem. Next, we show how our theorem recovers and extends a result of Valdimarsson [Val07]. Valdimarsson proves that if: • Ā, B̄, and G are positive definite matrices; • Ā ⪯ G and B̄ commutes with G; • P = N(0, B̄G⁻¹) * μ, where * denotes convolution and μ is an arbitrary probability measure on R^d; and Q satisfies an analogous assumption, then the Brenier potential satisfies ∇²ϕ₀ ⪯ G. This result was then used to derive new forms of the Brascamp-Lieb inequality. To prove this result, we first check that convolution with any probability measure only makes the density more log-smooth.
Lemma 3. Let P̄ ∝ exp(−V̄) be a probability measure, where V̄ : R^d → R is twice continuously differentiable. Let P := P̄ * μ = exp(−V), where μ is any probability measure on R^d. Suppose that for some positive definite matrix A, we have ∇²V̄ ⪯ A⁻¹. Then, ∇²V ⪯ A⁻¹ as well.
Proof. An elementary computation shows that if we define the probability measure ρ_x(dy) ∝ exp(−V̄(x − y)) μ(dy), then ∇²V(x) = E_{ρ_x}[∇²V̄(x − ·)] − Cov_{ρ_x}(∇V̄(x − ·)) ⪯ E_{ρ_x}[∇²V̄(x − ·)] ⪯ A⁻¹, from which the result follows.
From the lemma, we deduce that under Valdimarsson's assumptions, for P = exp(−V), we have ∇²V ⪯ B̄⁻¹G. By Theorem 5, the Brenier potential ϕ₀ satisfies ∇²ϕ₀ ⪯ G. However, our argument yields much more. For example, rather than requiring P to be a convolution with a Gaussian measure, we can allow P to be a convolution with any measure exp(−V̄) satisfying ∇²V̄ ⪯ B̄⁻¹G.
Remark 3. It is natural to ask whether Theorem 5 can be obtained by first applying Caffarelli's contraction theorem to show that the optimal transport map T̄₀ between the measures (A^{-1/2})♯P and (B^{-1/2})♯Q is 1-Lipschitz, and then considering the mapping T₀(x) := B^{1/2} T̄₀(A^{-1/2} x). Although T₀ is indeed a valid transport mapping from P to Q, under our assumptions ∇T₀ is not guaranteed to be symmetric, so it does not make sense to ask whether or not ∇T₀ ⪯ A^{-1/2}B^{1/2}. In Valdimarsson's application to Brascamp-Lieb inequalities, it is crucial that the transport map T₀ is chosen so that ∇T₀ is a symmetric positive definite matrix. Symmetry of ∇T₀ implies that T₀ is the gradient ∇ϕ₀ of a function ϕ₀ : R^d → R, and positive definiteness implies that ϕ₀ is convex. By Brenier's theorem, the unique gradient of a convex function that pushes forward P to Q is the optimal transport map. Thus, it is crucial that we consider the optimal transport map here; in particular, alternative maps such as the ones in [KM12, MS21] cannot be applied.
Discussion
We have proven a generalization of Caffarelli's celebrated theorem on the Lipschitz properties of the optimal transport map to the setting of entropic optimal transport using two complementary covariance inequalities (the Brascamp-Lieb inequality and the Cramér-Rao inequality).
We conjecture that our proof technique can also be used to recover the bounds on the moment measure mapping in [Kla14], provided that the existence of an "entropic moment measure" can be established (with convergence towards the true moment measure as the regularization tends to zero).As this is outside the scope of this work, we do not pursue this question here.
Integration by parts shows that ∫ ∇V^{⊗2} dP = ∫ ∇²V dP, and upon rearranging we deduce that Var_P(h) ≥ ⟨E_P[∇h], (E_P[∇²V])⁻¹ E_P[∇h]⟩. (13)
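The integration-by-parts identity above can be checked coordinate-wise, assuming enough decay of e^{−V} at infinity for the boundary terms to vanish:

```latex
\int \partial_{ij} V \, e^{-V} \, dx
  \;=\; -\int \partial_i V \,\partial_j\!\left(e^{-V}\right) dx
  \;=\; \int \partial_i V \,\partial_j V \, e^{-V} \, dx
```

This is the (i, j) entry of ∫ ∇²V dP = ∫ ∇V^{⊗2} dP, up to the normalizing constant of P.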
By approximation, this continues to hold for any locally Lipschitz h : R^d → R with E_P‖∇h‖ < ∞. Specializing the inequality (13) to h := ⟨e, ·⟩ for a unit vector e ∈ R^d then recovers the Cramér-Rao inequality of Lemma 2.
Rapid calculation of diffuse reflectance from a multilayered model by combination of the white Monte Carlo and adding-doubling methods
Abstract: To rapidly derive a result for diffuse reflectance from a multilayered model that is equivalent to that of a Monte Carlo simulation (MCS), we propose a combination of a layered white MCS and the adding-doubling method. For slabs with various scattering coefficients, a given anisotropy factor, and no absorption, we calculate the transition matrices for light flow with respect to the incident and exit angles. From this series of precalculated transition matrices, we can calculate the transition matrices for a multilayered model with the specified anisotropy factor. The relative errors of the results of this method compared with a conventional MCS were less than 1%. We successfully used this method to estimate the chromophore concentration from the reflectance spectrum of a numerical model of skin and of in vivo human skin tissue.
Introduction
The reflectance of skin noninvasively provides information about the inner conditions, such as the scattering coefficients, absorption coefficients, and chromophore concentrations [1]. When deriving this information, a Monte Carlo simulation (MCS) has often been used as the standard [2][3][4][5][6][7][8][9], since an MCS precisely follows the behavior of each photon that is scattered from or absorbed in a medium, and the range of scattering and absorption coefficients to which it can be applied is not limited. In addition, an MCS can easily be applied to arbitrary multilayered structures. A multilayered structure is unavoidable when evaluating the reflectance of skin, which consists of an epidermis that contains melanin and an underlying dermis that contains hemoglobin. Because of its high reliability, an MCS is often used to evaluate approximation methods [2][3][4][5][6][7][8][9], although it has a high computation time. In an MCS, a large number of photons are generated, and each photon numerically propagates through the skin model following designated probabilities of scattering and absorption; the reflectance and other physical quantities are then calculated from a statistical analysis of these results. Increasing the number of photons increases the precision of the MCS, but it also increases the calculation time, which may become too long for the inverse problem, imaging, or interactive tools. A sufficiently short calculation time is crucial for those applications, since the reflectance must be calculated many times. Therefore, to shorten the calculation time, several methods have been considered for approximating the results of an MCS. For example, various studies have proposed the diffusion approximation [3], a hybrid of diffusion and two-flux approximation [4], the path-integral method [2], and an empirical method aided by an MCS [10].
However, in each of these methods, the deviation from the MCS depends on the type of approximation, and the applicable range of parameters is often limited. The methods based on the diffusion approximation [3,4] restrict the allowable ratio of the absorption coefficient μa to the reduced scattering coefficient μs′. In the path-integral method, although the trajectories of the light are represented by the classical path, it is difficult to find an appropriate path for the various absorption and scattering coefficients [2]. Finally, although the empirical method aided by an MCS can estimate the chromophore concentrations over a wide range, it is not optimized for estimating the reflectance [10].
Several studies have attempted to reduce the time required to calculate the reflectance while keeping the results equivalent to those of an MCS [5-7, 9, 11]. These studies were based on a method known as the white MCS (WMCS) [5] or single MCS [6,7,9], which was primarily developed for solving time-domain problems [5][6][7][9]. The WMCS is based on the Beer-Lambert (B-L) law, which states that the absorbance of photons traveling through a medium along a certain trajectory can be calculated from the path length and the absorption coefficient. The WMCS also utilizes similarity: in a semi-infinite and homogeneous medium without absorption, if μs′ grows to α times its original value, then a homothetic trajectory that is 1/α times smaller can be associated with the original one [5,6]. With similarity, we only need to run the MCS once without absorption; then, for any absorption and scattering coefficients of the medium, we can derive a histogram of the path lengths and determine the spatial distribution of reflectance in the time domain. A WMCS is not limited to semi-infinite and homogeneous media: it can be applied to an arbitrary composite structure composed of multiple media if the path length in each medium can be calculated for each trajectory [8,11]. However, in such cases, similarity cannot be fully utilized, and the application range of a given precalculated data set is limited. For a multilayered structure like skin, the optical path-length matrix method (OPLM) has been developed [8]. In the OPLM, an MCS is used to determine the path lengths for each photon in the epidermis and in the dermis; these are recorded in a two-dimensional histogram. Once the absorption coefficient is given, the reflectance is calculated from the two-dimensional histogram by using the B-L law [8].
Although this method works with a multilayer model, if we want to change any of the scattering coefficients or the thickness of any of the layers, we must recalculate everything.
To resolve this issue, we developed a faster method for estimating the reflectance of an arbitrary multilayered model; its results are in close agreement with those of the MCS. Here, we omit the spatial and time-resolved information. In this method, the WMCS was modified for application to a slab by introducing transition matrices for light flow without absorption, from the incident angle to the exit angle, based on the distribution of path lengths. The method is called the layered WMCS (LWMCS). Once the value of μa is given, the transition matrices for each layer can be calculated for that μa. The transition matrices of all the layers are then combined by using the adding-doubling (AD) method [12]. From the previously calculated values, the transition matrices can be derived for all the layers by matrix arithmetic; the calculation time is thus substantially shorter than with the conventional MCS (cMCS). In addition, with this method, we can change the number of layers, the thicknesses, μa, μs′, and the refractive indices without having to repeat the initial calculations. To demonstrate the benefit of the quicker calculation, we used this method to estimate the chromophore content, which requires repeated calculations of the reflectance.

Fig. 1. Ranges of the incident angle Θi and the exit angle Θj. Here, Θi and Θj indicate the directions between the two respective cones of θi-1 and θi, and of θj-1 and θj.
Outline of the method
In this paper, we focus on the angle of the light measured relative to the normal to the surface, and we exclude any analysis of lateral distribution or time. We begin by deriving the transition matrices of reflectance R and transmittance T, which are based on the transition probability relative to the incident and exit angles. The incident and exit angles are separated by the angles {θ0, θ1, …, θn-1, θn}, where θ0 = 0° and θn = 90°. For convenience, the ranges [θi-1, θi] and [θj-1, θj] are represented by Θi and Θj, respectively, where i = 1, 2, …, n and j = 1, 2, …, n (Fig. 1). Here, R(Θi→Θj) and T(Θi→Θj) represent the discretized conditional probability of the exit angle being between [θj-1, θj] when the incident angle is Θi, for reflection and transmission, respectively. R(Θi→Θj) and T(Θi→Θj) can be expressed using the conditional probability densities r(Θi→θj) and t(Θi→θj), which are, respectively, the conditional probability densities of the exit angle θj when the incident angle is Θi, for reflectance and transmittance. We should note that, for simplicity, these values are not normalized to a solid angle, but they can be converted to normalized values if needed. Although this expression does not consider the azimuth angle, the reflectance and transmittance can be derived from these matrices under the condition that the incident-angle and/or measurement-angle distributions are axially symmetric. This precondition is satisfied in the case of normal or diffuse incidence, or when the reflected light is gathered with an integrating sphere. To derive R and T for a model of skin, we developed the LWMCS (Fig. 2). In the LWMCS, the transition matrices of reflection and transmission involving path lengths without absorption are calculated in advance by an MCS for several values of μs′ at a specific thickness (phase 1, Sec. 2.2).
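As a concrete toy illustration of the data layout, the discretized R of Eq. (1) can be stored as an n × n array whose column i holds R(Θi→Θj) over the exit bins j; under an axially symmetric incident-angle distribution, the total reflectance is then a matrix-vector product. The numbers below are hypothetical, not the paper's data:

```python
import numpy as np

n = 3  # toy angular bins (the paper uses 90 one-degree bins)

# Hypothetical discretized transition matrix: column i holds the
# conditional probabilities R(Theta_i -> Theta_j) over exit bins j.
R = np.array([[0.02, 0.01, 0.00],
              [0.01, 0.03, 0.01],
              [0.00, 0.01, 0.04]])

# Incident-angle distribution (axially symmetric by assumption),
# here concentrated in the first (near-normal) bin.
p = np.array([1.0, 0.0, 0.0])

# Total diffuse reflectance: sum over exit bins, averaged over incidence.
total_reflectance = np.ones(n) @ R @ p
print(total_reflectance)  # ~0.03 for this toy matrix
```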
In this case, each element of each transition matrix is a histogram of the path lengths. For an arbitrary μs′, the thickness of a layer d can be associated with one of the precalculated base data sets by using a scale factor; this step is called resizing (phase 2, Sec. 2.3). To take account of μa, the B-L law is used: the absorbance for a certain path length is derived from μa and the path length, the intensity for each path length is calculated, and these values are accumulated. This phase is called coloring (phase 3, Sec. 2.4). Since the accumulation is done across all path lengths, the results take the form of Eqs. (1) and (2) (i.e., each element becomes a scalar). Then, from the derived transition matrices of each layer, the transition matrices of the multilayered model can be calculated by using the AD method [12]; this is called lamination (phase 4, Sec. 2.5). The refractive index is not considered in the first three phases, but it is considered in phase 4. The protocol described above is called the generalized LWMCS (gLWMCS).
Phase 1 requires repeated evaluations of an MCS; however, once the parameters of a multilayered model are defined, the reflectance can be calculated quickly. This allows us to shorten the trial-and-error process of adjusting the parameters. The method can also be applied to optimization problems that require recursive calculations of the reflectance, such as estimating chromophore concentrations.
To estimate the error of the interpolation as a function of ξ in the gLWMCS, we also examined a specified LWMCS (sLWMCS). In the sLWMCS, the transition matrices from phase 1 are specific to the particular μs′ and d of the model used, which means that the interpolation in phase 3 is not necessary. However, phase 1 must be repeated whenever μs′ or d changes in any layer.
Precalculation of the transition matrices of a single layer (phase 1)
The aim of phase 1 is to use the path length without absorption to calculate R and T for each layer and for several different scattering powers with the MCS. The results will then be used to quickly derive R and T for layers with an arbitrary absorption coefficient. Assuming a path length l and layer thickness d0, the normalized path length ζ = l/d0 can be separated as {ζ0, ζ1, …, ζm-1, ζm}, where ζ0 = 0 and ζm = ∞. For convenience, the range [ζj-1, ζj] is represented by Ζj, where j = 1, 2, …, m. A single MCS produces one column each of R and T for path lengths without absorption and for a particular pair of Θi and the normalized reduced scattering coefficient ξ = μs′·d0. Here, RW,ξ(Θi→Θj, Ζk) and TW,ξ(Θi→Θj, Ζk) are the conditional probabilities that the exit angle and the path length are between [θj-1, θj] and [ζk-1, ζk], respectively, when the incident angle is Θi. The subscript W represents white (no absorption). Since those transition matrices are determined by the product of μs′ and d0, as mentioned in Sec. 2.3, d0 can be set as an arbitrary constant in the MCS. Each row in RW,ξ(Θi) and TW,ξ(Θi) forms a histogram of the path length. Since there is no absorption in the MCS of phase 1, the photons are never attenuated in the slabs; therefore, the tracking of a photon is terminated when the photon exits from a slab. In the sLWMCS, the particular values of ξ in the skin model are used for the precalculation. In the gLWMCS, however, the values of ξ used in the precalculation are not the same as those used in the model.
Resizing the thickness of a layer (phase 2)
Since, from similarity, the transition matrices can be associated with the product of μs′ and the thickness d, it is not necessary for d to equal the thickness d0 used in the precalculation. Thus, the ratio α of d to d0 is calculated and used as a scaling factor. The concept of similarity is as follows: the transition matrices for the case with reduced scattering coefficient μs′/α, absorption coefficient μa/α, and thickness α·d are the same as those for μs′, μa, and d, where α is an arbitrary factor. The probability that a photon follows an arbitrary trajectory magnified by α, with μs′/α, μa/α, and α·d, is the same as that of a photon following the original trajectory in a layer with μs′, μa, and d (Fig. 3) [5,6]. This means that the transition matrices in phase 1 can be expressed as a function of the normalized reduced scattering coefficient ξ = μs′·d and the scale factor α = d/d0. The form of the transition matrices as a function of ξ does not change with α, while ξ changes not only with μs′ but also with d. Therefore, if the transition matrices for the series ξ = {Ξ1, Ξ2, …, Ξm} are prepared as in phase 1, appropriate transition matrices can be derived for arbitrary μs′, μa, and d. Although Ξi takes discrete values, we can derive R and T for a given ξ by interpolating between the values of Ξi just below and just above ξ. In phase 2, an appropriate set of transition matrices is chosen. For a particular layer with μs′ and d, we choose the precalculated data set with ξ = μs′·d. In the sLWMCS, the transition matrices for the particular ξ in the skin model are calculated in phase 1. In the gLWMCS, the values of Ξi just below and just above ξ are obtained from phase 1, and the results are then interpolated in phase 3.
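Phase 2 can be sketched as follows, using the log-spaced grid Ξi = 10^(i/10−1) later adopted for the gLWMCS; the function name and example values are ours, not the paper's:

```python
import numpy as np

def bracket_xi(mus_prime, d, grid):
    """Return (xi, xi_minus, xi_plus, w), where w is the linear weight of
    xi_plus, so that f(xi) is approximated by (1 - w)*f(xi_minus) + w*f(xi_plus)."""
    xi = mus_prime * d                  # similarity: only the product mus' * d matters
    i = np.searchsorted(grid, xi)       # index of the first grid point >= xi
    xi_minus, xi_plus = grid[i - 1], grid[i]
    w = (xi - xi_minus) / (xi_plus - xi_minus)
    return xi, xi_minus, xi_plus, w

# Precalculated grid Xi_i = 10**(i/10 - 1), i = 0..30 (0.1 to 100).
grid = 10.0 ** (np.arange(31) / 10 - 1)

# Hypothetical layer: mus' in 1/cm, d in cm.
xi, lo, hi, w = bracket_xi(mus_prime=120.0, d=0.006, grid=grid)
```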
Coloring of transition matrices for a single layer (phase 3)
In the LWMCS, the absorbance for a particular absorption coefficient and path length can be calculated in accordance with the B-L law. Assuming that an absorption coefficient μa, thickness d, and normalized path length ζ = l/d are given, the absorbance A can be expressed using the normalized absorption coefficient μa·d. Therefore, the total amount of reflected and transmitted light can be expressed as a function of the angle. This vector becomes a column of the transition matrix for the associated incident angle. If we calculate for all incident angles and then combine them, the transition matrices of a layer with an arbitrary absorption coefficient are derived. In the sLWMCS, a particular ξ in the skin model is used in phase 1; therefore, the transition matrices can be calculated simply from Eqs. (9) and (10). In the gLWMCS, the value of ξ in the model does not usually match the value of ξ used in phase 1; instead, R and T are derived by simple linear interpolation with Eqs. (11) and (12). For a particular ξ, assuming that the transition matrices for Ξi were calculated in phase 1, the transition matrices for ξ are calculated by linear interpolation, where Ξ− and Ξ+ are the largest Ξi below ξ and the smallest Ξi above ξ, respectively. We number the layers from the top down. We do not allow the refractive index to change at the boundaries; instead, any change in the refractive index is considered to take place in a virtual layer. This means that when there is a change in the refractive index, the boundary (in the usual sense) should be counted twice, and the effect of the refractive-index ratio is treated as a characteristic of a virtual layer at that location. R, T, and L are defined as in Fig. 4(a). The notation of the transition matrix is expanded to multiple layers, and the transition matrices of the combined layers, from boundary k to boundary l, are expressed as Rk:l and Tk:l [Fig. 4(b)].
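Phase 3 ("coloring") can be sketched as follows, assuming the white matrices are stored as an (exit angle × incident angle × path-length bin) array and that midpoint values of ζ represent each bin; all names and numbers here are hypothetical:

```python
import numpy as np

def color(RW, zeta_mid, mua, d):
    """Apply the Beer-Lambert attenuation exp(-mua * d * zeta) to each
    path-length bin and sum, turning an (n, n, m) white transition
    array into an (n, n) transition matrix for absorption mua."""
    attenuation = np.exp(-mua * d * zeta_mid)   # shape (m,)
    return RW @ attenuation                     # sums over the last axis

rng = np.random.default_rng(0)
RW = rng.random((4, 4, 5)) * 0.01                 # toy white histograms
zeta_mid = np.array([1.1, 1.5, 2.5, 5.0, 12.0])   # normalized path lengths

R0 = color(RW, zeta_mid, mua=0.0, d=0.1)  # no absorption: plain bin sums
R1 = color(RW, zeta_mid, mua=5.0, d=0.1)  # absorbing layer: strictly smaller
```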
Lamination of layers with the adding-doubling method (phase 4)
Our aim in phase 4 is to derive the matrix R 1:N that contains the transition probability from an arbitrary incident angle to an arbitrary exit angle for the layers with boundaries 1 to N. We begin by combining two pairs of transition matrices of two adjacent layers into a single pair of transition matrices. Then, by using recursion, the transition matrices of an arbitrary number of layers can be combined into a pair of transition matrices.
The reflection and transmission transition matrices for a layer between the boundaries k and k + 1 (Fig. 4) can be written in terms of Rk:k+1(Θi→Θj) and Tk:k+1(Θi→Θj), which represent the discretized conditional transition probabilities of reflected and transmitted light, respectively; these can be expressed in the same manner as in Eqs. (3) and (4). Here, rk:k+1(Θi→θ) and tk:k+1(Θi→θ) are the conditional probability densities of the exit angle of reflected and transmitted light, respectively, when the incident angle is Θi.
Assuming depolarized light, the transition matrix of a virtual layer k:k+1 with a change in refractive indices (a single boundary in the usual sense) can be derived by using the Fresnel equations, where N1 and N2 are the refractive indices of the reflected side and the transmitted side, respectively. Here, Βi is the transmission angle, which can be derived by using Snell's law, and δi,j is the Kronecker delta. With regard to the transition matrix for transmission, the following relationship is useful: the transition matrix of transmission can be derived if we assume that the transmitted light is distributed uniformly across the range [βi−, βi+], where for the incident angles θi-1 and θi we have the transmission angles βi− and βi+, respectively. The angular dependency of the light flow at a boundary k can be expressed accordingly, and we can obtain similar relationships for the boundary 2:3. In addition, by considering two layers to be one composited layer, we can obtain the same kind of equation for the composited boundary 1:3. By eliminating L2,↓ and L2,↑, noting that R and T are noncommutative, and noting that the equations should hold for arbitrary values of L1,↓, L1,↑, L3,↓, and L3,↑, R1:3 and T1:3 can be expressed in terms of R1:2, R2:1, R2:3, T1:2, T2:1, and T2:3 [12], where E is the unit matrix. By interchanging 1 and 3, we can obtain similar equations for R3:1 and T3:1. Once R1:h and T1:h have been derived, by substituting R1:h, Rh:h+1, T1:h, and Th:h+1 for R1:2, R2:3, T1:2, and T2:3, respectively, R1:h+1 and T1:h+1 can be obtained. We can obtain Rh+1:1 and Th+1:1 in a similar way. Thus, the pairs of transition matrices can be derived recursively for an arbitrary number of layers. For binning of the path length, we adopted adaptive bins with unequal logarithmic divisions, as shown in Fig. 5.
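The adding step for two stacked layers can be sketched as follows; this is our transcription of the standard adding-doubling formulas (in the variables R1:2, R2:1, R2:3, T1:2, T2:1, T2:3, and E used above), checked against the scalar geometric series for two identical symmetric layers:

```python
import numpy as np

def add_layers(R12, T12, R21, T21, R23, T23):
    """Combine two stacked layers (adding-doubling). The matrices are the
    angular transition matrices; E is the identity. Returns (R13, T13) for
    light incident from above. Matrix products are noncommutative, so the
    order of the factors matters."""
    E = np.eye(R12.shape[0])
    M = np.linalg.inv(E - R21 @ R23)   # sums the multiple internal reflections
    T13 = T23 @ M @ T12
    R13 = R12 + T21 @ R23 @ M @ T12
    return R13, T13

# Sanity check with 1x1 "matrices": a symmetric layer with r = 0.3, t = 0.6.
r, t = np.array([[0.3]]), np.array([[0.6]])
R13, T13 = add_layers(r, t, r, t, r, t)
# Analytic geometric series for two identical symmetric layers:
# R = r + r*t**2/(1 - r**2),  T = t**2/(1 - r**2)
```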
With a particular constant σR, the normalized path length ζ is transformed into ηR = exp(σR × ζ) for R. For T, since the path length is greater than or equal to the thickness d of the layer, the path length is transformed into ηT = exp(σT × (ζ − 1)), noting that ζ = l/d. Then, ηR and ηT (ranging from 0 to 1) are divided in the following manner: a certain point ηc is set between 0 and 1, and the regions from 0 to ηc and from ηc to 1 are each divided into ml and mh sections, respectively. This binning can be characterized by four parameters (σ, ηc, ml, and mh). For R and T, the parameters are expressed as (σR, ηcR, mlR, mhR) and (σT, ηcT, mlT, mhT), respectively. In this way, the bin widths of ζ for shorter path lengths become narrower than those for longer path lengths. Another benefit of the transformation is that the semi-infinite range of path lengths [0, ∞] is transformed to the finite range [0, 1], which allows the whole range to be covered by a finite number of bins.
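The adaptive binning can be sketched as follows; we assume σ < 0 so that η = exp(σζ) maps [0, ∞) into (0, 1], and the parameter values below are hypothetical rather than those of Table 3:

```python
import numpy as np

def zeta_bin_edges(sigma, eta_c, m_l, m_h):
    """Bin edges in zeta implied by equal divisions in eta = exp(sigma*zeta).
    [0, eta_c] is split into m_l sections and [eta_c, 1] into m_h sections;
    sigma < 0, so eta decreases from 1 (zeta = 0) toward 0 (zeta -> inf)."""
    eta_edges = np.concatenate([np.linspace(0.0, eta_c, m_l + 1),
                                np.linspace(eta_c, 1.0, m_h + 1)[1:]])
    with np.errstate(divide="ignore"):
        zeta_edges = np.log(eta_edges) / sigma   # eta = 0 maps to zeta = inf
    return zeta_edges[::-1]                      # ascending in zeta

edges = zeta_bin_edges(sigma=-0.5, eta_c=0.4, m_l=3, m_h=4)
# edges run from zeta = 0 up to zeta = inf, with 3 + 4 = 7 bins in total
```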
Skin model
We used a two-layered model: an epidermis with uniform melanin, and an underlying dermis with uniform oxygenated and deoxygenated hemoglobin. The parameters are shown in Table 1. Here, Cm, Coh, and Cdh are the concentrations in each layer of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin, respectively. For εm(λ), we used the average absorption coefficient of the monomer melanosome at a concentration of 1 mole/liter; this was approximated as 6.6 × 10^11 × λ^−3.33 in cm−1, where λ is in nm [13]. For εoh and εdh, we used the extinction coefficients of oxygenated and deoxygenated hemoglobin, respectively, converted to a hematocrit of 45 [14]. The scales of Cm, Coh, and Cdh are ratios relative to the concentrations at which εm(λ), εoh(λ), and εdh(λ), respectively, were derived. The scattering coefficient μs(λ) was derived from the reduced scattering coefficient μs′(λ) [15][16][17] and the anisotropy factor g [18] as μs(λ) = μs′(λ)/(1 − g). For the error estimation presented in Sec. 3.4.1, μa was set from 0 to 290 cm−1 in intervals of 10 cm−1 for the epidermis and from 0 to 29 cm−1 in intervals of 1 cm−1 for the dermis. The range of μa from 0 to 290 cm−1 for the epidermis covers 0 to 20% for Cm, and that from 0 to 29 cm−1 for the dermis covers 0 to 1.1% for Coh and 0 to 1.0% for Cdh, for wavelengths in the range of 400 to 700 nm. These concentrations are the actual values for skin that is lightly to moderately pigmented [16]. Other than μa, the parameters of the skin model in Sec. 3.4.1 were the same as those of the skin model described above.
MCS
For calculating the MCS, the program MCML [19] was modified to record the histograms of reflectance and transmittance against the path length and to perform the calculations for an arbitrary incident angle. With the modified MCML, we executed the cMCS and phase 1 of the sLWMCS and gLWMCS. The number of photons for each condition was 10^5 [20]. For the standard condition of the skin model, the incident angle was set to 0°, and the reflectance was integrated over all exit angles.
Conventional MCS (cMCS)
As the standard, we used the conventional MCS (cMCS). In the cMCS, the parameters μs, g, μa, d, and the refractive index of each layer were set as described in Sec. 3.2, and photons were considered to travel in a multilayered model of skin.
sLWMCS
Following the procedure described in Sec. 2, we used an sLWMCS to calculate R and T for the skin model in Sec. 3.2. Phase 1 was calculated by the modified MCML under the conditions described in Table 2. The binning divisions for the incident angle and exit angle θi were both set to {0°, 1°, …, 89°, 90°}, and the representative values Θi were defined as {0.5°, 1.5°, …, 88.5°, 89.5°}. Phases 2 to 4 were executed using MATLAB® (Mathworks, Natick, MA, USA). The incident angle (for the whole model) was set to 0.5°, which approximates the 0° incidence of the standard condition. The thicknesses d of the slabs in the models of phase 1 were set to 0.06 mm and 4.94 mm for the epidermis and dermis, respectively, and the μs at each wavelength was used.
As mentioned above, the appropriate values of σR and σT depend on μs′, g, and d. The value of σ was determined visually such that the elements of the histogram did not concentrate in the lowest and highest bins (around 0 and 1 for η). The values of {σR, ηcR, mlR, mhR} and {σT, ηcT, mlT, mhT} for binning of the path length were derived empirically and are summarized in Table 3. As the output of phase 1, the size of the transition matrices with path length was 90 × 90 × 100. As the output of phase 3, the transition matrices lose the path-length information, so their size becomes 90 × 90.
The MCS was executed 180 times (90 incident angles × 2 layers) for each of 31 wavelengths from 400 nm to 700 nm at 10 nm intervals, 5580 times in total. The total processing time was about 16 days in our computational environment. The size of the base data set was 63 MB for the epidermis and 74 MB for the dermis as MATLAB data files.
gLWMCS
Following the procedure described in Sec. 2, R and T of the skin model in Sec. 3.2 were calculated by using the gLWMCS. Phase 1 was calculated by the modified MCML under the conditions described in Table 4. The same values that were used for the sLWMCS (Sec. 3.3.2) were used for the discretization of the incident and exit angles. The value of Ξi was set to 10^(i/10−1) for i in the range of 0 to 30, which means that Ξi varied from 0.1 to 100 at equal intervals on a log scale in the gLWMCS. The range of Ξi corresponds to 238-238095 cm−1 in μs for d = 0.06 mm (the epidermis in our skin model) and to 3-2886 cm−1 in μs for d = 4.94 mm (the dermis in our skin model). Since μs is 1473.3 cm−1 at 400 nm and decreases monotonically to 273.3 cm−1 at 700 nm in our skin model, this range covers μs in the epidermis and dermis at wavelengths from 400 to 700 nm. Owing to similarity, d0 essentially does not affect the results; however, it does affect the appropriate values of the binning parameters for the path length. The MCS was repeated for each Ξi; this was achieved by setting d to 1 cm and μs to Ξi/(1 − g) in the modified MCML. Phases 2 to 4 were executed using MATLAB®. The value of σ was defined visually such that the elements of the histogram did not concentrate in the lowest and highest bins (around 0 and 1 in η), as for the sLWMCS. The parameters for binning of the path length are described in Table 3. The size of the transition matrices was the same as in the sLWMCS.
The MCS was executed 90 times (90 incident angles) for each of the 31 Ξ i from 10 (0/10-1) to 10 (30/10-1) , 2790 times in total. The total processing time was about 12 days in our computational environment. The base data set amounted to 70 MB as MATLAB data files.
Error estimation
To estimate the error, the skin model in Sec. 3.2 was used. For μ s , we used the values at 400, 500, 600, and 700 nm, which were, respectively, 1473.3, 712.7, 414.9, and 273.3 cm −1 . The value of μ a for the epidermis was set to range from 0 to 290 cm −1 in intervals of width 10, and μ a for the dermis was set to range from 0 to 29 cm −1 in intervals of width 1. We calculated the relative error E for the cMCS, sLWMCS, and gLWMCS. The relative error of the reflectance from an evaluated method was defined as E = (R method − R cMCS ) / R cMCS , where R method is the reflectance from the method being evaluated and R cMCS is the reference reflectance from the cMCS.
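In code, the error metric above is simply the signed relative difference against the cMCS reference. A minimal sketch (Python standing in for the paper's MATLAB post-processing):

```python
def relative_error(r_method, r_cmcs):
    """Relative error E of a reflectance value against the cMCS reference:
    E = (R_method - R_cMCS) / R_cMCS."""
    return (r_method - r_cmcs) / r_cmcs
```

Because E keeps its sign, averaging it over many conditions (as in Table 5) separates systematic bias from statistical scatter, which the paper reports via the standard deviation.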
For the cMCS, the reflectance was calculated a second time under each condition, in addition to the reference calculation, and the results from the second trial were used to evaluate the method. Since the only difference between the first and second calculations is the sequence of random numbers, E for the cMCS represents the statistical error. The difference between the sLWMCS and the gLWMCS demonstrates the effect of interpolation.
The reflectance spectra representing actual human skin under the conditions of normal, occlusion, and post-occlusion reactive hyperemia were simulated with the cMCS, sLWMCS, and gLWMCS. In those simulations, the concentrations of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin were selected as follows: normal condition, C m = 6.15%, C oh = 0.10% and C dh = 0.10%; occluded condition, C m = 6.21%, C oh = 0.00% and C dh = 0.19%; post-occlusion reactive hyperemia, C m = 6.20%, C oh = 0.33% and C dh = 0.13%. The values of C m , C oh , and C dh were derived from the estimated values for human skin at 0, 300, and 510 s shown in Fig. 10 of Ref. [10]. The MCML for the cMCS and phase 1 of each LWMCS were executed on Windows 8 ® (64-bit). The PC had 4 GB of memory, and the CPU was a Core i5-3230M ® , 2.60 GHz (Intel, Santa Clara, CA, USA). For the calculation of phases 2 to 4 in the sLWMCS and gLWMCS, we used MATLAB ® 7.0.4 on a Windows XP ® emulator in VMWare Player ® 5.0.2 on Windows 8 (64-bit); note that MATLAB ® 7.0.4 does not work on Windows 8. Processing time was evaluated with the MATLAB profiler. Windows XP ® and Windows 8 ® are products of Microsoft (Redmond, WA, USA), and VMWare Player ® is a product of VMWare ® (Palo Alto, CA, USA). The memory allocated to the Windows XP emulator was 2 GB, and 3 of the 4 CPU cores were allocated to the emulator.
Estimating chromophore concentrations from a spectrum
The inverse problem, such as the estimation of chromophore concentrations, is where rapid calculations become important, because the spectrum must be calculated iteratively under several different conditions. From a given spectrum, we searched for the chromophore concentrations as an optimization problem that minimized the evaluation function F. The optimization problem was solved with the built-in function fminsearch of MATLAB, in which the Nelder-Mead simplex method is implemented. The skin model of Sec. 3.2 was used, and the wavelength was set from 400 to 700 nm at intervals of 10 nm. However, for the actual skin spectra in the experiment of Sec. 3.4.3, the range from 500 to 600 nm at intervals of 10 nm was used, because this wavelength range is often used for chromophore estimation due to the clearly distinguishable difference between oxygenated and deoxygenated hemoglobin. In this case, the ranges of 400 to 490 nm and 610 to 700 nm were extrapolated using the respective LWMCS with the estimated chromophore concentrations. The evaluation function to be minimized in the optimization problem was defined as F = Σ λ {S in (λ) − S(C m , C oh , C dh ; λ)} 2 , where S(C m , C oh , C dh ; λ) is the estimated spectrum for the given chromophore concentrations and S in (λ) is the input spectrum.
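The paper's fit uses MATLAB's fminsearch; the same Nelder-Mead search can be sketched in Python with SciPy. The forward model below is a hypothetical stand-in (the real S(C m, C oh, C dh; λ) would run LWMCS phases 2 to 4); its basis functions and coefficients are invented purely so the example is self-contained and identifiable:

```python
import numpy as np
from scipy.optimize import minimize

wavelengths = np.arange(400, 701, 10)  # nm, 31 points as in the paper

def estimated_spectrum(c, lam):
    # Placeholder forward model standing in for the LWMCS-computed
    # spectrum; each concentration weights a distinct spectral shape.
    cm, coh, cdh = c
    b_mel = 0.1 * (500.0 / lam)                  # melanin-like decay
    b_oxy = np.exp(-((lam - 540.0) / 20.0) ** 2)  # HbO2-like band
    b_deox = np.exp(-((lam - 560.0) / 20.0) ** 2)  # Hb-like band
    return np.exp(-(cm * b_mel + coh * b_oxy + cdh * b_deox))

def F(c, measured):
    # Sum-of-squares misfit, as in the evaluation function above.
    return np.sum((measured - estimated_spectrum(c, wavelengths)) ** 2)

# Input spectrum with known concentrations (1%, 0.1%, 0.1%), initial
# guess deliberately far away (2%, 0.5%, 0.5%), as in the paper.
measured = estimated_spectrum([1.0, 0.1, 0.1], wavelengths)
res = minimize(F, x0=[2.0, 0.5, 0.5], args=(measured,),
               method="Nelder-Mead")
```

The recovered `res.x` should land close to the true concentrations; with the real LWMCS forward model, each evaluation of F costs only the milliseconds of phases 2 to 4 rather than a full cMCS run.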
With the spectrum from the cMCS as the input, we used the sLWMCS and gLWMCS to search for the chromophore concentrations that minimized F. The results were compared with the input of the cMCS. For the input of cMCS, the concentrations of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin were set to 1%, 0.1%, and 0.1%, respectively, which are in the range of Asian skin, and the initial values of the iteration were set to 2%, 0.5%, and 0.5%, respectively. The processing time and number of iterations were evaluated with the MATLAB profiler.
Experiments with in vivo human skin
To confirm the applicability of the proposed method to actual human skin, we performed time-series measurements of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin on in vivo human skin tissue during cuff occlusion. The experimental procedure was approved by the Ethical Committee for experiments involving human subjects at Tokyo University of Agriculture and Technology. Written informed consent was obtained from a Japanese male subject. A pressure cuff was fixed around the upper arm of the subject, and a pressure of up to 250 mmHg was applied by using a rapid cuff inflator (E-20, D.E. Hokanson Inc., Bellevue, WA, USA). Figure 6 schematically shows the experimental setup for measuring diffuse reflectance spectra. A 150-W halogen-lamp light source (LA-150SAE, Hayashi Watch Works Co., Ltd, Tokyo, Japan), which covers the visible wavelength range from 400 to 700 nm, illuminates the skin surface of the forearm via a light guide and lens with a spot diameter of 4 mm. The diameter and focal length of the lens are 54 mm and 100 mm, respectively. In the time-series measurement, the forearm of the subject was fixed on the sample stage to prevent motion artifacts. The skin surface was fixed at the sample port of an integrating sphere (RT-060-SF, Labsphere Inc., North Sutton, NH, USA). The detected area of the skin surface was circular, with a diameter of 22 mm. Before measuring the reflectance spectra, we measured the intensity of the light incident on the skin by using an optical power meter. In the 400-700 nm range, the maximum power was 495 μW at 510 nm, and the minimum power was 45 μW at 700 nm. Light diffusely reflected from this area was received at the input face of an optical fiber with a core diameter of 400 μm placed at the detector port of the integrating sphere.
The optical fiber transmits the received light to a multichannel spectrometer (SD-2000, Ocean Optics Inc., Dunedin, FL, USA), which measures reflectance spectra in the visible wavelength range under the control of a personal computer. A standard white diffuser with 99% reflectance (SRS-99-020, Labsphere Inc.) was used to correct for the spectral intensity of the light source and the response of the spectrometer. In the in vivo human skin measurement, a single reflectance spectrum was obtained by averaging ten successive recordings, each made with an integration time of 200 ms; acquiring a single reflectance spectrum therefore took 2 s in total. In the time-series measurement of in vivo reflectance spectra before, during, and after cuff occlusion, a single reflectance spectrum was acquired repeatedly at 30-s intervals for 14 min (840 s). Following the estimation procedure for chromophore concentrations described in Sec. 3.4.2, the concentrations of melanin C m , oxygenated hemoglobin C oh , and deoxygenated hemoglobin C dh were obtained for each time point. The concentration of total hemoglobin C th was simply calculated as the sum of C oh and C dh . The tissue oxygen saturation StO 2 was calculated as StO 2 % = 100 × {C oh /(C oh + C dh )}.
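The derived quantities at each time point follow directly from the two hemoglobin concentrations. A one-line sketch of the definitions above (Python standing in for the paper's MATLAB):

```python
def tissue_oxygenation(c_oh, c_dh):
    """Total hemoglobin C_th = C_oh + C_dh and tissue oxygen saturation
    StO2 [%] = 100 * C_oh / (C_oh + C_dh), per the definitions above."""
    c_th = c_oh + c_dh
    sto2 = 100.0 * c_oh / c_th
    return c_th, sto2
```

For example, the post-occlusion hyperemia concentrations quoted earlier (C oh = 0.33%, C dh = 0.13%) give C th = 0.46% and StO 2 ≈ 72%.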
Error estimation
In Fig. 7, the values of E for the cMCS, sLWMCS, and gLWMCS are plotted against the cMCS for various conditions of μ a in the epidermis and dermis. The average and standard deviation of E over the whole range of μ a are shown in Table 5. On average, the absolute values of E for both the sLWMCS and the gLWMCS were one order of magnitude larger than those of the cMCS. The standard deviations of E for both the sLWMCS and the gLWMCS were smaller than that of the cMCS. The values of E increase on average for higher values of μ a in Fig. 7; this is probably due to the discretization error in the binning of path length. The variations in E also increase for higher values of μ a in Fig. 7, probably due to the statistical error originating from the Monte Carlo methods: the number of photons reemitted from the skin model decreases as the value of μ a increases, and in such a case the error in reflectance is statistically increased. This limits the accuracy of the sLWMCS and gLWMCS as well as the cMCS. Increasing the number of incident photons used in phase 1 will overcome this problem. The simulated reflectance spectra representing actual human skin under the conditions of normal, occlusion, and post-occlusion reactive hyperemia obtained from the cMCS, sLWMCS, and gLWMCS are shown in Fig. 8. The processing time of the cMCS was 25270 s for the derivation of a spectrum (31 wavelengths) on average. The processing times of phases 2 to 4 in the sLWMCS and gLWMCS were 3 s and 5 s, respectively, for the derivation of each spectrum, which are, respectively, about 8000 times and 5000 times faster than the cMCS. As can be seen in Fig. 8, the skin reflectance spectra from both the sLWMCS and the gLWMCS were practically identical to that from the cMCS. The difference between the cMCS and the sLWMCS or gLWMCS was more than two orders of magnitude smaller than the reflectance, which reflects the precision shown in Table 5.
Estimating chromophore concentrations from the cMCS spectrum
The results are summarized in Table 6. For the sLWMCS and gLWMCS, the estimated concentrations of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin are shown for the spectrum from the cMCS with concentrations of 1%, 0.1%, and 0.1%, respectively. The processing time and iteration count are also shown. The processing time of phase 2 is given as 0 s, because only the homothetic ratio is calculated in that phase. Figure 9(a) shows comparisons between the typical measured reflectance spectra obtained from actual human skin under the normal, occluded, and post-occluded conditions and the reflectance spectra reconstructed from the chromophore concentrations estimated by the optimization method with the gLWMCS. The reconstructed reflectance spectrum with the gLWMCS agrees reasonably with the measured reflectance spectrum in the wavelength range used for the fitting (500 to 600 nm). Figure 9(b) shows the time courses of C m , C oh , C dh , C th , and StO 2 during cuff occlusion at 250 mmHg. The average values of C m and C th were 6.24 ± 0.06% and 0.211 ± 0.008%, respectively, in pre-occlusion (normal), which are close to typical values for Japanese subjects reported in the literature [10,21,22]. The average value of 44.3 ± 3.4% for StO 2 in pre-occlusion agrees with the mean blood oxygen saturation of normal human subjects (range 30.2-52.4%) reported in the literature [23]. During cuff occlusion, C oh decreased and C dh increased. The value of StO 2 exhibited the well-known deoxygenation curve, in which the oxygen saturation falls exponentially. The slight increase in C th probably has a physiological cause: during occlusion, the venous outflow is reduced more than the arterial inflow. After the occlusion, C th increased substantially due to the endothelial function.
Despite the remarkable changes in C oh , C dh , C th , and StO 2 , the value of C m , which is independent of temporary hemodynamics, remained almost unchanged during the measurements. In this way, the physiological conditions of actual human skin tissue were successfully monitored by using the proposed method.
Discussion
As can be seen in Fig. 7 and Table 5, the values of E for the sLWMCS and gLWMCS were less than 1% across the values of μ s ' and μ a included in the range of actual human skin, from light to moderately pigmented [16]. This shows that we achieved our aim of quickly obtaining results that are in good agreement with those of the cMCS on average. The standard deviations of these methods were smaller than that of the cMCS, meaning that the statistical errors in the sLWMCS and gLWMCS are smaller than that of the cMCS when the same number of photons is used. In the MCS, the reflectance is calculated stochastically, and so statistical errors result from the finite number of photons. In the LWMCSs, the MCS was executed for each layer and each incident angle, so the total number of photons was much larger than in the cMCS. On the other hand, the average of the values of E from the LWMCSs was larger than that of the cMCS, which is probably due to the discretization error in the path length and the angles in both the sLWMCS and gLWMCS. The errors of the sLWMCS are close to those of the gLWMCS, which means that ξ was successfully estimated by interpolation. The value of σ in the gLWMCS also seems appropriate, based on a comparison of the errors of the gLWMCS and the sLWMCS with the cMCS. In Fig. 9(a), there is good agreement in the wavelength range from 500 to 600 nm, but poorer agreement near 400 to 450 nm and 650 to 700 nm. The discrepancy between the measured and fitted reflectance spectra is observed not only in the shorter wavelength region, where light is strongly absorbed by hemoglobin and melanin, but also in the longer wavelength region. Therefore, the discrepancy is probably due to the difference between the simulation model and actual human skin rather than to the higher absorption coefficient.
In real cases, the thickness of the layers and the other structural attributes of actual skin are not exactly the same as in the skin model that we used. We also did not consider polarization, although, according to Fresnel's law, transmitted and reflected light are partially polarized, and this affects reflectance. The calculations of the sLWMCS and gLWMCS were four orders of magnitude faster than that of the cMCS in our calculation model and computational environment. The calculation time would become even shorter if the latest MATLAB were used without an emulator. It is worth noting that the processing times of these methods are not affected by the number of photons, the absorption coefficient, the scattering coefficient, or the anisotropy factor. In contrast, the calculation time of the cMCS strongly depends on those parameters. Although the processing time of phase 1 varies with those parameters, phase 1 is only calculated once, and thus it does not affect the processing time of phases 2, 3, and 4. The statistical errors can therefore be reduced as much as required, without adding time to phases 2 to 4, by increasing the number of photons in the phase-1 MCS.
The time to calculate the reflectance was sufficiently short that the chromophore concentrations could be successfully estimated by solving an optimization problem (Table 6). The calculation time of the estimation was about ten to twenty minutes in the environment we used; this is comparable to that of a single cMCS run. This may be acceptable for estimating chromophore concentrations, and it is better than the several weeks required for doing this with the cMCS, whose processing time is more than 5000 times longer than that of the gLWMCS. The optimization problem for the chromophore estimation was also applied to an actual skin reflectance spectrum, and the resulting time courses of changes in chromophore concentrations agree with the results reported in the literature [10,21].
We prefer the gLWMCS in terms of the flexibility of the base data set, although it requires a longer processing time than the sLWMCS. The gLWMCS can be extended to handle more complex skin models, such as a multilayer model including a subcutaneous fat layer, without additional calculation of phase 1. Even when ξ of the new layer is larger than any Ξ i , transition matrices for larger ξ can be created by doubling a layer of Ξ i (or multiplying further). This is a significant characteristic of the gLWMCS. Only in the case where all parameters except μ a are held constant is the sLWMCS superior, because of its shorter processing time. However, if any of those parameters, such as the thickness or μ s , is to be treated as variable, additional sets of base data must be prepared for the sLWMCS, and the calculation time for this preparation becomes crucial. The OPLM [8] was also developed to quickly derive reflectance values identical to those obtained from a cMCS of a multilayered model. However, the gLWMCS has clear advantages over the OPLM in terms of the ease with which the structure of the model can be changed. With the OPLM, if there is any change in the parameters or the number of layers, the baseline data must be recalculated. The gLWMCS, however, allows us to alter the μ s ', thickness, or refractive index of any layer, or to add more layers, without recalculating the baseline data. In terms of calculation time, we note that for the OPLM, the number of dimensions of the path-length matrix increases if we expand the number of layers, which causes the computational cost to increase exponentially. With the gLWMCS, additional calculations are only required for creating the transition matrices and for combining the additional layer with the other layers, so the computational cost increases only linearly with the number of layers.
According to a study of the OPLM [8], the errors were larger than those of the sLWMCS and gLWMCS, especially for longer wavelengths [8]. However, this is probably due to the equal divisions in the binning of the path length, and we assume it would be improved if the binning were optimized.
For some applications, such as imaging, the calculation time is still too long. Since most of this is due to arithmetic operations on matrices in phases 3 and 4, the time could be shortened by reducing the size of the matrices. However, if this is done by simply decreasing the number of bins in the path length histograms, the accuracy will deteriorate. Optimizing the binning strategy (i.e., finding a better binning strategy than equal logarithmic widths with dual partitioning) will be effective for improving both the calculation time and the accuracy. The incident and exit angles were discretized with equal angle intervals, but other methods, such as equal cosine intervals [12], could be considered as possible alternatives to maximize the performance. Increasing the number of photons in the MCS will improve the error; it will lengthen the precalculation (phase 1), but not the subsequent calculations (phases 2 to 4). When estimating the chromophore concentration, the initial values affect the number of iterations required. We intentionally set the starting values far from the actual values to demonstrate the robustness of the methods. However, if we determine appropriate starting values by using an approximation method, such as a diffusion model, the iteration count, and therefore the processing time, will be reduced. In addition, we could use fewer than 31 wavelengths, and the method used for interpolation could be improved. In the gLWMCS, the transition matrices are calculated twice for each layer (for Ξ + and Ξ -), but they are only calculated once in the sLWMCS. This is the primary reason for the difference in their processing times. If the transition matrices could be interpolated at step 2, then they would not need to be calculated twice. Upgrading the computer hardware and software would also reduce the calculation time, since the environment we used was not the best currently available.
The selection of the Ξ i could also affect the accuracy of the calculation. A narrower interval of the Ξ i will enlarge the amount of baseline data and lengthen the time to complete phase 1, but it will improve the results. In optimizing the interval of the Ξ i , the dominant type of scattering should be taken into account. For low Ξ i , single scattering is dominant, while multiple scattering is dominant for high Ξ i . In the intermediate region, the linear interpolation error is expected to become larger due to the phase transition. Therefore, for the most accurate calculations of reflectance and transmittance, the interval of the Ξ i in the intermediate region should be smaller than those of the other regions.
In the cMCS for a multilayered medium, the photons propagate throughout the medium due to scattering, and these scattered photons can take a random path across multiple layers. In the LWMCS, on the other hand, scattered photons are tracked within a single layer only, in phase 1. In the lamination process of multiple layers in phase 4, only the energy transfer of photons across the different layers is treated to calculate the diffuse reflectance and total transmittance. This lamination process in phase 4 inherits the characteristics of the AD method [12]. It cannot be used for time-domain analysis, and it cannot derive the reflectance as a function of the distance from the incident point. The model is limited to a homogeneous multilayer model in which the interfaces are parallel and infinite, and each layer must have uniform scattering and absorption properties and a uniform refractive index. The incident light, the detected light, or both must be axially symmetric. However, if we limit the model to the condition g = 0, the method may be adapted to a nonsymmetric condition by treating scattered and nonscattered light separately. This works because once the light has been scattered, its direction is independent of the azimuthal angle of the incident light.
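The lamination idea can be illustrated with a scalar toy version of the adding formulas: the total reflectance of two stacked layers is the sum of the geometric series of inter-layer reflections. This is only a sketch of the principle; the actual phase 4 operates on angle-resolved transition matrices, and the symmetric-layer assumption below is ours, not the paper's:

```python
def laminate(r1, t1, r2, t2):
    """Scalar analogue of the phase-4 lamination (adding method):
    combine total reflectance/transmittance of two layers by summing
    the geometric series 1 + r1*r2 + (r1*r2)**2 + ... of internal
    bounces. Assumes each layer reflects/transmits symmetrically."""
    denom = 1.0 - r1 * r2
    r12 = r1 + t1 * r2 * t1 / denom   # reflect off layer 2, back out through 1
    t12 = t1 * t2 / denom             # transmit through both layers
    return r12, t12
```

For lossless layers (r + t = 1 in each), the combined pair is also lossless, which is a quick sanity check on the series summation.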
This method also inherits benefits of the AD method [12], as follows. The scattering properties of each layer are derived separately, which is a remarkable characteristic and enables a physical interpretation of the results. In addition, the fluence rate at an arbitrary depth, or the absorption in a particular layer, can be derived from R and T for the layers above the depth of interest as well as those for the whole medium, based on the theory of the AD method [12].
We only varied μ s to model the scattering characteristics. If we wish to consider varying the anisotropy, the base transition matrices can be pre-calculated for various values of the anisotropy coefficient, in addition to various values of μ s . In this case, the calculation time will increase since the interpolation will become two dimensional.
Conclusion
We developed a method to quickly derive results equivalent to those of the cMCS for the calculation of reflectance from a layered medium and with a broad range of applicable parameters. Once the baseline data set is prepared, it can be applied to various multilayer models, including human skin. The difference between the reflectance calculated by our method and that of the cMCS was less than about 1% over the range of μ s ' and μ a of light to moderately pigmented skin. In addition, we successfully used the method to estimate the chromophore concentration from a spectrum; this required iterative calculations of the reflectance. We expect that this method can substitute for the cMCS when the total amount of reflection is the concern. With this fast and accurate method, the parameters to be fit in the inverse problem can include the thickness, the scattering coefficient and anisotropy factor, which are usually assumed to be constant due to limitations in computation time and resources. We discussed ways in which the processing time can be further shortened while still maintaining accuracy, and this should be further studied in future work.
Capacitor performance limitations in high power converter applications
High-voltage, low-inductance capacitors are used in converters as HVDC links, in snubber circuits, and as submodule (MMC) capacitances. They enable large peak currents under high-frequency or transient voltage conditions. On the other hand, using capacitors with larger equivalent series inductances carries the risk of transient overvoltages, with a negative effect on the lifetime and reliability of the capacitors. The allowable limits of such current and voltage peaks are determined by the ability of the converter components, including the capacitors, to withstand them over the expected lifetime. In this paper, results are described from investigations of the electrical environment of these capacitors, including all the conditions they would be exposed to, in order to find the trade-offs needed to select a suitable capacitor. Different types of capacitors with the same voltage ratings and capacitances were investigated and compared a) on a component scale, characterizing the capacitors' transient performance, and b) as part of different converter applications, where the series inductance plays a role. In that way, better insight is achieved into how the capacitor construction can affect the overall performance of the converter.
Introduction
Global wind power growth is foreseen to continue in the future with the development of large-scale wind power plants (WPPs) located far offshore and requiring HVDC as the export connection. Integration of these WPPs into the onshore grids will develop from point-to-point connections to a transnational multi-terminal network in which the transmission capacity serves both to export the wind power and to facilitate power trading between countries. In such a situation, application of multi-terminal VSC-HVDC transmission is considered the favorable technological solution due to the multiple advantages it provides (active/reactive power control, long-distance transmission, etc.). A control strategy capable of accommodating different dispatch schemes is, however, required [1]. Introducing such a system to the grid necessitates investigating its behavior in normal conditions but, more importantly, in anomalous conditions, when various types of faults occur. These faults cause situations where transients can affect the converter. To get a better understanding of the effects of these transients, we need to better comprehend the converter components and how they can be characterized. In this paper the modular multilevel converter will be the subject of our investigation, but the same theory can be used in other topologies as well. The main components of a modular multilevel converter are the submodules, each of which consists of two IGBTs (T1 and T2) with anti-parallel diodes and a capacitor C, as shown in figure 1. The submodule can attain two different states, being either turned on or turned off. A submodule is defined as turned on when T1 is on and current is conducted through the submodule capacitor; the voltage across T2 is then equal to the capacitor voltage.
When the submodule is turned off, T2 is conducting and T1 has stopped conducting; the current therefore bypasses the submodule capacitor, and the submodule is seen as a short circuit. In this work, we will highlight the characteristics of the submodule capacitor C and the importance of having a correct representation of it in later work, in order to capture the correct influence of faults on the converter. The size of the capacitor is a very important factor in its performance, and for the selection of a suitable size, different aspects have to be considered. Switching actions in the converter unit will introduce a ripple in the direct voltage. In order to minimize the ripple in the dc voltage, large submodule capacitors are required. The capacitor also needs to be able to withstand the maximum voltage and current that might occur. However, application of large submodule capacitors results in slower changes of the dc voltage in response to changes in the power exchanged at the dc side of the converter. This will result in a slower discharging of the submodule capacitors if the dc voltage is reduced. On the other hand, application of a small submodule capacitor results in a fast response to changes in the instantaneous power exchanged, but at the expense of a larger ripple in the dc voltage, and more capacitors are needed to make up the submodule voltage. Thus, the total capacitance of the submodule capacitors can be approximated by [2].
where V rip is the allowable peak-to-peak voltage ripple and I AVG is the average current conducted through the capacitor in half a period. The submodule capacitor cannot simply be modeled as an ideal capacitor, as this component, besides the capacitance, also includes some inductance, known as leakage inductance, parasitic inductance, or the Equivalent Series Inductance (ESL), which is mainly caused by the leads and internal connections used to connect the capacitor plates or foils to the outside environment. The ESL only starts to matter at high frequencies, in particular at the resonance frequency it forms together with the capacitance. The resistance known as the Equivalent Series Resistance (ESR) covers the physical series resistance in the capacitor (e.g. the ohmic resistance of the leads and plates or foils). Including all parasitic components, the model of the submodule capacitor is as shown in figure 2 [3]. For this reason, and because we want to include the effects of the capacitor under transient conditions, we chose to perform an FRA (frequency response analysis, using a gain/phase analyzer) of these submodule capacitors [4]. From the results of the FRA, we use Gaussian elimination to separate ESR, ESL, and C at the important frequencies.
Gaussian elimination is the well-known method of solving a linear system Ax = b consisting of m equations in n unknowns [5]. The augmented matrix can be seen in figure 3; transformed into triangular form, it can be seen below.
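The elimination-then-back-substitution procedure described here can be sketched in a few lines (Python rather than the authors' Matlab; partial pivoting is added for numerical robustness, which the text does not discuss):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by forward elimination (with partial pivoting)
    to triangular form, followed by back substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest remaining pivot to row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        # Eliminate column k below the pivot.
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]
    # Back substitution on the triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

If a pivot becomes (numerically) zero after elimination, the system has no unique solution, which corresponds to the solvability check mentioned below.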
Fig 4: Triangular form of the augmented matrix
The forward elimination indicates whether the system has a solution; if so, back substitution is used to find it. This method is used in our work to separate the three individual components by taking three points from the measured impedance curve and assuming linearity between them. Very small frequency steps might cause considerable variations within small frequency changes; for this reason, a smoothing approximation is applied to the measured data to obtain a more linear curve. Figure 5 shows a typical result, clearly illustrating the capacitive behavior at lower frequencies and the inductance dominating at higher frequencies. The gain/phase analyzer gives a complex number for each frequency, as modulus |Z| and argument ∠Z. Using these three points, we can set up three equations with three unknowns.
With the augmented matrix, the inverse matrix A -1 is computed to extract the individual component values. It should be noted from equation (6) that the capacitance is computed as the inverse capacitance. Because R, L, and C change with frequency, the points need to be close enough that this change is very small. On the other hand, the change of impedance with frequency has to be sufficiently large to allow solving the linear equation system.
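One plausible way to set up such a linear system, assuming the series model Z(ω) = ESR + jωESL + 1/(jωC), is to take the real part at one frequency and the imaginary parts at two nearby frequencies, giving three equations in ESR, ESL, and the inverse capacitance D = 1/C. This is our illustrative construction, not necessarily the authors' exact choice of equations:

```python
import numpy as np

def extract_esr_esl_c(f1, z1, f2, z2):
    """Fit Z(w) = ESR + j*w*ESL + 1/(j*w*C) to two complex impedance
    readings. Unknowns: ESR, ESL, and D = 1/C (note the inverse
    capacitance, as in the text). Returns (ESR, ESL, C)."""
    w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
    A = np.array([[1.0, 0.0, 0.0],         # Re(Z1) = ESR
                  [0.0, w1, -1.0 / w1],    # Im(Z1) = w1*ESL - D/w1
                  [0.0, w2, -1.0 / w2]])   # Im(Z2) = w2*ESL - D/w2
    b = np.array([z1.real, z1.imag, z2.imag])
    esr, esl, d = np.linalg.solve(A, b)
    return esr, esl, 1.0 / d
```

As the text notes, the two frequencies must differ enough for the second and third rows to be well conditioned, yet be close enough that R, L, and C are effectively constant between them.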
Evaluating different capacitors
As mentioned before, the size of the capacitor greatly influences its behavior, for several reasons. From the resonance frequency ω 0 = 1/√(LC) we know that the higher the capacitance, the lower the resonance frequency. Nevertheless, with smaller resonance frequencies new challenges appear, bearing in mind that relatively low-frequency transients might occur; more distortion will then arise at the converter legs when such transients are induced. In figure 6 we see the frequency responses of various capacitors with different capacitances and voltage ratings. As expected, the resonance frequencies of the capacitors with higher capacitance are found at lower frequencies. Nevertheless, on closer inspection, we see that the resonance frequencies of capacitors with high voltage ratings and capacitance are actually lower than expected. This is due to the physical size of large capacitors: at larger physical sizes, most electrolytic capacitors are basically a large coil of flat wire, with a higher inductance than a flat construction would have. This inductance, along with the small amount of inductance from the wire leads, makes up the ESL of the capacitor, and bigger capacitors usually mean more layers in both wound and stacked capacitors, resulting in an increase in the parasitic inductance. The enlarged ESL consequently causes a low resonance frequency.
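The self-resonance relation is easy to evaluate for candidate components. A minimal sketch (the component values in the usage comment are invented examples, not measurements from the paper):

```python
import math

def resonance_frequency(esl, c):
    """Self-resonance f0 = 1 / (2*pi*sqrt(ESL * C)) of the series
    RLC capacitor model, with ESL in henries and C in farads."""
    return 1.0 / (2.0 * math.pi * math.sqrt(esl * c))

# Example: ESL = 50 nH, C = 10 uF gives f0 of roughly 225 kHz;
# quadrupling C halves f0, illustrating the trend discussed above.
```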
[6]. For further investigation of this behavior, three capacitors were chosen, and through Gaussian elimination we separated the equivalent components to see how each behaves on its own. Looking at figure 7, the inductance behavior confirms that the bigger the capacitor, the larger the parasitic inductance; table 1 likewise shows that the big capacitors have more than twice the equivalent inductance. The upper graph shows that the resistance falls at very low frequencies, but from around 100 Hz it slowly starts to increase, as expected due to the skin effect. We notice that at low frequencies we are not able to obtain correct values of the inductance, while at high frequencies more reasonable inductance values can be extracted but the capacitance values become improbable. Around the resonance frequency we obtain acceptable values for both components. This phenomenon does not occur when the method is applied to mathematically generated impedances, which we used for verification of the method. In Matlab we created the same type of frequency response as assumed for a real capacitor: we varied the resistance with frequency to represent the skin effect, and also varied the inductance, since the skin effect affects the internal inductance as well. As figure 8 shows, we were able to obtain all values at all frequencies with a very small error margin, which indicates that our method works as expected. Nevertheless, we have to consider the accuracy of the measuring tool used to obtain the impedance sweeps; it could be the bottleneck, since it has some limitations, especially at the edges of its frequency range.
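The resonance relation ω₀ = 1/√(LC) discussed above can be checked numerically. The component values below are invented illustrations (not measurements from the paper), chosen only to show that a larger capacitance combined with a larger ESL pushes the self-resonance frequency down:

```python
import math

def resonance_hz(esl_h, cap_f):
    # f0 = 1 / (2*pi*sqrt(ESL * C)): self-resonance of the series L-C pair
    return 1.0 / (2.0 * math.pi * math.sqrt(esl_h * cap_f))

# hypothetical small vs large electrolytic capacitor
f0_small = resonance_hz(15e-9, 10e-6)    # 10 uF with 15 nH ESL
f0_large = resonance_hz(35e-9, 470e-6)   # 470 uF with 35 nH ESL
```

For these example values the large capacitor resonates roughly a decade lower, consistent with the trend seen in figure 6.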
Conclusion
This paper has presented a simple way of evaluating capacitors for converter usage. FRA measurements were performed on various types of capacitors to indicate which types are best in this context. It has been shown that, by using Gaussian elimination, separation of the equivalent series inductance (ESL), equivalent series resistance (ESR), and C is possible. This provides a good starting point for converter design in a specific environment. The present work allows for improving component models and implementing the results in an EMTDC simulation tool to further investigate capacitor behavior under transient conditions.
"Engineering",
"Physics"
] |
A brief history of evidence-based medicine
| INTRODUCTION: Evidence-based medicine (EBM) is one of the most widespread trends in contemporary medical education. It proposes a scientific framework not only for medical training but also for medical research and practice. However, knowledge about EBM's roots and historical development is not common within the Brazilian medical community. Indeed, the most common publications for non-specialist medical readers, such as handbooks and tutorial papers on EBM, are not sufficiently rich to provide historical knowledge. OBJECTIVES: To present a brief narrative of the historical development of evidence-based medicine. METHODS: Historiographical review essay. MATERIALS: Primary and secondary sources on the history of EBM. RESULTS: As the founder of EBM, David Sackett argued against clinical decisions based solely on physicians' authority and on intuition achieved through long-term clinical experience and pathophysiological knowledge. Paradoxically, in his retirement letter, Sackett alleged that his own prestige and authority could retard the scientific advance of EBM. CONCLUSION: Since the early 2000s, critical appraisal, systematic reviews, and clinical practice guidelines have merged into a unified approach that characterizes current practice in EBM.
In 1964, a report from the Canadian government recommended the creation of a new medical school at McMaster University, Ontario, that should introduce a new approach to medical education, since current medical school programs were judged to be out of date. This new approach was based on the introduction of clinical epidemiology and biostatistics into the medical program; the medical curriculum would also be grounded in valid outcomes of medical research. David Sackett was the first director of the Department of Clinical Epidemiology and Biostatistics at the medical school, established in 1967. He claimed that clinicians should be trained to develop the skills needed to ask epidemiological questions relevant to solving practical clinical problems. Since then, debates about Sackett's proposal have targeted the meaning and relevance of epidemiological knowledge in clinical practice, as well as the uncertainty of medical judgments based on medical authority 1 . The general scientific problem with which we are primarily concerned is that of testing a hypothesis that a certain treatment alters the natural history of a disease for the better. The particular problem is the value of various types of evidence in testing the hypothesis. The oldest, and probably still the commonest, form of evidence proffered is clinical opinion. This varies in value with the ability of the clinician and the width of his experience, but its value must be rated low, because there is no quantitative measurement, no attempt to discover what would have happened if the patients had had no treatment, and every possibility of bias affecting the assessment of the result. It could be described as the simplest (worst) type of observational evidence 3 .
At that time a dilemma emerged: what rules of evidence should be adopted as the basis for the clinical management of patients? Should only RCT-validated evidence be used, to prevent or minimize the use of therapeutic resources that are innocuous or harmful to patients? Or should clinicians' experience also be admitted as a basis for maximizing patients' potential health benefits 8 ?
As early as 1976, in Copenhagen, Henrik Wulff had detailed the logical and probabilistic aspects involved in the application of RCT outcomes in clinical practice 9 .
He drew attention to the difference between therapeutic efficacy, as measured by the statistical likelihood obtained from RCTs, and clinical effectiveness, as measured by the subjective likelihood of the physician's belief in the cure of a particular patient, calculated by applying Bayes' theorem. Even patients with the same disease differ in a number of ways, so it is not always rational for the physician to base his belief (subjective probability) solely on the overall experience of a group of patients (statistical probability). In addition, the physician should assess to what extent it is appropriate to apply group experience to the individual 10 .
In the 1980s, clinical epidemiology spread internationally in medical education curricula, despite the difficulties posed by the required mathematical and statistical knowledge and skills 11 . The initial stimulus for this diffusion came from the Rockefeller Foundation, which in 1978 funded the establishment of the International Clinical Epidemiology Network. However, according to David Eddy 19 , there were actually two "evidence-based" approaches: one aimed at developing guidelines (EBG) and another targeting the individual development of physicians (EBID). The latter was designed, developed, and disseminated by Sackett and his partners, while the former was Eddy's own work, following a line of research on health care costs that began in the 1980s with the RAND Corporation: "In the 1980s a group at RAND began publishing studies showing that large proportions of procedures being performed by physicians were considered inappropriate even by the standards of their own experts 19 ." The RAND Corporation was a center of health services expertise by the late 1970s, when neoliberal policies were introduced by Ronald Reagan in the United States 20 . By this time, the focus of the political debate on health had shifted from poor people's access to health services to managing the costs of health services: In the Medicare program, as in American health care more generally, the concerns of policymakers soon shifted from access to cost (…) In time, then, the field of health services research turned its attention to the technical issues and quality concerns related to cost containment. An important point in this process was the RAND health insurance experiment 20 .
RAND had developed an institutional expertise in the application of quantitative methods to education and health.
The impetus for the health insurance experiment came not so much from clinicians concerned about health outcomes as from economists who focused their attention on the relationship between the cost of medical care and its consumption. At the time, the experiment was intended to anticipate the creation of a national health insurance scheme, but it failed in that purpose. Nonetheless, the experiment's outcomes contributed to the large increase in cost sharing that occurred in the 1980s, which tended to reduce services indiscriminately, with adverse effects on the health of the vulnerable: "(...) it tended to reduce services in an indiscriminate fashion, the good along with the bad. Furthermore, the experiment showed that cost-sharing had adverse effects on the health of vulnerable groups, such as low-income children, 'just a catastrophic drop in the use of services, clearly services that were needed as well as services that weren't that you didn't see so much for kids with a higher income' 20 ." David Eddy himself was linked to the private health insurance industry from 1984 to 2005, as chief scientist for the Technology and Coverage Program and the Medical Advisory Panel of Blue Cross Blue Shield, a federation of health insurers serving more than 100 million Americans 21, 22 . It was in this context that Eddy wrote the first evidence-based guidelines for the American Cancer Society: First, there must be good evidence that each test or procedure recommended is medically effective in reducing morbidity or mortality; second, the medical benefits must outweigh the risks; third, the cost of each test or procedure must be reasonable compared to its expected benefits; and finally, the recommended actions must be practical and feasible 23 .
At that time there was indeed a concern with relating the search for good evidence of the clinical effectiveness of procedures to their cost-benefit efficiency. In his 2005 paper, David Eddy asked whether, since the current definition of EBM includes EBID but not EBG, the definition of EBM should be expanded to include evidence-based guidelines and their related branches, instead of focusing only on physicians and their individual decisions: that is, a set of principles and methods to ensure that medical decisions, protocols, guidelines, and other types of health policy would be based on, and consistent with, good evidence of effectiveness and benefit 19 .
Indeed, in 1997, the same year that David Sackett published his EBM handbook 24 , Muir Gray published another EBM handbook with the same publisher, Churchill Livingstone, of the Elsevier (Elsevier Science) group 25 ; the first focused on individualized clinical practice and the second on health policies. In his book, Muir Gray included rising costs and "delayed implementation of research results in practice" in the list of major convergent problems common to the delivery of healthcare worldwide, such that the same solutions should be adopted, whether in the post-industrial northern countries or in the "third world" countries, whose health systems should be restructured. In short, the solutions highlighted by Muir Gray focused on aspects such as cost control, healthcare purchasing, and clinical practice management. In 2017, in his commemorative paper on EBM's 25th anniversary, Gordon Guyatt acknowledged the seminal roles played by David Sackett, Archie Cochrane, and David Eddy in the early days of EBM, when they argued for critical appraisal, the development of systematic reviews, and clinical practice guidelines, three domains that merged in the 2000s to characterize the current practice of EBM 28 .
Author contributions
Lapa TG is the first author of the article, which is an extract from her thesis. Rocha MD supervised the research. Almeida N was co-advisor of the thesis, contributing mainly, but not exclusively, to the aspects of the work related to evidence-based medicine, including his vision of it as a model of contemporary education in the health sciences field. Mattedi A contributed to the methodological aspects related to the history of scientific controversies.
Conflicts of interests
No financial, legal or political competing interests with third parties (government, commercial, private foundation, etc.) were disclosed for any aspect of the submitted work (including but not limited to grants, data monitoring board, study design, manuscript preparation, statistical analysis, etc.). | 2,272.2 | 2019-10-21T00:00:00.000 | [
"Philosophy"
] |
Metalloproteinase 1 downregulation in neurofibromatosis 1: Therapeutic potential of antimalarial hydroxychloroquine and chloroquine
Neurofibromatosis type 1 is an autosomal dominant genetic disorder caused by mutation in the neurofibromin 1 (NF1) gene. Its hallmarks are cutaneous findings including neurofibromas, which are benign peripheral nerve sheath tumors. We analyzed collagen and matrix metalloproteinase 1 (MMP1) expression in neurofibromatosis 1 cutaneous neurofibromas and found excessive expression of collagen and reduced expression of MMP1. To identify new therapeutic drugs for neurofibroma, we analyzed the phosphorylation of components of the Ras pathway, which underlies NF1 regulation, and applied treatments to block this pathway (PD184352, U0126, and rapamycin) and lysosomal processes (chloroquine (CQ), hydroxychloroquine (HCQ), and bafilomycin A (BafA)) in cultured neurofibromatosis 1 fibroblasts. We found that downregulation of the MMP1 protein was a key abnormal feature of the neurofibromatosis 1 fibroblasts and that the decreased MMP1 was restored by the lysosomal blockers CQ and HCQ, but not by the blockers of the Ras pathway. Moreover, the MMP1-upregulating activity of these lysosomal blockers was dependent on aryl hydrocarbon receptor (AHR) activation and ERK phosphorylation. Our findings suggest that lysosomal blockers are potential candidates for the treatment of neurofibromatosis 1 neurofibroma.
Introduction
Neurofibromatosis type 1 (Neurofibromatosis 1) is a genetic disorder that affects one in 2600 to 4500 live births 1,2 . Hallmarks of the disease are cutaneous findings including café au lait macules, skinfold freckling, and neurofibroma, a benign peripheral nerve sheath tumor [3][4][5] . Patients with neurofibromatosis 1 suffer from extensive extracutaneous lesions including optic glioma, Lisch nodule, scoliosis, bone involvement, and pseudoarthrosis. Neurofibromatosis 1 is also comorbid with neuronal complications such as learning difficulties, central nervous system tumors, and neurovascular diseases [3][4][5][6] . The clinical manifestations are variable, unpredictable, and potentially life-threatening. Malignant peripheral nerve sheath tumors are generally associated with a fatal outcome. They are also linked to disfigurement and social isolation, which cause deep psychological distress and reduce the quality of life of afflicted individuals [3][4][5][6] .
Cutaneous neurofibromas manifest as circumscribed tumors that are basically associated with nerves in the skin 7,8 . They can undergo a rapid initial proliferative phase, but then quickly become quiescent with extremely slow to no growth 3 . Multiple cutaneous and subcutaneous tumors adversely affect the quality of life 9 . Neurofibromatosis 1 results from an autosomal dominant loss of the neurofibromin 1 (NF1) gene [10][11][12] . NF1 is a Ras GTPase activating protein and thus facilitates Ras inactivation 13,14 . In NF1-insufficient cells, Ras activation is not inhibited by NF1, which in turn upregulates prosurvival signaling via the PI3K-mTOR axis as well as transcriptional/proliferative signaling via the RAF-MEK-ERK pathway 3,14,15 .
Histopathological analysis has revealed that the cellular and extracellular composition of cutaneous neurofibroma is diverse, including Schwann-like cells, fibroblasts, perineural cells, and collagen matrix 7,16,17 . Collagen production is increased in cutaneous neurofibroma, and the major collagen is type 1 collagen (COL1A1) 16,18,19 . Both the PI3K-mTOR and RAF-MEK-ERK cascades regulate cell proliferation, DNA synthesis, apoptosis, and COL1A1 synthesis 3,14,15,20 . In recent years, sirolimus, a specific mTOR inhibitor, has been used for the treatment of neurofibromatosis 1 21,22 . Although topical and systemic sirolimus is very efficacious for another mTOR-activating inherited genetic disorder, tuberous sclerosis 23,24 , sirolimus failed to reduce the neurofibroma volume in progressing and non-progressing neurofibromas 21,22 . The antifibrotic pirfenidone inhibits COL1A1 production in fibroblasts 25 and is used for treating patients with idiopathic pulmonary fibrosis 26 . However, a clinical trial of pirfenidone failed to show its efficacy for neurofibromatosis 1 27 . These results indicate that a different approach may be necessary for this complex inherited disease.
Collagen deposition is regulated by the balancing of its production and degradation by matrix metalloproteinases (MMPs); MMP1 is the major enzyme degrading COL1A1 28 . However, few reports have demonstrated the expression of MMPs in neurofibromatosis 1. Walter et al. showed that increased stiffness of optic nerve tumor may be related to the downregulation of MMP2 in neurofibromatosis 1 29 . In addition, Muir reported that the expression of MMP1 and MMP-9 was increased in cultured cutaneous neurofibroma containing an abundance of Schwann cells 30 . However, to the best of our knowledge, no studies have focused on the expression of MMPs to treat cutaneous neurofibroma. In this study, we investigated the mRNA and protein expression of COL1A1 and MMP1 in dermal cell lines derived from neurofibromatosis 1 and healthy volunteers. We found that the downregulation of MMP1 protein was a key abnormal feature in the neurofibromatosis dermal cell lines and that this decrease in MMP1 was restored by the lysosomal blockers chloroquine (CQ) and hydroxychloroquine (HCQ) 31,32 . These antimalarial 33 and antilupus drugs 34,35 are thus potential candidates for the treatment of neurofibromatosis 1.
Study approval
This study was approved by the Ethics Committee of Kyushu University (#30-363 for immunohistological study and #24-132 for cell line establishment). Written informed consent was obtained from all of the volunteers. Punch biopsies were taken from a total of six donors, including three healthy controls and three patients diagnosed with neurofibromatosis type 1 (Supplemental Table 1) based on diagnostic criteria.
Cell culture
To establish the human primary fibroblast cell culture, dermal fibroblast cells were isolated from biopsied skin tissue and cultured as described previously 36 . A lack of contamination of Schwann-like cells was confirmed by S100A protein staining. Before the experiments, cells were trypsinized and allowed to adhere to the culture plates for 24 h. Then, the cells underwent each experiment as detailed below. Each phosphorylation inhibitor was mixed with HCQ or vehicle in culture medium and then the cell culture medium was changed depending on the experimental conditions.
NF1 genotyping
DNA was isolated from cultured fibroblast cells from a total of six donors with NucleoSpin Tissue (Qiagen, Hilden, Germany). DNA genotyping and data analysis were performed by Genewiz Inc. (South Plainfield, NJ, USA).
Small interfering RNA transfection
Small interfering RNA (siRNA) targeting NF1 (s221793) or AHR (s1200) and scrambled RNA (Silence Negative Control No. 1) were purchased from Thermo Fisher Scientific. siRNA was transfected into fibroblast cells with lipofectamine RNAi Max (Thermo Fisher Scientific), following the manufacturer's instructions.
Quantitative real-time polymerase chain reaction (qRT-PCR)
Total RNA was isolated using an RNeasy Mini Kit (Qiagen) and reverse-transcribed with the Prime Script RT Reagent Kit (Takara Bio, Otsu, Japan), in accordance with the manufacturer's instructions. The qPCR reactions were performed with the CFX Connect System (Bio-Rad Laboratories, Hercules, CA, USA) using TB Green Premix Ex Taq (Takara Bio). Cycling conditions comprised 95°C for 30 s as the first step, followed by 40 cycles of 95°C for 5 s and 60°C for 20 s. mRNA expression was measured in triplicate wells and normalized using β-actin as a housekeeping gene. The primer sequences are shown in Supplemental Table 2.
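The β-actin normalization described above is conventionally computed with the 2^(−ΔΔCt) method. The paper does not spell out its quantification formula, so the sketch below assumes that convention, and all Ct values in the usage example are invented for illustration:

```python
def fold_change(ct_target, ct_ref_gene, ct_target_ctrl, ct_ref_gene_ctrl):
    # dCt  = Ct(target) - Ct(beta-actin), computed per sample
    # ddCt = dCt(sample) - dCt(control); relative expression = 2^(-ddCt)
    d_ct = ct_target - ct_ref_gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_gene_ctrl
    return 2.0 ** -(d_ct - d_ct_ctrl)

# e.g. target Ct 25 vs control Ct 26, beta-actin Ct 18 in both samples:
# ddCt = (25 - 18) - (26 - 18) = -1, i.e. a two-fold higher expression
```

In practice each Ct would be the mean of the triplicate wells before the formula is applied.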
Western blotting analysis
Cells were rinsed with ice-cold PBS and lysed with RIPA buffer containing protease inhibitor cocktail (Sigma-Aldrich) and PhosSTOP (Roche Diagnostics, Rotkreuz, Switzerland). Extracted proteins were denatured by boiling at 96°C for 5 min with SDS-sample buffer containing 2-mercaptoethanol and loaded onto Blot 4-12% Bis-Tris Plus Gel (Thermo Fisher Scientific). The proteins transferred to a PVDF membrane (Merck Millipore, Burlington, MA, USA) were reacted with antibody diluted in Can Get Signal (Toyobo Co. Ltd., Osaka, Japan) and subjected to Super Signal West Pico (Thermo Fisher Scientific). Chemiluminescence was detected using ChemiDoc XRS (Bio-Rad) and densitometric analysis was performed.
Enzyme-linked immunosorbent assay (ELISA)
The total cell culture supernatant was collected and immediately frozen at −80°C until use. The concentration of secreted MMP1 was measured according to the manufacturer's protocol (Boster Biological Technology, CA, USA; or R&D Systems).
Immunohistofluorescence
Fibroblasts were seeded on µ-Slide 8-well chambers (ibidi, Gräfelfing, Germany) and cultured for 24 h. Treatment with 50 μM HCQ, 100 nM FICZ, or vehicle was applied for 6 h prior to fixation with ice-cold acetone. Cells were blocked with 5% bovine serum albumin and incubated with primary antibodies at 4°C overnight. Then, cells were incubated with Alexa Fluor 488 secondary antibodies in the dark. Chambers were covered with cell-mounting medium containing DAPI (Santa Cruz) and images were taken with the EVOS Cell Imaging System (Thermo Fisher Scientific).
Immunohistochemistry
Skin biopsy samples were obtained from five patients and subjected to staining as per the protocol used at Kyushu University Hospital facilities as previously described 37 , with minor modification.
Cell proliferation and viability test
Fibroblasts were seeded in a 96-well plate and cultured for 48 h. Viable cells were measured using a CCK-8 kit (Dojindo, Tokyo, Japan). The absorbance at 450 nm was measured with a microplate reader (Bio-Rad) and the analysis was performed in quadruplicate. To generate a calibration curve for cell counting, a series of cell numbers was seeded in a 96-well plate and allowed to adhere for 2 h. A standard curve was established following the manufacturer's instructions. A calibration curve was made for each experiment. Cells at passages 3 to 4 were used to measure the cell proliferation rate.
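Mapping the 450 nm absorbance back to a cell number via the standard curve amounts to fitting a line through the seeded-cell series. A minimal ordinary-least-squares sketch follows; the seeding numbers and absorbances are invented, not taken from the paper:

```python
def fit_line(xs, ys):
    # ordinary least squares for y = slope * x + intercept
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def cells_from_absorbance(a450, slope, intercept):
    # invert the calibration line to estimate the cell number in a well
    return (a450 - intercept) / slope
```

A new pair (slope, intercept) would be fitted for each experiment, as the text specifies.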
Statistical analysis
An unpaired two-tailed t-test and Tukey's HSD test were applied as appropriate to evaluate statistical significance (*P < 0.05; **P < 0.01; ***P < 0.001). All analyses were performed using the JMP Pro software package (SAS Institute Japan Ltd., Tokyo, Japan).
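For reference, the unpaired t statistic with pooled variance that underlies the first of these tests can be computed as below. This is a generic sketch, not the JMP implementation, and the sample values in the test are invented:

```python
import math

def unpaired_t(sample_a, sample_b):
    # Student's t with pooled variance (equal-variance assumption);
    # returns the t statistic and the degrees of freedom
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    t = (mean_a - mean_b) / math.sqrt(pooled * (1.0 / na + 1.0 / nb))
    return t, na + nb - 2
```

The two-tailed P value is then obtained by comparing |t| against the t distribution with the returned degrees of freedom, which statistics packages such as JMP handle internally.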
Downregulated expression of MMP1 in neurofibromas of neurofibromatosis 1 patients
To evaluate MMP1 expression in neurofibromas, we first conducted immunohistochemical analysis in neurofibromas of five neurofibromatosis 1 patients. As neurofibroma is characterized by large numbers of S100 + Schwann-like cells and fibroblasts 7,16 , we utilized a specific antibody for S100 protein to delineate the lesional skin of neurofibroma. Azan-Mallory staining, which stains collagenous fibers deep blue and glial cells and neurons reddish purple, was also utilized. The lesional skin of the neurofibromas contained S100 + cells (Fig. 1b, j, stained brown) with neo-collagen accumulation, which was weakly stained blue with Azan-Mallory staining (Fig. 1c, k). The majority of stromal cells in the neurofibroma lesions were virtually negative for MMP1 staining (Fig. 1d, i). Even in the early-stage lesions of neurofibroma (Fig. 1e-h), the stromal cells were MMP1-negative (Fig. 1h). We then counted the stromal MMP1-positive cells in the neurofibromas (lesional and perilesional areas) and five samples of normal healthy skin. Dermal vascular endothelial cells were MMP1-positive and served as a positive control (Fig. 1m). The proportions of MMP1-positive stromal cells were 6.2 ± 1.4% (mean ± standard error), 3.4 ± 1.3%, and 0.3 ± 0.1% in normal control skin, the perilesional area of neurofibromas, and the lesional area of neurofibromas, respectively (Fig. 1n).
Downregulated expression of MMP1 in cultured fibroblast cells from neurofibromatosis 1 patients
To further characterize the biological response of neurofibroma cells, we established three primary dermal fibroblastic cell lines from neurofibromatosis 1 patients (NFFs) and three normal primary fibroblastic cell lines from healthy control skin (HEFs), as reported previously 36 (Supplementary Table 1). All three NFFs carried heterozygous stop-codon mutations in the NF1 gene (Supplementary Table 1). Other exonic mutations in NFFs (2034 G > A, 702 G > A) were also detected in the healthy donors. There was no difference in morphology or proliferative capacity between HEFs and NFFs (Supplementary Fig. S1). Although the mRNA expression of NF1 was comparable between HEFs and NFFs (Fig. 2a), the protein expression of NF1 was significantly downregulated in NFFs compared with that in HEFs (Fig. 2b, c). The protein expression of COL1A1 was comparable between HEFs and NFFs, while the expression of MMP1 protein was significantly decreased in NFFs compared with that in HEFs, as revealed by western blot analysis (Fig. 2b, c). In parallel with this, significant amounts of MMP1 protein were detected in the supernatants of HEFs, while NFFs did not release detectable amounts (Fig. 2d). Although the RAF-MEK-ERK and PI3K-AKT-mTOR cascades have been reported to be accelerated in neurofibromatosis 1 3,14,15 , we could not detect significant differences in the phosphorylation levels of RAF, MEK, ERK, and AKT between HEFs and NFFs (Supplementary Fig. S2). These findings suggested that the cultured NFFs recapitulated the biological nature of the stromal cells of neurofibromas, at least in terms of NF1 and MMP1 downregulation.
Downregulated expression of MMP1 is restored by CQ and HCQ
To investigate the effects of the RAF-MEK-ERK and PI3K-AKT-mTOR axes on the MMP1 downregulation in NFFs, we treated NFFs with the ERK-cascade inhibitors U0126 or PD184352, or with rapamycin, an mTOR inhibitor. Their inhibitory effects were confirmed by the findings that U0126 and PD184352 inhibited the phosphorylation of ERK (Supplementary Fig. S3b, c), while rapamycin led to the accumulation of phosphorylated AKT (Supplementary Fig. S3f, g). However, U0126, PD184352, and rapamycin could not restore the MMP1 downregulation, but instead exacerbated it (Supplementary Fig. S3b, d, g, h). COL1A1 protein expression was not altered by U0126 or PD184352 (Supplementary Fig. S3b, e), and was decreased by rapamycin (Supplementary Fig. S3f, i). These results indicated that the reported conventional pathogenic pathways were not involved in the MMP1 downregulation in NFFs.
As MMP1 is degraded in lysosomes 38 , we next examined the effects of the lysosomal inhibitors CQ, HCQ, and bafilomycin A (BafA) 31,32,34 on the MMP1 expression in HEFs and NFFs. The effects of CQ and HCQ were confirmed by their induction of numerous intracellular vesicles due to lysosomal swelling, compared with the findings in the vehicle control (Fig. 3a). Lysosomal swelling was only weakly observed in BafA-treated cells (Fig. 3a). Notably, CQ and HCQ significantly increased the mRNA (Fig. 3b) and protein (Fig. 3c and Supplementary Fig. S4) levels of MMP1 compared with the findings for the vehicle control in both HEFs and NFFs. The MMP1 proteins upregulated by CQ and HCQ were indeed released into the culture supernatants (Fig. 3c). HCQ is more applicable for clinical use than CQ because the major associated adverse event, retinopathy, occurs less frequently with HCQ than with CQ 34 . Therefore, we mainly used HCQ rather than CQ in the following experiments.
CQ-and HCQ-mediated MMP1 upregulation is not related to NF1 protein level
As NFFs had reduced levels of NF1, we assumed that CQ and HCQ might increase these levels, subsequently restoring the MMP1 downregulation. Indeed, the expression levels of NF1 protein tended to be upregulated by CQ and HCQ in HEFs and NFFs, but this did not reach statistical significance (Fig. 4a, b). For further analysis, we used KYU168 and KYU101 as representatives of HEFs and NFFs, respectively, since KYU168 and KYU101 did not differ from the other HEFs and NFFs in terms of morphological observation, proliferation analysis, and expression analysis of NF1 and MMP1 mRNA and protein, while KYU403 and KYU404 did not proliferate beyond six subcultures. To determine the relationship between NF1 and MMP1 levels, we measured the mRNA and protein levels of MMP1 in HEFs and NFFs transfected with NF1 siRNA or control scrambled siRNA. The transfection of NF1 siRNA successfully lowered the mRNA (Fig. 4c) and protein (Fig. 4d) levels of NF1 in both HEFs and NFFs. Unexpectedly, NF1 knockdown actually augmented the MMP1 expression in both HEFs and NFFs (Fig. 4d). Considering the reduced MMP1 expression in NF1-insufficient NFFs, NF1 deficiency did not directly cause the MMP1 downregulation. In accordance with this, the HCQ-mediated upregulation of MMP1 was not affected by the transfection of NF1 siRNA (Fig. 4c, d). These results highlight the possibility that HCQ actively upregulates MMP1 irrespective of the cellular NF1 level.
HCQ-mediated MMP1 upregulation is dependent on AHR
AHR ligands such as 6-formylindolo[3,2b]carbazole (FICZ) are known to upregulate the mRNA and protein levels of MMP1 39 . When activated, cytoplasmic AHR is translocated into the nucleus 40,41 and upregulates the transcription of target genes such as CYP1B1 in fibroblasts 37,39 . We next examined whether HCQ serves as an AHR ligand. Compared with the predominantly cytoplasmic staining of AHR in the untreated control NFFs, HCQ appeared to induce the cytoplasmic-to-nuclear translocation of AHR (Fig. 5a). In addition, HCQ significantly augmented the CYP1B1 gene expression in both HEFs and NFFs, as did FICZ (positive control) (Fig. 5b). The HCQ-induced CYP1B1 upregulation was canceled in HEFs and NFFs transfected with AHR siRNA (see Fig. 5c, d, for knockdown efficiency), indicating that HCQ is an AHR ligand.
We next investigated the effects of AHR knockdown on the HCQ-induced MMP1 upregulation. The mRNA and protein expression of AHR was successfully decreased by transfection with AHR siRNA compared with that with control scrambled siRNA in both HEFs and NFFs (Fig. 5c, d). Notably, the HCQ-induced MMP1 upregulation was reduced in both HEFs and NFFs transfected with AHR siRNA (Fig. 5c, d). In addition, baseline MMP1 expression was also decreased by AHR siRNA transfection (Fig. 5c). The NF1 protein levels were not significantly affected by AHR knockdown (Fig. 5d).
HCQ-induced MMP1 upregulation is mediated by ERK pathway
The involvement of AHR activation and the ERK pathway in fibroblasts 42 , dendritic cells 43 , adipocytes 44 , and keratinocytes 45 has been reported, including by our group. To further investigate the signaling pathway governing HCQ-induced MMP1 upregulation, we treated the HEFs and NFFs with HCQ in the presence and absence of an ERK inhibitor (PD184352), AKT inhibitor (AKTI), p38 MAPK inhibitor (SB203580), or JNK inhibitor (SP600125). The upregulation of HCQ-induced MMP1 mRNA (Fig. 6a) and protein (Fig. 6b) was completely inhibited by the ERK inhibitor and partially by the JNK inhibitor in HEFs and NFFs. However, it was not inhibited by either the AKT inhibitor or the p38 MAPK inhibitor (Fig. 6a, b). Considering the robust inhibition of MMP1 expression by the ERK inhibitor, HCQ mainly signals through the AHR-ERK axis to upregulate MMP1 expression in both HEFs and NFFs. These results coincide with a previous report describing that FICZ-induced MMP1 upregulation is mediated by the AHR-ERK pathway 41 .
Discussion
The number of cutaneous neurofibromas generally increases with age in patients with neurofibromatosis 1 46,47 . Loss-of-function mutation of the NF1 gene is the major genetic cause of neurofibromatosis 1 [10][11][12] . Functional insufficiency of NF1 protein leads to excessive activation of the PI3K-mTOR and RAF-MEK-ERK signaling pathways 3,[13][14][15] . Although these pathways are believed to induce the proliferation of neurofibroma cells and COL1A1 production, leading to neurofibroma formation, the therapeutic outcomes of the specific mTOR inhibitor sirolimus and the anti-collagenogenic pirfenidone in clinical trials have not been satisfactory 21,22 .
COL1A1 deposition is an integral part of neurofibroma formation 16,18 . As the anti-collagenogenic pirfenidone is not effective for treating cutaneous neurofibromas in neurofibromatosis 1 27 , we hypothesized that the collagenolytic process, rather than the collagenogenic process, may be disturbed in this disease. As MMP1 is the major enzyme degrading COL1A1 39,41,48 , we first focused on its immunohistological expression in cutaneous neurofibromas. Notably, the number of MMP1 + stromal cells was significantly reduced in the lesional area of cutaneous neurofibromas compared with that in the perilesional area of neurofibromas or in normal control skin (Fig. 1n). In vitro experiments revealed that the three NFF lines obtained from three independent neurofibromatosis 1 patients carried point mutations of the NF1 gene, resulting in reduced NF1 protein expression compared with that in HEFs established from healthy volunteers. Two NFF cell lines, KYU403 and KYU404, share the same point mutation, 5905 C > T (Supplemental Table 1), and showed limited proliferative ability, suggesting that 5905 C > T is a crucial mutation affecting long-term cell viability. Similar to neurofibromas in vivo, MMP1 levels were significantly reduced in NFFs compared with those in HEFs in vitro (Fig. 2c, d), whereas COL1A1 production in NFFs was comparable with that in HEFs (Fig. 2c, d).
Agents with the potential to reverse the MMP1 downregulation may have therapeutic value in neurofibromatosis 1. However, specific inhibitors of the PI3K-mTOR (rapamycin) and RAF-MEK-ERK (U0126 and PD184352) axes failed to restore the reduced MMP1 expression in NFFs (Supplemental Fig. S3). This result might explain why the effect of oral rapamycin treatment on neurofibromatosis was modest in a clinical study 22 , and it is partially consistent with a previous report describing that MEK-ERK activation is involved in the induction of MMP1 expression in human breast adenocarcinoma cell lines 49 . As the degradation of MMP1 occurs in the lysosomes, we next examined the effects on NFFs of CQ and HCQ, which are lysosomal inhibitors 31,32,34 . These antimalarial drugs accumulate in the lysosomes and inhibit the endocytotic, phagocytotic, and autophagocytotic processes by raising lysosomal pH, which prevents the activity of lysosomal enzymes 34 . Notably, CQ and HCQ raised the mRNA and protein levels of MMP1 and accelerated its release into the culture supernatants of HEFs and NFFs (Fig. 3 and Supplemental Fig. S4). As NF1 knockdown by NF1 siRNA influenced neither the baseline nor the HCQ-mediated upregulation of MMP1 expression in HEFs and NFFs, this MMP1-upregulating effect of HCQ operates irrespective of the intracellular NF1 level.
Since induction of the mRNA and protein expression of MMP1 is highly regulated by AHR-ERK signaling 39,41 , we next investigated the possibility that HCQ may activate AHR. The results showed that this was indeed the case. HCQ induced the nuclear translocation of AHR and induced transcription of the AHR target gene CYP1B1. In addition, AHR knockdown reduced the HCQ-mediated MMP1 upregulation.
Based on these findings, we propose the following hypothesis. In healthy stromal cells (Fig. 7a), the mRNA and protein expression of MMP1 is mainly dependent on the AHR-ERK pathway. Some MMP1 protein is degraded in the lysosomes, while the rest is released into the stroma and degrades collagens. In neurofibromatosis 1 (Fig. 7b), the mRNA and protein expression of MMP1 is markedly downregulated by an unknown mechanism(s); this downregulation is not directly linked to the decreased level of NF1 protein. The antimalarial drugs HCQ and CQ activate the AHR-ERK pathway and enhance the mRNA and protein expression of MMP1 (Fig. 7c). In addition, HCQ and CQ inhibit the lysosomal degradation process, which further increases the level of MMP1 protein. The excess MMP1 protein is released even by neurofibromatosis 1 cells, and may restore the collagen-degrading capacity, making it useful for the treatment of neurofibromas.
To date, no effective drugs are available to modulate the progression of neurofibromas in adult neurofibromatosis 1 patients. As we show here, CQ and HCQ are promising candidates for decreasing the collagen component, which is overexpressed in cutaneous and non-cutaneous lesions, through an increase in MMP1. However, there is concern about adverse effects associated with CQ and HCQ treatment 50,51 . Retinopathy and corneal deposits are well-known adverse effects of these drugs. Although corneal deposits can be reversed by discontinuing the drugs, retinopathy can cause irreversible vision loss 51 . Retinopathy can nevertheless be prevented by adhering to the maximum daily dosage based on ideal body weight, and ongoing monitoring is important to detect retinal toxicity at the earliest point of potential damage. Considering that neurofibromatosis 1 patients suffer from cutaneous and non-cutaneous neurofibromas, which decrease their quality of life, it is worth pursuing the use of CQ and HCQ to improve their outcomes.
In conclusion, the mRNA and protein expression of MMP1 is markedly reduced in stromal cells in neurofibromatosis 1. This feature, though not directly related to the decrease in NF1 protein, may be involved in the collagen accumulation in neurofibromas. The antimalarial drugs HCQ and CQ are feasible options to restore MMP1 production and release in stromal cells in neurofibromatosis 1 by dual mechanisms: activating the AHR-ERK-MMP1 pathway and inhibiting the lysosomal degradation of MMP1 protein. These AHR-activating antimalarial drugs are potentially applicable for treating the devastating cutaneous neurofibromas of neurofibromatosis 1.
Present-day thermal and water activity environment of the Mars Sample Return collection
The Mars Sample Return mission intends to retrieve a sealed collection of rocks, regolith, and atmosphere sampled from Jezero Crater, Mars, by the NASA Perseverance rover mission. For all life-related research, it is necessary to evaluate water availability in the samples and on Mars. Within the first Martian year, Perseverance has acquired an estimated total mass of 355 g of rocks and regolith, and 38 μmoles of Martian atmospheric gas. Using in-situ observations acquired by the Perseverance rover, we show that the present-day environmental conditions at Jezero allow for the hydration of sulfates, chlorides, and perchlorates and the occasional formation of frost as well as a diurnal atmospheric-surface water exchange of 0.5–10 g water per m2 (assuming a well-mixed atmosphere). At night, when the temperature drops below 190 K, the surface water activity can exceed 0.5, the lowest limit for cell reproduction. During the day, when the temperature is above the cell replication limit of 245 K, water activity is less than 0.02. The environmental conditions at the surface of Jezero Crater, where these samples were acquired, are incompatible with the cell replication limits currently known on Earth.
www.nature.com/scientificreports/

After the first Martian year of surface operation, 21 of these tubes were sealed as part of the "Crater Floor Campaign" (which ended on sol 380, where a "sol" is one rotation of Mars, i.e., a Martian day) and the "Delta Front Campaign" (which began on sol 415 and ended on sol 707, around mid-February 2023). Most samples were collected in pairs, so that one sample from each pair was deposited on the ground, forming the Sample Depot or First Cache at Three Forks 5 . The second sample in each pair was retained in the rover's main collection. As the rover continues its exploration route towards the top of the delta fan and crater rim (Fig. 1), the sample cache increases in size and diversity with newly added samples. The rover collection will be delivered in the future to the MSR sample receiving lander, while the Sample Depot at Three Forks would be used only if the rover failed before delivering its samples to the vehicle that will bring the samples to Earth. Upon reception of the sample collection on Earth, one of the first investigations to be implemented will relate to sample safety assessment and the search for Martian life in biocontainment 2,6,7 .
For planetary protection and life assessment purposes, there is a need to first determine the potential habitability of Jezero Crater's surface and of the collection of samples that will be brought to Earth. Water is a requirement for known Earth life. On Earth, water activity, a w , is a measure of how much water (H 2 O) is free, unbound, and available for microorganisms to use for growth; the habitability of an environment is thus restricted by the thermodynamic availability of water (i.e., the water activity, a w ) 8,9 . The currently accepted lowest documented limit for life is a w = 0.585 10 . This low level of water activity allows the germination of the xerophilic, osmophilic and halophilic fungus Aspergillus penicillioides. The present lower temperature limit for cell division is 255 K (− 18 °C), as reported by Collins and Buick 11 in experiments with the psychrotrophic pink yeast Rhodotorula glutinis. For planetary protection purposes, some margin is added to these limits, and it is assumed that cell replication needs water activity a w > 0.5 and temperature T > 245 K (− 28 °C) 12,13 . These physical parameters are commonly used to assess the habitability of a region at a planetary scale and to define the planetary protection protocols and restrictions that should be applied to prevent forward contamination associated with space exploration missions 14,15 . To determine the potential present-day habitability of the surface of Jezero Crater, we analyse these two environmental parameters, temperature and water activity, and the possible interaction of atmospheric water (H 2 O) with salts. Similar analyses have been done previously at a planetary scale using global circulation models 16,17 and at a local scale using in-situ environmental measurements at Gale Crater 18 and the Phoenix landing site 19,20 .
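The planetary-protection criterion above is a joint threshold: replication is only considered possible where water activity and temperature exceed their limits simultaneously. As a minimal sketch of that screen (function and variable names are ours, not from the paper):

```python
def replication_possible(temp_k: float, water_activity: float,
                         t_min: float = 245.0, aw_min: float = 0.5) -> bool:
    """Planetary-protection screen: cell replication is considered possible
    only when the temperature AND the water activity exceed their limits at
    the same time (threshold values as cited in the text)."""
    return temp_k > t_min and water_activity > aw_min

# Jezero-like conditions as reported in this study:
night = replication_possible(temp_k=190.0, water_activity=0.55)  # cold but "wet"
day = replication_possible(temp_k=270.0, water_activity=0.02)    # warm but dry
```

Neither case passes the screen, which is the paper's central conclusion: the two thresholds are never exceeded at the same time of day.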
Salts were found at Jezero Crater in the abrasion patches associated with each sample 4 . Hygroscopic salts can absorb atmospheric water vapor (H 2 O molecules in the gas state) to form liquid solutions (brines) in a process called deliquescence 21 . Additionally, salts in contact with the atmosphere can hydrate (solid-state hydration) and dehydrate, capturing and releasing H 2 O molecules. The plausible existence of brines or salt hydrates on the surface or subsurface has several implications for Mars's past and current habitability. Experiments in simulation chambers have shown that for certain temperature and a w conditions, Mg, Ca, and Na perchlorates and sulfates can hydrate or deliquesce, forming stable liquid brines under present-day Martian conditions [22][23][24] . The Planetary Instrument for X-Ray Lithochemistry (PIXL) and the Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals (SHERLOC) instruments have investigated the abrasion patches and found hygroscopic and deliquescent salts such as Mg, Fe (hydrated) and Ca sulfates (mostly anhydrite), chlorides and perchlorates (Initial Reports-PDS; [25][26][27][28] ). Also, the SuperCam (SCAM) instrument found that the visible/near-infrared (VISIR) spectra of the abraded patches in the rocks of some of the sample pairs (the ones named Roubion, Montdenier, and Montagnac) are consistent with a mixture of hydrated Mg-sulfates, whereas SCAM Raman and laser-induced breakdown spectroscopy (LIBS) and SHERLOC detected anhydrous Na perchlorate 25,26,29 . Previous Mars exploration missions have detected Mg- and Ca-perchlorates at the Phoenix 30,31 and Mars Science Laboratory 32 landing sites. Amongst the salts found at Jezero, and on Mars, calcium perchlorate is the deliquescent salt with the lowest eutectic point (198 K) 16,33 , and thus this is the lowest temperature limit for liquid water (brine) stability of single-component brines on present-day Mars. Sulfate signatures were detected in the SCAM VISIR spectra of the abraded patch of the sample named Bellegarde 26,29 as well as in the Hogwallow Flats region explored in the Delta Front Campaign, which exposed a hydrated sulfate-cemented siltstone 34 . Also, PIXL and SHERLOC detected sulfates in these environments. The presence of these different types of salts suggests that Jezero Crater was exposed to episodic water events, with different salt solutes that precipitated during evaporation 28,[35][36][37] . Previous in-situ research by the Curiosity rover at Gale Crater has shown that sulfates are the main carrier of soil hydration 38 , which is consistent with orbital observations at the planetary scale 39 .
To characterize the near-surface water cycle at Jezero and the habitability of the Martian rocks that have been sampled, we need to quantify the amount of water that is available daily for exchange with outcrops and regolith, evaluate the potential hydration state of the salts that have been found on Mars and at Jezero, and estimate the moles of H 2 O in the headspace gas of the sealed samples using Mars Environmental Dynamics Analyzer (MEDA) instrument observations 40,41 ; see Supporting Information A.
Results
The collection of samples acquired during the first Martian year and the environmental conditions during the sealing are summarized in Table 1.
The annual and diurnal variation of the water vapor volume mixing ratio (VMR) at Jezero crater is shown in Fig. 2 using MEDA observations 42 . Daytime MEDA relative humidity (RH) measurements are too low (i.e., ≤ 2%, the RH uncertainty) and thus cannot be used to estimate VMR with sufficient accuracy. MEDA relative humidity and pressure measurements at 1.45 m above the surface suggest a strong diurnal and seasonal variability of the water VMR; see Fig. 2-Top. The water volume mixing ratio peaks at Ls = 150°, at the end of the northern hemisphere summer, after the release of water vapor from the northern polar cap. Predawn MEDA measurements (when the confidence in VMR retrieval is higher) have been used to estimate the (total column) night-time precipitable amount of water. The results are compared in Fig. 2-Bottom with the daytime zonally averaged orbital observations provided by the Thermal Emission Spectrometer (TES) onboard the Mars Global Surveyor orbiter for this region. There is coherence in the seasonal behavior, although the zonally averaged orbital daytime observations and the in-situ nighttime observations differ by a factor of 2-3. According to MEDA in-situ nighttime measurements, the greatest amount of nighttime precipitable water at Jezero crater is around 10 pr-µm, reached around Ls = 150°, during the northern hemisphere summer, around the sampling time of Robine. A precipitable micrometer (pr-µm, which equals 1 g of H 2 O per m 2 ) is the thickness of the water layer that would form if the entire water column condensed onto the surface.

Table 1. Summary of acquired samples, sealing sol and Local Mean Solar Time (LMST), solar longitude (Ls), ambient temperature (Ta) at 0.84 m above the ground and pressure (P) provided by MEDA, estimated sample length (L), estimated rock volume (V), estimated rock mass (M) assuming a sample density of 2.6 g/cm 3 , estimated headspace gas volume (G), estimated total number of moles of gas (n) (micro-mol), Single Column Model (SCM)-derived H 2 O VMR at the time of sealing at 0.84 m above the ground, H 2 O partial pressure and derived number of moles (nano-mol) of H 2 O. The samples left on the ground at Three Forks as part of the First Sample Depot are shaded in colour. WB# refers to witness tube assemblies, as described in 59 .

An example of the amplitude of the diurnal variability of the near-surface H 2 O content is illustrated in detail in Fig. 3. Here we compare the nighttime H 2 O VMR values of several consecutive sols (sols 293 to 303, around the sampling sol of Robine at Ls = 146°, at the end of the northern hemisphere summer) with the results of the Single Column Model (SCM). The SCM provides an estimate of the diurnal H 2 O VMR and can also be used to extrapolate the VMR value at the height of the sealing station (around 0.84 m, where two other MEDA temperature sensors are located). The corresponding air temperature measurements at 1.45 m, through day and night, are also included for completeness. This example shows a diurnal variability of H 2 O VMR of a factor of 5 or more; in this case, the H 2 O VMR ranges between 40 and 240 ppm. The lowest ground temperatures are reached just before sunrise; at this moment, the relative humidity at the ground peaks, and frost conditions can sometimes be met when saturation is reached. This is confirmed by measurements and models (see Supporting Information B).
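The predawn frost condition (ground relative humidity reaching saturation over ice) can be estimated numerically. The sketch below assumes the Murphy and Koop (2005) expression for saturation vapor pressure over ice (the paper does not state which saturation formula was used) and an illustrative H 2 O partial pressure; the function names are ours:

```python
import math

def p_sat_ice(t_k: float) -> float:
    """Saturation vapor pressure over ice, in Pa (Murphy & Koop 2005)."""
    return math.exp(9.550426 - 5723.265 / t_k
                    + 3.53068 * math.log(t_k) - 0.00728332 * t_k)

def frost_point(p_h2o_pa: float, lo: float = 150.0, hi: float = 250.0) -> float:
    """Temperature (K) at which the given H2O partial pressure saturates over
    ice; frost can form when the ground cools below this temperature."""
    for _ in range(60):              # bisection: p_sat_ice is monotonic in T
        mid = 0.5 * (lo + hi)
        if p_sat_ice(mid) < p_h2o_pa:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative: VMR of 150 ppm at 700 Pa total pressure -> ~0.105 Pa of H2O,
# giving a frost point near 197 K, consistent with the predawn ground
# temperatures discussed in the text.
tf = frost_point(150e-6 * 700.0)
```

This makes concrete why frost appears only in the coldest predawn hours: the frost point for Jezero-like vapor pressures sits near the minimum ground temperatures reported here.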
On the surface of Mars, there is a strong anti-correlation between water activity and temperature, as illustrated in Fig. 4. All other factors being equal, for the same amount of water VMR, the relative humidity increases with decreasing temperature. Although MEDA surface measurements suggest a factor-5 reduction of the water VMR at night-time, the large temperature decrement overcomes this and results in an increased night-time relative humidity (and water activity). Figure 4 shows the pairs of (simultaneous) derived ground water activity and measured ground temperature (with accuracy 0.75 K) as measured by the MEDA instrument throughout the night during one full Martian year at the base of Jezero crater. This analysis is shown in the Supporting Information E, divided into four seasons. The values are compared with the known phase and hydration state changes of some of the salts reported in the abraded patches. The deliquescence curve for calcium perchlorate (the salt found on Mars with the lowest eutectic temperature, 198 K) is also included for reference.
Once the samples are sealed, they may experience changes in water activity caused by exposure to different thermal environments (either on the surface of Mars, within the rover, during the launch, cruise, entry, descent and landing phases, or during storage on Earth). For illustration, we have modelled a simplified T/a w cycle for the gas space of a sealed sample (Fig. 5), assuming a range of possible temperature changes experienced by the samples on Mars, on the rover, or on their way to Earth. We assume that the water VMR in the tube is constant and equal to that in the atmosphere when the samples were sealed. We make this assumption because the type and amount of salts captured within the bulk of the 3-6 cm deep drilled core is not exactly known; therefore, it is not possible to accurately simulate how much captured water will be released from the core salts into the headspace gas when the sample tubes are heated. We compare the isobaric lines, for the higher and lower partial pressures reported in Table 1, with the eutectic points of different salts of relevance to Mars, which may be within the sampled rocks. All isobars pass below the eutectic points of these salts, suggesting that if there are no additional water sources in the rock samples, no pure salt would deliquesce (although mixtures of salts may behave differently).
Discussion
Within the first Martian year, Perseverance has acquired an estimated total mass of 355 g of rocks and regolith, and 38 μmole of Martian atmospheric gas (Table 1). A preliminary MSR study estimated that the atmospheric sample needed to implement volatile studies should be at least 19 μmole 43 , ideally within one single dedicated tube. The First Sample Cache, which constitutes a contingency collection formed by a set of 10 sample tubes, contains a total of 21 μmole of gas and 158 g of rock mass. The amount of gas available at the First Sample Depot meets the requirement proposed by Swindle et al. 43 , although the gas is distributed within the headspace of different sample tubes, the witness tubes, and one dedicated "atmospheric" sample (Roubion). The water content in the sealed gas varies from sample to sample, depending on the sealing time and season.

Figure 4. For illustration, the environmental data are overlayed with the hydration lines of calcium and magnesium sulfates, and the calcium perchlorate deliquescence and efflorescence lines. The water activity a w is derived, assuming equilibrium, from the relative humidity (RH) with respect to liquid, as a w = RH/100. All data points to the left of the ice saturation line (RH ice = 100%) are saturated with respect to ice and may allow frost formation 70 . The deliquescence RH (DRH) and hydration state lines of some perchlorate and sulfate salts are included for reference 19,72 .

Figure 5. Modelled thermal-water activity curves experienced by the samples within the sealed tubes. The H 2 O partial pressure isobars (i.e., constant water vapor pressure) for the higher and lower partial pressures reported in Table 1 are compared with the eutectic points of different salts of relevance to Mars, which may be within the sampled rocks (colored symbols), the temperature-dependent deliquescence relative humidity (DRH) for calcium perchlorate (red line), and the ice liquidus line (i.e., equilibrium between water ice and liquid brine; light yellow) 17,70,73 . For comparison, the isobars for the H 2 O partial pressure values expected at polar regions, i.e., 0.4 Pa and 1.4 Pa 19 , are also included.
The analysis of atmospheric data from one full Martian year suggests that the surface at Jezero crater can act as a water sink at night, with most of this water released back into the atmosphere after sunrise. The combined analysis of orbital and in-situ measurements suggests that there is a strong diurnal cycle whereby the near-surface water VMR changes by a factor of 3-5, which agrees with previous observations by Curiosity at Gale Crater, Mars 44 . Comparing daytime orbital and night-time surface observations, and assuming that the entire atmosphere participates in the interchange, we conclude that the maximum amount of water potentially available for this daily interchange is around 10 pr-µm, although a value near 0.5 pr-µm is more likely, since models indicate that only the lowest ~ 200 m of the atmosphere directly exchanges with the surface on a diurnal timescale 45,46 ; see Supporting Information D. Note that this assumes a well-mixed atmosphere up to a certain height. The diurnal cycle of water may thus allow for a daily transfer of about 0.5 g of water per m 2 (assuming H 2 O is well mixed within the lower 200 m), with an upper limit of as much as 10 g m −2 (assuming H 2 O is well mixed up to the scale height). Previous analysis of the vertical profile at arctic Martian regions suggests that during spring and summer, a large percentage of the water column (> 25% and up to nearly 100%) was confined below ~ 2.5 km 47 . These results are comparable to those provided by the REMS instrument package on the Curiosity rover at Gale crater 24 and are consistent with previous research based on orbital and in-situ observations and modelling 44,[48][49][50][51][52][53][54] . We conclude that, similarly to what happens at other sites on Mars 55 , there is a strong rock- and regolith-atmosphere exchange mechanism on Mars 56 , likely owing to the combination of adsorption-desorption of water on the regolith grain surfaces and hydration-dehydration of salts.
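The 0.5-10 g m−2 range quoted above follows from well-mixed-layer bookkeeping: the exchangeable water mass per unit area is the air number density times the VMR, the molecular mass of water, and the mixing depth. A back-of-the-envelope sketch; the pressure, temperature, VMR values, and mixing depths below are representative numbers chosen by us, not the paper's exact inputs:

```python
K_B = 1.380649e-23    # Boltzmann constant, J/K
M_H2O = 2.991e-26     # mass of one H2O molecule, kg (18.015 g/mol / N_A)

def column_water_g_per_m2(p_pa: float, t_k: float,
                          vmr: float, depth_m: float) -> float:
    """Mass of water vapor (g/m^2) in a well-mixed layer of given depth."""
    n_air = p_pa / (K_B * t_k)            # air number density, molecules/m^3
    return n_air * vmr * M_H2O * depth_m * 1e3

# Lowest ~200 m with a near-surface VMR of ~240 ppm (Fig. 3 peak):
shallow = column_water_g_per_m2(700.0, 210.0, 240e-6, 200.0)
# Full ~11 km scale height with a lower column-mean VMR of ~100 ppm:
deep = column_water_g_per_m2(700.0, 210.0, 100e-6, 11000.0)
```

With these inputs the shallow case gives a few tenths of a g m−2 and the deep case several g m−2, bracketing the paper's 0.5-10 g m−2 range for the daily surface-atmosphere exchange.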
The present-day surface water activity and temperature cycle at the surface in Jezero does not allow the formation of deliquescent brines (although it may happen in the subsurface, should kinetics allow). During some periods of the year, the surface relative humidity is saturated with respect to ice, and frost can be transiently stable for some hours of the day when the ground temperature is below 185 K. The present-day surface environment at Jezero allows hydration and dehydration of different forms of salts on a diurnal and seasonal basis, as illustrated in Fig. 4. Our analysis suggests that the daytime environmental conditions allow for MgSO 4 ·4H 2 O stability. Indeed, the analysis of PIXL and SHERLOC data of the abraded patches has found hydration (3-5 waters) in association with the Mg sulfate salts 27 , which is in line with the analysis of Fig. 4. The regolith at Jezero crater has been investigated by the PIXL and SuperCam LIBS and VISIR instruments 56 . Their analysis has demonstrated that the top surface of soils, which is the part in direct contact with the atmosphere, is enriched in water and S and Cl salts that form a crust. Some targets showed a strong correlation between S, Mg, and H, suggesting the presence of Mg sulfates, which are likely hydrated. Note that the crust hydration signature is seen even during daytime, when the ambient relative humidity and water activity are below 0.02, which indicates that water is not released immediately to the atmosphere due to the slow kinetics of dehydration.
The sustained hydration/dehydration cycle of salts at Jezero, within the rock matrix, exposed to this environment for millions of years, may have induced the formation of voids and cracks in the rocks and may have contributed to their mechanical erosion and disaggregation 35 . Salt hydration and dehydration can indeed cause substantial volume expansion; for example, magnesium sulfate can increase its volume by up to 70% 57 , generating substantial stresses and weakening the rock 58 . Interestingly, the first abraded patch (Roubion sample) showed voids of millimetre to centimetre size, which were not visible on the rock surface. The composition analysis of the Roubion abraded patch revealed that Ca- and Mg-sulfates, Ca-phosphates, and halite were present in significant concentrations. In this rock, Na-perchlorates constituted more than 60% of the total SHERLOC mineral detections 25 . The sample from the Roubion rock completely disintegrated during drilling, suggesting that, owing to this environmental cycle, salt-rich samples may be fragile and disaggregate during their future mechanical manipulation on Earth.
Documenting the water content is important for sample integrity, to estimate what may happen to the samples on their way to, and during manipulation on, Earth. Once the samples are sealed, they will equilibrate over time with their headspace gas. The hydration state of the samples within their sealed capsules depends on the temperature during storage in the rover or on the surface, during cruise and entry, and during final storage on Earth. Most of these temperatures will have to be measured, inferred, or modelled. For instance, once on the surface of Mars, the tubes may potentially be heated, occasionally and repeatedly, to up to 300 K for years. Also, their minimum night-time temperatures will presumably be similar to those of the surrounding regolith (about 180 K); see Supporting Information C. The sample tubes are coated in alumina (white) and titanium nitride (golden parts) 59 . These coatings can interact with the incident solar radiation during the day, absorbing radiation, and at night with the atmosphere above, emitting infrared radiation, resulting in local temperatures that may differ slightly from those of the natural bedrock and regolith of the Martian surface; see Supporting Information C. The samples within the rover will be exposed to a different thermal history. For illustration, we have modelled a simplified T/a w cycle for the gas space of a sealed sample (Fig. 5). To first order, assuming equilibrium and a well-mixed atmosphere, all the isobars pass underneath the eutectic points of single salts relevant to Mars.
Based on the currently recognized limits of known life forms on Earth, cell replication requires temperatures above 245 K (− 28 °C) and, simultaneously, water activity above 0.5 12 . During all seasons, the water activity at the ground surface at Jezero crater can frequently exceed the terrestrial cell-reproduction limit of 0.5, but this happens only at night, when the temperature at the surface drops below 190 K (Fig. 5). Therefore, the present-day Mars surface conditions at Jezero crater are very different from the known, tolerated limits for cell replication on Earth. The limits used as reference for Planetary Protection Policies are documented in laboratory growth studies that confirmed cell reproduction. There are extremely arid subsurface natural environments on Earth, e.g., the Atacama Desert's Maria Elena South region, where, at a depth of a few decimetres, the water activity is constantly of the order of 0.14 (i.e., 14% RH). It has been shown that in this subsurface hyper-arid environment there is still as much microbial diversity as at the surface, where the mean water activity value is 0.27 60 . In this region, however, the temperature never falls below 245 K. The environmental conditions at Jezero crater are inadequate for deliquescence but allow for hydration of Ca and Mg sulfates, among other salts. On Earth, some recent studies used gypsum (CaSO 4 ·2H 2 O) samples collected in the Atacama Desert as a substrate for culture experiments with a cyanobacteria strain. This research demonstrated that cyanobacteria could extract the water of hydrated salts from the rock, inducing a phase transformation from gypsum to anhydrite (CaSO 4 ), which may enable these microorganisms to sustain life in this extremely arid environment 61 . The validity of these results has been questioned 62 , which suggests that the existence of water extraction mechanisms from salts and dry rocks across other organisms needs to be further investigated to understand better the limits of life on
Earth and Mars 63 .
Based on state-of-the-art research on the limits of life tolerance on Earth, we conclude that the samples' environmental conditions at Jezero Crater are incompatible with the known cell replication requirements. If future research on life on Earth demonstrates low-temperature cell replication using the water of hydrated sulfates or water adsorbed to rock grains, then the habitability of the Martian sample collection should be reassessed, as daytime temperatures at Jezero are compatible with cell replication.
Methods
Once a sampling target was identified during the rover's surface operations, a 5 cm diameter patch was abraded within a few tens of cm of the desired sample targets, within the same lithology, to remove surface dust and coatings. In this abraded patch, which was taken as a proxy for the sample, detailed images of rock textures and maps of elemental composition, mineralogy, and organic molecule distribution were acquired with the rover instruments. Samples were acquired with drills and were afterwards sealed at the rover sealing station. Prior to sealing, the length of each solid core is estimated by Perseverance using a volume probe 59 . Each tube has an internal volume of 12 cm 3 (with a tube section of 1.4103 cm 2 ). Witness tubes are assumed to have only half of their internal volume available for gas. The Initial Reports have documented all the details of sample acquisition and instrument observation interpretation 35 (2023).
Table 1 indicates the sealing sol (counted from Perseverance's first day of operations on Mars) for each sample. The measured sample length, the MEDA atmospheric temperature at 0.84 m above the surface (Ta) (comparable to the height of the sealing station), and the atmospheric pressure (Pa) (see Supporting Information A) are used to calculate the total mass of rock (M), assuming a sample density of 2.6 g/cm³ (the same one used in the Initial Reports-PDS), as well as the estimated partial pressure of water and the number of moles of gas (n) in the headspace above the solid sample. Local Mean Solar Time (LMST) indicates the time when the sealing was activated. The solar longitude (Ls) marks the passage of time within a Mars year and the changes through seasons.
For consistency, in the mass calculation of Table 1 we have applied to all samples the same density used in the Sample Reports (2.6 g/cm³). But the actual density of each sample may vary significantly. For instance, the bulk density of regolith granular material on Mars has been estimated to range between ∼1 and 1.8 g/cm³ 64; the density of the bedrock at Jezero along the rover traverse has been estimated, based on RIMFAX radar measurements, to vary between 3 and 3.4 g/cm³ 65, whereas, using SuperCam mineral abundances, the densities of some of the targeted rocks on the crater floor have been inferred to vary between 3.1 and 3.7 g/cm³ 66. As for other rock types, the density of sedimentary rocks in Gale crater has been calculated to be of the order of 2.3 ± 0.1 g/cm³ 67. We use a single density value of 2.6 g/cm³ for all samples, which is an average of the densities of these three rock types (dense bedrock 3.7 g/cm³, sedimentary 2.3 g/cm³, and regolith 1.8 g/cm³).
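The mass and headspace bookkeeping described above reduces to a volume subtraction and the ideal gas law. A minimal sketch, using the tube geometry and density stated in the text but with illustrative (not actual) values for core length, sealing temperature, and pressure:

```python
R = 8.314                    # universal gas constant [J mol^-1 K^-1]
TUBE_VOLUME_CM3 = 12.0       # internal tube volume (from the text)
TUBE_SECTION_CM2 = 1.4103    # tube cross section (from the text)
DENSITY_G_CM3 = 2.6          # assumed sample density (Initial Reports value)

def sealed_sample_inventory(core_length_cm, T_K, P_Pa):
    """Rock mass (g) and moles of headspace gas for one sealed tube.

    core_length_cm: probe-measured core length
    T_K, P_Pa: MEDA ambient temperature and pressure at sealing
    """
    rock_volume = TUBE_SECTION_CM2 * core_length_cm        # cm^3
    rock_mass = DENSITY_G_CM3 * rock_volume                # g
    headspace_m3 = (TUBE_VOLUME_CM3 - rock_volume) * 1e-6  # cm^3 -> m^3
    n_moles = P_Pa * headspace_m3 / (R * T_K)              # ideal gas law
    return rock_mass, n_moles

# Illustrative values only: a 6 cm core sealed at 230 K and 700 Pa
mass, n = sealed_sample_inventory(6.0, 230.0, 700.0)
print(f"rock mass = {mass:.1f} g, headspace gas = {n:.2e} mol")
```

As the text notes, the dominant uncertainty in the mass is the assumed density; for a witness tube, only half of the internal volume would be counted as available for gas.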
The environmental information at the time of sealing is recorded by the Mars Environmental Dynamics Analyzer (MEDA) instrument package (MEDA Data; 40). During the sample sealing process, each tube was heated up to 40 °C (313 K) for a short period of time (minutes), as recorded by the PRT temperature sensors at the time of sealing. This does not mean that the sample itself was heated to that temperature, but 313 K is considered an upper temperature limit that the samples should not exceed. The actual temperature inside the sample tube during sealing is likely between the MEDA ambient temperatures and the Platinum Resistance Thermometer (PRT) measurements. MEDA also measured the ambient pressure and temperatures (for more information on the measurement cadence, see Supporting Information A). The sample length probe is used to estimate the rock volume; the remaining headspace volume is occupied by Martian atmospheric gas, and the temperature and pressure provided by MEDA are then used to calculate the number of moles of the sealed headspace gas. All this information is included in two main products that are uploaded to the NASA Planetary Data System (PDS): (1) the Sample Dossier, which contains all observations from the instrument payloads at the sampling site, along with relevant rover ancillary data; and (2) the Initial Report, which is an extended description of the observations of each sample prepared by the Science Team within a few weeks of sample acquisition (K.A.
Water activity is defined as the equilibrium fugacity of water vapor over a solution (f) relative to the fugacity of water vapor over pure water (f_0): a_w = f/f_0. At low pressures, such as on Mars, fugacities are well approximated by partial vapor pressures, leading to the more common expression a_w = e/e_s,w(T_g), where e_s,w is the saturation vapor pressure over liquid water; this is equivalent to the equilibrium relative humidity (RH) divided by 100 (RH/100 = a_w) 9. We use MEDA's Relative Humidity Sensor (HS) and Thermal Infrared Sensor (TIRS) to derive the water activity at the ground and to measure the ground temperature 42,68. The HS measures the relative humidity (RH) with respect to ice at 1.45 m with an uncertainty of 2%. For a detailed explanation of the RH retrieval procedures and error sources see 69, and for the measurements acquired during the first 410 sols of operations see 42.
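Since the HS reports RH with respect to ice while a_w is referenced to liquid water, the conversion needs the ratio of the two saturation vapor pressures at the measured temperature. The sketch below uses the Magnus approximation purely for illustration; at Martian temperatures a dedicated low-temperature formulation (such as the one used in the retrieval papers cited above) is required:

```python
import math

def e_sat_liquid_hpa(T_K):
    """Magnus approximation: saturation vapor pressure over (supercooled) liquid water [hPa]."""
    t = T_K - 273.15
    return 6.112 * math.exp(17.62 * t / (243.12 + t))

def e_sat_ice_hpa(T_K):
    """Magnus approximation: saturation vapor pressure over ice [hPa]."""
    t = T_K - 273.15
    return 6.112 * math.exp(22.46 * t / (272.62 + t))

def water_activity(rh_ice_percent, T_K):
    """a_w = e / e_s,liquid, from RH measured with respect to ice (as MEDA's HS does)."""
    e = (rh_ice_percent / 100.0) * e_sat_ice_hpa(T_K)
    return e / e_sat_liquid_hpa(T_K)

# At 200 K, an RH of 30% with respect to ice corresponds to a much lower
# water activity with respect to liquid, because e_s,ice < e_s,liquid below 0 C.
print(water_activity(30.0, 200.0))
```

The qualitative point survives the crude vapor-pressure formulas: below freezing, RH over ice always overstates the liquid water activity.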
Figure 1. (Left) Perseverance's traverse during the first 766 sols, from the landing site, through the Crater Floor and Delta Front campaigns, and towards the western delta of Jezero crater, Mars. The white line indicates the rover traverse, green dots mark the deployment sites of the First Cache, and red crosses mark the sampling sites (including the sample sealed on sol 749, acquired above the delta after the construction of the sample depot). Credit: CAMP and MRO HiRISE, The University of Arizona. (Right) Annotated landscape of the Sample Depot at Three Forks, as seen by Perseverance, with the different sealed tubes. Credits: NASA/JPL-Caltech/ASU/MSSS.
Figure 2. (Top) Annual (sol number and Ls) and night-time (LMST) variation of the water Volume Mixing Ratio (VMR), with error bars, at Jezero crater during the first Martian year, provided by the MEDA instrument at 1.45 m above the surface. Daytime relative humidity measurements (marked in gray) fall below the 2% accuracy of the MEDA relative humidity sensor, so the VMR cannot be estimated. The spring equinox starts at Ls = 0°, the summer solstice at Ls = 90°, the autumnal equinox at Ls = 180°, and the winter solstice at Ls = 270°. (Bottom) Total column of H2O abundance (in precipitable microns): TES zonally-averaged orbiter data for MY24 to MY27 (daytime, ~14 LMST) compared with MEDA (pre-dawn) in-situ surface measurements (lower data set) at Jezero crater. For orbital data, the error bars are the 1-sigma standard deviation on the plotted average. MEDA error bars are derived from the MEDA reported uncertainty in the relative humidity (RH) measurements and in the humidity sensor board temperature.
Figure 3. Near-surface diurnal cycle of water Volume Mixing Ratio (VMR) and air temperature (T) as a function of LMST during the sols around the sampling time of Robine. Single-column model (SCM) VMR results (dark and light blue lines, at 1.45 m and 0.84 m, respectively) are compared to MEDA values (including the uncertainty in H2O VMR retrieval) at 1.45 m for sols 285 to 305 (Ls = 139°-149°). The SCM air temperature estimate (black line) for the same period is compared with the Air Temperature Sensor (ATS) observations at 0.84 m (with a 300 s moving average). The time of sealing is marked with a vertical dashed black line, whereas sunset and sunrise times are marked with a blue and an orange line, respectively.
Figure 4. Diurnal variation, as a function of LMST, of the derived surface water activity with respect to liquid (with a_w error bars) and the measured ground temperature provided by MEDA during one full Martian year. For illustration, the environmental data are overlaid with the hydration lines of calcium and magnesium sulfates, and the calcium perchlorate deliquescence and efflorescence lines. The water activity a_w is derived assuming equilibrium, from the relative humidity (RH) with respect to liquid, as a_w = RH/100. All data points to the left of the ice saturation line (RH_ice = 100%) are saturated with respect to ice and may allow frost formation 70. The deliquescence RH (DRH) and hydration state lines of some perchlorate and sulfate salts are included for reference 19,72.
"Environmental Science",
"Physics"
] |
Coronal Mini-jets in an Activated Solar Tornado-like Prominence
High-resolution observations from the Interface Region Imaging Spectrometer (IRIS) reveal the existence of a particular type of small solar jet, which arose singly or in clusters from a tornado-like prominence suspended in the corona. In this study, we perform a detailed statistical analysis of 43 selected mini-jets in the tornado event. Our results show that the mini-jets typically have: (1) a projected length of 1.0-6.0 Mm, (2) a width of 0.2-1.0 Mm, (3) a lifetime of 10-50 s, (4) a velocity of 100-350 km s⁻¹, and (5) an acceleration of 3-20 km s⁻². Based on spectral diagnostics and EM-Loci analysis, these jets seem to be multi-thermal small-scale plasma ejections with an estimated average electron density of ∼2.4 × 10¹⁰ cm⁻³ and an approximate mean temperature of ∼2.6 × 10⁵ K. Their mean kinetic energy density, thermal energy density, and dissipated magnetic field strength are roughly estimated to be ∼9 erg cm⁻³, 3 erg cm⁻³, and 16 G, respectively. The accelerations of the mini-jets, the UV and EUV brightenings at the footpoints of some mini-jets, and the activation of the host prominence suggest that the tornado mini-jets are probably created by fine-scale external or internal magnetic reconnection (a) between the prominence field and the enveloping or background field or (b) between twisted or braided flux tubes within the prominence. The observations provide insight into the geometry of such reconnection events in the corona and have implications for the structure of the prominence magnetic field and the instability that is responsible for the eruption of prominences and coronal mass ejections.
INTRODUCTION
Solar jets are transient collimated plasma ejections in the solar atmosphere (Roy 1973). They are thought to be ejected along open magnetic fields or the legs of large-scale magnetic loops (e.g., Shibata et al. 1994a; Liu et al. 2005). As space-borne instruments have evolved since the 1980s, observations of dynamic solar events have been extended from Hα and radio to UV, EUV, and X-ray wavebands (e.g., Schmahl 1981; Schmieder et al. 1988; Alexander & Fletcher 1999; Zhang et al. 2000; Cirtain et al. 2007; Jiang et al. 2007; Chen et al. 2008; Tian et al. 2011; Joshi et al. 2018; Zhang & Ni 2019). According to relevant studies (e.g., Shimojo et al. 1996; Savcheva et al. 2007), large-scale solar jets can extend to lengths of ∼10⁵ km and widths of ∼10⁴ km; they have typical speeds on the order of a few × 10² km s⁻¹ and lifetimes ranging from several minutes to a few hours.
Given the high degree of correlation between jets and photospheric magnetic flux activity, such as flux emergence and cancellation (e.g., Roy 1973; Golub et al. 1981; Chae et al. 1999; Liu & Kurokawa 2004; Jiang et al. 2007; Chen et al. 2008; Yang et al. 2011), many authors have been inclined to believe that jets result from magnetic reconnection between potential or twisted magnetic loops and ambient open fields (e.g., Heyvaerts et al. 1977; Forbes & Priest 1984; Shibata & Uchida 1986; Canfield et al. 1996; Patsourakos et al. 2008; Kamio et al. 2010; Pariat et al. 2010; Yang et al. 2018; Li 2019). In contrast to this sort of "standard" jet, another type termed "blowout" jet was proposed by Moore et al. (2010), in which jets are associated with eruptions of miniature filaments. Sterling et al. (2015) further found that a mini-filament eruption could be identified in each of 20 randomly selected X-ray jets formed in polar coronal holes. Up to the present, a substantial amount of observations (e.g., Hong et al. 2011, 2016; Shen et al. 2012, 2017; Young & Muglach 2014; Lee et al. 2015; Li et al. 2015; Sterling et al. 2016; Kumar et al. 2018) and numerical simulations (e.g., Archontis & Hood 2013; Pariat et al. 2015, 2016; Wyper et al. 2018; Meyer et al. 2019) have shown that the blowout eruption of a small-scale sheared-core magnetic arcade can play an important role in producing a solar jet. It is also worth noting that Li et al. (2019) reported some jet-like features, which were rooted in the ribbons of an X-class flare and might be caused by chromospheric evaporation.
Even though magnetic reconnection seems to be necessary for the occurrence of most solar jets, the ways reconnection occurs during jet formation may differ remarkably from case to case, thus leading to a diversity of jet morphology. A multitude of studies have mentioned the spinning motion of jets (e.g., Liu et al. 2009; Chen et al. 2012; Hong et al. 2013; Shen et al. 2011; Schmieder et al. 2013; Zhang & Ji 2014; Liu et al. 2018; Lu et al. 2019), which is generally considered to be a result of relaxation of magnetic twist through reconnection (e.g., Canfield et al. 1996; Fang et al. 2014) or the conversion of mutual magnetic helicity into self-helicity during three-dimensional reconnection (Priest et al. 2016). A rare event of coronal twin jets was presented by Liu et al. (2016). Hong et al. (2019) found that a solar jet was accompanied by oscillatory reconnection. Shibata et al. (1994b) categorized jets as anemone type or two-sided-loop type, which are associated with relatively vertical or horizontal overlying coronal field configurations, respectively. Recently, Zheng et al. (2018) provided an example of a two-sided-loop jet related to ejected plasmoids and twisted overlying fields. Sterling et al. (2019), Shen et al. (2019), and Yang et al. (2019) further found that two-sided-loop jets can also be driven by eruptions of mini-filaments below overlying large magnetic loops.
Besides large EUV or X-ray coronal jets, high-resolution observations have revealed that small-scale jet activity takes place more frequently than large jets (e.g., De Pontieu et al. 2004; Shibata et al. 2007; Tian et al. 2014a; Young et al. 2018). Such jets are ubiquitous in the lower solar atmosphere, for example spicules observed at the limb (De Pontieu et al. 2007), chromospheric anemone jets outside active regions (Shibata et al. 2007; Nishizuka et al. 2011), penumbral microjets in sunspots (Katsukawa et al. 2007; Esteban Pozuelo et al. 2019), transition region network jets (Tian et al. 2014a; Kayshap et al. 2018; Chen et al. 2019), and intermittent jets from the light bridges of sunspots (Hou et al. 2017; Tian et al. 2018). Small-scale jets are usually one or two orders of magnitude smaller than large jets and have a shorter life span, varying from dozens of seconds to several minutes. In terms of dynamics, there seem to be two kinds of small jet, with speeds of ∼50 km s⁻¹ and ∼150 km s⁻¹, respectively. De Pontieu et al. (2007) first proposed that the two types of small jets or spicules dominating the solar chromosphere are driven separately by shock waves (Type-I) and magnetic reconnection (Type-II). Two similar sorts of small jet were also found in sunspot light bridges by Hou et al. (2017) and Tian et al. (2018).
Up to now, the triggering mechanism of small jets has not been fully understood. Many models have been devoted to interpreting their formation. Judge et al. (2011) suggested that some populations of spicules and fibrils correspond to warps in two-dimensional sheet-like structures. Takasao et al. (2013) found that slow-mode shock waves generated by magnetic reconnection in the chromosphere and photosphere play key roles in accelerating chromospheric jets. Cranmer & Woolsey (2015) modeled spicules as narrow, intermittent extensions of the chromosphere using the output of a time-dependent simulation of reduced magnetohydrodynamic (MHD) turbulence. The MHD simulations performed by Martínez-Sykora et al. (2017) and De Pontieu et al. (2017) revealed a novel driving mechanism for spicules in which ambipolar diffusion resulting from ion-neutral interactions plays a dominant role. Tian et al. (2018) studied the fine-scale jets from sunspot light bridges. The inverted Y-shape structure of the jets they observed does not seem to be easily explained by non-reconnection models. Recently, Samanta et al. (2019) detected flux emergence and/or flux cancellation around the spicule footpoint region and conjectured that this supports the formation of spicules from reconnection. Their observations do not exclude other formation mechanisms of small jets (e.g., Martínez-Sykora et al. 2017).
Recently, three-dimensional MHD and radiative MHD numerical experiments have shown how flux emergence can drive the formation of jets in the low solar atmosphere (see the review by Raouafi et al. 2016).

In this study, we consider a particular type of small-scale jet, which was first mentioned by Chen et al. (2017). Different from the usual jets previously reported, these small jets did not emanate from the photosphere or chromosphere, but directly appeared in a tornado-like prominence suspended in the corona. This appears to be a very rare phenomenon. The formation and disintegration mechanism of such prominences has been investigated by Chen et al. (2017). Here, we focus on statistical information about the dynamical and energetic characteristics of these unusual coronal mini-jets and their possible triggering mechanism. In the next section, we describe the observational data. This is followed by a detailed statistical investigation of the dynamical and energetic properties of the mini-jets. Finally, we summarize and discuss the results.

We also used data from the Atmospheric Imaging Assembly (AIA; Lemen et al. 2012) on board the Solar Dynamics Observatory (SDO; Pesnell et al. 2012), which supplies us with full-disk intensity images up to 0.5 R⊙ above the solar limb with 0.6″ pixel size and 12 s cadence in 7 EUV channels centered at 304 Å (He II, 0.05 MK), 131 Å (Fe VIII, 0.4 MK and Fe XXI, 11 MK), 171 Å (Fe IX, 0.6 MK), 193 Å (Fe XII, 1.3 MK and Fe XXIV, 20 MK), 211 Å (Fe XIV, 2 MK), 335 Å (Fe XVI, 2.5 MK), and 94 Å (Fe XVIII, 7 MK), respectively. One longitudinal magnetogram with a 0.5″ plate scale from the Helioseismic and Magnetic Imager (HMI; Schou et al. 2012) on board SDO was utilized to show the active region AR 12297 as the background of the magnetic field lines from a potential field source surface extrapolation (PFSS; e.g., Schatten et al. 1969).
RESULTS
During 2015 March 19-20, two tornado-like prominences successively formed and developed near active region AR 12297 (∼S16W79). In the early evolution of the first tornado, a multitude of small-scale jet-like structures (mini-jets) seemed to be rooted in, and were ejected from, the thread structures of the activated tornado. We selected 43 mini-jets (J1-J43) in total, which took place during the period 09:17-09:40 UT and clearly showed their collimated structures and dynamical evolution in the high-resolution IRIS 1330 Å SJI images (see the online animated version of Figure 1). We marked their footpoint positions with a circle (J1-J35), triangle (J36-J38), and diamond (J39-J43) in the SJI image taken at 09:21:18 UT (Figure 1(a)). Unlike the flows along the threads of a prominence (e.g., Chen et al. 2016), these jets were expelled approximately perpendicular to the local prominence axes, as indicated by the arrows in Figure 1(a). Another remarkable feature is that the jets sometimes appeared in clusters, occurring almost simultaneously, very close to each other in space, and with approximately parallel ejection directions. The evolution of several groups of clustered mini-jets is presented in the SJI 1330 Å images in the middle (J3-J6) and bottom (J23-J26) panels of Figure 1. Two AIA 171 Å images are also given in Figures 1(b4) and (c4) to show eight of the mini-jets in the EUV line. It can be seen that the spatial scales of these jets are so small that some of them, such as J5-J6 in panel (b4) and J25-J26 in panel (c4), can hardly be distinguished from each other in the 171 Å images.
Characteristics in Time, Space and Dynamics
Based on the IRIS 1330 Å SJI data, we characterized the 43 mini-jets with a statistical analysis of their temporal and spatial scales and dynamics, including the projected length (l), width (w), velocity (v_j), acceleration (a), and lifetime (τ). The results are listed in the left columns of Table 1. The lengths of the jets are defined as the distances between their footpoints and the farthest top edges, as measured in the direction of jet propagation (see the dotted line in Figure 1(b3)). Assuming that the mini-jets moved along magnetic flux tubes, it is reasonable to conjecture that they have a cylindrical structure. We measured their widths at their midpoints, as denoted by the distance between the two short lines in Figure 1(c2). The jet lifetimes are on the order of tens of seconds, which is not much longer than the temporal resolution (∼10 s) of the SJI and AIA observations. Sometimes it is hard to track the entire evolution of the jets, as they may appear and/or disappear during the gap between two successive intensity images. We approximated the velocities of the mini-jets by dividing their lengths by the corresponding time lags and further derived the accelerations from the velocities and the time lags under the assumption of a zero initial speed. Figures 2(a)-(e) present the distributions of l, w, τ, v_j, and a, respectively. It can be seen that most apparent velocities are less than 350 km s⁻¹, while accelerations are typically less than 20 km s⁻². The dashed lines in Figures 2(a)-(e) indicate the mean values of l, w, τ, v_j, and a, which are 3.4±0.2 Mm, 0.7±0.2 Mm, 31±7 s, 220±10 km s⁻¹, and 15±1 km s⁻², respectively.
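The kinematic estimates described above amount to two divisions: velocity from length over time lag, and acceleration from velocity over the same lag under the zero-initial-speed assumption. A minimal sketch, with an assumed (hypothetical) time lag standing in for the actual interval between SJI/AIA frames:

```python
def jet_kinematics(length_mm, dt_s):
    """Apparent velocity and acceleration of a mini-jet.

    length_mm: projected jet length [Mm]
    dt_s: time lag [s] between the two frames used for the measurement
    Assumes zero initial speed, as in the text.
    """
    v = length_mm * 1e3 / dt_s   # Mm -> km; velocity [km/s]
    a = v / dt_s                 # acceleration [km/s^2]
    return v, a

# Illustrative values: a jet of mean length 3.4 Mm measured over an assumed 15 s lag
v, a = jet_kinematics(3.4, 15.0)
print(f"v = {v:.0f} km/s, a = {a:.1f} km/s^2")
```

With these illustrative inputs the results land near the quoted mean velocity (∼220 km s⁻¹) and acceleration (∼15 km s⁻²), but the actual lags come from the image timestamps, jet by jet.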
Electron Densities and Temperatures
Unfortunately, all of the mini-jets in our study were missed by the IRIS spectrometer slit. Thus, we cannot directly measure the electron densities (n_e) of the mini-jets by using the intensity ratio of the O IV 1401 Å and 1399 Å line pair. Here, we provide a rough method for the diagnosis of n_e. We found that some places scanned by the slit have 1330 Å intensities (I) similar to those of the jets. Based on the assumption that they may have similar values of n_e, we first derived the electron densities of the scanned regions from the IRIS spectral data, which are shown by the plus signs in Figure 3(a). It can be seen that n_e increases with the enhancement of I at first and then stays roughly constant once I exceeds ∼2300 DN (DN is data number). We performed a quadratic-polynomial fit to the data with I in the range [320, 3500] DN. The fitting result is indicated by the red curve in Figure 3(a), which seems to fit the data well when I is below 2300 DN. The relationship between n_e and I within this range can be expressed by Equation (1). We then calculated n_e for each mini-jet according to its individual 1330 Å intensity and Equation (1) (see the eighth column of Table 1 and Figure 3(b)). It should be noted that this method only provides a very rough estimate of the density, as we assume that the O IV densities are somehow related to the C II emission, which can be invalid for various reasons: e.g., the C II emission can be optically thick, the filling factors of the O IV and C II emission can be different, the plasma seen in C II and O IV can be unrelated, etc. The distribution of n_e is also displayed by the histogram in Figure 2(f). Our results show that most electron densities range from 1.1±0.4 to 3.7±1.2 × 10¹⁰ cm⁻³, apart from three values for J18, J20, and J21, namely, 13±4, 7.9±2.5, and 10±3 × 10¹⁰ cm⁻³, respectively. The average n_e is 2.4±0.8 × 10¹⁰ cm⁻³. AIA provided good temporal coverage for the tornado event (see Appendix A and Figure 9).
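The intensity-to-density calibration described above (fit a quadratic to slit-derived (I, n_e) pairs, then evaluate it at each jet's 1330 Å intensity) can be sketched as follows. The sample points and therefore the fitted coefficients here are entirely hypothetical, standing in for the measured pairs of Figure 3(a):

```python
import numpy as np

# Hypothetical (intensity, density) pairs standing in for the
# slit-scanned regions of Figure 3(a); units: DN and 10^10 cm^-3.
I_slit = np.array([400.0, 800.0, 1200.0, 1600.0, 2000.0, 2300.0])
ne_slit = np.array([0.9, 1.4, 2.0, 2.5, 3.0, 3.2])

# Quadratic-polynomial fit over the range where the trend holds
coeffs = np.polyfit(I_slit, ne_slit, 2)   # [c2, c1, c0]
ne_of_I = np.poly1d(coeffs)

# Apply the calibration to a jet's measured 1330 A intensity
ne_jet = ne_of_I(1500.0)                  # in 10^10 cm^-3
print(f"n_e = {ne_jet:.2f} x 10^10 cm^-3")
```

As the caveats in the text make clear, such a calibration is only a plausibility argument, not a measurement: it transfers O IV densities to C II-emitting plasma that may not be co-spatial.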
However, due to their small scales and/or weak intensities, some mini-jets (e.g., J14, J15, J19, J27, J28, J36, and J37) are hard to observe in the hot EUV lines, especially in AIA 335 Å and 94 Å. Most of the jets can be detected simultaneously in multiple AIA channels, and they evolved roughly identically. Given that the hot AIA EUV wavebands also have a significant response around 10^5.5 K (Martínez-Sykora et al. 2011), it is likely that the mini-jets are cool structures. Similar situations have been discussed by Winebarger et al. (2013) and Tian et al. (2014b), when they analyzed the temperatures of inter-moss loops and penumbral bright dots, respectively. Since the typical method of differential emission measure (DEM) analysis is not sufficiently reliable for determining the temperature, owing to the poor discrimination at low temperatures in the AIA channels (e.g., Del Zanna et al. 2011; Testa et al. 2012), we also applied the EM-Loci technique (e.g., Del Zanna et al. 2002) to determine the likely temperatures of the jets. The EM-Loci curves of each mini-jet (except for J14, J15, J19, J27, J28, J36, and J37) were obtained by dividing the AIA background-subtracted intensities by the temperature response functions. J5 and J24 can be observed in the AIA 131, 193, 171, 211, and 335 Å channels; their EM-Loci curves are presented in Figures 3(c) and (d), respectively. As indicated by the black boxes in the panels, there are many crossings of the curves at low temperatures around 10^5.45 K, suggesting that this is the most likely temperature of J5 and J24. The centers of the two boxes correspond to log temperatures of 5.46 (J5) and 5.43 (J24), respectively. Similarly, the possible temperatures of the other jets were determined using this method and are given in the ninth column of Table 1. As for J14, J15, J19, J27, J28, J36, and J37, we simply adopt the mean temperature (10^5.42 ≈ 2.6±0.1 × 10⁵ K) of the other jets.
It is worth pointing out that the mini-jets are most likely multi-thermal. The EM-loci method may just help estimate an approximate temperature of the jets.
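The EM-Loci construction itself is simple: for each channel c, divide the background-subtracted intensity I_c by the temperature response R_c(T); where the resulting loci curves EM_c(T) = I_c / R_c(T) cross (i.e., where their spread is minimal), an isothermal plasma is consistent with all channels. A toy sketch with made-up Gaussian-in-log-T response functions (NOT the real AIA responses, which come from the instrument calibration):

```python
import numpy as np

logT = np.linspace(5.0, 6.5, 151)

def gaussian_response(logT0, peak):
    """Toy channel response: Gaussian in log T (illustrative only)."""
    return peak * np.exp(-((logT - logT0) ** 2) / (2 * 0.15 ** 2))

# Three hypothetical channels with different peak temperatures
responses = {
    "ch1": gaussian_response(5.4, 1e-24),
    "ch2": gaussian_response(5.6, 2e-24),
    "ch3": gaussian_response(5.8, 1e-24),
}

# Synthetic intensities from an isothermal plasma at log T = 5.45
true_logT, true_em = 5.45, 1e27
idx_true = np.argmin(np.abs(logT - true_logT))
intensities = {c: true_em * R[idx_true] for c, R in responses.items()}

# EM-Loci curves; the "crossing" is where their spread is smallest
loci = np.array([intensities[c] / responses[c] for c in responses])
spread = np.std(np.log10(loci), axis=0)
best_logT = logT[np.argmin(spread)]
print(f"crossing at log T = {best_logT:.2f}")
```

With real multi-thermal plasma the curves do not meet at a single point, which is why the text treats the crossing region only as an approximate temperature.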
Energetic Characteristics
Considering a model in which the mini-jets are cylinders of fully-ionized ideal gas, we calculated their kinetic and thermal energy densities (E_k and E_t) from the estimated densities and temperatures according to the following equations:

E_k = (1/2)ρv_j² = (1/2)n_e m_p v_j²,  (2)

E_t = 3n_e kT.  (3)
Here, ρ is the mass density, m_p is the proton mass, and k is the Boltzmann constant. It should be noted that v_j is the jet's apparent velocity in the plane of the sky; thus, Equation (2) only gives lower limits on E_k. Our calculations show that E_k mainly varies in the range of 1-25 erg cm⁻³ with a mean value of ∼9±3 erg cm⁻³, while E_t mostly ranges from 1 to 5 erg cm⁻³ with an average of ∼3±1 erg cm⁻³. For some mini-jets, obvious SJI 1330 Å and AIA EUV brightenings can be detected at their footpoints, implying a likely energy release by magnetic reconnection during the jet formation. Omitting the other energies, such as gravitational potential energy and radiation energy, we took the sum of E_k and E_t as the dissipated magnetic energy density E_m (Priest 2014). Then, we can estimate the dissipated magnetic field strength (B) according to the formula

E_m = B²/(8π).  (4)

Note that B here is not the actual magnetic field in the jets but represents the amount of magnetic field that is converted into accelerating and heating the jet. The values and distributions of E_k, E_t, E_m, and B are presented in the last few columns of Table 1 and in Figure 4, respectively. Among the 43 mini-jets, J18 is a special one with a higher level of energies and field strength, which seems to be associated with its much larger electron density. The mean E_m and B are 12±3 erg cm⁻³ and 16±2 G, respectively. On average, E_k is three or four times larger than E_t, so that much more magnetic energy was converted into kinetic energy than into heat. Figures 5(a) and (b) separately present the variation of E_m and B with E_k. Simple linear relationships seem to exist between their logarithms. The fitted relation between E_m and E_k, indicated by the red line in Figure 5(a), is given in Equation (5). The fit for B in Figure 5(b) yields a power law with one half of the index in Equation (5), because B is proportional to the square root of E_m.
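As a consistency check, plugging the mean values quoted in the text (n_e ≈ 2.4 × 10¹⁰ cm⁻³, T ≈ 2.6 × 10⁵ K, v_j ≈ 220 km s⁻¹) into these relations reproduces the quoted ∼9 and ∼3 erg cm⁻³ energy densities and a field strength close to the quoted 16 G; a sketch in CGS units:

```python
import math

M_P = 1.6726e-24   # proton mass [g]
K_B = 1.3807e-16   # Boltzmann constant [erg/K]

def jet_energetics(ne_cm3, T_K, v_cm_s):
    """Kinetic/thermal energy densities and dissipated field strength (CGS).

    E_k = 0.5 * rho * v^2, with rho = n_e * m_p (fully ionized hydrogen);
    E_t = 3 * n_e * k * T (electrons plus protons);
    E_m = E_k + E_t, with E_m = B^2 / (8 * pi).
    """
    rho = ne_cm3 * M_P
    E_k = 0.5 * rho * v_cm_s ** 2          # erg/cm^3 (lower limit: v is apparent)
    E_t = 3.0 * ne_cm3 * K_B * T_K         # erg/cm^3
    E_m = E_k + E_t
    B = math.sqrt(8.0 * math.pi * E_m)     # G
    return E_k, E_t, E_m, B

# Mean mini-jet values from the text
E_k, E_t, E_m, B = jet_energetics(2.4e10, 2.6e5, 220e5)
print(f"E_k = {E_k:.1f}, E_t = {E_t:.1f} erg/cm^3, B = {B:.0f} G")
```

Because v_j is only the plane-of-sky speed, E_k (and hence B) computed this way is a lower limit, as noted above.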
Characteristic Velocities and Pressures
On the basis of the above results, it is of interest to calculate some typical velocities and pressures associated with the mini-jet activity and to analyze their likely relationships. These parameters include the Alfvén speed (v_a), sound speed (c_s), gas pressure (P_t), magnetic pressure (P_m), and the total pressure imposed on the jet (P_j). The formulae for the calculation of v_a and c_s can be expressed as

v_a = B/√(4πρ),  (6)

c_s = √(2γkT/m_p),  (7)

where γ is the heat capacity ratio. The values of v_a and c_s for each jet are presented and compared with the jet's apparent velocity (v_j) in Figure 5(c). It can be seen that v_a seems to be greater than v_j, but their difference becomes smaller as v_j increases. As for c_s, it stays stable at lower values because of its simple form, which depends only on the temperature T. Based on the assumption of E_m = E_k + E_t, and taking γ = 5/3, the quantitative relationship between v_a, v_j, and c_s is derived as

v_a² = v_j² + 1.8c_s².  (8)

The respective definitions of P_j, P_m, and P_t are as follows:

P_j = F/S = Ma/S = ρla,  (9)

P_m = B²/(8π),  (10)

P_t = 2n_e kT,  (11)

where F is the force accelerating the jet, and a, M, w, and S = π(w/2)² are the acceleration, mass, width, and cross-sectional area of the jet, respectively. Figure 5(d) shows and compares the three pressure values. Basically, P_j is larger than P_m and P_t. Their mean values are 19±6, 12±3, and 1.8±0.6 dyn cm⁻², respectively.
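Again plugging in the mean values from the text (n_e ≈ 2.4 × 10¹⁰ cm⁻³, T ≈ 2.6 × 10⁵ K, B from E_m ≈ 12 erg cm⁻³, l ≈ 3.4 Mm, a ≈ 15 km s⁻²) reproduces the orderings v_a > v_j > c_s and P_j > P_m > P_t reported above; a sketch in CGS units:

```python
import math

M_P = 1.6726e-24   # proton mass [g]
K_B = 1.3807e-16   # Boltzmann constant [erg/K]
GAMMA = 5.0 / 3.0  # heat capacity ratio

def characteristic_speeds_and_pressures(ne_cm3, T_K, B_G, l_cm, a_cm_s2):
    rho = ne_cm3 * M_P
    v_a = B_G / math.sqrt(4 * math.pi * rho)      # Alfven speed [cm/s]
    c_s = math.sqrt(2 * GAMMA * K_B * T_K / M_P)  # sound speed [cm/s]
    P_j = rho * l_cm * a_cm_s2                    # accelerating pressure M*a/S [dyn/cm^2]
    P_m = B_G ** 2 / (8 * math.pi)                # magnetic pressure [dyn/cm^2]
    P_t = 2 * ne_cm3 * K_B * T_K                  # gas pressure [dyn/cm^2]
    return v_a, c_s, P_j, P_m, P_t

# Mean mini-jet values; B = sqrt(8*pi*E_m) for the mean E_m gives ~17.6 G
v_a, c_s, P_j, P_m, P_t = characteristic_speeds_and_pressures(
    2.4e10, 2.6e5, 17.6, 3.4e8, 1.5e6)
print(f"v_a = {v_a/1e5:.0f} km/s, c_s = {c_s/1e5:.0f} km/s")
print(f"P_j = {P_j:.0f}, P_m = {P_m:.1f}, P_t = {P_t:.1f} dyn/cm^2")
```

With these inputs the identity v_a² = v_j² + 1.8c_s² holds to well under a percent, since it is just E_m = E_k + E_t rewritten per unit mass.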
Potential Field Source Surface Extrapolation
A PFSS extrapolation for Carrington rotation 2161 reveals that many loop structures existed over the active region AR 12297, as shown in Figure 6. In space, these magnetic loops seem to cross the prominence at the locations where the mini-jets occurred. More interestingly, it can be found that the ejection directions of the jets (indicated by the arrows) are similar to the orientations of the crossing loops. According to the PFSS results, the mean background field strength of AR 12297 at the altitude of the tornado is ∼12 G, which is roughly compatible with our calculated dissipated magnetic field strength of ∼16 G. These results suggest that the interaction between the tornado-like prominence and the background field ("external reconnection") is one possible reason for the production of the mini-jets. On the other hand, it is well known that a prominence may be contained within a large-scale twisted flux tube (e.g., Mackay et al. 2010). The reconnection of this enveloping field (closely enveloping the prominence) with itself, including the field threading the erupting prominence ("internal reconnection"), may also produce the mini-jets. Unfortunately, this event occurred near the solar limb, and so nonlinear force-free field extrapolations cannot be employed to help clarify the spatial relationship of the prominence field to its surrounding non-potential field (i.e., the conjectured flux-rope envelope).
SUMMARY AND DISCUSSION
High-resolution observations from IRIS SJI and SDO AIA clearly reveal that many single or clustered mini-jets were launched from a tornado-like prominence, which has rarely been reported before. According to their evolution in the IRIS SJI far-UV and AIA EUV channels, the mini-jets are probably small-scale plasma ejections. Their average electron density is roughly estimated to be ∼2.4 × 10¹⁰ cm⁻³, similar to that of a typical prominence. They are likely multi-thermal structures with an approximate mean temperature of ∼2.6 × 10⁵ K. It has been suggested that some small solar jets can be heated to ∼10⁵ K, such as the type II spicules and transition-region network jets reported by De Pontieu et al. (2007) and Tian et al. (2014a), respectively. However, chromospheric jets outside or in the penumbrae of sunspots, studied by Shibata et al. (2007) and Katsukawa et al. (2007), seem to possess a much lower temperature (∼10⁴ K). The spatial and temporal scales of the mini-jets are similar to those of other small solar jets (see Section 1). They are mostly a few thousand kilometers long and several hundred kilometers wide, and have a short duration of tens of seconds. The apparent speed of the mini-jets can reach 470 km s⁻¹, with most between 100 and 350 km s⁻¹, which makes them more dynamic than other small jets, especially Type-I spicules (De Pontieu et al. 2007), surges, and chromospheric anemone jets (Shibata et al. 2007), possibly due to differences in the local plasma environment.
Indeed, the birthplace of the mini-jets is quite different from that of other small jets. They originate from the body of a tornado-like prominence suspended at an altitude of ∼30-50 Mm in the corona. The other jets, including the large-scale EUV or X-ray jets reported formerly, are basically rooted in the lower solar atmosphere, where the Alfvén velocity is typically lower and photospheric flux emergence and cancellation may drive fast reconnection between closed and open fields (e.g., Wang & Shi 1993; Canfield et al. 1996; Pariat et al. 2010; Chen et al. 2012) or activate the eruption of a mini-filament (e.g., Moore et al. 2010; Hong et al. 2011; Shen et al. 2012; Sterling et al. 2015). The coronal mini-jets presented here have a different origin. They take place when a tornado prominence has been disturbed and distends outwards (see Chen et al. 2017). At this time, magnetic reconnection is likely to occur between the prominence field and the surrounding field. The local magnetic energy may be dissipated and converted into heat and kinetic energy by reconnection. Consequently, the heated prominence material is ejected along the newly-formed fields by the enhanced gas pressure and the magnetic tension of the reconnected fields. The schematic diagrams in Figure 7 display such a scenario, suggesting a possible formation mechanism for the mini-jets.
One must be aware that the prominence may not be in close contact with the background field, but rather be enveloped by a flux rope. This is the case in flux rope models for prominences, which place the prominence material in field line dips under the rope axis (especially for quiescent prominences) or in highly sheared, very flat field around the axis of a so-called hollow-core flux rope (especially for active-region prominences; e.g., Bobra et al. 2008). Enveloping field may have a much smaller flux content, or be largely absent, in the alternative group of models, which assume that the prominence material resides on long flat field lines in a highly sheared arcade (or, equivalently, in the upper part of a very weakly twisted flux rope). For such relatively simple (smooth) models of prominences in active regions (hollow-core flux rope or highly sheared arcade), the enveloping field is nearly parallel to the field that threads the prominence in the immediate vicinity of the prominence material, and makes a gradual transition to the background field further out. The scenario sketched in Figure 7 thus requires that the enveloping field be reconnected away before mini-jets that follow the direction of the background field can form. Such reconnection can indeed occur, especially in the case of confined eruptions, when the background field strongly resists the rising flux rope. A striking example is the confined filament eruption described in Ji et al. (2003) and Alexander et al. (2006), which showed heated filament plasma draining back to the solar surface from the top of the halted filament along previously invisible paths. The numerical modeling of the event (Török & Kliem 2005;Hassanin & Kliem 2016) demonstrated that the whole flux rope can reconnect with the overlying background field and that the draining paths followed the background field after the reconnection. 
The new field connections became visible only after the flux threading the filament began to reconnect, so that the filament material traced them. Different from that case, a complete reconnection of the erupting flux does not happen in the event investigated here, since most of the original prominence threads are not destroyed.
Alternatively, considering a possibly high degree of complexity of a tornado prominence's field structure, the coronal mini-jets may be created by many small-scale internal reconnections between nearby threads, which convert magnetic energy into the heating and acceleration of small jets. This may also be implicated in the eruptive instability of a prominence or coronal mass ejection. The threads may either be braided around one another and start reconnecting when the braiding becomes too great, or they may each be internally twisted (Figure 8). In both cases, reconnection in one of the threads may start an avalanche of reconnections in the other threads. The reason that the jets are ejected roughly perpendicular to the overall prominence flux rope is that the fibrils are weakly twisted or braided, so that it is the transverse components of the magnetic field in the threads that are reconnected rather than the axial component directed along the flux rope. Reconnection of many small twisted threads has been modelled numerically by Hood et al. (2016) and Reid et al. (2020), building on earlier numerical MHD models for the formation of many fine-scale currents by kink instability (Browning et al. 2008;Hood et al. 2009). In practice the structure will be much more complex than indicated in Figure 8, as can be seen in the computations of Hood et al. (2016). On the other hand, braiding has been modelled numerically by, for instance, Wilmot-Smith et al. (2010) and Pontin et al. (2011). The twisting or braiding of individual threads would naturally be produced by photospheric motions in the photospheric magnetic carpet of the many internal intense flux tubes that produce the magnetic field of a huge prominence flux rope. The advantage of an explanation in terms of internal reconnection of prominence threads is that it explains in a natural way the fine-scale nature of the mini-jets, their appearance as a cluster, and their direction perpendicular to the prominence.
In our observations, brightenings appeared at the footpoints of some mini-jets, and most of the jets were also brightened along their whole lengths compared to the threads in the swirling prominence. It is hard to believe that these brightenings resulted from plasma density enhancements by material accumulations. In addition, the acceleration of mini-jets can be easily detected (see the online animated version of Figure 1). Such observations support a reconnection explanation for mini-jet formation. EUV and/or microwave brightenings have been found inside erupting filaments, as reported by Schrijver et al. (2008) and Huang et al. (2019), which suggest the occurrence of local magnetic energy release by many small-scale internal or external reconnections of a prominence flux rope. However, no obvious plasma ejections in the form of mini-jets were observed in these events. Huang et al. (2018) found that some jet threads appeared along a large-scale loop in the course of the eruption of a spiral filament. They found that magnetic reconnections probably occurred at the footpoints of the jets and accelerated them, similar to our event. Recently, Chitta et al. (2019) reported hot spicules with much lower speeds launched from a quiescent turbulent cool prominence, which seem to be generated instead by turbulent motions.
According to the external reconnection explanation for mini-jets, bi-directional reconnection outflows should be formed along not only the background or enveloping fields but also the tornado fields, as indicated by the red arrows in Figure 7(b). In several jet cases, such as J41-J43, we indeed observed some bright flows out of the jet footpoints along the prominence's threads. However, most of the mini-jets were found to be directed almost perpendicular to the prominence axis (likely along the background or enveloping field). This may be associated with the gas or magnetic pressure difference between the background or enveloping field and tornado field. The inflating jet plasma tends to move toward the weaker gas or magnetic pressure region (background or enveloping field), as found in MHD simulations of asymmetric magnetic reconnection (Cassak & Shay 2007;Murphy et al. 2012). Additionally, any jet component along the prominence threads would be less visible than a component along the background or enveloping field if the threads point more perpendicularly to the sky plane than the latter field. This is quite likely from the geometry of the prominence, which partly drained to foot points behind the limb.
So far, there are very few reports about coronal reconnection mini-jets, and so they are worth exploring in more detail in the future, in particular with high-resolution observations. They are associated with active region prominences, especially when activated (Chen et al. 2017) or even erupting, and so it will be worth determining whether they also take place in erupting quiescent prominences. In addition, nonlinear force-free (e.g., Mackay & van Ballegooijen 2006; Wiegelmann et al. 2006, 2012; Mackay & Yeates 2012) or other non-potential (e.g., Zhu et al. 2017) field extrapolations can help clarify the nature of the tornado magnetic fields and their spatial relationship to the overlying magnetic arcade. Numerical simulation studies of such jets will also provide a better understanding of these small-scale plasma ejections. From a wider point of view, they suggest that solar activities over widely different scales are often coupled together. Detailed investigations of their association would support a more comprehensive understanding of solar activity.
We thank Prof. Hui Tian of Peking University for insightful suggestions and informative discussions. IRIS is a NASA small explorer mission developed and operated by LMSAL with mission operations executed at NASA Ames Research center and major contributions to downlink communications funded by ESA and the Norwegian Space Centre. The SDO data are courtesy of NASA, the SDO/AIA, and SDO/HMI science teams. This work is supported by NSFC (11790304, 11790301, 11533008, 11941003, 11790300, 41331068, 11673034, 11673035, 11773039, 11973057), the B-type Strategic Priority Program of the Chinese Academy of Sciences, Grant No. XDB41000000 and Key Programs of the Chinese Academy of Sciences (QYZDJ-SSW-SLH050).

Figure captions: An animation of the AIA 131, 193, 171, 211, 304, and 335 Å channels runs from 09:10 UT to 10:00 UT, including all of the mini-jets listed in Table 1. Figure 7: The formation of mini-jets by reconnection between the background or enveloping field and the tornado field; the "X" symbols denote the spots where the magnetic reconnections take place between the fields of tornado and background.
"Physics",
"Environmental Science"
] |
A longitudinal study of the arterio-venous fistula maturation of a single patient over 15 weeks
Arterio-venous fistula creation is the preferred vascular access for haemodialysis, but has a large failure rate in the maturation period. Previous research, considering the remodelling mechanisms for failure-to-mature patients, has been limited by obtaining the patient-specific boundary conditions at only a few points in the patient history. Here, a non-invasive imaging system was used to reconstruct the three-dimensional vasculature, and computational fluid dynamics was used to analyse the haemodynamics for one patient over 15 weeks. The analysis suggested evidence of a control mechanism, which adjusts the lumen diameter to keep the wall shear stress near constant in the proximal regions of the vein and artery. Additionally, the vein and artery were shown to remodel at different growth rates, and the blood flow rate also saw the largest increase within the first week. Wall shear stress at time of creation may be a useful indicator for successful AVF maturation.
Introduction
There are over 2 million end-stage kidney disease (ESKD) patients who require kidney replacement therapy worldwide, and this is estimated to rise to over 5 million by 2030 (Liyanage et al. 2015). Haemodialysis, used by the majority (60-70%) of ESKD patients, requires vascular access, where blood can be taken out of the body (via cannulation) and pumped through an external dialysis machine, which filters the blood of waste and excess fluid before being returned. Vascular access is typically created via an arterio-venous fistula (AVF).
The AVF has among the highest failure rates of any elective surgical procedure that patients undergo, with recent clinical studies showing failure rates of 25-60%. Failure usually occurs due to insufficient dilation of the vessel and/or stenosis, which results in blood flow rates inadequate for haemodialysis. High flow rates are associated with successful AVF maturation, and there is also an increase in cardiac output and a total redistribution of blood flow; in a typical successful maturation, the arterial flow increases approximately tenfold (25-270 mL/min) within the first day and further increases to approximately 570 mL/min over the next 4-8 weeks (Dixon 2006).
For a typical successful maturation, there is a large increase in flow in the first few days post-creation (Dixon 2006) which is mostly attributed to the large dilation in the vein (approximately 60% increase in diameter), but also to the arterial dilation (approximately 20% increase in diameter).
A sustained increase in flow will lead to an increase in vessel diameter, regulated by the endothelial cells which line the vessel walls. If the endothelial cells sense an increase in wall shear stress (WSS), the vessel remodels outward to lower the WSS back to a baseline level (Girerd et al. 1996), yet different baseline levels for WSS exist in various parts of the human body and between different species of animals (Cheng et al. 2007). The WSS magnitude is given by WSS = μ ⋅ γ̇ (Eq. 1), where μ is the fluid viscosity and γ̇ is the wall shear rate; the direction of WSS corresponds to the local, near-wall flow direction. Wall shear rate is defined as the difference between adjacent mesh-point velocities, divided by the distance between them.
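The relation in Eq. 1 can be sketched in a few lines; the velocity and spacing values below are illustrative only, not taken from the study.

```python
# Sketch of Eq. 1: WSS = mu * gamma_dot, with the wall shear rate taken
# as the velocity difference between adjacent near-wall mesh points
# divided by their separation, as described in the text.

def wall_shear_rate(v_adjacent, v_wall, distance):
    """Wall shear rate (1/s) from two near-wall mesh-point velocities (m/s)
    separated by `distance` (m)."""
    return (v_adjacent - v_wall) / distance

def wall_shear_stress(mu, gamma_dot):
    """WSS (Pa) = dynamic viscosity (Pa.s) * wall shear rate (1/s)."""
    return mu * gamma_dot

# Illustrative values: no-slip wall, 0.01 m/s at 2.5e-5 m from the wall,
# and a representative blood viscosity of 0.0035 Pa.s.
gamma = wall_shear_rate(0.01, 0.0, 2.5e-5)   # 400 1/s
tau = wall_shear_stress(0.0035, gamma)       # 1.4 Pa
```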
Studies to monitor newly created AVFs have reported elevated WSS in the artery when compared to baseline (precreation) values (Ene-Iordache et al. 2003). It would follow that the artery will increase in lumen diameter to lower the WSS back to baseline, which has been a hypothesis of previous AVF studies (Javid Mahmoudzadeh Akherat et al. 2017). It is generally accepted that arterial vasculature seeks to maintain constant WSS at a preferred, homeostatic value (Humphrey 2008), with the presence of a control mechanism demonstrated by Le Noble et al. (2005). This control mechanism is dependent on a 'set point' in which any deviation will result in remodelling of the artery; an increase in WSS from the set point will result in outward remodelling to reduce the WSS back to the set point (Langille 1996). The set point varies in different parts of the arterial tree due to pulsatility in flow waveforms from reflections, and this naturally increases or decreases the mean WSS. However, Ene-Iordache et al. (2003) has shown that the mean WSS remains elevated from the baseline in the radial artery for the entire maturation period, despite the diameter increasing.
Patient-specific AVF longitudinal studies are limited. In 2013, Sigovan et al. (2013) published a longitudinal AVF study, using computational fluid dynamics (CFD), on humans using non-contrast MRI techniques and quantified the 3D geometric and haemodynamic changes. Significant geometrical and haemodynamic changes between the first scan at five days and the second scan at one month (for three patients) were noted, but a clear relationship between WSS and vascular remodelling was not identified. In a similar study, He et al. (2013) could not distinguish a defined relationship between disturbed flow and lumen changes. Bozzetto et al. (2018) scanned one AVF at 1 and 6 weeks post-creation using contrast-free MRI. They showed a general outward remodelling in the vein and proximal artery, while the distal artery remained the same.
Temporal data are needed to determine the relationship between haemodynamics and vascular remodelling, yet establishing a trend with minimal data points is challenging. A limitation of previous longitudinal studies is obtaining patient-specific data across a number of time points. Using our previously described system (Colley et al. 2018), we here outline a study in which one patient was scanned weekly, for 15 weeks. The relationship between WSS and outward remodelling was investigated, in addition to the WSS metrics, and due to the fifteen weekly scans, we were able to further explore the vascular remodelling stability for this patient.
Scanning system
A scanning and processing procedure was developed by our team specifically for geometric tracking and CFD modelling of AVFs, using a freehand ultrasound set-up combining B-mode scanning with 3D probe motion tracking as previously described (Carroll et al. 2020;Colley et al. 2018). A 3D tracking camera is mounted on the ultrasound, in line of sight with the patient as the scanner moves the tracked probe over the scan target location, sweeping along the vasculature to create a high-density stack of B-mode frames containing the lumen geometry. This stack is converted into a continuous volume as a 3D voxel grid, filling gaps between frames, with the vasculature geometry isolated through segmentation. Geometric calibration was verified through scanning a cross-wire phantom; comparing measurements of the scan with the actual measured values showed a mean error of 2.5%. Transient flow waveforms are recorded at the boundaries, synchronised with electrocardiography (ECG) and automatically digitised, forming realistic boundary conditions for the CFD models.
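The conversion of the tracked B-mode frame stack into a continuous voxel volume can be sketched as below. This is a minimal illustration of gap-filling between frames along the sweep axis by linear interpolation; the function name, array layout, and interpolation scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def frames_to_volume(frames, positions, grid_positions):
    """Resample a stack of tracked B-mode frames (each an HxW intensity
    array) onto a regular grid of positions along the sweep axis,
    linearly interpolating between the two nearest frames to fill gaps.
    `positions` are the tracked probe positions of each frame (mm).
    Hypothetical sketch; not the authors' actual pipeline."""
    frames = np.asarray(frames, dtype=float)
    positions = np.asarray(positions, dtype=float)
    order = np.argsort(positions)                 # sort frames along the sweep
    frames, positions = frames[order], positions[order]
    volume = np.empty((len(grid_positions),) + frames.shape[1:])
    for k, z in enumerate(grid_positions):
        i = np.searchsorted(positions, z)
        if i == 0:                                # before the first frame
            volume[k] = frames[0]
        elif i == len(positions):                 # past the last frame
            volume[k] = frames[-1]
        else:                                     # blend the two neighbours
            w = (z - positions[i - 1]) / (positions[i] - positions[i - 1])
            volume[k] = (1 - w) * frames[i - 1] + w * frames[i]
    return volume
```

Segmentation of the lumen would then operate on this voxel grid.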
Volume flow rate waveforms were measured at the proximal artery (PA) and distal artery (DA) boundaries of the geometric scan sweep, using transient centreline peak velocity detected with spectral Doppler ultrasound. These measurements using the Mindray L14-6NS probe are taken as far proximal and distal from the anastomosis as feasible for each patient case. As measurements at each location were taken sequentially, temporal re-synchronisation was required. Three-lead ECG was overlaid in real time while recording the Doppler spectra at each boundary location. Multiple periods of the peak velocity waveforms were captured, using the R peak location of the QRS complex from the ECG waveform (corresponding to ventricular contraction) to globally define endpoints for each period. Transient flow rates Q(t) were determined from the centreline peak velocity V p (t) and measurements of the cross-sectional diameter D, using the relationship Q(t) = V p (t) ⋅ πD²/8, where D is measured from the ultrasound B-mode image; this assumption of a fully developed parabolic velocity profile is an accepted limitation of the model.
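The conversion from centreline peak velocity to volumetric flow rate follows from the parabolic-profile assumption (cross-sectional mean velocity is half the centreline peak, so Q = (V_p/2)(πD²/4) = V_p πD²/8). A minimal sketch, with illustrative input values:

```python
import math

def flow_rate_ml_per_min(peak_velocity_m_s, diameter_mm):
    """Q = V_p * pi * D^2 / 8 : volumetric flow rate from the Doppler
    centreline peak velocity, assuming a parabolic (Poiseuille) profile
    so the cross-sectional mean velocity is half the centreline peak."""
    d_m = diameter_mm * 1e-3
    q_m3_s = peak_velocity_m_s * math.pi * d_m ** 2 / 8.0
    return q_m3_s * 1e6 * 60.0   # m^3/s -> mL/min

# Illustrative (not patient data): a 6 mm vessel with a 0.8 m/s
# centreline peak velocity gives roughly 680 mL/min.
q = flow_rate_ml_per_min(0.8, 6.0)
```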
Computational modelling
Patient-specific boundary conditions are obtained to numerically solve the pressure and velocity at discretised cells inside the geometrical domain. The governing incompressible Navier-Stokes equations were solved using the finite volume code FLUENT 16.2 (ANSYS Inc.), where the SIMPLE algorithm is used to solve the pressure-velocity coupling. A second-order upwind scheme spatially discretised the momentum and pressure variables, and temporal discretisation was achieved using a second-order implicit scheme. Throughout this work, blood is assumed to be an incompressible fluid with a constant density of 1060 kg/m 3 and treated as a non-Newtonian fluid, and the Carreau model (Cho and Kensey 1991) is employed to describe the viscosity behaviour: μ(γ̇) = μ ∞ + (μ 0 − μ ∞ )[1 + (λγ̇)²]^((n−1)/2) (Eq. 2), where γ̇ is the shear rate, λ is a time constant = 3.313 s, μ 0 is the viscosity at zero shear rate = 0.056 Pa ⋅ s and μ ∞ is the viscosity at infinite shear rate = 0.00345 Pa ⋅ s.
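The Carreau model can be evaluated directly as below. The zero-shear and infinite-shear viscosities and the time constant are the values quoted above; the power-law index n is not listed in the text, so the commonly used Cho-Kensey value n = 0.3568 is assumed here.

```python
def carreau_viscosity(gamma_dot, mu0=0.056, mu_inf=0.00345,
                      lam=3.313, n=0.3568):
    """Carreau viscosity (Pa.s) as a function of shear rate (1/s).
    mu0, mu_inf (Pa.s) and lam (s) are the values quoted in the text;
    n = 0.3568 is the usual Cho-Kensey index, assumed here since the
    text does not list it."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)
```

At zero shear the model returns mu0, at very high shear it approaches mu_inf, and viscosity decreases monotonically in between (shear-thinning).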
The turbulence is computed using the k-ω shear stress transport (SST) model; peak Reynolds numbers (estimated for each geometry) are in the range of 500-2200, yet it has been previously reported that there is transitional to turbulent flow present in the venous swing segment (the region just past the anastomosis) of the AVF (Bozzetto et al. 2016;Browne et al. 2015;Ene-Iordache et al. 2015a). This model has been shown to have good accuracy for transitional flows and flows near the wall in carotid artery bifurcations (Tan et al. 2008), aneurysms (Tan et al. 2009) and blood damage analysis (Goubergrits et al. 2016), which have similar wall shear stress analyses as AVFs. The vessel walls are set as rigid with a no-slip condition. A rigid-wall assumption is acceptable for identifying key features in an AVF, but with the expectation that the values of wall shear stress may be over-estimated by up to 10% in the proximal regions (McGah et al. 2014).
Each patient AVF case has a time-varying velocity profile that is set at the proximal inflow artery and outflow artery, which are measured via pulsed-wave Doppler at the specified locations each week. The proximal outflow vein is set with a 0-Pa pressure outlet. The simulation is calculated for three cardiac cycles with a time step of 0.001s, which was verified with a time-step independence study on two cases, resulting in a maximum WSS error of 0.3%. Each simulation was computed on a cluster, running on 64 CPUs at a speed of 2.2 GHz each. On the last cycle, the full transient solution was saved every ten time steps for flow analysis, as in previous similar work (Fulker et al. 2018).
A surface model for each scan is produced and the mesh created using ICEM (ANSYS Inc); each unstructured tetrahedral mesh is generated with five high-density prism boundary layers placed at the wall to resolve the near-wall velocity gradients such that y + < 1 to ensure wall modelling accuracy and then converted to polyhedral elements. An example of the mesh is shown in Fig. 1.
To verify grid independence, the grid convergence index (GCI) (Roache 1993) is calculated. The flow is computed for three meshes, with a grid ratio of more than 1.1 so that discretisation error can be differentiated from other error sources. Similar node spacings and grid densities were generated in all geometries, and the volume cell size was reduced systematically to produce three different meshes for each geometry. Note that the total number of cells differs between the patients due to the length of segment captured; however, the total mesh size ranged from 600K elements to 1.2M elements. The grid refinement ratio (r) is defined for unstructured grids as r_ij = (N_j / N_i)^(1/D), where D is the dimension of the flow domain, which in this case is three. The discretisation error is estimated for the time-averaged wall shear stress (TAWSS), in four different geometrical locations (L1, L2, L3, and L4) for two different patient-specific geometries, where TAWSS is the average of the wall shear stress over the cycle. Discretisation errors in the calculated TAWSS at L = 1:4 are 1.494%, 0.228%, 2.743% and 0.182%, respectively, for the fine-grid solution. Therefore, a medium grid was chosen. For each scan, the centrelines through the vasculature are generated and the anastomosis is defined as the intersection of these centrelines; this point is used as a global reference for each scan and assumed not to change during the maturation period.

Wall shear stress metrics

Ku et al. (1985) provided evidence that atherosclerosis lesions occur where the flow near to the wall is oscillatory in behaviour; the oscillatory shear index (OSI) was formed to describe this flow behaviour. OSI was later modified (He and Ku 1996) for use in three-dimensional flows, as shown by Eq. 3:

OSI = 0.5 ⋅ (1 − |∫₀ᵀ τ⃗_w dt| / ∫₀ᵀ |τ⃗_w| dt)    (3)

where τ⃗_w represents the instantaneous WSS vector, t is the time and T is the duration of the cardiac cycle. OSI represents the cyclic departure of WSS from its predominant axial direction; it is a dimensionless change in WSS direction ranging from 0 to 0.5, with the denominator being proportional to the time-averaged wall shear stress (TAWSS). As a time-averaged RANS model is used here, only the variation of the mean flow is taken into account rather than any small-scale turbulent fluctuations. Multiple theories exist that correlate a WSS disturbance metric to the aetiology of neointimal hyperplasia (NIH). The low/oscillatory WSS theory has been suggested to correlate with future sites of stenosis due to NIH (Ene-Iordache et al. 2015b), whereas some authors have suggested that high WSS (Carroll et al. 2011) is correlated with development of disease.
Further research by Peiffer et al. (2013) investigated multidirectional flow, with the hypothesis that WSS components acting transversely to the mean vector are pro-atherogenic. They devised a new metric (transWSS), the time-average of the magnitude of the WSS component acting perpendicular to the mean WSS direction, and showed that lesion prevalence correlated strongly with transWSS while no correlation supported the low/oscillatory WSS theory. However, it should be noted that transWSS does not completely characterise WSS behaviour, and OSI is still needed to distinguish purely forward flow from pulsatile flow with reversal. (Fig. 1 caption: The mesh is generated with tetrahedral elements and then converted to polyhedral elements. The cross section of the polyhedral mesh at the vein, artery and anastomosis displays the mesh resolution at the core and boundary layers; element height starts at 0.025 mm near the wall. TAWSS values at locations 1-4 are used to test for grid independence.)
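The two directional metrics can be sketched for a single wall node as below, with the WSS time series represented by its components in the wall tangent plane. The function names and the discrete (uniformly sampled) approximation of the integrals are illustrative assumptions.

```python
import numpy as np

def osi(tau):
    """Oscillatory shear index from a (T, 2) time series of WSS vectors
    in the wall tangent plane: 0.5 * (1 - |mean vector| / mean magnitude).
    0 for unidirectional flow, 0.5 for fully oscillatory flow."""
    mean_vec = np.linalg.norm(tau.mean(axis=0))
    mean_mag = np.linalg.norm(tau, axis=1).mean()
    return 0.5 * (1.0 - mean_vec / mean_mag)

def trans_wss(tau):
    """Time-average of the WSS component transverse to the mean WSS
    direction (after Peiffer et al. 2013), within the tangent plane."""
    mean_dir = tau.mean(axis=0)
    mean_dir = mean_dir / np.linalg.norm(mean_dir)
    perp = np.array([-mean_dir[1], mean_dir[0]])  # 90-degree rotation
    return np.abs(tau @ perp).mean()

# Purely forward pulsatile flow: both OSI and transWSS are zero,
# illustrating why OSI is still needed to detect flow reversal.
forward = np.array([[1.0, 0.0], [2.0, 0.0], [1.5, 0.0]])
```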
Patient information
Patients were recruited at Prince of Wales Hospital, Sydney, Australia, with approval from the Human Research Ethics Committee (HREC ref: 15/063), as part of an ongoing study. A 56-year-old male patient, scheduled for fistula creation surgery (and not currently on dialysis), agreed to attend the clinic on a weekly basis to enable scans to be conducted. The patient underwent a pre-scan before surgery, and subsequently a radio-cephalic fistula was formed surgically, where the radial artery and cephalic vein were dissected off their bed and anastomosed to each other in an end-to-side configuration. Pre-operatively, the cephalic vein had an average diameter of 1 mm, and more than 2 mm when a proximal venous tourniquet was applied.
The 3D freehand ultrasound system (Colley et al. 2018) was used to obtain the lumen geometry as shown in Fig. 2. The AVF is defined for five different regions (A-E): swing segment of the distal vein, the proximal vein, inflow artery distal, inflow artery proximal and the outflow artery proximal.
Geometric changes
The vein and artery of the patient were scanned before surgery and then at seven days post-surgery. A reconstruction of the geometry and the cross-sectional area comparison is shown in Fig. 3. There is an immediate increase in cross-sectional area of both vein and artery. The cross-sectional area of the vein has increased by 1050% (from 2.7 to 31.05 mm 2 ) in the useable segment (for dialysis cannulation), while the juxta-anastomosis region (where the vein meets the artery) has tapering due to the fixed size from the sutures. The proximal artery increased 280% (from 5.1 to 17.08 mm 2 ) in the proximal location, but similar to the vein, the cross-sectional area is lower in the distal region.
A comparison of the geometric changes in both the vein and artery is shown in Fig. 4 over 15 weeks. The cross-sectional area is calculated at 1-mm intervals along the centreline of the vasculature. A greyscale is used to represent each of the weeks, where week 1 corresponds to lighter colours and week 15 is darker. There is an immediate increase in the artery cross-sectional area from week 1 to week 2 in the proximal region, which then steadily increases as the weeks progress.
A volume of each of the segments (A to D) is calculated for each week. The volume is calculated by integrating the cross-sectional areas along the centreline. Lines of best fit are provided to demonstrate the trend across weeks 1-15.
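The segment volumes described above follow from a one-dimensional integration of cross-sectional area along the centreline; a minimal sketch using the trapezoidal rule with the 1 mm centreline spacing from the text (the area values are illustrative):

```python
def segment_volume_mm3(areas_mm2, spacing_mm=1.0):
    """Volume of a vessel segment (mm^3) by trapezoidal integration of
    cross-sectional areas (mm^2) sampled at fixed intervals along the
    centreline (1 mm intervals in the text)."""
    a = list(areas_mm2)
    return sum(0.5 * (a[i] + a[i + 1]) * spacing_mm for i in range(len(a) - 1))

# Illustrative: a 4 mm-long segment tapering from 20 to 16 mm^2.
vol = segment_volume_mm3([20.0, 19.0, 18.0, 17.0, 16.0])  # 72.0 mm^3
```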
While the artery continues to remodel over time, the vein has little change in comparison: 200% between the first and last weeks in the proximal artery segment (D) compared with 20% in the proximal vein segment (B).
Blood flow rates
Transient flow rates were obtained via Doppler ultrasound at the inflow artery, outflow artery and outflow vein, and the measured waveforms at each week are shown in Fig. 5. As the outflow artery has retrograde flow, the outflow vein flow is a composition of the inflow artery and outflow artery. The outflow artery still consists of the initial peak velocity as it travels through the ulnar artery and around the palmar arch of the hand, but is then rapidly decelerated when the flow collides with the inflow artery flow near to the anastomosis. Pre-creation, the inflow artery waveform was highly pulsatile, as is expected in distal vasculature. Once the AVF is made, the waveform has one major flow peak (from the inflow artery) and then a smaller second reflection peak from the outflow artery. The mean flow rate at the inflow artery, represented by black bars, is 10 mL/min at pre-creation and increases to a mean flow rate of 360 mL/min at week 1. At pre-creation, there were no other branches in the radial artery and hence the inflow is equal to the outflow.
As the maturation progresses, there is an increase to the flow rate in the inflow artery. The outflow artery stays approximately constant, and hence, the outflow vein increases at approximately the same flow rate as the inflow artery. After week 10, there are significant fluctuations (of the order of 500 mL/min) in the inflow artery flow rate.
The mean flow rates of the three boundaries at each week are shown in Fig. 6. While the mean flow rate of the artery inflow and vein outflow increases over the duration of the maturation, the artery outflow stays approximately constant. The outflow artery accounts for approximately 20% of the flow in the vein at the beginning of the maturation, but this proportion decreases as time progresses.
The outflow vein has a mean flow rate of 520 mL/min at week 1, with a diameter of over 6 mm. By week 2, the flow rate has increased to 780 mL/min which is within the requirements for dialysis purposes. From the fitted curve, the average flow in the vein continues to increase over the weeks, approximating 900 mL/min at week 15.
There were large changes in mean blood flow rate between week 1 (367 mL/min at the inflow artery) and week 2 (630 mL/min at the inflow artery), and hence, the velocity streamlines are shown for these two time points in Fig. 7. At week 2, the inflow artery blood flow rate accounts for a larger proportion of the venous flow rate than at week 1. This is due to the outflow artery having approximately the same flow rate between the weeks (157 mL/min at week 1 and 150 mL/min at week 2). (Fig. 3 caption: 1 week post-surgery. The cephalic vein (blue) is surgically attached to the radial artery (red) to create an AVF (green). A comparison of baseline to 1 week post-surgery shows the large difference in cross-sectional areas by flow-induced remodelling. For the baseline artery and vein, the cross-sectional areas are taken from the approximate location where the AVF was created.)
At week 1, there is a high-velocity jet which is skewed to the outer wall of the venous swing segment, whereas there appears to be flow re-circulation and separation near to the inner wall, which does not reattach until further downstream in the vein. The Reynolds number at the inflow artery has an approximate peak value of 730, indicating laminar flow, but the peak Reynolds number at the venous swing segment is much higher (approximately 1700). At week 2, the inflow artery has a peak Reynolds number of 1000, but within the swing segment the peak Reynolds number is approximately 2250. This indicates the flow is within the transitional regime, yet the pulsatile nature of the blood flow is known to have an influence on the regime threshold, and transitional flow may occur at a lower Reynolds number.
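The Reynolds numbers quoted above follow the standard pipe-flow definition; a minimal sketch, using the blood density from the text and a representative viscosity (the Carreau infinite-shear value, an assumption for this illustration since the actual viscosity is shear-dependent):

```python
def reynolds_number(velocity_m_s, diameter_mm, rho=1060.0, mu=0.00345):
    """Re = rho * V * D / mu. rho is the blood density from the text
    (1060 kg/m^3); mu = 0.00345 Pa.s is the Carreau infinite-shear
    viscosity, used here as a representative constant (an assumption)."""
    return rho * velocity_m_s * diameter_mm * 1e-3 / mu

# Illustrative: a 1 m/s peak velocity in a 6 mm vessel gives Re ~ 1840,
# in the transitional range discussed in the text.
re = reynolds_number(1.0, 6.0)
```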
The flow at week 2 appears to be more disturbed through the anastomosis and venous swing segment, with much higher-velocity jets due to the increased flow, but very little diameter change when compared with week 1. There is also a larger re-circulation zone present in the floor of the anastomosis, and throughout the swing segment.
Sites of disturbed flow have been observed to coincide with disease development in which the magnitude and multi-directionality of the WSS can disrupt endothelial function. To quantify the disturbance seen in the maturation, WSS metrics are computed in the following section.
Wall shear stress metrics
The 3D contours of both time-averaged wall shear stress (TAWSS) and the oscillatory shear index (OSI) are shown for each week in Fig. 8. High TAWSS is found in the vein swing segment each week and lesser values in the distal inflow artery. These high values are due to high shear rates as the cross-sectional area decreases. In the vein swing segment, values of TAWSS are unstable week to week, as the velocity jet through the anastomosis is skewed to the outer wall, but varies in location due to the different magnitudes of incoming flow each week. Relatively lower TAWSS is found in the proximal artery and vein regions.
High values of OSI occur in the swing segment, and just after as the vein straightens out, with the majority found on one side of the vessel due to the out-of-plane curvature of the vein and artery. The contour patterns vary a small amount, but remain in approximately the same locations week to week.
The multi-directional flow near the wall was quantified using the transWSS metric, and the contours are shown in Fig. 9. The highest values (>3 Pa) are in the anastomosis region, due to the collision of the inflow and outflow artery flows. There are also places of mid-range (0.6-1.2 Pa) and high transWSS in the venous swing segment, but these reduce downstream. The regions of high multi-directional WSS are in similar locations week to week and do not appear to change in area size significantly.
Vascular remodelling
Further analysis was performed so that the TAWSS could be quantified and a point-to-point comparison made between weeks, as shown in Fig. 10.
There is a large increase in TAWSS from baseline (average 0.4 Pa) to week 1 (average 2.5 Pa) in the artery, and this value remains elevated in the subsequent weeks. In the proximal segment, further from the anastomosis, the TAWSS is lower, with some variation among the nodes due to the curvature of the vessel. There are localised regions of high TAWSS (more than 15 Pa), but these high values do not stay in the same location week to week and remain within 40 mm of the anastomosis. Similar to the artery, the proximal vein segment shows variation of TAWSS among the nodes, but remains elevated week to week. There are also much larger stresses on the wall in the swing segment, covering a large proportion of the surface area, especially less than 20 mm away from the anastomosis. These values settle by 40 mm along the centreline in most weeks.
There are significant increases in the cross-sectional area for the artery, particularly further than 40 mm away from the anastomosis. In the subsequent weeks, the proximal artery remodels from an average cross-sectional area of 24 mm² to 34 mm², but clearly shows localised, rather than uniform, remodelling. The outward remodelling progressively increases closer to the anastomosis for the duration of the 15 weeks.
The vein remodels outward in the first 3 weeks in the proximal region, but little remodelling is seen thereafter for the duration of the maturation. Little to no remodelling is seen in the venous region between the anastomosis and 20 mm away, after which there is a defined transition where the cross-sectional area is relatively larger.
Discussion
The results for this patient show that the majority of the geometric remodelling occurs for both the vein and artery within the first 2 weeks. Within the first week, the vein has increased by 1050% in the useable segment, and the artery by 280%, compared with the baseline (pre-creation) values. The blood flow rate also has its largest increase within the first week, rising from a mean of 10 mL/min to 360 mL/min in the inflow artery. The outflow vein has a mean flow rate of 520 mL/min at week 1 and increases to 780 mL/min by week 2. By the clinical definition of 'the rule of sixes' (the vein is more than 6 mm in diameter, less than 6 mm from the skin surface, and carries more than 600 mL/min of flow), this patient would be deemed successfully matured by week two.
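The 'rule of sixes' quoted above reduces to three threshold checks. A minimal sketch of the criterion as a predicate (illustrative only, not a clinical tool; the depth argument is the vein's distance from the skin surface):

```python
def rule_of_sixes(vein_diameter_mm, depth_mm, flow_ml_min):
    """Clinical 'rule of sixes' maturation check: vein diameter > 6 mm,
    vein less than 6 mm from the skin surface, and flow > 600 mL/min."""
    return vein_diameter_mm > 6.0 and depth_mm < 6.0 and flow_ml_min > 600.0
```

By these thresholds, the week-two vein flow of 780 mL/min satisfies the flow criterion, whereas the week-1 value of 520 mL/min does not.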
The flow showed much higher fluctuations week to week than the geometry. High flow disturbances were found in the venous swing segment of this patient at each week, caused by the inflow artery flow colliding with the retrograde flow in the outflow artery. As the anastomosis configuration tapers in the venous swing segment, there is evidence of a velocity jet phenomenon through this region, as well as recirculating flow. The variation in flow rates from week to week is unlikely to be entirely the result of Doppler measurement error; this patient was concurrently being evaluated for a kidney transplant, and those examinations occasionally took place prior to our scheduled scanning session. The examinations started in the second half of the longitudinal study and may contribute to the outliers seen in the data, for example, the data seen in week 13. Additionally, ESKD patients are often given medications for the management of kidney disease which would affect peripheral resistance and the blood flow waveform.

[Figure caption (Doppler flow rates): Flow rates measured with Doppler ultrasound are shown for the inflow artery, outflow artery and outflow vein. At pre-creation, the flow rate in the inflow artery is pulsatile with a mean of 10 mL/min. There is an apparent increase in the flow rate for the inflow artery and outflow vein, while the outflow artery stays approximately constant. It should be noted that after week 10, there are larger fluctuations in the flow rates. Each flow rate is normalised to the cardiac cycle, where T represents the cycle length in seconds (s).]
Little to no remodelling is seen in the venous region between the anastomosis and 20 mm into the vein, where there is a defined transition and the cross-sectional area is relatively larger. In appearance, the vein has heterogeneous variation of the area along the segment, which corresponded to high and disturbed flow. In the artery, there are lower cross-sectional area variations along the investigated segment, but current results do not demonstrate homogeneous remodelling, as reported by other studies (Sigovan et al. 2013). The vein and artery are seen to remodel outward at different rates. In the proximal segments, the artery continues to remodel over the fifteen weeks (200% change in volume), but the vein shows little change in comparison (20% change in volume).

Fig. 6 Mean flow rates. The mean flow rate is shown for the inflow artery, outflow artery and outflow vein, with lines of best fit added to demonstrate the trend. There is a 36-fold increase from pre-creation to week 1 in the inflow artery (10 mL/min to 360 mL/min). The outflow artery has retrograde flow which remains approximately constant and accounts for approximately 20% of the flow in the vein.

Fig. 7 Velocity streamlines at systole for week 1 and week 2. The velocity streamlines are shown for week 1 and week 2, when the majority of the remodelling happens in the maturation. There is a large increase in flow between the 2 weeks. Note that the flow in the outflow artery is retrograde.

Fig. 8 TAWSS and OSI contours, weekly changes. Time-averaged wall shear stress (TAWSS) is shown on the left and the oscillatory shear index (OSI) on the right, displaying both the medial and lateral views. High TAWSS is seen in the vein swing segment and anastomosis region. High OSI values are seen on one side of the AVF, typically in the vein swing segment and just after. Note that the geometry is not to scale.
WSS results for this patient show that values are not restored to the pre-creation baseline set point, as previously suggested (Javid Mahmoudzadeh Akherat et al. 2017), but fluctuate around a new, higher value. This suggests a possible control mechanism in which the diameter is adjusted to maintain WSS within a narrow range of the new set point. The results also suggest that at the time of AVF creation the vasculature behaves similarly to arteriogenesis, as similar shear-rate patterns were observed in early-stage arterial system development (Le Noble et al. 2005). Fluctuations of shear were also observed in that study (Le Noble et al. 2005), showing 'constancy' around a value.
Wall shear stress at time of creation could be a possible predictor for AVF success or failure. TAWSS values for the patient in this study do not change significantly from the measured value in week 1, despite large increases in flow and cross-sectional area. The TAWSS measured in the proximal inflow artery fluctuates around 3 Pa with a range of 1.5 Pa, which is elevated from the baseline level of approximately 0.4 Pa. These values agree with other WSS measurements in human radio-cephalic AVFs (Ene-Iordache et al. 2003), which also show elevated (from baseline) WSS during the maturation period.
To quantify the effect of flow disturbance on WSS, OSI and transWSS were calculated at each week. The results show that high values of OSI and transWSS are confined to the anastomosis and swing segment region, which agrees with other patient-specific CFD studies (Ene-Iordache et al. 2015b). As the majority of those studies were taken at single time points, they lacked the data to test the theory that disease develops at these locations. It was found that this patient experienced continued exposure to flow disturbances, as quantified by the WSS metrics (OSI and transWSS), but this did not lead to severe NIH or disrupt VA patency at 15 weeks. To explore the relationship between outward remodelling and TAWSS, the TAWSS is spatially averaged over the various segments (A to E), as shown in Fig. 11. It is immediately apparent that the AVF remodels at different rates in different regions, likely due to proximity to the anastomosis. Even though there are different rates of outward remodelling throughout the vein and artery, TAWSS neither steadily increases nor decreases, but rather fluctuates around a value.
In the proximal outflow vein (segment B), there is an increasing trend in the cross-sectional area, but with a small rate of change. The spatially averaged TAWSS fluctuates around 3 Pa with a range of 1.5 Pa. The proximal inflow artery (segment D) has the same TAWSS trend (constancy of 3 Pa with a range of 1.5 Pa), but has a much larger rate of change in the cross-sectional area over the weeks.
There is a larger range of temporal fluctuation (week to week) of TAWSS in regions near to the anastomosis. In the distal region of the inflow artery (segment C), the TAWSS is much more elevated than in the proximal region of the artery, with an average difference of 3.5 Pa. Despite the large temporal fluctuations, the values still appear to fluctuate around 7.5 Pa. The cross-sectional area rate of change in the distal region follows a markedly different curve from the proximal region. For segment C, there is an increasing trend whose rate continues to increase over the weeks, whereas for segment D, there is an increasing trend whose rate decreases over the weeks. The venous swing segment (segment A) has the highest temporal WSS fluctuations and the highest magnitude of all segments, averaging approximately 9 Pa with a range of 6 Pa. The cross-sectional area increases slightly over the weeks.
Little remodelling is seen in segment E, due to the retrograde flow in the outflow artery remaining almost constant week to week. The TAWSS in this segment has a temporally averaged value of 2 Pa and shows the smallest fluctuations of all segments. The area also has the smallest fluctuations, but appears to increase marginally as the weeks progress.
Conclusion
The geometric and haemodynamic timeline was established for a successful AVF maturation, with data taken weekly for a duration of 15 weeks. The use of a large temporal data set highlighted the variance found in AVF remodelling, particularly in the week-to-week measurement of the blood flow rates, and the potential for misleading results from only using a limited number of time points when scanning an AVF during maturation.
It was found that the largest changes occurred within the first two weeks after creation, but outward remodelling and flow changes continued in the later weeks. A key finding was that the vein and the artery remodel at separate rates from each other, and also at different rates based on proximity to the anastomosis. The inflow artery in the proximal and distal locations had markedly different rates: the proximal region seemed to converge towards a value, while the distal location continued an increasing trend.
This study provides further evidence that wall shear stress at the time of creation could be a useful predictor for AVF success or failure. The WSS values for this patient did not change significantly from the value measured in week 1. In addition, the TAWSS did not restore to baseline (pre-creation) values and remained elevated at a new set-point level, which suggests that the set-point level for TAWSS is able to adapt after AVF creation. The constancy of WSS was more evident in the proximal regions. However, there were higher TAWSS fluctuations throughout the cardiac cycle and temporally (week to week) in regions near the anastomosis (less than 40 mm away) for both the vein and artery, indicating instability in this region.
Disturbance of flow was confined to regions near the anastomosis and in the venous swing segment. It was found that, in this patient, continued exposure to flow disturbances, as quantified by the WSS metrics, was not detrimental to the success of the maturation.
Funding Open Access funding enabled and organized by CAUL and its Member Institutions. Part of this work was completed during the Ph.D. studies of E. Colley and J. Carroll, who were supported by Australian Postgraduate Award scholarships.
Conflict of interest
The authors have no disclosures. All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2000 (5). Informed consent was obtained from all patients for being included in the study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Fig. 11 Vascular remodelling. Various segments of the AVF are shown with their associated changes in spatially averaged cross-sectional area and time-averaged wall shear stress (TAWSS). Segments further from the anastomosis (b, d) have smaller weekly fluctuations in TAWSS, with constancy around approximately 3 Pa. Segments closer to the anastomosis (a, c) have a much higher baseline value and larger fluctuations. Trends are shown via lines of best fit through the data points.
PHRF1 promotes the class switch recombination of IgA in CH12F3-2A cells
PHRF1 is an E3 ligase that promotes TGF-β signaling by ubiquitinating the homeodomain repressor TG-interacting factor (TGIF). The suppression of PHRF1 activity by PML-RARα facilitates the progression of acute promyelocytic leukemia (APL). PHRF1 also contributes to non-homologous end joining in response to DNA damage by linking H3K36me3 and NBS1 with the DNA repair machinery. However, its role in class switch recombination (CSR) is not well understood. In this study, we report the importance of PHRF1 in IgA switching in CH12F3-2A cells and CD19-Cre mice. Our studies revealed that CRISPR-Cas9-mediated PHRF1 knockout and shRNA silencing in CH12F3-2A cells reduced IgA production, as well as the amounts of PARP1, NELF-A, and NELF-D. The introduction of PARP1 could partially restore IgA production in PHRF1 knockout cells. Intriguingly, switching to IgA, as well as to IgG1, IgG2a, and IgG3, was not significantly decreased in PHRF1-deficient splenic B lymphocytes isolated from CD19-Cre mice. The levels of PARP1 and NELF-D were not decreased in PHRF1-depleted primary splenic B cells. Overall, our findings suggest that PHRF1 may modulate IgA switching in CH12F3-2A cells.
Introduction
An effective immune response requires the appropriate subtypes of antibodies. Class switch recombination (CSR) is responsible for changing the heavy-chain isotype from IgM to IgG, IgE, or IgA in B lymphocytes. The constant regions of the immunoglobulin heavy chain (CH genes) are each preceded by a 1- to 10-kb repetitive DNA element, the switch (S) region, except for the Cδ region [1-3]. During CSR, germline transcription from two S regions yields ssDNA substrates for activation-induced cytidine deaminase (AID) to produce high densities of deoxyuracils in both DNA strands. AID recruitment to the S regions is mediated by SPT5 when RNA polymerase II (RNAPII) is stalled in the S regions, possibly due to secondary DNA conformations [4-9]. Subsequently, multiple nicks are generated on the non-template and template strands, and the resulting double-strand breaks (DSBs) in the S regions are joined by canonical non-homologous end joining (c-NHEJ) or microhomology-mediated end joining (MMEJ), the choice being mainly dependent on the length of junctional microhomology (MH) sequences. c-NHEJ requires the Ku70-Ku80 heterodimer to recognize DSBs and recruit DNA-PKcs for downstream signaling. By contrast, MMEJ is initiated by PARP-1 binding to DSBs, which then recruits MRN and CtIP for end resection [10,11]. This DSB repair converts IgM to other immunoglobulin isotypes. A number of factors involved in the DNA damage response and double-strand break repair affect CSR in vivo, including PARP1/2, MRN, ATM, H2AX, RNF8, RNF168, and 53BP1 [12-21]. For the evaluation of CSR in vitro, murine CH12F3-2A lymphoma cells [22] and primary splenic B cells have been proven to switch consistently, converting IgM to other immunoglobulins in response to a variety of stimulations, such as CD40L, IL-4, TGF-β, and LPS.
PHRF1 (PHD and RING finger domain protein 1) is an E3 ligase containing a plant homeodomain (PHD) that binds methylated histones and a RING domain that ubiquitinates substrates. The C-terminus of PHRF1 harbors an SRI (Set2 Rpb1 Interacting) domain which is predicted to interact with the phosphorylated C-terminal domain (CTD) of Rpb1 [23]. Initial reports on PHRF1's function revealed its role in modulating TGF-β signaling in APL development. PHRF1 ubiquitinates TGIF to ensure redistribution of cPML (the cytoplasmic variant of promyelocytic leukemia protein) to the cytoplasm, where Smad2 is phosphorylated in TGF-β signaling. The aberrant PML-RARα fusion protein interferes with PHRF1's binding to TGIF and prevents TGIF breakdown by PHRF1 [24,25]. We previously focused on a distinctive function of PHRF1 in modulating non-homologous end joining (NHEJ), in which PHRF1 links H3K36 trimethylation (H3K36me3) and NBS1, a component of the MRE11/RAD50/NBS1 (MRN) complex, to maintain genomic integrity [26]. Our recent data reveal that PHRF1 associates with the phosphorylated C-terminal repeat domain of Rpb1, the large subunit of RNA polymerase II (RNAPII), through its SRI domain. PHRF1 binds to the proximal region adjacent to the transcription start site of ZEB1 and promotes the expression of Zeb1 and cell invasion in lung cancer A549 cells [27].
The involvement of PHRF1 in NHEJ and its interaction with Rpb1 prompted us to investigate whether CSR is affected in the absence of PHRF1. To investigate the impact of PHRF1 on CSR, we knocked out PHRF1 expression by CRISPR-Cas9 editing and measured IgA switching in CH12F3-2A cells. We also determined the switching efficacy of immunoglobulins (Igs) in CD19-Cre mice. Interestingly, it turned out that PHRF1 deficiency influenced IgA switching in CH12F3-2A cells but not in mice.
Immunoblotting
Cell extracts were solubilized in RIPA buffer and immunoblotted with various antibodies. Antibodies used in Fig 3 are listed in S1 Table in S1 File. The anti-PHRF1 monoclonal antibody has been described previously [26].
sgRNA-mediated PHRF1 deletion
PHRF1 knockout CH12F3-2A cells were generated by Cas9 RNP nucleofection. Briefly, recombinant Cas9 protein was produced as previously described [28,29]. Two sgRNA oligos were designed to delete exon 2 of mPHRF1, which contains the ATG translation start site. The sequences were sgRNA#1, 5'-TAATACGACTCACTATAGGTCATCCATGGCTGCACATGTTTTAGAGCTATGCTGGAAACAGCATAGCAAGTTAAA-3' and sgRNA#2, 5'-TAATACGACTCACTATAGTGACATTTAAGCTCCCAAGGTTTTAGAGCTATGCTGGAAACAGCATAGCAAGTTAAA-3'. Cas9 RNP complexes were assembled immediately before nucleofection by mixing equal volumes of 40 μM Cas9 protein and 48 μM sgRNAs at a molar ratio of 1:1.2 and incubating at 37˚C for 15 min. Each nucleofection reaction consisted of 1 x 10^6 CH12F3-2A cells in 20 μl of nucleofection buffer and 2 μl of the two Cas9 RNP sets (equivalent to 40 pmol). The nucleofection mixtures were transferred into a 16-well strip in a Lonza 4D Nucleofector, with the program set to pulse code CA-137. After nucleofection, the cells were transferred to a culture plate with complete culture medium. Subsequently, serial dilution was conducted to obtain complete knockout clones. Candidate clones were first screened by PCR and then verified by Western blotting.
Quantitative Real-Time PCR (RT-qPCR)
Total RNA was prepared using TRIzol reagent (Life Technologies, Waltham, MA) in triplicate. Reverse transcription was performed using the 2x one-tube RT mix (Bioman, Taipei, Taiwan). qPCR analysis was performed using the 2x qPCR master mix (Bioman, Taipei, Taiwan) on the CFX384 Touch™ following the manufacturer's protocol. Each reaction included 5 μl of cDNA (5 μg) and 1 μM of the indicated primers in a final volume of 20 μl of master mix. The following thermal profile was used: an initial 30 s denaturation step at 95˚C, followed by 40 cycles of 95˚C for 15 s, 55˚C for 15 s, and 72˚C for 20 s. The data were analyzed using 7500 software v.2.0.1 (Applied Biosystems, Foster City, CA, USA). Gene expression levels were normalized to the endogenous control glyceraldehyde-3-phosphate dehydrogenase (GAPDH) mRNA. Primer sequences are listed in S2 Table in S1 File.
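Normalization to a reference gene such as GAPDH is conventionally done with the 2^-ΔΔCt method. A minimal sketch of that calculation (assuming ~100% primer efficiency; the exact analysis pipeline used by the instrument software may differ):

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ΔΔCt relative quantification.

    ct_target / ct_ref       : Ct of the target gene and the reference
                               (e.g. GAPDH) in the test sample
    ct_target_ctrl / ct_ref_ctrl : the same Ct values in the control sample
    Returns fold expression of the target relative to the control.
    """
    delta_sample = ct_target - ct_ref            # ΔCt in the test sample
    delta_control = ct_target_ctrl - ct_ref_ctrl # ΔCt in the control sample
    return 2.0 ** -(delta_sample - delta_control)
```

For example, a target that amplifies one cycle later (relative to GAPDH) in the knockout than in the control corresponds to roughly half the expression.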
Electroporation
CH12F3-2A cells were diluted to 5 x 10^6 cells/ml in BTXPRESS electroporation solution (BTX, Cambridge, UK) with 20 μg/ml plasmid DNA. Approximately 250 μl of the DNA mixture was transferred to a cuvette (BTX, 2 mm gap) and electroporated with a BTX Gemini SC2 Twin Wave Cuvette Electroporator. The program was set to 260 V, 950 μF capacitance, and 50 ohm resistance. After electroporation, cells were transferred to complete growth medium and incubated at 37˚C for 48 h.
Mice
All experimental procedures were carried out under protocol #17-02-1051 approved by the Institutional Animal Care and Use Committee at Academia Sinica. PHRF1 fl/fl mice have been described previously [30] and were bred with CD19-Cre mice (The Jackson Laboratory, Bar Harbor, ME). CD19 cre/+ PHRF1 fl/+ mice were then crossed with PHRF1 fl/fl mice to produce CD19 cre/+ PHRF1 fl/fl mice and control littermates, such as PHRF1 fl/fl, for experimental use. Mice were reared in the animal facility under pathogen-free conditions with a 12-h light/dark cycle and free access to food and water. Mice were housed in appropriate cages, and the number of animals was minimized to alleviate suffering. Control and CD19 Cre/+ PHRF1 fl/fl mice were sacrificed using carbon dioxide inhalation. The animals were monitored closely during the procedure to ensure that they were not experiencing any pain or discomfort. To relieve pain or distress, inhalant anesthetics, such as isoflurane, were used when necessary.
Statistical analysis
Analysis was carried out using GraphPad Prism 6 software. All values are expressed as mean ± SD. The paired Student's t-test (two-tailed) was used to calculate the statistical significance of differences between groups. A p-value < 0.05 was considered statistically significant.
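For reference, the paired t-test applied here follows the textbook formula: the mean of the pairwise differences divided by their standard error. A stdlib-only sketch of the test statistic (the p-value lookup against the t distribution with n-1 degrees of freedom, which Prism performs, is omitted for brevity):

```python
from statistics import mean, stdev

def paired_t_statistic(x, y):
    """Paired Student's t statistic for equal-length paired samples.

    Computes mean(d) / (stdev(d) / sqrt(n)) where d are the pairwise
    differences; requires the differences not to be all identical
    (sample stdev must be > 0).
    """
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / n ** 0.5)
```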
PHRF1 ablation reduced IgA CSR in CH12F3-2A cells
To decipher the impact of PHRF1 deficiency on CSR, we knocked out the PHRF1 gene by deleting exon 2, which contains the ATG codon (a.a. 1-32), using CRISPR-Cas9 gene editing in CH12F3-2A cells. Genotyping revealed that exon 2 of the PHRF1 gene was deleted in two independent clones (KO#1 and KO#2) (Fig 1A). Immunoblotting analysis showed that the KO#1 and KO#2 clones lacked PHRF1 expression (Fig 1B). To investigate the effect of PHRF1 on CSR, control and PHRF1 KO cells were stimulated with CD40L, IL-4, and TGF-β (CIT) for three days to induce IgA switching, and the IgA level was then measured using flow cytometry. As expected, the proportion of IgM switching to IgA was remarkably reduced in PHRF1-depleted CH12F3-2A cells compared with control cells (Fig 1C). When full-length PHRF1 was reintroduced into PHRF1 KO cells, IgA production was restored (Fig 1D), further confirming that PHRF1 is essential for IgA switching in CH12F3-2A cells.
To rule out off-target effects of CRISPR-Cas9 editing as the cause of the observed phenotype, an shRNA targeting the region encoding a.a. 95-102 was used to silence PHRF1 expression (S1A Fig in S1 File). We found a similar reduction in IgM switching to IgA as seen in PHRF1 KO cells (S1B Fig in S1 File).
PHRF1 deficiency did not affect cell proliferation and germline transcription
To exclude the possibility that the IgA reduction was caused by defective cell proliferation, we measured cell division using a carboxyfluorescein diacetate succinimidyl ester (CFSE) dilution assay. Control and PHRF1-deficient cells showed similar CFSE intensities at 6, 24, and 48 h post labeling (S2A Fig in S1 File). Additionally, PHRF1-depleted CH12F3-2A cells were left untreated or treated with LPS, LPS+IL4, or anti-CD40 Ab+IL4 for four days, and total cell numbers were counted. The result showed similar cell expansion in control and PHRF1-deficient CH12F3-2A cells upon the different stimulations (S2B Fig in S1 File). Furthermore, the time course of CSR at 24, 48, and 72 h post CIT stimulation revealed that PHRF1 KO cells could not undergo CSR in each round of cell division (S2C Fig in S1 File), further supporting the importance of PHRF1 in CSR progression.
Germline transcripts (GLTs), initiating from the upstream I promoters and proceeding through the switch region and CH exons, facilitate the cytosine deamination by AID that is required for CSR [3]. To measure the expression level of GLTs, quantitative RT-PCR (qRT-PCR) analysis was conducted. Comparable levels of Iμ and Iα GLTs, both untreated and post CIT stimulation, were found in control and PHRF1-depleted CH12F3-2A cells (S3 Fig in S1 File), indicating that inactivation of PHRF1 did not reduce IgA production by affecting the expression of germline transcripts.
PHRF1 deficiency did not significantly affect the microhomology of Sμ-Sα junction
Microhomology (MH) can be used as a bridge to align the breaking ends, and longer MHs (2-20 nucleotides) are favorable for MMEJ [10,11,31]. To address the repair choice upon PHRF1 deficiency, we analyzed the MH of Sμ-Sα joining in control and PHRF1-depleted CH12F3-2A cells post CIT stimulation. We conducted nested PCR to amplify the joining junctions of Sμ and Sα. Each clone containing a junction fragment from control (n = 41) and PHRF1-depleted (n = 39) cells was sequenced. The alignment of MHs is listed in S4 Fig in S1 File. Analyses of CSR junctions revealed that junction sequences in control cells had a mean overlap length of 2.6 bp. Similarly, PHRF1-deficient clones had a mean overlap length of 2.7 bp at the junction (Fig 2A). This suggests that IgA switching might not utilize longer MHs in the absence of PHRF1. The proportions of junctions with MH lengths of 5-10 bp were also similar in control and PHRF1-depleted cells (Fig 2B), indicating that other end-joining mechanisms might be involved in the absence of PHRF1.
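The overlap lengths above are scored by how far perfect identity between the Sμ and Sα references extends across the breakpoint of a sequenced junction. A minimal sketch of that scoring (illustrative sequences and breakpoint indices only, not the actual switch-region data or the authors' alignment pipeline):

```python
def microhomology_length(s_mu, s_alpha, mu_break, alpha_break):
    """Length of perfect identity spanning a Sμ-Sα junction.

    s_mu, s_alpha : reference switch-region sequences (strings)
    mu_break      : index in s_mu where the retained Sμ portion ends
    alpha_break   : index in s_alpha where the retained Sα portion begins
    A junction base counts as microhomology if both references carry the
    same base at that offset, extending in both directions from the break.
    """
    # Extend 5' of the break: retained Sμ bases that also match Sα
    left = 0
    while (mu_break - 1 - left >= 0 and alpha_break - 1 - left >= 0
           and s_mu[mu_break - 1 - left] == s_alpha[alpha_break - 1 - left]):
        left += 1
    # Extend 3' of the break: retained Sα bases that also match Sμ
    right = 0
    while (mu_break + right < len(s_mu) and alpha_break + right < len(s_alpha)
           and s_mu[mu_break + right] == s_alpha[alpha_break + right]):
        right += 1
    return left + right
```

Averaging this score over all sequenced junctions per genotype would yield mean overlap lengths comparable to the 2.6 bp and 2.7 bp figures reported.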
PHRF1 depletion reduced the expression levels of PARP1 and NELFs
We examined the protein levels of factors participating in transcriptional regulation or DNA damage repair by immunoblotting analysis. Reduced IgA switching might not be the consequence of aberrant TGF-β signaling, since phospho-Smad2 on S465/S467 was unchanged (Fig 3A, right panel). Most of the DNA damage-related factors, except PARP1, and the level of γ-H2AX were also unchanged (Fig 3A, right panel), indicating that DNA damage response or repair proteins were largely unaffected in PHRF1-depleted cells. Instead, the expression levels of NELF-A, NELF-D, and H3K36me2/me3 were reduced in PHRF1 KO cells (Fig 3A, left panel). Quantitative results from three independent experiments confirmed that PARP1, NELF-D, and H3K36me3 were significantly decreased in PHRF1 KO cells (Fig 3B). Additionally, the phosphorylation status of Rpb1's CTD marks the different phases of initiation, pausing, elongation, and termination in RNA transcription. Phosphorylation on S5, a marker for transcription initiation, remained unchanged. By contrast, the phosphorylation level of Rpb1's CTD on S2, a signature of transcription elongation, was reduced in PHRF1-depleted CH12F3-2A cells (Fig 3C), indicating that the absence of PHRF1 might alter the phosphorylation signature on the CTD region to stall Rpb1 at transcription elongation. We introduced PARP1 and NELF-D into PHRF1 KO#1 cells by electroporation and confirmed their expression by immunoblotting. Flow cytometry showed that PARP1 could significantly elevate IgA production, while NELF-D could not, strongly suggesting that PARP1 might be the main downstream target of PHRF1 depletion (Fig 3D). To gain more information regarding the global landscape of gene expression in the absence of PHRF1, RNA-seq analysis was conducted.
Approximately 750 differentially expressed genes (DEGs) were obtained, in which a fold change > 2 (up-regulated) or < 0.5 (down-regulated) and PPEE < 0.05 were considered statistically significant. Subsequently, these DEGs were subjected to Gene Ontology (GO) analysis. The result showed that several distinctive GO categories could be grouped by correlated DEGs, including the positive regulation of RNA polymerase II (Fig 4A). Among these DEGs, the top 50 up-regulated and down-regulated genes were clustered for heat map analysis (Fig 4B), and factors involved in the positive regulation of RNAPII transcription were also subjected to heat map analysis (Fig 4C). To confirm the RNA-seq results, RT-qPCR was carried out. The mRNA levels of genes involved in positive RNAPII regulation, such as Lef1, Trp73, and Trp53inp1 in Fig 4C, and PARP1, NELF-A, and NELF-D in Fig 3A, were indeed decreased in RT-qPCR analyses (Fig 4D). We also measured the mRNA levels of SetD2, Amyd2, and Amyd5, which are responsible for the methylation of H3K36me2/3. However, the mRNA levels of these histone methyltransferases were not changed in RT-qPCR analysis (Fig 4D). Furthermore, to address whether PHRF1 deficiency affected TGF-β signaling, we clustered the entire set of components, or those with FC > 2, in TGF-β signaling (KEGG#04350). These two heatmaps are shown in S5 Fig in S1 File, and the P-value for TGF-β signaling was 0.56 in the pathway enrichment analysis, suggesting that PHRF1 KO might not affect TGF-β signaling.
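The DEG thresholds quoted above (fold change > 2 or < 0.5, PPEE < 0.05, where PPEE is EBSeq's posterior probability of equal expression) amount to a simple filter over the per-gene output. A sketch with hypothetical gene entries (not the study's actual RNA-seq table):

```python
def call_degs(results, fc_up=2.0, fc_down=0.5, ppee_max=0.05):
    """Split genes into up- and down-regulated DEG lists.

    results : dict mapping gene name -> (fold_change, ppee); the names
              and values here are illustrative.
    A gene is called only if PPEE < ppee_max, then classified as
    up-regulated (FC > fc_up) or down-regulated (FC < fc_down).
    """
    up, down = [], []
    for gene, (fc, ppee) in results.items():
        if ppee >= ppee_max:
            continue  # not confidently differentially expressed
        if fc > fc_up:
            up.append(gene)
        elif fc < fc_down:
            down.append(gene)
    return up, down
```

Genes with a significant PPEE but a fold change between 0.5 and 2 fall into neither list, matching the two-sided threshold described in the text.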
Generation of PHRF1 knockout in CD19-Cre mice
To evaluate the impact of PHRF1 deficiency on CSR in vivo, we inactivated the expression of PHRF1 by crossing PHRF1 fl/fl mice with CD19-Cre transgenic mice to disrupt PHRF1 expression in B lymphocytes. PHRF1 fl/fl mice harboring two loxP elements flanking exons 2 to 9 (a.a. 1-343) of the murine PHRF1 gene were described previously (S6 Fig in S1 File) [30]. We took advantage of CD19-driven Cre recombinase, specifically expressed in B cell progenitors [32], to knock out PHRF1 expression in B lymphocytes. As a result, two functional domains, the E3 RING domain and the PHD domain (a.a. 109-153 and a.a. 188-232, respectively), were deleted by Cre recombinase in the B cell lineage. Cd19 Cre/+ PHRF1 fl/fl pups were viable without noticeable developmental defects.
To assess whether PHRF1 deficiency affected B cell development, we examined the distribution of B cell subsets isolated from the bone marrow and spleen using different surface markers of lymphocytes. B cell lineage, including Pre-pro-B cells, Pro-B and Pre-B, could be distinguished based on their differential expression of CD43, B220, CD24, and BP1 in the bone marrow. Immature and mature B cells could be segregated based on the differential expression of IgM and IgD. The result showed that control and Cd19 Cre/+ PHRF1 fl/fl B cells exhibited a similar proportion of surface markers in the bone marrow and spleen (S7 Fig in S1 File), indicating that PHRF1 ablation did not interfere with the production of B cells in Cd19 Cre/+ PHRF1 fl/fl mice.
PHRF1 deficiency did not affect the CSR of immunoglobulins in CD19-Cre mice
To determine any CSR defect in PHRF1-deficient B cells, resting B cells were isolated from the spleen using anti-CD43 microbeads, induced to switch from IgM to other Ig isotypes ex vivo, and analyzed by flow cytometry. Unexpectedly, CSR to surface IgG1, IgG2a, IgG3, and IgA was statistically unchanged between the splenic B cells of control and Cd19 Cre/+ PHRF1 fl/fl mice (Fig 5A). We harvested splenic B cell extracts from 7-week-old littermates for immunoblot analysis. In contrast to CH12F3-2A cells, the expression levels of PARP1, NELF-A, NELF-D, and H3K36me2/me3 were not remarkably decreased in primary B cells derived from Cd19 Cre/+ PHRF1 fl/fl mice (Fig 5B). We also determined the cell numbers in control and PHRF1-deficient B lymphocytes; similar numbers of nucleated cells (2.1 × 10^7 versus 2.0 × 10^7 cells/ml) were found in control and Cd19 Cre/+ PHRF1 fl/fl mice (Fig 5C). To monitor proliferation in cultured splenic B cells, CFSE labeling was conducted to quantify cell division. After three days in culture, stimulated B cells showed no appreciable difference in division between control and Cd19 Cre/+ PHRF1 fl/fl cells (Fig 5D). Furthermore, by stimulating primary B cells with varying concentrations of TGF-β (1, 2, and 5 μg/ml), we measured proliferation using CFSE staining (S8A Fig in S1 File) and CSR to IgA (S8B Fig in S1 File). The results showed that PHRF1-KO B cells were not responsive to TGF-β stimulation. Finally, Iμ and Iα GLTs for IgA induction were also comparable in control and Cd19 Cre/+ PHRF1 fl/fl B cells (S1 File). Taken together, PHRF1 deficiency does not appear to affect the CSR of immunoglobulins in primary splenic B cells.
Discussion
In light of the impaired NHEJ observed in U2OS and HEK293 cells upon PHRF1 silencing [26], we set out to explore whether PHRF1 ablation could affect CSR in vivo. As expected, inactivation of PHRF1 reduced IgA switching, and the expression of PARP1, NELF-A, and NELF-D was remarkably reduced in CH12F3-2A cells, leading to our assumption that PHRF1 depletion might alter the CSR process in mice. However, although we carried out five different methods to induce CSR in primary B cells [33][34][35][36][37], PHRF1 ablation did not affect the switching of IgM to IgG1, IgG2a, IgG3, or IgA in CD19-Cre mice. This suggests that compensatory mechanisms in vivo may offset the loss of PHRF1; the molecular basis of these mechanisms is not yet clear and may be an interesting area for future study in animals. Stalled RNAPII at the switch region facilitates AID targeting to the Ig locus. A recent study suggests that AID is recruited to the S regions by Spt5 when RNAPII is stalled [4]. Several components of the RNAPII "stalling" machinery, including those associated with the C-terminal repeat domain (CTD) of Rpb1, play critical roles in generating diversified antibodies during CSR [38][39][40]. RNAPII pauses after transcribing 20-40 nucleotides due to the DRB sensitivity-inducing factor (DSIF; Spt4 and Spt5) and the negative elongation factors (NELF-A to -E). Subsequently, positive transcription elongation factor b (P-TEFb) phosphorylates the CTD of Rpb1 on S2 and CDK9 phosphorylates DSIF, leading to the dissociation of NELFs from RNAPII and allowing transcription elongation to proceed [41]. Although our results are not sufficient to conclude that AID targeting was directly affected by the absence of PHRF1, the reduced phosphorylation of Rpb1 on S2 in the CTD and the decreased levels of NELF-A and NELF-D might indirectly disturb the interaction between AID and Spt5, interfering with AID targeting to the switch region.
Therefore, the absence of PHRF1 may result in aberrant transcriptional progression and defective CSR for IgA switching.
PARP-1 is involved in MMEJ, binding to DSBs by competing with Ku70-Ku80 and facilitating the recruitment of end-resection factors [10]. However, the precise mechanism by which PHRF1 deficiency affects the efficiency of CSR is not fully understood. Unlike DSBs repaired by c-NHEJ, DSBs repaired by MMEJ require longer microhomologies (MHs) between the donor and acceptor switch regions: c-NHEJ generally utilizes 0-2 bp MHs, whereas MMEJ tends to use MHs of more than 2 and up to 20 bp. Our data showed a lower expression level of PARP1 in CH12F3-2A cells; therefore, the absence of PHRF1 may compromise MMEJ efficiency for IgA CSR. Although the switch junctions in PHRF1-deficient cells were not biased toward longer microhomologies, other end-joining mechanisms might be active in the absence of PHRF1.
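The microhomology comparison above can be made concrete with a small sketch: at a switch junction, the MH length is the longest stretch shared by the donor and acceptor sides. The function names, the toy sequences, and the classification helper below are illustrative only and are not part of the study's analysis pipeline:

```python
def microhomology_length(donor_end, acceptor_start):
    """Longest suffix of the donor-side sequence that matches a prefix of the
    acceptor-side sequence -- a simple proxy for junction microhomology."""
    for n in range(min(len(donor_end), len(acceptor_start)), 0, -1):
        if donor_end[-n:] == acceptor_start[:n]:
            return n
    return 0

def likely_pathway(mh_bp):
    """Classify by the ranges quoted in the text: c-NHEJ ~0-2 bp, MMEJ >2 bp."""
    return "c-NHEJ-like" if mh_bp <= 2 else "MMEJ-like"

# Example: the donor end "...ACGTAG" and acceptor start "TAGCCA..." share a
# 3 bp overlap ("TAG"), which would fall in the MMEJ-like range.
mh = microhomology_length("ACGTAG", "TAGCCA")
```

In a real junction analysis many more factors matter (blunt joins, insertions, sequencing error), so this is only a sketch of the length criterion the text describes.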
In summary, the present data not only corroborate our previous report that PHRF1 silencing affects NHEJ, but also indicate that the absence of PHRF1 impairs CSR, at least in CH12F3-2A cells. We also found that PARP1 is important for IgA production in PHRF1-depleted CH12F3-2A cells. Additionally, decreased NELF-A and NELF-D may affect the transcription elongation favorable for IgA switching. While the exact molecular mechanism is not clear, we provide evidence that PHRF1 does play a role in CSR in CH12F3-2A cells.
"Medicine",
"Biology"
] |
Direct Measurement of the Topological Charge in Elliptical Beams Using Diffraction by a Triangular Aperture
We introduce a simple method to characterize the topological charge associated with the orbital angular momentum of an m-order elliptical light beam. The method consists in observing the far-field pattern of the beam carrying orbital angular momentum after diffraction from a triangular aperture. We show numerically and experimentally, for Mathieu, Ince–Gaussian, and vortex Hermite–Gaussian beams, that only isosceles triangular apertures allow us to determine, in a precise and direct way, the magnitude m of the order and the number and sign of the unitary topological charges of the isolated vortices inside the core of these beams.
Light beams possessing orbital angular momentum (OAM) have been extensively studied since their first demonstration in 1992 1,2 . Laguerre-Gauss 3 and Bessel beams 4 are examples of beams carrying OAM. They can be decomposed in terms of orthogonal components, and it is possible to construct a geometric representation equivalent to the Poincaré sphere for polarization 5 . These beams have found applications in optical tweezers 6 , singular optical lattice generation 7 , atom traps 8 , transfer of OAM to microparticles 9 , nanostructures and atoms 10 , and shaping of Bose-Einstein condensates 11 . Another important application is the preparation of photons entangled in their orbital angular momentum (OAM) degree of freedom 2,12 , which are candidates for implementing high-performance quantum communication 13 .
Elliptical vortex beams (EVBs) have also received considerable attention in recent years [14][15][16][17][18] . This type of beam has an elliptical symmetry which is stable on propagation, and it is promising for all the previous applications of circular OAM beams; for instance, EVBs have been applied in optical trapping and manipulation of particles 19,20 , quantum information 21,22 , and beam shaping in nonlinear media 23,24 . Several EVBs were investigated earlier, including Mathieu 14,15 , helical Ince-Gaussian (HIG) 16 , and vortex Hermite-Gaussian (VHG) beams 17 , as well as elliptic perfect optical vortices 18 . Other works have investigated simple ways of producing EVBs 25,26 . However, the diffraction of these beams by apertures has not been extensively investigated, except for a method for measuring the orbital angular momentum of elliptical vortex beams using a slit hexagon aperture 27 .
We contribute to this type of study by showing that the order m of an EVB can be determined by inspection of the diffraction pattern from an isosceles triangular aperture. It is known that the topological charge (TC) of circular beams can be determined by interferometric [28][29][30] and diffractive 31-37 methods. For Laguerre-Gaussian and Bessel beams, the sign and magnitude of the topological charge can be determined by diffraction through an equilateral triangular aperture 38 . We extend this method to EVBs by changing from an equilateral to an isosceles triangular aperture.
We demonstrate that the order m and the sign of the beam's wavefront helicity can be obtained from the diffraction pattern in an unambiguous and direct way up to m = 10. The procedure is only reliable for isosceles triangular apertures. We discuss a practical method to design the most appropriate triangular aperture for this task.
Results
The theoretical approach to this diffraction problem consists in calculating the far-field pattern produced by a triangular aperture. To do so, we use the Fraunhofer integral given by 39

E(x, y, z) = (E_0 e^{ikz} / iλz) e^{ik(x² + y²)/2z} ∬ E(x′, y′, 0) exp[−i(k/z)(xx′ + yy′)] dx′ dy′,   (1)

where E(x, y, z) gives the electric field amplitude at the transverse position with coordinates (x, y) in the plane situated at a distance z from the diffraction screen, λ is the wavelength in vacuum, k is the wavevector, and E_0 is a constant. As we are interested in the transverse intensity distributions at a fixed plane placed at the position z = z_0, far enough from the aperture, we can use the scale transformations K_x = k·x/z_0, K_y = k·y/z_0, and omit the term outside the integral. Thus, the Fraunhofer integral becomes a Fourier transform,

E(K_x, K_y) ∝ ∬ E(x′, y′, 0) exp[−i(K_x x′ + K_y y′)] dx′ dy′,   (2)

where E(x′, y′, 0) is the product of the incident field and the aperture function. Due to the elliptical symmetry of the EVBs, the appropriate aperture must have an isosceles triangular shape and should be placed in the beam as described in more detail below; the longer axis of the aperture should lie along the longer axis of the beam. For integer and circular OAM beams, the integral in Eq. (2) for the triangular aperture can be evaluated analytically 40 . However, for EVBs, analytical solutions have not yet been derived; therefore, we solve these integrals numerically.
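The numerical evaluation described above — a 2-D Fourier transform of the incident field multiplied by the aperture function — can be sketched as follows. This is a minimal illustration, not the authors' code: the grid size, the beam profile (a simple m = 3 vortex with a Gaussian envelope standing in for a true Mathieu or HIG mode), and the aperture dimensions are all assumptions:

```python
import numpy as np

# Sampling grid for the aperture plane (units and extent are illustrative).
N, L = 512, 4.0
x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x)

# Simplified vortex field of charge m with a Gaussian envelope, used here
# as a stand-in for an EVB; exp(i*m*phi) carries the topological charge.
m = 3
phi = np.arctan2(Y, X)
field = (np.hypot(X, Y) ** abs(m)) * np.exp(-(X**2 + Y**2)) * np.exp(1j * m * phi)

# Isosceles triangular aperture with vertices (0, b) and (+-a*sqrt(3)/2, -b/2);
# a point is inside the convex triangle if it lies on the interior side of
# every edge (edges listed clockwise, so the cross product is <= 0 inside).
a, b = 1.2, 1.0
verts = [(0.0, b), (a * np.sqrt(3) / 2, -b / 2), (-a * np.sqrt(3) / 2, -b / 2)]
inside = np.ones_like(X, dtype=bool)
for i in range(3):
    (x1, y1), (x2, y2) = verts[i], verts[(i + 1) % 3]
    inside &= ((x2 - x1) * (Y - y1) - (y2 - y1) * (X - x1)) <= 0

# Far field per Eq. (2): FFT of (incident field x aperture function).
far_field = np.fft.fftshift(np.fft.fft2(field * inside))
pattern = np.abs(far_field) ** 2  # Fraunhofer intensity pattern
```

Plotting `pattern` would show the triangular lobe structure from which m is read off; the FFT plays the role of the lens-implemented Fourier transform in the experiment.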
In Fig. 1 we show in red an isosceles triangle representing the aperture, inscribed in an ellipse representing the shape of the beam. In order to design the optimal triangle, we need to measure the transverse intensity profile of the beam at the position where the aperture will be placed. From the intensity pattern, we obtain the semi-minor axis a_1 and semi-major axis b_1, which are the distances from the center to the global intensity maxima in the x and y directions, respectively. This provides the equation of the ellipse shown in Fig. 1, x²/a_1² + y²/b_1² = 1. Placing one vertex of the triangle at (0, b_1) on the major axis, the coordinates (−x, y) and (x, y) of the two remaining vertices on the ellipse are y = −b_1/2 and x = √3 a_1/2. This construction was developed to maximize the visibility of the diffraction features that carry the information about the topological charge, as a function of the relative sizes of the beam and the aperture. When the elliptical beam tends to a circular one, the optimal aperture tends to an equilateral triangle.
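The vertex construction can be written as a small helper. This is a sketch of the geometry as read from the text (top vertex at (0, b1), base vertices at y = −b1/2, x = ±√3·a1/2); the function name is our own:

```python
import math

def optimal_triangle(a1, b1):
    """Vertices of the isosceles aperture inscribed in the ellipse with
    semi-minor axis a1 (x direction) and semi-major axis b1 (y direction):
    one vertex on the major axis, the base at y = -b1/2."""
    x = math.sqrt(3) * a1 / 2
    return [(0.0, b1), (x, -b1 / 2), (-x, -b1 / 2)]
```

For a1 == b1 (a circular beam) the three sides have equal length, recovering the equilateral triangle inscribed in a circle used for circular OAM beams.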
In this work, we consider three types of elliptical beams. The helical Mathieu beams are solutions of the Helmholtz equation in elliptic cylindrical coordinates and can be constructed from a linear combination of even and odd Mathieu functions 14 . The HIG modes are solutions of the paraxial wave equation (PWE), also in elliptic cylindrical coordinates, and can be expressed as a superposition of even and odd Ince-Gaussian modes (IGMs) 16 , where ξ and η are the radial and angular elliptic variables, respectively, and ε is the ellipticity parameter. The parameter p is related to the number of rings, which is given by (p − m)/2 + 1, and m gives the overall topological charge. Figure 2 shows the theoretical transverse intensity (left column), phase (center column), and Fraunhofer diffraction patterns (right column) for EVBs. Figure 2(a)-(c) correspond to a Mathieu beam with m = 3 and q = 2; Fig. 2(d)-(f) to a HIG beam with m = 3, p = 3 and ε = 1; Fig. 2(g)-(i) to a HIG beam with m = 3, p = 5 and ε = 1; and Fig. 2(j)-(l) to a VHG beam with m = 3 and a = 0.80. For all of these modes, the sign of m does not determine the helicity, i.e., the sense of rotation of the wavefront. For Mathieu and HIG beams the helicity depends on the sign of the imaginary term in Eqs (3) and (4), while for VHG beams it depends on whether the parameter a in Eq. (5) is larger or smaller than 1. Figure 2(e) and (h) illustrate the effect of changing the sign in Eq. (4): in Fig. 2(e) the sense of increasing phase is clockwise, while in Fig. 2(h) it is counterclockwise. The phase distribution maps in Fig. 2(b),(e),(h) and (k) also illustrate the fact that an mth-order EVB with nonzero eccentricity contains m in-line vortices, each with a unitary topological charge of the same sign, such that the modulus of the total charge is m.
The patterns in Fig. 2(c),(f),(i) and (l), resulting from diffraction through an isosceles triangular aperture, allow us to determine m and the sign of the unitary vortices at the beam core. The number of bright spots is directly related to m, and the sign is given by the orientation of the pattern. According to the simulations, a safe region for the method to work is the range 0 < e ≤ 0.8 and m ≤ 10. Outside these limits the patterns are truncated and we cannot properly count the number of spots anymore.
In Fig. 3 we show the numerically computed diffraction patterns for mth-order input Mathieu beams. Comparing the diffraction patterns for different values of m, it is possible to establish a rule to determine the order of the beams, in the same way as for the HIG and VHG beams. We observe that the value of m is directly related to the first-order external diffraction lobes (maxima) formed along the sides of the triangle. The total charge is given by m = N − 1, where N is the number of lobes on any one side of the triangle. This is valid for all EVBs studied here. Figure 4 illustrates the effect of changing the sign of the imaginary part in Eq. (3) for a Mathieu beam, with clockwise rotation (plus sign in Eq. (3)) in Fig. 4(a) and (c), and counterclockwise rotation (minus sign in Eq. (3)) in Fig. 4(b) and (d).
So far, we have shown numerically that the diffraction pattern through an isosceles triangular aperture determines the total topological charge m and the helicity of Mathieu, HIG and VHG beams in a clear and unambiguous way. In order to obtain this result, we analyzed other geometries for the diffraction aperture, such as a lozenge, but our studies demonstrated that the isosceles triangle is the most appropriate geometry. This is similar to what happens for circular beams, for which other geometries, such as a square aperture, can determine the modulus of the topological charge but not its sign 25 . In that case, only the equilateral triangle can provide the complete information about m and its sign 38 .
We have performed an experiment, in order to confirm our numerical results. Figure 5 shows the sketch of the experimental setup, which is described in detail in section "Methods". We have diffracted Mathieu, HIG and VHG beams through an isosceles triangular aperture, and demonstrated the validity of our method to determine the order m of EVB beams.
In Fig. 6, we show the experimental results. The triangular structures are the diffraction patterns, and each side of the triangles has m + 1 bright spots, as theoretically predicted. These results confirm our numerical findings and demonstrate the use of diffraction patterns from a triangular aperture to determine the order of EVBs. They also confirm that the information about the helicity of the wavefront is given by the orientation of the triangular pattern. Unlike the traditional circular modes, e.g. Laguerre-Gauss and Bessel beams 38 , the sense of wavefront rotation is not determined by the sign of m. We have found very good agreement between theory and experiment.
Conclusion
In summary, we have numerically and experimentally demonstrated a technique that allows us to determine the order of an EVB in an unambiguous way. We have also presented a recipe to design the optimal triangular aperture for this measurement. This non-interferometric technique requires only simple measurements of intensity patterns. The value of m is determined by counting the number of lobes on any one side of the triangular diffraction pattern. The sense of wavefront rotation can also be determined from the orientation of the diffraction pattern.
Methods
The experimental setup is shown in Fig. 5. Different orders and types of EVBs are generated from an initial Gaussian mode of an Argon laser operating at 514 nm. The beam is expanded by a factor of about 17, using lenses L1, with focal length f1 = 30 mm, and L2, with focal length f2 = 500 mm. The expanded beam illuminates a computer-generated hologram 42 displayed on a spatial light modulator (SLM) (Hamamatsu Model X10468-01). The 50/50 beam splitter (BS) between L1 and L2 is used to allow normal incidence on the SLM. For each type of EVB there is a corresponding type of hologram in the SLM. The beam reflected from the SLM is focused by lens L2 in the plane of the spatial filter (SF) after reflection by the BS. The spatial filtering selects the desired diffraction order from the SLM. Lens L3, with focal length f3 = 300 mm, collimates the beam again, which is incident on the isosceles triangular aperture (AP). It is mounted on an xyz translation stage for precise alignment with respect to the light beam. Finally, lens L4, with focal length f4 = 200 mm, is used to implement the optical Fourier transform of the field in the aperture plane onto the CCD detection plane. This is the physical realization of the integration in Eq. (2). The transverse intensity patterns corresponding to the Fraunhofer diffraction are registered.
"Physics"
] |
Probiotic and synbiotic therapy in critical illness: a systematic review and meta-analysis
Background Critical illness is characterized by a loss of commensal flora and an overgrowth of potentially pathogenic bacteria, leading to a high susceptibility to nosocomial infections. Probiotics are living non-pathogenic microorganisms, which may protect the gut barrier, attenuate pathogen overgrowth, decrease bacterial translocation and prevent infection. The purpose of this updated systematic review is to evaluate the overall efficacy of probiotics and synbiotic mixtures on clinical outcomes in critical illness. Methods Computerized databases from 1980 to 2016 were searched for randomized controlled trials (RCTs) evaluating clinical outcomes associated with probiotic therapy as a single strategy or in combination with prebiotic fiber (synbiotics). The overall number of new infections was the primary outcome; secondary outcomes included mortality, ICU and hospital length of stay (LOS), and diarrhea. Subgroup analyses were performed to elucidate the role of other key factors, such as probiotic type and patient mortality risk, on the effect of probiotics on outcomes. Results Thirty trials enrolling 2972 patients were identified for analysis. Probiotics were associated with a significant reduction in infections (risk ratio 0.80, 95 % confidence interval (CI) 0.68, 0.95, P = 0.009; heterogeneity I2 = 36 %, P = 0.09). Further, a significant reduction in the incidence of ventilator-associated pneumonia (VAP) was found (risk ratio 0.74, 95 % CI 0.61, 0.90, P = 0.002; I2 = 19 %). No effect on mortality, LOS or diarrhea was observed. Subgroup analysis indicated that the greatest improvement in the outcome of infections was in critically ill patients receiving probiotics alone versus synbiotic mixtures, although limited synbiotic trial data currently exist. Conclusion Probiotics show promise in reducing infections, including VAP, in critical illness.
Currently, clinical heterogeneity and potential publication bias preclude strong clinical recommendations and indicate that further high-quality clinical trials are needed to conclusively prove these benefits.
Background
Critical illness is characterized by a loss of commensal flora and an overgrowth of potentially pathogenic bacteria, leading to a high susceptibility to acquired nosocomial infections [1,2]. Further, sepsis following infection is still a leading cause of death worldwide [3]. The U.S. Centers for Disease Control indicates death rates from critical illness/sepsis have increased at a rate greater than any other common cause of mortality in the last year for which data were available [4]. Thus, therapies to reduce the risk and incidence of infection and sepsis in critical illness are urgently needed.
According to the World Health Organization and the Food and Agriculture Organization, probiotics are living non-pathogenic microorganisms which have well-documented beneficial health effects, when administered in optimal amounts, in the prevention and treatment of several disease states [5]. Several mechanisms by which probiotics may exert beneficial effects have been described, including modification of the gut flora by inducing host cell antimicrobial peptides, release of antimicrobial factors, suppression of immune cell proliferation, stimulation of mucus and IgA production, anti-oxidative activity, inhibition of epithelial cell nuclear factor kappa B activation, and other potentially vital gut epithelial barrier protective effects [6][7][8].
As the gut is hypothesized to play a central role in the progression of critical illness, sepsis and multiple organ dysfunction syndrome [9], maintenance of the gut barrier and a healthy gut microbiome, potentially via reintroduction of commensal bacteria (probiotic therapy), may be essential to optimizing outcomes in critically ill patients.
According to the current literature, the efficacy of probiotics in the prevention of infectious complications has been extensively evaluated in many animal studies and in clinical trials in heterogeneous intensive care unit (ICU) patient populations. These studies suggest that probiotics may reduce the incidence of infection, particularly ventilator-associated pneumonia (VAP) [10], which is a common serious complication in intubated, mechanically ventilated patients [11]. Nonetheless, the effect of probiotics on the prevention of VAP remains controversial and inconclusive [12][13][14][15][16][17]; indeed, the effect depends on the patient population and the probiotic strain studied. Despite these suggested benefits of probiotic therapy, recent guidelines have been unable to make a definitive recommendation for its routine use in ICU patients. To date, these guidelines have suggested the use of probiotic therapy only in select medical and surgical patient populations in whom trials have documented safety and clinical benefit [18,19].
Over the last few years, several systematic reviews and meta-analyses have evaluated the effects of probiotics in critically ill patients [12][13][14][15][16][17]. In 2012, after aggregating 11 trials that reported on infections [14], we demonstrated that probiotics may reduce infections, including the incidence of VAP, although the effect on VAP was not statistically significant given the available data. Moreover, probiotics were associated with a trend toward reduced ICU mortality, but did not influence hospital mortality. Since our last systematic review and meta-analyses, seven new trials of probiotic therapy have been published [20][21][22][23][24][25][26]. Further, to date, no recent meta-analysis has examined the effect of probiotic versus synbiotic (probiotic and prebiotic fiber) therapy. Finally, a Canadian survey [27] on the use of probiotics as a prophylactic strategy for VAP showed that most Canadian ICU pharmacists have used probiotics at least once, although routine use is considered controversial and considerable practice variability exists. Thus, any increased understanding that the newly published trials can yield will be vital to clarifying clinical probiotic use in the ICU and areas in need of future research focus.
Therefore, because probiotic use in the ICU remains widespread yet controversial, current guidelines are inconclusive, and a significant number of new trials of probiotic use have been published recently, we conducted a comprehensive systematic review and meta-analysis of probiotic and synbiotic use in critically ill patients. Our aim was to elucidate the overall efficacy of probiotics, as a single strategy or in combination with fiber therapy (synbiotics), on relevant clinical outcomes, particularly infection and VAP, in adult critically ill patients.
Search strategy and study identification
A literature search was conducted in MEDLINE, Embase, CINAHL, the Cochrane Central Register of Controlled Trials and the Cochrane Database of Systematic Reviews to identify all relevant randomized controlled trials (RCTs) published between 1980 and April 2016. The literature search used broad search terms containing "randomized," "clinical trial," "nutrition support," "enteral nutrition", "probiotics," and "synbiotics". No language restrictions were applied. Personal files and reference lists of relevant review articles were also reviewed.
Eligibility criteria
We included trials with the following characteristics:

1. Type of study: randomized controlled parallel-group trials
2. Population: adult (≥18 years of age) critically ill patients. If the study population was unclear, we considered a mortality rate higher than 5 % in the control group to be consistent with critical illness
3. Intervention: probiotics alone or combined with prebiotics (synbiotics), compared to a placebo
4. Outcomes: pre-specified clinical outcomes in ICU patients such as infectious complications, VAP, mortality, ICU and hospital length of stay (LOS), and diarrhea

We excluded trials that reported only nutritional, biochemical, metabolic, or immunologic outcomes. Data published in abstract form were included only if additional information about the study design was obtained from the authors. The methodological quality of the included trials was assessed in duplicate by two reviewers independently, using a data abstraction form with a scoring system from 0 to 14 according to the following criteria:

1. The extent to which randomization was concealed
2. Blinding
3. Analysis based on the intention-to-treat (ITT) principle
4. Comparability of groups at baseline
5. Extent of follow up
6. Description of the treatment protocol
7. Co-interventions
8. Definition of clinical outcomes

Consensus between the two reviewers on the individual scores for each category was obtained. We attempted to contact the authors of included studies to request additional information not contained in the published articles. We designated studies as level I if all of the following criteria were fulfilled: concealed randomization, blinded outcome adjudication and an ITT analysis, which are the strongest methodological tools to reduce bias. A study was considered level II if any one of these characteristics was unfulfilled.
Data synthesis
All analyses, except the test for asymmetry, were conducted using RevMan 5.3 (Cochrane IMS, Oxford, UK) with a random effects model. We combined data from all trials to estimate the overall weighted mean difference (WMD) with 95 % confidence intervals (CIs) for LOS data, and the pooled risk ratio (RR) with 95 % CIs for the incidence of infections, mortality, and diarrhea. WMDs were estimated by the inverse variance approach and pooled RRs were calculated using the Mantel-Haenszel estimator. The random effects model of DerSimonian and Laird was used to estimate variances for the Mantel-Haenszel and inverse variance estimators [28]. RRs were undefined and excluded for studies with no event in either arm. Heterogeneity was tested by a weighted Mantel-Haenszel χ2 test and quantified by the I2 statistic as implemented in RevMan. Differences between subgroups were analyzed using the test for subgroup differences described by Deeks et al., with results expressed as P values. We considered P < 0.05 to be statistically significant and P < 0.10 an indicator of a trend. Funnel plots were used to assess the possibility of publication bias, and the Egger regression test was used to measure funnel plot asymmetry [29].
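As a rough illustration of the pooling procedure, the sketch below combines log risk ratios by inverse variance under a DerSimonian-Laird random-effects model and reports I². It is not the RevMan implementation: it omits zero-cell corrections and the Mantel-Haenszel weighting used for the published estimates, and the trial counts are invented:

```python
import math

def pooled_rr_dl(trials):
    """DerSimonian-Laird random-effects pooled risk ratio.

    trials: list of (events_treat, n_treat, events_ctrl, n_ctrl).
    Simplified sketch: inverse-variance weights on log(RR), no handling of
    zero-event cells.
    """
    logs, ws = [], []
    for a, n1, c, n2 in trials:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2  # variance of log(RR)
        logs.append(log_rr)
        ws.append(1 / var)
    # Fixed-effect pooled estimate and Cochran's Q for heterogeneity.
    fixed = sum(w * l for w, l in zip(ws, logs)) / sum(ws)
    q = sum(w * (l - fixed) ** 2 for w, l in zip(ws, logs))
    df = len(trials) - 1
    # DL between-trial variance tau^2, then random-effects weights.
    c_ = sum(ws) - sum(w * w for w in ws) / sum(ws)
    tau2 = max(0.0, (q - df) / c_)
    ws_re = [1 / (1 / w + tau2) for w in ws]
    pooled = sum(w * l for w, l in zip(ws_re, logs)) / sum(ws_re)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return math.exp(pooled), i2
```

For two hypothetical trials with identical risk ratios of 0.5, the pooled RR is 0.5 and I² is 0 %, since there is no between-trial heterogeneity to absorb.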
Clinical outcomes
Overall infections were the primary outcome for this meta-analysis. Secondary outcomes were VAP, mortality, ICU and hospital LOS, and diarrhea. We used the definitions of infection given by the authors in their original articles. From all trials, we combined hospital mortality where reported. Mortality specified at 28 days or 90 days was not considered as ICU or hospital mortality, respectively. Nonetheless, if the mortality time frame was not specified as either ICU or hospital, it was presumed to be the latter.
Subgroup analysis
We utilized predefined subgroup analyses to assess a number of possible influences on the effect of probiotic supplementation on clinical outcomes, and thus to explore possible causes of heterogeneity. On the hypothesis that a higher daily dose produces a greater effect, we first examined trials that administered a high dose of probiotics, defined as >5 × 10^9 colony-forming units (CFU)/day, versus lower-dose probiotics, defined as <5 × 10^9 CFU/day. Second, we compared the results of RCTs that administered Lactobacillus plantarum as probiotic therapy versus those that did not, and compared trials using Lactobacillus rhamnosus strain GG (LGG) versus those administering other, non-LGG strains.
Moreover, on the hypothesis of a larger treatment effect in more seriously ill patients at higher risk of death, we compared studies including patients with higher versus lower mortality. Mortality was considered high or low based on whether the control group mortality was greater or less than the median control group mortality across all trials. Finally, we compared trials of higher quality, defined as those with a methodological score equal to or higher than the median quality score, which may demonstrate a smaller treatment effect.
Study identification and selection
A total of 79 relevant citations were identified from the search of computerized bibliographic databases and a review of reference lists from related articles. Of these, we excluded 49 due to the following reasons: 21 trials did not include ICU patients (mostly surgical patients); 12 articles were systematic reviews and meta-analyses; 4 trials were published as an abstract and we were unable to obtain the data from the authors to complete our data abstraction process; 5 articles were duplicates of included trials; 3 studies did not evaluate clinical outcomes; 2 trials tested multiple interventions; 1 study was not a RCT, and finally 1 study administered probiotics as oral swabs.
Finally, 30 RCTs [10, 20-26, 30-51] met our inclusion criteria and were included, covering a total of 2972 patients (see Tables 1 and 2). The reviewers reached 100 % agreement on the inclusion of the trials. The mean methodological score of all trials was 9, and the median was 9.5, out of a maximum of 14 (range 5-13). Randomization was concealed in 9/30 trials (30 %), ITT analysis was performed in 18/30 trials (60 %), and double blinding was used in 20/30 studies (67 %). There were five level-I studies and 25 level-II studies. Details of the methodological quality of the individual trials are shown in Table 1.
Primary outcome: infections
Overall effect on new infections
Aggregating the results of the 14 trials reporting overall infections, probiotics were associated with a significant reduction in new infections (risk ratio 0.80, 95 % CI 0.68, 0.95, P = 0.009; Fig. 1).
Overall effect on hospital length of stay
Aggregating the data from the nine RCTs that reported hospital LOS, there were no significant differences between the groups (WMD -0.58, 95 % CI -3.66, 2.50, P = 0.71; I 2 = 74 %, P < 0.00001).
Antibiotic days
When we aggregated the data of the four trials reporting on antibiotic days, we found that probiotics were significantly associated with a reduction in the duration of antibiotic therapy (WMD -1.12, 95 % CI -1.72, -0.51, P = 0.0003; I² = 32 %, P = 0.22; Fig. 5).
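The I² values quoted throughout these results summarize between-trial heterogeneity. As a rough illustration of how I² is derived from Cochran's Q in an inverse-variance pooled analysis, the following Python sketch can be used; note that the effect sizes and standard errors below are invented for illustration and are not data from the included trials.

```python
def pooled_fixed_effect(effects, ses):
    """Inverse-variance fixed-effect pooling; returns (pooled, Q, I2 in %)."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Cochran's Q: weighted squared deviations from the pooled estimate
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2 = max(0, (Q - df) / Q), expressed as a percentage
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, q, i2

effects = [-1.5, -0.9, -1.2, -0.4]   # hypothetical WMDs (antibiotic days)
ses = [0.5, 0.4, 0.6, 0.45]          # hypothetical standard errors
pooled, q, i2 = pooled_fixed_effect(effects, ses)
print(f"pooled WMD = {pooled:.2f}, Q = {q:.2f}, I^2 = {i2:.0f}%")
```

An I² near 0 % indicates that the observed spread of trial effects is compatible with sampling error alone, whereas values above roughly 50 % (as for hospital LOS above) flag substantial heterogeneity.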
Subgroup analysis Probiotics daily dose
There were similar rates of infectious complications in RCTs using high-dose probiotic therapy (n = 8 trials) and in those using lower daily doses.
Higher vs. lower mortality
The median hospital mortality rate of all the trials (or ICU mortality when hospital mortality was not reported) in the control group was 19 %. After aggregating nine studies with a higher mortality rate, probiotics significantly reduced the incidence of infections (RR 0.74; 95 % CI 0.57, 0.96; P = 0.02; I² = 58 %, P = 0.01) (Fig. 7). However, probiotics did not have an effect on infections in the five studies with lower mortality (RR 0.85; 95 % CI 0.66, 1.11; P = 0.24; I² = 23 %, P = 0.27). The test for subgroup differences was not significant (P = 0.43) (Fig. 7).
Publication bias
There was indication that potential publication bias influenced the observed aggregated results: funnel plots were created for each study outcome, and the tests suggested publication bias for overall infections and hospital LOS.
Discussion
To date, our systematic review and meta-analysis is the largest and most updated evaluation of the overall effects of probiotics in the critically ill. It is also the first to include an analysis of synbiotics (probiotic/fiber combinations). Based on the analysis of 30 trials enrolling 2972 patients, we demonstrated that probiotics are associated with a significant reduction in ICU-acquired infections, including VAP, the most common infectious complication in the critically ill. This significant effect on VAP is a new finding relative to our previous systematic reviews. Further, the beneficial effect of probiotics on the reduction of infections is stronger with the publication of the new trials, and the data no longer show statistically significant heterogeneity for this endpoint. Despite reducing infectious complications, this therapy did not influence ICU or hospital mortality, although none of the trials were powered to detect an effect on mortality. Overall, there was a tendency towards a reduction in ICU LOS, and probiotic therapy did not influence other clinical endpoints such as hospital LOS and diarrhea. Statistical and clinical heterogeneity was observed for some endpoints, although it was not significant for the key endpoints of infectious complications and VAP. In addition, publication bias for overall infections and hospital LOS means that larger, well-powered, and more definitive clinical trials designed to avoid these biases are urgently needed. Moreover, subgroup analysis showed that the trials with lower methodological quality exhibited the largest treatment effect, a further indication that larger, well-designed studies are needed. Finally, with the exception of four trials, most of the included studies that reported mortality (n = 14) had small sample sizes and hence were underpowered to detect any clinically important treatment effect of probiotic therapy on mortality.
Moreover, the inferences we can make from our current findings are further weakened, as randomization was concealed in only 30 % of trials and double-blinding was performed in 67 % of trials. Over recent years, several systematic reviews and meta-analyses have been conducted, although our meta-analysis is the largest and most current to date, as it contains the seven new eligible trials published since the most recent comprehensive meta-analysis on this topic, which focused on overall infections and other outcomes (not primarily VAP) in 2012 [14]. Further, these previous systematic reviews did not include an analysis of synbiotic therapy. Overall, we have examined several relevant clinical outcomes in a heterogeneous ICU patient population, and therefore our results could be applied to a broad group of critically ill patients with sepsis, trauma, or severe pancreatitis, or who have undergone surgery. Specific to pancreatitis, concerns have been raised about the safety of probiotic therapy following the 2008 trial, Probiotic prophylaxis in patients with predicted severe acute pancreatitis (PROPATRIA) [43], which showed that Ecologic 641® given with fiber post-pylorically was associated with higher mortality and bowel ischemia. This post-pyloric method of administration was associated with an increase in small bowel necrosis, which was subsequently associated with death in a number of patients receiving the prebiotic fiber/probiotic mixture. It is possible that the post-pyloric administration of this fiber/multiple-probiotic-strain mixture in patients with pancreatitis carries significant risk and should likely be avoided [52]. Unfortunately, significant ethical and statistical concerns were raised about the conduct of the trial [53], limiting the utility of the data. More recently, a systematic review and meta-analysis by Gou et al. [54] found that probiotics had neither beneficial nor adverse effects in patients with pancreatitis.
Despite the limitations of the PROPATRIA trial, it has contributed to concerns around the safety of probiotic administration in critical illness and has limited the design of larger-scale clinical trials and/or more routine clinical administration of live probiotics. To address this, the Agency for Healthcare Research and Quality (AHRQ) reviewed and reported on the safety of probiotic therapy in over 600 published clinical trials and case reports [55]. It should be reassuring to future investigators that the overall conclusion of this extensive report is that probiotic therapy, in both adult and pediatric populations, was not found to be associated with any increased risk of infectious or other adverse events in either healthy or ill patients. Importantly, the report revealed a trend towards fewer adverse events in probiotic-treated critically ill patients, although isolated adverse effects of probiotic administration have been reported [56]. In any case, careful and appropriate safety monitoring should be conducted in all future probiotic clinical trials.
Recent data indicate that infection during critical illness continues to be a major challenge worldwide. A multi-national ICU study of 14,414 patients in 1265 ICUs from 75 countries revealed that 51 % of ICU patients were considered infected on the day of the survey and 71 % were receiving antibiotics [57]. Of the infections in this study, 64 % were of respiratory origin, and the ICU mortality rate in infected patients was more than twice that of non-infected patients (25 % vs. 11 %, P < 0.001), as was the hospital mortality rate (33 % infected vs. 15 % non-infected, P < 0.001) [57].
Currently, VAP is the second most common nosocomial infection in the USA and the most prevalent ICU-acquired infection. Notwithstanding, its incidence is highly variable depending on the diagnostic criteria used to identify this infectious complication. In fact, in 2015 Ego et al. [58] reported that the incidence of VAP ranged from 4 % to 42 % when using the six published sets of criteria, and from 0 % to 44 % when using the 89 combinations of criteria for hypoxemia, inflammatory response, bronchitis, chest radiography, and microbiologic findings. In our systematic review we found that the incidence of VAP ranged from 9 % [46] to 80 % [39]. Additionally, the apparent effect of probiotics on VAP is largely driven by the studies of Kotzampassi et al. [39] and Zeng et al. [26]; together, these two trials account for 45.5 % of the signal and thus provide an unstable estimate. Moreover, current knowledge shows that VAP is associated with high cost and poor clinical outcomes [59]. In 2002, Rello et al. [60] demonstrated that VAP leads to an additional US$40,000 in hospital charges per patient, and recently it has been suggested that the use of prophylactic probiotics may be cost-effective for the prevention of VAP from a hospital perspective [61].
Probiotic therapy may prevent VAP and other infections by restoring non-pathogenic flora, which competes with nosocomial pathogens and inhibits their overgrowth, by modulating the local and systemic immune response, and by improving gut barrier function. However, in spite of these protective effects, the role of probiotics as a non-pharmacological strategy for preventing VAP has previously been inconclusive. In 2010, Siempos et al. [12] aggregated five probiotic trials demonstrating a reduction in the incidence of VAP, whereas in 2012 Petrof et al. [14] and subsequently Barraud et al. [13] and Wang et al. [15] did not demonstrate any significant effect of probiotic therapy on VAP. More recently, a Cochrane review of probiotic therapy specifically for VAP [17] found, with low quality of evidence, that probiotic therapy is associated with a reduction in the incidence of VAP. Our current systematic review demonstrates a significant treatment effect of probiotics in reducing VAP without statistical heterogeneity, strengthening the signal that this may be an effective therapy for VAP. Recently, a Canadian survey [27] on the use of probiotics as a prophylactic strategy for VAP showed that most Canadian ICU pharmacists have used probiotics at least once, although they do not routinely recommend probiotics for the prevention of VAP.
A large number of clinical trials have demonstrated that probiotics may reduce the incidence of antibiotic-associated diarrhea and Clostridium difficile infections, and systematic reviews have confirmed a significant signal of benefit for the reduction of diarrhea and C. difficile-related colitis in all patients (not confined to ICU patients) [62, 63]. Our results, when focused on ICU patients, do not currently demonstrate a treatment benefit of probiotics in preventing or treating diarrhea in the critically ill, including antibiotic-associated diarrhea.
An interesting finding of our meta-analysis was a reduction in antibiotic use among patients who received probiotics. Nonetheless, only four trials [10, 21, 26, 48], comprising 13 % of included studies, reported the duration of antibiotic therapy as an outcome. In addition, the study of Zeng et al. contributed 90 % of the signal, a very unstable estimate that weakens this finding. Therefore, probiotics may shorten the duration of antibiotic therapy, although the limited clinical trial data available for this endpoint limit the strength of these findings, and further investigation of this effect is needed.
We currently have a greater understanding of the potential benefits of probiotic therapy in critical illness, although much more data are needed. Subgroup analysis found that certain strains, such as L. plantarum alone or in combination, were associated with a significant reduction in overall infections, although the test for subgroup differences was not significant (P = 0.21). Certain specific biological properties have been described for L. plantarum, including an ability to prevent adhesion of pathogens to the intestinal epithelium secondary to the production of adhesins, enolase, and phosphoglycerate kinase on the bacterial surface [64, 65]. These mechanisms may be crucial to the reduction of bacterial translocation and the modulation of the local inflammatory response, and therefore to the effect of this strain on systemic infectious complications. Interestingly, probiotics alone had a greater effect than synbiotics on infections, although the difference between these subgroups was not significant (P = 0.98), and more data on the specific effects of different prebiotic fibers are needed. Finally, future trials also need to focus on evaluating the changes in the microbiome following critical illness and the effect of probiotics or synbiotics on restoring a healthy microbiome in treated patients [66]. Recent advances in microbiome sequencing technology (16S rRNA) have resulted in an unprecedented growth in the amount of sequence data that can be collected at a previously unattainable low cost [66]. Thus, if a specific probiotic or synbiotic therapy is to be used to treat dysbiosis (a pathological change in the patient's bacterial flora) and restore a healthy microbiome, we need to evaluate it with the microbiome analysis techniques now available. This may help us target probiotics or probiotic mixtures in the future and increase the personalization of care.
The strengths of this systematic review include the use of several methods to reduce bias (comprehensive literature search, duplicate data abstraction, specific criteria for searching and analysis) and the analysis of relevant clinical outcomes in the critically ill. However, several important limitations to drawing strong treatment inferences are present. These include the significant potential for publication bias for the infection and hospital LOS outcomes and the small number of trials included in the subgroup analyses. In addition, the variety of probiotic strains, wide range of daily doses, and varying lengths of administration of probiotic therapy among the different trials weaken any possible clinical conclusions and recommendations. We were also unable to perform subgroup analysis for all clinical outcomes due to the limited number of studies evaluating each endpoint.
Based on our current data, there is not sufficient evidence to make a final strong recommendation for probiotics in the prevention of infections, including VAP, in the critically ill. However, current guideline recommendations suggest that probiotics should be considered to improve outcomes in critically ill patients [19]. Future trials still need to address questions about timing, daily dose, and duration of therapy, which remain unanswered.
Conclusion
In the largest systematic review and meta-analysis of probiotics to date, comprising 30 trials enrolling 2972 patients, we demonstrated that probiotics significantly reduced the incidence of infectious complications, including new episodes of VAP, in critically ill patients. This finding is limited by clinical heterogeneity and potential publication bias for the overall infection outcome, which precludes a more definitive statistical conclusion about the efficacy of probiotic therapy on overall infections and potentially the prevention of VAP in critical illness. Moreover, according to our findings, probiotics were more effective in the trials with higher control-group mortality. Probiotic therapy with L. plantarum currently demonstrates the most significant effect on the reduction of infections. Overall, the variety of strains, wide range of daily doses, and varying lengths of administration of probiotics weaken the strength of our conclusions. Additional large-scale, adequately powered, well-designed clinical trials aimed at confirming our observations are needed and warranted.
Key messages
Critical illness is characterized by a loss of commensal flora and an overgrowth of potentially pathogenic bacteria, leading to a high susceptibility to nosocomial infections
Probiotics are living non-pathogenic microorganisms, which may protect the gut barrier, attenuate pathogen overgrowth, decrease bacterial translocation, and prevent infection in ICU patients
Probiotic use in the ICU remains widespread and controversial, current guidelines are not conclusive, and a significant number of new trials of probiotics have been published recently, which requires a current and comprehensive systematic analysis of probiotic and synbiotic therapy in critically ill patients
Probiotics were associated with a significant reduction in infections, including a significant reduction in the incidence of ventilator-associated pneumonia (VAP), in critically ill patients; probiotics alone, versus synbiotic mixtures, demonstrated the greatest improvement in infectious outcomes, although limited synbiotic trial data are currently available
Currently, clinical heterogeneity and potential publication bias preclude strong clinical recommendations and indicate that further high-quality clinical trials are needed to conclusively prove these benefits
Probiotics show promise for the reduction of infections, including VAP, in critical illness and should be considered in critically ill patients

Abbreviations
CFU, colony-forming unit; CI, confidence interval; C.Random, concealed randomization; EN, enteral nutrition; ICU, intensive care unit; IgA, immunoglobulin A; ITT, intention to treat; LGG, Lactobacillus rhamnosus strain GG; LOS, length of stay; MV, mechanical ventilation; NA, non-attributable; NR, non-reported; OR, odds ratio; RCT, randomized controlled trial; RNA, ribonucleic acid; RR, risk ratio; VAP, ventilator-associated pneumonia; WMD, weighted mean difference
Funding
No funding for the development, writing or submission of this manuscript was received.
Authors' contributions
WM contributed to development of the concept of the manuscript, study grading, study selection, evaluation and interpretation of data, and also performed primary authoring and editing of all drafts of the manuscript. ML contributed to study grading, selection, evaluation and interpretation of data, performed much of the primary statistical analysis, meta-analysis and data analysis, and also contributed to the writing of the manuscript. PL contributed to development of study grading, study selection, evaluation and interpretation of data, and also contributed substantially to the writing of the manuscript. PW contributed to development of the concept of the manuscript, evaluation and interpretation of data, and also performed authoring and editing of all drafts of the manuscript. All authors read and approved the final manuscript.
Adaptive multivariate dispersion control chart with application to bimetal thermostat data
Adaptive EWMA (AEWMA) control charts have gained remarkable recognition for monitoring production processes over a wide range of shifts. The adaptation of the charting statistic to the size of the process shift is the main source of these charts' proficiency. In this paper, a function-based AEWMA multivariate control chart is suggested to monitor the stability of the variance-covariance matrix of a normally distributed process. Our approach uses an unbiased EWMA-based estimator to estimate the process shift in real time and adapts the smoothing (weighting) constant through a suggested continuous function. The Monte Carlo simulation method is used to determine the characteristics of the suggested AEWMA chart in terms of proficient detection of process shifts. The computed results are compared with existing EWMA and AEWMA charts and shown to outperform them in quickly detecting shifts of different sizes. To illustrate its real-life application, the authors applied the method to a dataset from the bimetal thermostat industry. The proposed research contributes to statistical process control and provides a practical tool for monitoring changes in the covariance matrix.
www.nature.com/scientificreports/
magnitude, which is often not the case. To address this limitation, researchers have focused on developing adaptive charting designs that provide improved performance against shifts of various sizes. One such approach is the adaptive EWMA (AEWMA) chart, which seamlessly combines the strengths of both Shewhart-type and EWMA-type charts 8. By adjusting the weight of previous observations according to the error magnitude, the AEWMA chart can detect shifts of different sizes while mitigating the inertia issue. The literature on adaptive control charts continues to advance. For instance, Zhao et al. 7 utilized adaptive algorithms to analyze dynamic monitoring systems in energy storage systems, specifically voltage difference faults. Arshad et al. 9 suggested an AEWMA chart that relies on a continuous function to oversee process variance. In industrial settings, there are often scenarios that require the simultaneous monitoring of multiple related quality characteristics. Multivariate statistical process control (SPC) is employed to address these situations. Quality control charts play a crucial role in multivariate SPC 10,11. Various control charts have been designed to detect variations in the covariance matrix of multivariate normally distributed processes, considering different statistical tests and assumptions about subgroup sizes and data dimensions. However, in practical applications, where subgroup sizes are small and individual observations are considered, additional control charts need to be developed to account for the undefined covariance matrix. Monitoring the variance-covariance matrix in statistical process control is not merely an incremental improvement; it represents a fundamental shift in our ability to ensure process efficiency and product quality. While traditional control charts address univariate variations, the multivariate dispersion control chart enables a comprehensive analysis of multivariate data. This added dimension is pivotal
in modern manufacturing and service industries, where processes are inherently complex, interconnected, and influenced by multiple factors. Huang et al. 12 proposed a control chart based on the trace of the covariance matrix to monitor variations in multivariate normally distributed processes using individual observations. This underscores the need to design control charts that monitor process variations while accounting for the multivariate design structure of the variables. In recent years, various control charts have been suggested for monitoring process dispersion shifts in both univariate and multivariate scenarios: the authors of 13 proposed a mixed control chart using both EWMA and CUSUM statistics to construct an EWMA dispersion control chart; Abujiya et al. 14 introduced an improved form of dispersion control chart based on the EWMA statistic alone and found it effective in identifying small to moderate shifts; and 15,16 proposed adaptive versions of the EWMA chart that use a CUSUM accumulated-error estimation scheme to estimate the process shift and efficiently monitor process dispersion, with Zaman et al. 15 recommending an adaptive control chart that uses the Huber and Tukey functions to compute the smoothing constant for the proposed EWMA dispersion statistic, again found to be efficient. Similar efforts have been made by other researchers, a few of which are mentioned as 17-23; they suggested various modifications for monitoring multivariate cases and designed dispersion control charts.
In response to the constraints observed in current multivariate dispersion control charts, Haq and Khoo 24 introduced a novel AEWMA control chart known as AEWMA-II. This chart is designed for the surveillance of the covariance matrix in processes that follow a normal distribution. The AEWMA-II chart utilizes an EWMA statistic with an unbiased estimator to estimate the covariance matrix shift and determines the smoothing constant using a proposed continuous function. In this study, a more sensitive AEWMA multivariate dispersion control chart, referred to as the proposed AEWMA-I chart, is suggested to detect a wide range of shifts. The motivation behind the efficacy of the proposal is the adaptation of the smoothing constant value to the shift in the covariance matrix. The suggested plotting statistic uses the smoothing constant corresponding to the estimated shift size and quickly raises the alarm. The proposed AEWMA-I chart overcomes the limitation of a high false alarm rate, which was due to the SDRL being higher than the ARL; the suggested design improves both the SDRL and the ARL.
The efficacy is analyzed in terms of smaller run length (RL) profile values, such as the average RL (ARL), standard deviation of RL (SDRL), and percentiles at the 5th, 10th, 25th, 50th, 75th, 90th, and 95th levels, in extensive tables obtained through Monte Carlo simulations. The rest of the paper is structured as follows: existing control charts are presented in section "The existing charts"; section "Proposed AEWMA-I control chart" comprises the proposed AEWMA-I control chart design; section "Run-length computation" explains the RL computational procedure; performance evaluation is provided in section "Performance comparisons"; a real-life dataset is used in section "Illustrative example" to elaborate on the implementation of the suggested design; and the discussion is wrapped up in section "Conclusions and further recommendations" with further recommendations, along with theoretical contributions and practical implications.
The existing charts
Suppose we have p variables y = (y_1, y_2, y_3, …, y_p)′ with mean vector μ and covariance matrix Σ, such that y ∼ N_p(μ, Σ). Suppose we have the target covariance matrix Σ_0, which can vary because of shifts in the process. This study focuses on adapting the value of the smoothing constant with a continuous function. Let the independent, identically distributed (i.i.d.) sequence y_t, ∀t > 0, be taken from N_p(μ_0, Σ_0), where μ_0 and Σ_0 are the in-control mean vector and covariance matrix, respectively. Assume that the process remains in an in-control state up to some unknown time t_0, that is, y_t ∼ N_p(μ_0, Σ_0) ∀t ≤ t_0. After that, the process becomes out-of-control because an unknown shift δ² occurs in Σ_0, that is, y_t ∼ N_p(μ_0, Σ_1) ∀t > t_0, where Σ_1 = δ²Σ_0 and δ > 0. Thus δ = 1 for t ≤ t_0 and δ ≠ 1 for t > t_0.
Khoo and Quah 25 proposed a Shewhart control chart to observe the covariance matrix Σ_0 based on the successive differences between multivariate observations, that is,

M_t = (1/2)(y_t − y_{t−1})′ Σ_0^{−1} (y_t − y_{t−1}), for t > 1.

It can be shown that M_t ∼ χ²_p, ∀1 < t ≤ t_0, a positively skewed distribution. Consequently, a control chart with plotting statistic M_t gives biased ARL results on account of its non-normality, regardless of the fact that y_t has a normal distribution. In the field of SPC, it is a widely adopted practice, followed by numerous researchers, to transform an asymmetrically distributed statistic into a random variable that follows a normal distribution. In what follows, we first transform M_t into a standard normal random variable and then construct the proposed AEWMA-I control chart using this transformed variable. In the proposed AEWMA-I control chart, a transformation proposed by Quesenberry 26 is used to normalize M_t, as follows:

Z_t = Φ^{−1}(G(M_t)),

where G(.) is the cumulative distribution function (CDF) of the χ² distribution with p degrees of freedom and Φ^{−1}(.) is the inverse CDF of the standard normal distribution. Since Z_t ∼ N(0, 1) in control, this gives unbiased ARL values, with E(Z_t) = 0 for t ≤ t_0 and E(Z_t) ≠ 0 when t > t_0. Thus, it becomes feasible to use a conventional mean control chart on {Z_t} to monitor fluctuations in the covariance matrix of a multivariate normally distributed process. Note that {Z_t}, t > 1, is a sequence of identically distributed (though dependent) variables based on y_t, and the control charts considered here can trigger an out-of-control signal only when t > 1.
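The normalization step above can be sketched in a few lines of Python. This is a minimal illustration with our own function names; it uses the closed-form χ² CDF, which holds only for even degrees of freedom p (for odd p one would use the regularized incomplete gamma function), and the standard-library `NormalDist` for Φ⁻¹.

```python
from math import exp, factorial
from statistics import NormalDist

def chi2_cdf_even_p(x, p):
    """CDF of the chi-square distribution, valid for even df p:
    G(x) = 1 - exp(-x/2) * sum_{j=0}^{p/2-1} (x/2)^j / j!"""
    k = p // 2
    return 1.0 - exp(-x / 2.0) * sum((x / 2.0) ** j / factorial(j)
                                     for j in range(k))

def normalize_M(m_t, p=2):
    """Quesenberry-style transform: Z_t = Phi^{-1}(G(M_t))."""
    u = chi2_cdf_even_p(m_t, p)
    return NormalDist().inv_cdf(u)

# At the median of a chi-square with p = 2 df (x = 2 ln 2), G = 0.5,
# so the transformed value should be close to 0.
z = normalize_M(2 * 0.6931471805599453, p=2)
print(round(z, 6))
```

Values of M_t below the χ²_p median map to negative Z_t and values above it to positive Z_t, which is what makes a symmetric two-sided chart on {Z_t} sensible.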
The existing EWMA chart
Roberts 3 proposed the EWMA control chart for observing shifts in the mean of a normally distributed process. Haq and Khoo 24 proposed a multivariate EWMA control chart, which is helpful for monitoring the covariance matrix.
Let an EWMA sequence {A_t} based on {Z_t} be given by

A_t = ψ Z_t + (1 − ψ) A_{t−1}, with A_0 = 0,

where the smoothing parameter ψ ∈ (0, 1]. The EWMA chart reduces to the Shewhart chart when ψ = 1. A_t is normally distributed with mean 0 and variance

Var(A_t) = [ψ/(2 − ψ)][1 − (1 − ψ)^{2t}],

and the term (1 − ψ)^{2t} converges to zero as the time t increases. The EWMA chart triggers an out-of-control signal when |A_t| exceeds the control limit L (> 0), i.e., A_t < −L or A_t > L, to indicate a downward or an upward shift in the covariance matrix of the process. The in-control ARL of the EWMA control chart is controlled by L.
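The recursion and signaling rule above can be sketched as follows. This is an illustrative Python sketch, not the paper's calibrated design: the multiplier 2.8 is an assumed value (the paper determines L by fixing ARL_0), and the limit is applied here on the asymptotic-standard-deviation scale sqrt(ψ/(2 − ψ)).

```python
def ewma_chart(z_seq, psi=0.15, L=2.8):
    """Run A_t = psi*Z_t + (1 - psi)*A_{t-1}, A_0 = 0, and return the
    first time |A_t| exceeds the (asymptotic-sigma-scaled) limit, else None."""
    limit = L * (psi / (2.0 - psi)) ** 0.5
    a = 0.0
    for t, z in enumerate(z_seq, start=1):
        a = psi * z + (1.0 - psi) * a
        if abs(a) > limit:
            return t
    return None
```

For instance, a constant sequence of in-control values never signals, while a sustained upward shift in Z_t is flagged after a few observations, because A_t drifts toward the shifted mean at rate 1 − (1 − ψ)^t.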
The existing AEWMA-II chart
Haq and Khoo 24 suggested the AEWMA-II chart to observe irregular variations in the covariance matrix of a normally distributed process. The AEWMA-II chart updates the smoothing parameter of the plotting statistic according to the estimated size of the shift.
Let δ̂_t be an unbiased estimator of the shift δ at time t, computed using a smoothing constant ψ ∈ (0, 1]. The plotting statistic of the AEWMA-II chart is

K_t = f(δ̂_t) Z_t + (1 − f(δ̂_t)) K_{t−1},

where K_0 = 0 and f(δ̂_t) ∈ (0, 1]. The AEWMA-II chart triggers an out-of-control signal when |K_t| exceeds the control limit L (> 0), i.e., K_t < −L or K_t > L, to indicate a downward or an upward shift in the covariance matrix of the process.
Proposed AEWMA-I control chart
In this section, we present the proposed AEWMA-I control chart, which is useful for detecting irregular variations in the covariance matrix of a p-dimensional multivariate process. The proposed AEWMA-I chart is designed to overcome the limitations of the existing AEWMA-II chart, which exhibits a high false alarm rate due to the SDRL being greater than the ARL. To address this issue, we propose the new AEWMA-I multivariate dispersion control chart, which is based on a continuous function. This mitigates the problem of a high false alarm rate and improves shift-detection performance. In adaptive control charts, different methods have been suggested for selecting the value of the smoothing constant. Since the size of the shift is generally unknown in advance and varies, it is advisable to consider it as a random variable and estimate it using an appropriate estimator. In our method, we evaluate the magnitude of the shift using an unbiased estimator and determine the smoothing constant for the proposed AEWMA-I multivariate dispersion control chart through a continuous function. This enhances the design's effectiveness in detecting shifts of diverse magnitudes in the covariance matrix. Let δ̂_t be the shift estimate at time t. Following 27, it is obtained from an EWMA-type sequence {δ*_t} with δ*_0 = 0 and ψ ∈ (0, 1], taking δ̂_t = δ**_t as the estimate of δ. The plotting statistic of the offered control chart is then

S_t = g(δ̂_t) Z_t + (1 − g(δ̂_t)) S_{t−1},

where S_0 = 0 and g(δ̂_t) ∈ (0, 1]. Drawing inspiration from the logistic function, whose response lies within the range 0-1, we employed a systematic trial-and-error approach. This involved experimenting with various functions, such as logarithmic and exponential functions, along with different constants. We aimed to find an appropriate smoothing constant, denoted g(δ̂_t), that would render the classical EWMA scheme effective in detecting shifts in the covariance matrix within predefined ranges of δ̂_t. The continuous function g(δ̂_t) is used for
determining the value of the smoothing constant that improves the efficiency of the proposed control chart. The recommended values of the constants in the proposed continuous function were chosen to improve the ARLs and SDRLs of the AEWMA-I control chart, specifically for the early recognition of shifts in the process. The function g(δ̂_t) plays a crucial role in determining the value of the random variable S_t, which is used as the plotting statistic for the proposed AEWMA-I control chart. Based on their experiments and analysis, the authors suggest that specific values of the constant in the function g(δ̂_t) (namely, 24 and 19) are optimal over certain ranges of δ̂_t (0.0 < δ̂_t ≤ 1.0 and 1.0 < δ̂_t ≤ 2.7, respectively). These recommended constant values have resulted in the proposed control chart functioning as an approximately optimized system, achieving smaller and improved ARLs and SDRLs compared with existing control charts.
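The exact algebraic form of g(δ̂_t) is not reproduced in this excerpt. As a purely hypothetical illustration of the mechanism, the sketch below uses a logistic-style mapping with the stated constants 24 and 19 over the stated δ̂_t ranges; the functional form itself is our assumption, not the authors' published function.

```python
from math import exp

def g_smoothing(delta_hat):
    """HYPOTHETICAL logistic-style smoothing function: small estimated shifts
    yield a small constant (EWMA-like memory), large shifts a large constant
    (Shewhart-like reactivity). The constants 24 and 19 follow the stated
    ranges; the logistic form itself is an illustrative assumption."""
    c = 24.0 if 0.0 < delta_hat <= 1.0 else 19.0
    return 1.0 / (1.0 + c * exp(-delta_hat))   # always in (0, 1)

def aewma1_update(s_prev, z_t, delta_hat):
    """One step of the AEWMA-I recursion S_t = g*Z_t + (1 - g)*S_{t-1}."""
    g = g_smoothing(delta_hat)
    return g * z_t + (1.0 - g) * s_prev
```

The key behavior to reproduce is monotonicity: a larger estimated shift should produce a larger smoothing constant, so the chart weights the newest observation more heavily exactly when a big shift is suspected.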
The working methodology of the AEWMA-I control chart is similar to that of the existing AEWMA-II control chart recommended by Haq and Khoo 24. However, the proposed control chart shows a significant improvement in the run-length (RL) profiles, indicating that it performs better in detecting shifts in the covariance matrix of the process.
Decision rule. Whenever |S_t| > L, the AEWMA-I control chart gives an out-of-control signal.
The process parameter is unknown
In real-world situations, the underlying process covariance matrix may not be known in advance. Assuming that trustworthy historical data are available from an in-control process, the covariance matrix can then be estimated from this dataset. All n observation vectors y_1, y_2, ..., y_n can be transposed to row vectors and stacked in the data matrix Y of order (n × p). Then, the unbiased estimator of the covariance matrix is given by

S = Y′(I − J/n) Y / (n − 1),

where I is the identity matrix of order n and J is the (n × n) matrix of ones.
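As a quick numerical check, the centering-matrix form of this estimator can be sketched in NumPy (the function name is ours); it coincides with the usual sample covariance:

```python
import numpy as np

def unbiased_cov(Y):
    """Unbiased covariance estimate from an (n x p) data matrix Y whose
    rows are the observation vectors: S = Y'(I - J/n) Y / (n - 1),
    where I is the n x n identity and J is the n x n matrix of ones."""
    n = Y.shape[0]
    C = np.eye(n) - np.ones((n, n)) / n   # centering matrix I - J/n
    return Y.T @ C @ Y / (n - 1)

# agrees with the standard sample covariance, np.cov(Y, rowvar=False)
```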
Run-length computation
In this research, we adopted the Monte Carlo (MC) simulation approach to assess the efficiency of the AEWMA-I control chart. The MC simulation method is a well-established and widely acknowledged approach for assessing the run-length characteristics of control charts.
To examine the run-length characteristics, including averages, standard deviations, and percentiles, we performed MC simulations with 50,000 iterations. In each iteration, the AEWMA-I control chart was simulated to observe its performance under different scenarios or conditions. Repeating this process 50,000 times yields a robust estimate of the control chart's performance characteristics. During each iteration, we sampled from a multivariate normal distribution to obtain the necessary data for the control chart. By analyzing the results of these simulations, we calculated the average run length (ARL) and the standard deviation of run length (SDRL) for the AEWMA-I chart. The in-control ARL is set to ARL_0 = 370 with ψ = 0.15; the same is performed for ARL_0 = 500 with ψ = 0.15 and p = 2 in Table 1. Table 1 compares the existing EWMA and AEWMA-II multivariate dispersion control charts with the proposed AEWMA-I multivariate dispersion control chart. It is found that, for all of the increasing and decreasing dispersion shifts considered, the proposed chart performs outstandingly, with improved ARL and controlled SDRL, along with the quantiles at the 5th, 10th, 25th, 50th, 75th, 90th, and 95th percentiles. One more performance measure, the expected ARL, E(ARL), is reported in Table 1 to analyze the picture in a broader spectrum.
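The simulation loop above can be summarized as follows. This is a hedged sketch: `simulate_rl` is a hypothetical stand-in for one full simulation of the chart, here replaced by a memoryless toy process whose run lengths are geometric with mean ARL_0 = 370.

```python
import numpy as np

def run_length_profile(simulate_rl, n_iter=50_000, seed=1):
    """Monte Carlo estimate of run-length characteristics: ARL, SDRL,
    and the percentiles reported in the paper's tables. `simulate_rl(rng)`
    must return one simulated run length, i.e., the sample number at
    which the chart first signals (a hypothetical interface)."""
    rng = np.random.default_rng(seed)
    rls = np.array([simulate_rl(rng) for _ in range(n_iter)])
    return {
        "ARL": rls.mean(),
        "SDRL": rls.std(ddof=1),
        "percentiles": np.percentile(rls, [5, 10, 25, 50, 75, 90, 95]),
    }

# toy stand-in: an in-control chart with signal probability 1/370 per
# sample has geometric run lengths with ARL_0 = 370
profile = run_length_profile(lambda rng: rng.geometric(1 / 370))
```

For the real chart, `simulate_rl` would draw multivariate normal samples, update the plotting statistic S_t, and return the first t with |S_t| > L.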
The values of the threshold L for all three charts (EWMA, AEWMA-I, and AEWMA-II) are given in Table 2. The run-length characteristics of the AEWMA-I chart for different p are given in Table 3 for the case in which a shift δ of any magnitude enters the process covariance matrix. A short discussion of the overall behavior of the outcomes follows.
• When ψ and δ are fixed, both ARL and SDRL tend to decrease as p increases, and vice versa. For instance, from Table 3 with fixed ψ = 0.15, δ = 0.95, and p = 2, 3, 4, 5, the respective values are ARL = (237.34, 188.16, 157.01, 134.43) and SDRL = (213.39, 163.58, 134.59, 114.47) at ARL_0 = 370. This shows that the sensitivity of the control chart increases with p.
• Table 2 presents the threshold values L for ARL_0 = 370 and ψ = 0.15; L increases with p, indicating a wider control limit as p grows.
• As δ decreases or increases away from 1, both ARL and SDRL decrease because of the greater magnitude of the shift in the process dispersion, illustrating the sensitivity of the suggested chart. For instance, from Table 3, shifts δ = (0.95, 0.90) with ψ = 0.15 give ARL = (237.34, 117.22) and SDRL = (213.39, 86.90), whereas shifts δ = (1.05, 1.10) with ψ = 0.15 give ARL = (207.36, 112.37) and SDRL = (177.60, 85.06) for p = 2 and ARL_0 = 370. The same pattern is observed at p = 3, 4, and 5.
Performance comparisons
In the field of SPC, the performance of a control chart is commonly assessed by analyzing its run-length profiles: ARL, SDRL, and percentiles. In this study, we follow the same approach and use run-length profiles as the benchmark for comparison. To evaluate the effectiveness of the suggested AEWMA-I control chart, we compare it with the existing EWMA and AEWMA-II control charts proposed by Haq and Khoo 24. The existing AEWMA-II chart was designed to monitor the covariance matrix of a multivariate process that follows a normal distribution.
To assess the proposed AEWMA-I multivariate dispersion chart, we analyze its RL profiles alongside those of the EWMA and AEWMA-II charts, considering various magnitudes of shift sizes. In our evaluation, we set the in-control ARL (ARL_0) to 370 and the smoothing constant ψ to 0.15. To calculate the run-length profiles of the AEWMA-I, AEWMA-II, and EWMA control charts, we conducted 50,000 iterations of the MC simulation method. This enables us to compare the performance of these control charts under different shift sizes.
Comparison of proposed AEWMA-I and existing EWMA charts
The comparison of the AEWMA-I multivariate dispersion control chart with the EWMA chart is given at p = 2, 3, and 5 for various δ in Tables 4, 5 and 6. The proposed chart is more efficient than the EWMA chart at detecting shifts in the covariance matrix. Furthermore, the out-of-control run-length profiles of the AEWMA-I control chart are notably shorter than those of the EWMA control chart for all considered values of δ; in other words, the AEWMA-I chart consistently improves the run-length profiles compared with the EWMA chart.
Comparison of proposed AEWMA-I and existing AEWMA-II charts
In Tables 7, 8 and 9, we present the comparison of the AEWMA-I and AEWMA-II charts. The AEWMA-I chart performs better than the AEWMA-II chart at the various shift sizes δ ∈ [0.75, 1.10]. It is important to highlight that the AEWMA-II chart exhibits notably poor performance in terms of SDRLs: the SDRLs of the AEWMA-II chart are greater than its ARLs. This is why, when δ ∈ [0.25, 0.50] ∪ [1.15, 1.75], the AEWMA-II chart appears slightly better than the AEWMA-I chart; otherwise, the effectiveness of the two charts is the same. For example, at p = 2, the ARLs for δ = (1.05, 1.15, 3.50) of the AEWMA-II and AEWMA-I charts are (215.76, 68.86, 2.36) and (207.36, 72.92, 2.81), respectively. Similarly, at p = 2, the SDRLs for δ = (1.05, 1.15, 3.50) of the existing AEWMA-II and AEWMA-I charts are (236.20, 71.47, 0.87) and (177.60, 52.63, 1.25), respectively. Overall, the results from Tables 7, 8 and 9 suggest that the AEWMA-I control chart is generally superior to the AEWMA-II control chart in terms of percentiles, indicating better early detection of process shifts. However, the AEWMA-II control chart may have a slight advantage in terms of ARLs for moderate and large shifts, though its results might be less stable than those of the AEWMA-I control chart. Also, at δ = (0.97, 1.03), P10 = (5, 5) and P95 = (1326, 886) for the AEWMA-II control chart, whereas P10 = (43, 42) and P95 = (921, 759) for the AEWMA-I control chart. These observations align with our findings in the run-length profile results, particularly at p = 3 and 5. Additionally, these findings are visually reinforced in Figs. 1, 2, 3 and 4.
Illustrative example
The real dataset used in this study is taken from Santos-Fernández 28. The dataset pertains to a bimetal thermostat, a device commonly used for various practical applications. Bimetal thermostats utilize a bimetallic strip composed of two different metals; this strip converts temperature changes into mechanical displacement owing to the differing thermal expansion properties of the two metals. In this study, the bimetallic strip, made by joining steel and brass, is investigated in a quality testing laboratory through five quality attributes: deflection (V1), curvature (V2), resistivity (V3), hardness on the low-expansion side (V4), and hardness on the high-expansion side (V5). The quality control division takes 28 samples from the manufacturing process for both the Phase-I and Phase-II datasets. The Phase-I dataset is used to estimate the process parameters, as they are unknown, and the twenty-eight Phase-II samples are used to monitor the covariance matrix of the process.
Here, the quality characteristics under study are V1, V4, and V5, so that p = 3. The proposed AEWMA-I, AEWMA-II, and EWMA control charts are applied to this dataset using an in-control ARL of 370; the parametric choices include (L = 0.2181, ψ = 0.15). Figures 5, 6 and 7 make it clear that all three control charts remain stable for the first 28 samples, indicating that the process is in control. Nevertheless, over the subsequent 28 samples, all three charts demonstrate an upward shift in the process covariance matrix. The EWMA, AEWMA-II, and AEWMA-I control charts give out-of-control signals at the 40th, 39th, and 34th observations, respectively. An intriguing observation is that the AEWMA-I control chart provides an out-of-control signal earlier than the EWMA and AEWMA-II control charts. This illustrates the superiority of the proposed control chart over the multivariate control charts under consideration.
The proposed AEWMA-I control chart offers the advantage of early detection of shifts in the covariance matrix of the process compared with existing control charts. This early detection enables the identification of process variations at an earlier stage, resulting in fewer defective items being produced. Consequently, it leads to cost savings by reducing the expenses associated with discarding faulty products and the cost of reworking them. Moreover, when monitoring correlated multivariate data, using a single multivariate control chart is more appropriate and cost-effective than employing multiple univariate charts, one for each quality characteristic. This becomes particularly relevant when there are numerous related quality characteristics to be monitored. Overall, the proposed AEWMA-I control chart demonstrates higher efficiency than its counterparts in promptly generating out-of-control signals, allowing for timely intervention and quality improvement in the production process.
Conclusions and further recommendations
Recently, adaptive control charts have gained significant attention owing to their increased sensitivity compared with non-adaptive control charts. They are particularly useful in providing better protection when the process shift is expected to occur within a certain range. We proposed the AEWMA-I multivariate dispersion control chart as a method to monitor irregular variations in the covariance matrix of a process following a normal distribution. The MC simulation method is used to compute the average run length (ARL) for performance evaluation. Through comprehensive analysis of ARL properties, we find that the AEWMA-I control chart consistently outperforms other memory-based control charts in detecting variations in the covariance matrix of the process. Furthermore, the AEWMA-I control chart exhibits smaller standard deviation of run length (SDRL) values, making it more reliable for real-life applications. To illustrate its application, we provide a numerical example using real-life data. Thus, we recommend the AEWMA-I control chart for monitoring irregular variations in the covariance matrix of multivariate processes following a normal distribution. In future research, it would be valuable to develop new AEWMA charts that monitor shifts in the process mean vector or that jointly monitor both the mean vector and the covariance matrix. Additionally, extending the current research to design AEWMA control charts for non-normally distributed processes would be an interesting avenue to explore. Another important area of investigation could involve understanding the causes behind signals generated by control charts for multivariate data, particularly when monitoring a process covariance matrix. The theoretical contribution of the proposed dispersion control chart is to provide a sensitive chart that not only gives quick detection of dispersion shifts but also improves the SDRL characteristic in comparison with the existing AEWMA-II dispersion control chart. The suggested design, with its controlled SDRL and improved ARL, opens new practical avenues and may help keep the manufacturing process free of defects. The SPC literature is not yet rich in multivariate adaptive dispersion designs that can be recommended to real-life industries, so the proposed AEWMA-I multivariate dispersion control chart is a remarkable effort in this regard: manufacturers are more comfortable monitoring multiple variables through a single plotting statistic than through several univariate charts.
Figure 5. The EWMA chart for the bimetal thermostat data.
Table 1. Comparative analysis of the existing EWMA and AEWMA-II charts with the AEWMA-I chart for ARL_0 = 500 and p = 5. Significant values are in bold.
Table 2. Values of L for all control charts for ARL_0 = 370, ψ = 0.15.

Scientific Reports | (2023) 13:18137 | https://doi.org/10.1038/s41598-023-45399-3
Table 4. Comparative analysis of control charts based on run-length profile. Significant values are in bold.
Table 5. Comparative analysis based on run-length profile for p = 3. Significant values are in bold.
Table 6. Comparative analysis based on run-length profile for p = 5. Significant values are in bold.
Table 7. Comparative analysis based on run-length profile. Significant values are in bold.
Table 8. Comparative analysis based on run-length profile for p = 3. Significant values are in bold.
Table 9. Comparative analysis based on run-length profile for p = 5. Significant values are in bold.
Circuit Organization Underlying Optic Flow Processing in Zebrafish
Animals’ self-motion generates a drifting movement of the visual scene in the entire field of view called optic flow. Animals use the sensation of optic flow to estimate their own movements and accordingly adjust their body posture and position and stabilize the direction of gaze. In zebrafish and other vertebrates, optic flow typically drives the optokinetic response (OKR) and optomotor response (OMR). Recent functional imaging studies in larval zebrafish have identified the pretectum as a primary center for optic flow processing. In contrast to the view that the pretectum acts as a relay station of direction-selective retinal inputs, pretectal neurons respond to much more complex visual features relevant to behavior, such as spatially and temporally integrated optic flow information. Furthermore, optic flow signals, as well as motor signals, are represented in the cerebellum in a region-specific manner. Here we review recent findings on the circuit organization that underlies the optic flow processing driving OKR and OMR.
INTRODUCTION
When an animal moves in an environment, either actively or passively, its displacement in space causes its visual field to shift. Thus, an animal's visual system is constantly activated by flow-like movements of the visual scene that are caused by its own movement (i.e., self-motion). Detecting such visual information, known as whole-field motion or optic flow, is essential for many animals because it serves as a feedback signal that allows them to estimate their own movement relative to the surrounding environment. In turn, the sensation of optic flow induces highly stereotyped behavioral responses of the eyes and body, by which animals compensate for the displacement caused by self-motion. These visuomotor behaviors are conserved across vertebrates, including teleost fish (Huang, 2008;Masseck and Hoffmann, 2009).
Over the past decade, significant progress has been made in our understanding of the neural circuits underlying optic flow processing in the larval zebrafish brain. In zebrafish larvae, functional imaging, mainly by means of calcium imaging, enables one to non-invasively and systematically probe response properties of neurons at single-cell resolution over a wide extent of the brain, or even across the entire brain (Ahrens et al., 2012, 2013). Thus, this technological advance, combined with rich genetic techniques, in the larval zebrafish system provides an unparalleled opportunity to exhaustively identify neurons that respond to optic flow as well as reveal network functions by which optic flow inputs are converted to behavioral outputs. In this review, we discuss recent discoveries of the neural circuits underlying optic flow processing in the larval zebrafish system. We will review the main cell types and brain regions, specifically retinal ganglion cells (RGCs), the pretectum, and the cerebellum, that process optic flow information and mediate the transformation of visual information to behavior.
Optic Flow-Induced Behavior
An animal's self-motion generates a drifting movement of the entire visual field called optic flow. As in other vertebrates, optic flow in zebrafish typically drives two behavioral responses, namely, the optokinetic response (OKR) and the optomotor response (OMR). The OKR consists of two alternating phases: a rotating eye movement that tracks the perceived motion (slow phase), followed by a fast saccadic eye movement that flips the eyes back in the opposite direction (fast phase). The OMR is a swimming response in which the zebrafish swims in the direction of the optic flow. As a result, the directional motor behavior of the OKR and OMR serves to stabilize the gaze and the body posture and/or position, respectively. Because the OKR and OMR are innate, robust behavioral responses that can be quantitatively measured already at larval stages (Clark, 1981;Easter and Nicola, 1997;Orger et al., 2000;Rinner et al., 2005;Portugues and Engert, 2009), they have served as instrumental assays for testing visual functions in forward genetic screens (Brockerhoff et al., 1995;Neuhauss et al., 1999;Muto et al., 2005). Horizontally moving stimuli and the resulting horizontal OKR are used in most studies, but a torsional OKR induced by pitch motion has also been reported in larval zebrafish (Bianco et al., 2012).
Typically, OKR and OMR assays use simple, synthetic visual stimuli, such as sinusoidal or square gratings presented over a large field of view. What attributes of visual motion are actually extracted by the zebrafish visual system to induce OKR and/or OMR? The visual system detects motion by analyzing spatiotemporal patterns of light. First-order motion is defined by changes in luminance in space and time, whereas second- and higher-order motions are defined not by luminance modulations but by modulations of higher-order features, such as local contrast, flicker, or local motion. Zebrafish larvae perform OMR and OKR in response not only to first-order motion but also to second-order motion (Orger et al., 2000;Roeser and Baier, 2003). Furthermore, motion defined by three-point (third-order) correlations in space and time is also sufficient to effectively induce OMR in zebrafish (Yildizoglu et al., 2020) as well as in flies (Clark et al., 2014). Additionally, by correlating components of visual features consisting of forward-moving gratings with the elicited forward OMR bouts of the fish, it was found that not only the global whole-field motion, as expected from previous studies, but also a local spatiotemporal change of luminance from light to dark close to the fish's head serves as a key visual cue for OMR, indicating that the two features (i.e., global whole-field motion and local light-dark transition) work together to elicit the forward OMR. In terms of color sensitivity, the OMR is dominantly driven by red and green stimuli with a minimal contribution from UV/blue spectrum inputs (Orger and Baier, 2005).
Optic Flow Processing Circuit: Linking Visual Inputs to Behavioral Outputs
In this section, we briefly describe an overview of the optic flow processing circuit based on findings identified in zebrafish as well as other fish species (Figure 1). For both OKR and OMR, optic flow information in the visual scene is detected by the retina and transmitted to visual brain areas, mainly to the pretectum. For the OKR, the pretectum sends a signal (either directly or indirectly) to the oculomotor system that contains the motor neurons controlling the extraocular muscles (Masseck and Hoffmann, 2009). OMR swimming is regulated by the midbrain nucleus of the medial longitudinal fasciculus (nMLF) and hindbrain neurons including the reticulospinal (RS) system, which receive visual inputs from the pretectum. The nMLF and RS neurons are directly involved in controlling swimming of the fish via their descending axons, which reach the spinal cord (Orger et al., 2008;Severi et al., 2014;Thiele et al., 2014). In this mini review, we focus on three cell type/brain regions for which the functional and anatomical underpinnings of optic flow processing were recently uncovered, namely, retinal ganglion cells (RGCs), pretectum, and cerebellum.
RGCs
RGCs are the output neurons of the retina that project their axons to the brain, thereby transmitting visual information to the brain. A subset of RGCs encodes the direction of visual motion, making them direction-selective (DS) (Barlow and Hill, 1963;Dhande and Huberman, 2014). In zebrafish larvae, RGC axons arborize in 10 anatomically distinct regions, termed arborization fields (AFs) and identified as AF1 to AF10 (Burrill and Easter, 1994;Robles et al., 2014), with the largest one being the neuropil of the optic tectum (AF10). In vivo calcium imaging of RGC axons that innervate the tectal neuropil/AF10 identified DS RGCs in zebrafish (Nikolaou et al., 2012). DS RGC axons comprise three subtypes, each of which is tuned to a different direction of motion, approximately 120° apart. These DS RGC axons are located only in the most superficial layer of the stratum fibrosum et griseum superficiale (SFGS) of the tectal neuropil. Furthermore, in vivo calcium imaging of RGC axon terminals in extratectal AFs identified DS RGC inputs terminating in AF5 and also partially in AF6. These AF5-targeted DS RGCs respond not only to conventional grating motion but also to more complex motions, such as motion defined by three-point correlations, which can effectively induce OMR (Yildizoglu et al., 2020).
FIGURE 1 | Organization of the optic flow processing circuit in the larval zebrafish brain. The direction-selective (DS) retinal ganglion cells (RGCs) project to the SFGS1 layer of the optic tectum (OT) neuropil and to AF5 in the pretectal (PT) neuropil. DS neurons in the pretectum exhibit various response properties, such as four orthogonally arranged preferred directions, tuning for receptive field (RF) size and location, sensitivity to motion defined by higher-order correlations, binocular integration, translation-selective responses, and temporal integration (see section "Pretectum" for details). For triggering the OKR, the pretectum sends a signal (either directly or indirectly) to the oculomotor system [the oculomotor (nIII), trochlear (nIV), and abducens (ABN) nuclei] that contains the motor neurons controlling the extraocular muscles. OMR swimming is regulated by the midbrain nucleus of the medial longitudinal fasciculus (nMLF) and hindbrain neurons, including the reticulospinal (RS) neurons, which likely receive visual inputs from the pretectum. nMLF and RS neurons are directly involved in controlling swimming of the fish via their descending axons, which reach the spinal cord. In addition, rotation- and translation-selective information is represented in the rostromedial and caudolateral regions of the cerebellum (CB), respectively. OKR, optokinetic response; OMR, optomotor response. Solid lines indicate projections that have been shown in zebrafish larvae, whereas dotted lines represent proposed connections.

On the basis of the projection patterns of individual RGC axons, zebrafish RGCs are morphologically classified into 20 projection classes, each of which innervates a different combination of sublayers in the tectal neuropil/AF10 and extratectal AFs (Robles et al., 2014). One particular morphological class of RGCs projects to the SFGS1 sublayer of the optic tectum and also forms collateral branches in AF5 in the pretectal neuropil (Robles et al., 2014). Combined with the physiological evidence that DS RGC inputs are detected in SFGS1 (Gabriel et al., 2012;Nikolaou et al., 2012;Gebhardt et al., 2013) and AF5 (Yildizoglu et al., 2020),
this anatomical observation confirms that the SFGS1- and AF5-projecting class of RGCs corresponds to a DS RGC subpopulation. Thus, DS information is conveyed to both tectal and pretectal neurons via the same population of RGCs. However, it remains unknown whether the two postsynaptic targets of DS inputs (i.e., tectum/SFGS1 and pretectum/AF5) derived from the same DS RGCs are involved in different visual functions or behaviors. Ablations of RGC axons that innervate the tectal neuropil/AF10 (Roeser and Baier, 2003) and of tectal neurons (Pérez-Schuster et al., 2016) showed that the optic tectum is not necessary for the generation of the OKR per se, but rather plays a role in the habituation of the OKR (Pérez-Schuster et al., 2016). Thus, it is possible that retinotectal DS inputs provide information required for the habituation of the OKR, or that they are involved in behaviors other than optic flow-induced responses, such as hunting of small moving objects during prey capture.
Pretectum
The zebrafish pretectum is part of the diencephalon and is located ventrally to the optic tectum. Calcium imaging during whole-field motion revealed that DS neurons are highly enriched in the pretectum of zebrafish larvae (Kubo et al., 2014;Portugues et al., 2014;Naumann et al., 2016;Chen et al., 2018), and each DS neuron prefers one of four orthogonally arranged directions (Wang et al., 2019). Furthermore, optogenetic manipulation and laser ablation showed that the pretectum is essential for the OKR (Kubo et al., 2014) and OMR (Naumann et al., 2016), suggesting that the pretectum is the principal brain area for processing optic flow in zebrafish; it is considered to be functionally homologous to the accessory optic system in mammals. The pretectum of zebrafish larvae consists of at least two functionally distinct regions: one is the optic flow-sensitive region described here, and the other corresponds to a more rostrally located region that is involved in prey capture behavior (Semmelhack et al., 2014;Muto et al., 2017;Antinucci et al., 2019).
The role of the pretectum in OKR and OMR predicts that pretectal neurons sample motion signals from a wide area of the visual field and/or local light-intensity transitions, as opposed to tectal neurons, whose tuning to small-size moving stimuli agrees with their role in hunting small prey objects (Niell and Smith, 2005). Indeed, pretectal neurons have relatively large receptive fields (RFs) whose RF centers are located in the lower half of the visual field of the fish. In contrast, neurons in the tectum, which is dispensable for OKR and OMR (Roeser and Baier, 2003) and involved in detecting small objects for behaviors such as prey capture (Gahtan et al., 2005), have smaller RFs whose RF centers are located in the upper-nasal part of the visual field. Consistently, the presentation of forward translational motion in the lower visual field induces OMR more effectively than that in the upper visual field. On the other hand, the OKR is efficiently evoked by moving stimuli located laterally and near the equator of the fish's visual field (Dehmelt et al., 2020). Consistent with their large RF size, pretectal neurons can integrate motion signals over space in a random-dot motion kinematogram paradigm (Bahl and Engert, 2020;Dragomir et al., 2020). Interestingly, some pretectal neurons can also accumulate motion signals over time, suggesting that they act as temporal integrators (Dragomir et al., 2020).
In lateral-eyed animals, including zebrafish, comparing the motion information between the left and right eyes is an efficient strategy to estimate optic flow patterns across a wide extent of the visual field. The two most common optic flow patterns are rotation and translation, which are thought to mainly trigger OKR and OMR, respectively (Figure 2A). To test whether pretectal cells differentially represent rotational and translational optic flow patterns, response properties of a population of pretectal cells were examined via calcium imaging using a visual stimulus sequence that consisted of different monocular and binocular optic flow patterns in the horizontal plane (Kubo et al., 2014). These optic flow patterns consisted of four eye-specific DS monocular motions and rotational [clockwise (CW) and counterclockwise (CCW)] and translational [forward (FW) and backward (BW)] binocular motions (Figure 2B). By classifying pretectal cells into one of all possible combinations of binary activity patterns in response to the eight stimulus phases (i.e., 2^8 = 256 types), or a "barcode" (Figure 2B), two major types of pretectal neurons were identified (Kubo et al., 2014). "Simple" monocular pretectal cells consist of four populations of DS neurons, each of which encodes either a nasal or temporal direction of motion presented to either the left or right eye (Figure 2C, top) and is insensitive to the motion received by the other eye. In contrast, "complex" binocular pretectal neurons respond selectively to translational FW or BW motion, but not to CW and CCW rotational motion, indicating that different optic flow patterns are already distinguished in these cells (Figure 2C, bottom). Such binocular pretectal neurons were also reported using a slightly different visual stimulus presentation (Naumann et al., 2016;Wang et al., 2019).
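The barcode idea can be made concrete with a small sketch: binarizing a cell's responses across the eight stimulus phases yields one of 2^8 = 256 possible response types. The thresholding criterion and the phase ordering below are illustrative assumptions, not the published analysis pipeline.

```python
# the 8 stimulus phases: 4 eye-specific monocular DS motions plus
# CW/CCW rotation and FW/BW translation (ordering is our assumption)
PHASES = ["L-nasal", "L-temporal", "R-nasal", "R-temporal",
          "CW", "CCW", "FW", "BW"]

def response_barcode(responses, threshold=1.0):
    """Binarize one cell's responses to the 8 phases into a 'barcode'
    string, i.e., one of 2**8 = 256 possible response types."""
    assert len(responses) == len(PHASES)
    return "".join("1" if r > threshold else "0" for r in responses)

# hypothetical dF/F responses of a cell active in three phases
cell = [4.2, 0.1, 0.3, 0.2, 0.0, 3.9, 4.5, 0.1]
barcode = response_barcode(cell)   # -> "10000110"
```

Grouping cells by such barcodes is what separates, for example, the "simple" monocular classes from the translation-selective "complex" classes.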
Mechanistically, the suppressed activity specifically during rotation but not during translation suggests that this suppression is provided by an input from the eye opposite to the one that activates the cells, thus rendering them binocular. Because the optic chiasm is completely crossed in zebrafish, pretectal binocular integration needs intra-pretectal commissures connecting both hemispheres of the pretectum. Indeed, ablation of the posterior commissure, which is a prominent commissure in the pretectal region, abolishes binocular integration (Naumann et al., 2016), suggesting that monocular information is transferred by the posterior commissure within the pretectum.
In addition to the aforementioned horizontal motion, two other major motion types (i.e., vertical and pitch motions) have been tested using a panoramic visual arena, thereby enabling investigation of the three-dimensional binocular encoding of optic flow (Wang et al., 2019). Approximately one-third of motion-sensitive tectal and pretectal neurons were "simple" monocular DS cells that responded to one of the four orthogonally arranged directions in one eye, irrespective of the motion presented to the other eye. Another one-third of the population was preferentially active only when one specific combination of binocular motion was presented to the left and right eyes and did not respond well to any other combinations of binocular motion, suggesting that a large fraction of pretectal and tectal neurons show translation-or rotation-selective representations in all three different axes (Wang et al., 2019). Such binocular pretectal neurons were detected irrespective of the visual field of the fish to which the visual stimulus was presented (Naumann et al., 2016;Wang et al., 2019). Thus, it is proposed that these selective pretectal neurons unambiguously encode appropriate directionality of OKR and OMR behaviors already at the level of the pretectum and no further sensory processing is, in principle, needed in the downstream circuit.
Although optic flow-responsive pretectal cells, be they monocular or binocular, are intermingled in the same pretectal region in larval zebrafish (Kubo et al., 2014;Naumann et al., 2016), their neurite projection patterns are different. Namely, morphological characterization of functionally defined optic flow-responsive cells using a technique named function-guided inducible morphological analysis (FuGIMA) revealed that monocular DS pretectal cells extend dendrites to AF5, where DS RGC axons terminate. In contrast, dendrites of binocular DS cells extend to dorsal AF6 and do not overlap with the region where DS RGC axons terminate. These observations suggested a circuit model in which DS information of DS RGCs transmitted to AF5 is first inherited by monocular DS cells and then integrated in binocular DS cells through AF6. Pretectal projection neurons identified using a single cell atlas of the zebrafish brain (Kunst et al., 2019) project axons to the reticular formation, tegmentum, hypothalamus, and cerebellum, suggesting that these brain regions are candidates for receiving pretectal-derived optic flow information downstream of the pretectum.
Optic flow-responsive cells in the pretectum are roughly organized in spatial clusters (Kubo et al., 2014). One of the clusters located in the ventral-lateral region contains neurons that respond to a classical motion illusion known as motion aftereffect (MAE) (Wu et al., 2020), which refers to a perception of illusory motion after a continuous exposure to a moving stimulus in one direction (Pérez-Schuster et al., 2016;Lin et al., 2019). These cells in the ventral-lateral pretectal cluster, consisting of a small number of neurons (∼12 neurons per fish), are largely monocular DS (Wu et al., 2020). Ablation and optogenetic activation studies showed that these MAE-correlated DS neurons are essential for OKR, suggesting that this rather small population of DS neurons in the ventral-lateral pretectum is an integral part of the optic flow-responsive circuit.
In adult zebrafish, the pretectum is subdivided into several retinorecipient and non-retinorecipient nuclei based on cytoarchitecture and efferent and afferent pathways (Wullimann et al., 1996;Yáñez et al., 2018). The correspondence between adult pretectal nuclei and optic flow-responsive pretectal neurons in larvae remains unclear. However, a recent study comprehensively matching the function and anatomy between larvae and adults proposed that AF5-pretectal circuits in larvae correspond to the dorsal accessory optic nucleus (DAO) of the adult pretectum (Baier and Wullimann, 2021).
FIGURE 2 | Representation of binocular optic flow information by pretectal neurons. (A) Rotational and translational optic flow trigger the optokinetic response (OKR) and optomotor response (OMR), respectively. (B) 8-phase visual stimulus protocol used to characterize monocular and binocular selectivity of pretectal neurons. NL, nasalward motion to left eye; TL, temporalward motion to left eye; TR, temporalward motion to right eye; NR, nasalward motion to right eye; CW, clockwise; CCW, counter-clockwise; FW, forward; BW, backward. (C) (Top) Monocular direction-selective pretectal cells respond to motion in one direction (either nasalward or temporalward) presented to one eye. This example cell responds whenever nasalward motion is presented to the left eye, irrespective of the motion presented to the right eye. (Bottom) A translation-selective cell that responds to forward translational motion but not to rotational motion. In contrast to the cell shown above, its response to clockwise motion is suppressed, even though a response would be predicted from this cell's activity in response to nasalward motion in the left eye.
In summary, pretectal cells not only encode monocular optic flow signals (much like DS RGCs) but also respond to a much wider variety of optic flow features, such as binocularly integrated optic flow, and these response properties have already been tailored to compute behaviorally relevant information. Future work is required to elucidate the circuit mechanism and connectivity by which pretectal cells integrate optic flow information across space and time.
Cerebellum
The cerebellum is known as a major brain region that controls motor coordination and learning (Ito, 2006). Several studies using zebrafish have demonstrated cerebellar activation during optic flow stimuli (Ahrens et al., 2012;Matsui et al., 2014;Portugues et al., 2014) as well as functional roles for the cerebellum in motor coordination, adaptation, and learning (Aizenberg and Schuman, 2011;Ahrens et al., 2012;Harmon et al., 2017;Matsuda et al., 2017).
Compared with the pretectum, cell type composition, cellular organization and connectivity are better characterized in the cerebellum. The larval zebrafish cerebellum is anatomically organized in a typical vertebrate trilayered structure, consisting of the two major cell types, gamma-aminobutyric acid (GABA)ergic Purkinje cells (PCs) and glutamatergic granule cells (GCs) (Bae et al., 2009;Hashimoto and Hibi, 2012;Hsieh et al., 2014;Hamling et al., 2015). PCs receive afferent inputs from climbing fibers that originate from the inferior olivary nuclei located in the caudal hindbrain. In contrast, GCs receive inputs from mossy fibers that originate from neurons in several precerebellar nuclei located in various brain regions. GC axons further convey the information to PCs through parallel fibers. PCs integrate the two sources of inputs and finally send their outputs outside the cerebellum, either directly or indirectly via eurydendroid cells, which are the sole output neurons of the cerebellum (equivalent to the deep cerebellar nuclei in mammals). Single cell reconstruction and tracer studies have identified neuronal connections from the pretectum to the cerebellum in larval (Kunst et al., 2019) and adult (Yáñez et al., 2018;Dohaku et al., 2019) stages. However, it remains to be tested whether these pretectal-cerebellum projections carry optic flow-related signals, in other words, whether optic flow-responsive pretectal neurons directly project to the cerebellum.
Imaging of neuronal activity across the whole cerebellum population revealed regional differences in the cerebellum (Matsui et al., 2014;Knogler et al., 2019). In a pioneering work by Matsui et al. (2014), calcium imaging of PCs revealed that the rostromedial area of the cerebellum was activated during OMR, whereas the caudal part of the cerebellum was activated during OKR. These OMR-and OKR-related neuronal responses in PCs were absent when the tail or eyes of the fish were restrained, suggesting that these responses were related to proprioception and/or motor signals (Matsui et al., 2014). Furthermore, optogenetic manipulation of the rostromedial and caudal PC populations impairs tail movements triggered by OMR and eye movements induced by OKR stimulus, respectively (Matsui et al., 2014). Thus, cerebellar PCs are highly regionalized such that different functions are organized in rostromedial and caudal regions. Moreover, these functionally distinct regions have distinct afferent projection patterns. The rostromedial cerebellum projects to locomotor-related regions, such as nMLF, red nucleus, thalamus, and reticular formation, whereas the caudal cerebellum projects mainly to a vestibular-related region, namely, the descending octaval nucleus (Bae et al., 2009;Matsui et al., 2014;Knogler et al., 2019). Taken together, these functional and anatomical observations suggest that functionally distinct motor information is relayed to distinct downstream pathways, thereby driving the divergent motor outputs required for OKR and OMR.
Building on this finding, Knogler et al. (2019) examined whether PCs receive visual or motor inputs by presenting translational and rotational motions and simultaneously recording tail and eye movements. Taking advantage of the fact that variables encoding the visual inputs and behavioral outputs are correlated but temporally separable, the authors disambiguated whether PCs responded to either visual or motor variables (Knogler et al., 2019). Electrophysiological recordings of single PCs allowed the authors to separately analyze the two excitatory input streams that PCs receive, namely, complex spikes that originate from climbing fibers from the inferior olivary nucleus and simple spikes that arise from parallel fibers of GCs (Knogler et al., 2019). Inputs from the climbing fibers, which were measured by complex spikes, conveyed sensory, but not motor, information. Interestingly, climbing fiber inputs carrying translational motion information (i.e., OMR-triggering visual information) were frequently represented in the rostromedial region, whereas those carrying rotational motion information (i.e., OKR-triggering visual information) were highly abundant in the caudolateral region. In contrast, inputs from GC-derived parallel fibers, measured by simple spikes, were highly correlated with motor activity of the fish (measured by ventral root recordings in fictive swimming preparations), since such motor-related simple spikes were observed even without visual stimulation (Sengupta and Thirumalai, 2015;Scalise et al., 2016;Knogler et al., 2019). Consistent with these motor-related properties of simple spikes, GCs themselves were also motor correlated (Knogler et al., 2017, 2019).
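The sensory-versus-motor disambiguation strategy described above can be sketched as a toy regressor analysis; all signals, event times, and the calcium kernel below are invented for illustration and do not come from the cited recordings:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.1)  # 60 s of recording at 10 Hz (illustrative)

# Binary event trains: periodic visual stimulus epochs and swim bouts that
# partly overlap the stimulus and partly occur spontaneously -- i.e.,
# correlated but temporally separable variables.
stim = ((t % 20) < 5).astype(float)
swim = np.zeros_like(t)
swim[(t % 20 > 2) & (t % 20 < 4)] = 1.0    # swims during the stimulus
swim[(t % 20 > 12) & (t % 20 < 13)] = 1.0  # spontaneous swims without stimulus

# Convolve each event train with an exponential indicator kernel
kernel = np.exp(-np.arange(0, 3, 0.1) / 1.0)
reg_stim = np.convolve(stim, kernel)[: len(t)]
reg_swim = np.convolve(swim, kernel)[: len(t)]

# Simulated cell driven by motor activity plus noise
trace = reg_swim + 0.1 * rng.standard_normal(len(t))

r_stim = np.corrcoef(trace, reg_stim)[0, 1]
r_swim = np.corrcoef(trace, reg_swim)[0, 1]
label = "motor" if r_swim > r_stim else "sensory"
print(f"r_stim={r_stim:.2f}, r_swim={r_swim:.2f} -> {label}")
```

Because the simulated trace is built from the swim regressor, its correlation with the motor regressor exceeds that with the stimulus regressor, mirroring the logic (though not the statistics) of the published analysis.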
In summary, the cerebellum is spatially organized into behavioral modules, in which the two input streams (i.e., inferior olive-derived sensory stream and GC-derived motor stream) converge and integrate in PCs in a region-specific manner, and thus represents distinct visual features with motor context.
One of the major hypotheses for the role of the climbing fibers and complex spikes in the cerebellum is that climbing fiber input conveys error signals, i.e., discrepancies between a motor command and a feedback signal of the produced motor outcome, such as an unexpected image motion or retinal slip (Ito, 2013;Streng et al., 2018). Such error signals play a teacher's role for correcting subsequent behavior. When larval zebrafish passively experience optic flow and receive no visual feedback upon their own movements, an error signal is likely to be generated. However, evidence so far has not definitively identified the encoding of error signals in the cerebellum of larval zebrafish (Scalise et al., 2016;Knogler et al., 2019). Since the error hypothesis has been developed mostly in the context of learning, it is possible that different principles apply for the innate coding of sensory features during OKR and OMR in naïve animals. Thus, it remains to be determined what exact signals are carried by climbing fibers in the larval cerebellum (e.g., error/novelty/salience).
OUTLOOK
As discussed in this mini review, a series of recent studies uncovered the general organization of the optic flow processing pathway, involving a dedicated channel for DS processing in RGCs, integration of sensory information in the pretectum, and sensorimotor transformation and regionalization in the cerebellar circuit (Figure 1). Most of these discoveries were made possible by functional imaging at the systematic and cellular level as well as by testing a wider parameter space of visual stimulations. Although functional imaging has exhaustively identified key brain regions and cell populations for optic flow processing, this approach can, by definition, only correlate neuronal activity with sensory or motor variables, but cannot prove connectivity of neurons within or between given brain areas. To overcome this limitation and go beyond correlational analysis, other analysis approaches, such as spatiotemporally specific functional manipulations and anatomical or molecular analyses, will be required to cohesively understand the whole network mechanism that mediates optic flow processing and behavior.
AUTHOR CONTRIBUTIONS
KM and FK drafted the manuscript. Both authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by Grants-in-Aid for Scientific Research 20K15906 and 19K23787 (KM) and 17K20147 (FK) from the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) and the Tomizawa Jun-ichi and Keiko Fund of Molecular Biology Society of Japan for Young Scientist (FK).
Effect of Runner Thickness and Hydrogen Content on the Mechanical Properties of A356 Alloy Castings
Earlier studies demonstrated the detrimental effect of entrained bifilm defects on the tensile and fatigue properties of aluminum cast alloys. It was suggested that hydrogen has a contributing role, as it diffuses into the bifilms and inflates them to form hydrogen porosity. In this study, the effect of the runner height and hydrogen content on the properties of A356 alloy castings was investigated using a two-level full factorial design of experiments. Four responses were assessed: the Weibull modulus and position parameter of both the ultimate tensile strength (UTS) and the % elongation. The results suggested that decreasing the runner height and adopting procedures intended to decrease the hydrogen content of the casting caused a considerable enhancement of the Weibull moduli and position parameters of the UTS and % elongation. This was attributed to the more quiescent practice during mold filling, which reduced the possibility of bifilm formation, as well as to the decreased hydrogen level, which limited the amount of hydrogen diffusing into the bifilms and accordingly decreased the size of the entrained defects. This, in turn, would allow the production of A356 cast alloys with better and more reproducible properties.
Introduction
Recently, there has been an increased demand for aluminum alloys to produce efficient vehicles of lighter weights that are capable of reducing fuel consumption and harmful emissions. This necessitated the development of high performance aluminum alloys, particularly with regard to their mechanical properties. 1,2 The mechanical properties of aluminum alloys were found to be highly dependent on the alloy's inclusion content, and especially the occurrence of double oxide film defects, or bifilms. [3][4][5] During casting, bifilm defects are generated due to surface disturbance of the molten Al throughout the transfer and/or pouring procedures. This causes the oxidized surface to fold (by the action of a breaking wave) and then be entrained into the melt as a double oxide film, with an air layer captured between the two dry surfaces of the defect. [6][7][8] Bifilm defects typically act as cracks in the solidified casting due to the lack of bonding between their inner (unwetted) surfaces, which was shown to adversely affect the tensile and fatigue properties of Al alloys. [9][10][11][12][13] In addition, their stochastic distribution within the casting has been shown to be detrimental to the reproducibility of the properties. It was also reported that oxide films could serve as initiation sites for pores and iron intermetallics. [14][15][16] Based on theoretical, computational and experimental studies, a general agreement grew among researchers that the bifilm entrainment taking place during mold filling could be explained through the identification of the critical ingate velocity. If the liquid metal enters the mold at a speed exceeding this critical value (about 0.5 m/s for most Al alloys), the liquid front is no longer stable, and surface oxide entrainment is encountered. [17][18][19] Results of these studies demonstrated that top-pouring methods cannot produce reliable castings.
They advocated that only bottom-pouring gating techniques can avoid melt quality deterioration during mold filling, provided the gating system is designed to satisfy the critical ingate velocity requirements for sound castings. 13,[20][21][22][23][24][25][26][27] During solidification, the solubility of hydrogen in Al drops significantly, causing the former to be rejected by the growing dendrites. Concurrently, the entrained bifilms, initially compacted by bulk turbulence forces during pouring, start to unfurl, encouraged by the negative pressure arising from the shrinkage of the freezing metal. This causes hydrogen to diffuse into the internal atmosphere of the bifilms, inflating them into pores. 28,29 These findings were recently supported by the results of El-Sayed and Griffiths, which demonstrated a harmful influence of hydrogen on the mechanical properties of Al castings. [30][31][32] Design of experiments (DoE) is a systematic approach used to plan, conduct and analyze tests to study the effect of different parameters of a given process on the response(s) of that process through performing the minimum number of experiments. [33][34][35] A two-level full factorial design is one of the most widely used experimental designs, in which each of the process parameters is set at two levels. These levels are called "high" and "low," "good" and "bad," or "+1" and "-1," respectively. A full factorial design of k parameters, each at two levels, is denoted a 2^k design and involves 2^k runs. 36 In the current research, the effect of the runner thickness and the hydrogen content of the casting on the amount and morphology of bifilm defects, and subsequently on the properties of A356 alloy castings, was investigated. A two-factor two-level full factorial design (2^2 = 4 runs) was used for the modeling of the casting process.
This might allow a better understanding of the factors dominating the quality and reproducibility of light metal cast alloys.
Experimental Procedure
In this study, castings from A356 alloy (Al-7wt%Si-0.3wt%Mg) were produced via gravity casting. The chemical composition of the alloy, certified by the supplier, is given in Table 1. The accuracy of the measurements was reported to be within 0.005 wt%.
Two factors of the sand casting process were considered: the runner thickness and the hydrogen content of the casting. In addition, four responses were determined: the Weibull modulus and position parameter of the UTS, and the Weibull modulus and position parameter of the % elongation. Each factor was varied over two levels: "-1" and "+1." The experiment therefore contained four combinations of the factors, and the full factorial experiment (design matrix) is shown in Table 2.
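The design matrix of Table 2 can be generated programmatically; the sketch below is a generic two-level full factorial helper, not the Design-Expert workflow the authors actually used:

```python
from itertools import product

def two_level_full_factorial(n_factors):
    """All 2^k runs of a two-level full factorial design in coded units (-1/+1)."""
    return list(product((-1, +1), repeat=n_factors))

factors = ("runner thickness", "hydrogen content")
runs = two_level_full_factorial(len(factors))  # 2^2 = 4 runs
for i, run in enumerate(runs, start=1):
    settings = ", ".join(f"{name}={level:+d}" for name, level in zip(factors, run))
    print(f"Run {i}: {settings}")
```

For k = 2 this enumerates exactly the four factor combinations of Table 2; adding a third factor would double the run count to 8.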
A two-level full factorial design was applied to explore the effects of the two selected parameters and their interaction using Design-Expert software version 7.0.0 (Stat-Ease Inc., Minneapolis, USA). Figure 1 shows a sketch of the pattern used to produce the resin-bonded sand molds in this work. The gating ratio is defined as the ratio of the cross-sectional area of the sprue exit "As" to the cross-sectional area of the runner(s) "Ar" to the cross-sectional area of the ingate(s) "Ag" [As:Ar:Ag]. 37,38 In the current study, the sprue exit had a rectangular cross section of 13 × 10 mm². The runner also had a rectangular cross section, with a width of 20 mm and a thickness of either 10 or 25 mm. The ingate cross section was circular, with a diameter of 11 mm. Therefore,
As = 13 × 10 = 130 mm²
Ar (10-mm-thick runner) = 10 × 20 × 2 = 400 mm² (two runners, at the right and left of the sprue)
Ar (25-mm-thick runner) = 25 × 20 × 2 = 1000 mm² (two runners, at the right and left of the sprue)
Ag = π/4 × 11² × 10 = 951 mm² (ten test bars)
In this way, the gating ratio was determined for the 10- and 25-mm-thick runners to be 1:3.08:7.32 and 1:7.69:7.32, respectively.
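The gating-ratio arithmetic above can be verified directly; the small discrepancy in Ag (the formula gives ≈950 mm² rather than 951) presumably reflects the authors' intermediate rounding:

```python
import math

A_sprue = 13 * 10                  # sprue exit, mm^2
A_runner_thin = 10 * 20 * 2        # two 10-mm-thick runners, mm^2
A_runner_thick = 25 * 20 * 2       # two 25-mm-thick runners, mm^2
A_gate = math.pi / 4 * 11**2 * 10  # ten 11-mm circular ingates, mm^2

def gating_ratio(a_s, a_r, a_g):
    """Return the gating ratio As:Ar:Ag normalised to the sprue exit area."""
    return (1.0, a_r / a_s, a_g / a_s)

print(gating_ratio(A_sprue, A_runner_thin, A_gate))   # ~ (1, 3.08, 7.31)
print(gating_ratio(A_sprue, A_runner_thick, A_gate))  # ~ (1, 7.69, 7.31)
```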
For the preparation of the molds, two types of resin were used as sand binders: polymer polyols (35-50%) in trimethylbenzene and diphenylmethane diisocyanate (60-80%) in a high-boiling aromatic solvent, each at a percentage of 0.6%. The mold consists of ten test bars with a length and diameter of 100 and 11 mm, respectively. Two molds (20 test bars) were cast for each of the four experiments listed in Table 2. In each experiment, a six-kilogram charge of A356 alloy was melted in an induction furnace, and the melt was then kept at a temperature of 800°C under a partial vacuum of about 0.2 bar for 2 hours to promote the expansion of most oxide films (already existing in the charge) and their subsequent flotation to the surface of the melt. In this way, old oxide inclusions could be eliminated. 39,40 The melt was then poured (at a temperature of about 700°C) from a height of about 1 m into the sand molds; the pouring speed could therefore be estimated to be about 4.47 m/s. This was intended to cause the creation and entrainment of new double oxide film defects and their introduction into the melt. The time taken by the melt to completely fill the mold was determined to be about 7 seconds.
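The quoted pouring speed follows from free fall over the ~1 m drop, v = sqrt(2gh); the 4.47 m/s figure corresponds to taking g ≈ 10 m/s², versus ≈4.43 m/s with g = 9.81 m/s². A quick check:

```python
import math

def free_fall_speed(height_m, g=9.81):
    """Speed of a freely falling melt stream after dropping height_m metres."""
    return math.sqrt(2 * g * height_m)

v = free_fall_speed(1.0)                 # ~4.43 m/s with g = 9.81
v_paper = free_fall_speed(1.0, g=10.0)   # ~4.47 m/s, matching the text
print(f"{v:.2f} m/s vs. quoted {v_paper:.2f} m/s")
```

Either value is roughly an order of magnitude above the ~0.5 m/s critical ingate velocity cited in the Introduction, consistent with the stated intent to entrain new oxide films.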
In order to evaluate the influence of the hydrogen content of the casting, the experiments were grouped into two categories: the first with low hydrogen content (experiments 1 and 2) and the second with high hydrogen content (experiments 3 and 4). For experiments 3 and 4, the molten metal was poured into sand molds that had been prepared one day before the experiment. For experiments 1 and 2, to ensure castings with low hydrogen content, the melt was argon-degassed using a lance for 30 minutes before pouring, and the hydrogen content was verified using an AlSCAN™ device.
In addition, to minimize the amount of hydrogen picked up by the melt from the mold walls in these experiments, the molds were kept under a reduced pressure of about 0.5 bar for 14 days before the experiment. This approach was proposed to allow the removal of most of the solvent present in the resin bonding the furan sand molds. 32 Finally, the runner heights were 10 mm (thin) for experiments 1 and 3, and 25 mm (thick) for experiments 2 and 4. Appropriate selection of the runner height would prevent the melt flowing through it from jetting into the air, with the associated risk of the re-creation of oxide bifilm defects. No metal treatment was carried out before, during or after the casting operation.
After solidification, for each of the four experiments, a sample was cut from the runner bar and analyzed using a LECO™ hydrogen analyzer for solid-state hydrogen measurement of the castings from the different experiments. Tensile test bars were then machined from the solidified castings, with a gauge length and diameter of 37 and 6.75 mm, respectively. Twenty test bars were produced from each experiment. Testing was performed with a WDW-100E universal testing machine at an extension rate of 1 mm·min⁻¹. The UTS and % elongation results were assessed using a two-parameter Weibull distribution to evaluate the effect of the different casting parameters on the scatter of the casting tensile properties. Finally, the fracture surfaces of the test bars were examined using a Philips XL-30 scanning electron microscope (SEM), equipped with an energy-dispersive X-ray analyzer, for evidence of bifilms.
Results
In the current work, an A356 alloy melt was first held under vacuum to eliminate the effect of previously introduced oxides in the raw material and to ensure that the variability among the castings produced was mainly due to the changing casting conditions under which they were produced. 31,41 These conditions (factors) involved the runner thickness and the amount of hydrogen in the solidified casting. The results of the different experiments were interpreted to better understand the behavior of bifilms in Al alloy castings.
Figure 1. Sketch of the pattern used in this experiment (dimensions in mm). Note that the pattern design, with a parallel sprue and zero radii between the sprue base and the runner, was intended to introduce more turbulence and hence allow the creation of more bifilms.
The results showed a significant effect of the degassing treatment, as well as of holding the sand mold under a reduced pressure for a given time before pouring, on the casting hydrogen content. The average hydrogen content of Leco samples cut from the solidified undegassed castings (experiments 3 and 4) and degassed castings (experiments 1 and 2) was 0.24 and 0.12 cm³/100 g, respectively. The noticeable reduction in the hydrogen content in experiments 1 and 2 is attributed to both the degassing treatment, which decreased the amount of hydrogen in the melt before pouring, and the vacuum treatment of the mold before use, which seemed to minimize the amount of hydrogen picked up by the poured melt from the mold walls. 32,42 In an earlier study by Green and Campbell, it was shown that the Weibull distribution could better describe the probability of failure of cast metals under mechanical loading than a normal distribution. 43,44 Two important terms are used to characterize this distribution: the position parameter and the Weibull modulus. The position parameter is a characteristic value at which about 63% of the samples have failed. The Weibull modulus is the slope of the line fitted to the log-log Weibull cumulative distribution data and is used to describe the variability of the property examined. A larger modulus reveals a lower spread of the property. A casting with fewer defects would exhibit higher Weibull moduli and position parameters of its mechanical properties, which indicate higher and more reproducible properties. The current study employed the two-parameter Weibull distribution to quantify the variability of the UTS and % elongation of cast metals obtained by employing diverse casting parameters.
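The Weibull modulus and position parameter can be estimated from a set of strength measurements by median-rank regression, i.e., fitting ln(−ln(1−F)) against ln(UTS); the sketch below uses synthetic data with a known modulus, not the authors' measurements:

```python
import numpy as np

def weibull_fit(samples):
    """Two-parameter Weibull fit by median-rank regression.
    Returns (modulus m, position parameter sigma_0: the ~63.2% failure value)."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    # Median-rank estimate of the cumulative failure probability
    F = (np.arange(1, n + 1) - 0.5) / n
    # Linearised Weibull CDF: ln(-ln(1-F)) = m*ln(x) - m*ln(sigma_0)
    y = np.log(-np.log(1.0 - F))
    m, intercept = np.polyfit(np.log(x), y, 1)
    sigma_0 = np.exp(-intercept / m)
    return m, sigma_0

# Synthetic "UTS" data with a known modulus of 10 and scale of 150 MPa
rng = np.random.default_rng(1)
uts = 150.0 * rng.weibull(10.0, size=500)
m, sigma_0 = weibull_fit(uts)
print(f"modulus ~ {m:.1f}, position parameter ~ {sigma_0:.0f} MPa")
```

On real data the 20 bars per experiment would simply replace the synthetic array; a steeper fitted slope (larger m) indicates less scatter in the measured property.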
In the current study, the Weibull modulus and position parameter for the UTS and %elongation of the test bars from different experiments were determined and considered as responses of the experimental design. Table 3 lists the casting conditions of the experiments performed and the corresponding Weibull analysis results for different properties. Figure 2a, b and c shows the effect of runner thickness, hydrogen level and the interaction between the two parameters, respectively, on the Weibull modulus of the UTS. Corresponding plots related to the % elongation are presented in Figure 3a, b and c, respectively.
Both moduli increased consistently upon reducing the runner thickness and/or the hydrogen content. At a runner thickness and hydrogen content of 25 mm and 0.24 cm³/100 g Al, respectively, the Weibull moduli of the UTS and % elongation were 4.2 and 2.7, respectively. Reducing the thickness to 10 mm raised the moduli to 6.7 and 4, respectively, while decreasing the hydrogen content to 0.12 cm³/100 g Al elevated the moduli to 9.8 and 6.6, respectively.
Moreover, combining the thinner runner with the application of degassing and mold treatment (which reduced the hydrogen content to 0.12 cm³/100 g Al) resulted in a significant improvement of the moduli, which reached 19.2 for the UTS and 10.1 for the % elongation. Finally, the results suggest that the interaction between the runner thickness and hydrogen content is also significant for both moduli, as shown in Figures 2c and 3c. At lower hydrogen content, the converse effect of runner thickness on both moduli is more evident. Likewise, the impact of hydrogen content on the Weibull moduli is more evident at smaller runner thickness.
The influence of the runner thickness, the hydrogen content and the interaction between the two casting parameters on the position parameter of the UTS is shown in Figure 4a, b and c, respectively, and on the position parameter of the % elongation in Figure 5a, b and c, respectively.
The change of position parameters of the UTS and % elongation exhibited comparable trends to those of the Weibull moduli of both properties. The position parameters were enhanced from 87 to 158 MPa for the UTS and from 4.4 to 7.6 for the % elongation upon reducing both the runner height and hydrogen content. Furthermore, the interaction between both factors was also revealed to remarkably impact the position parameters of both tensile properties. Decreasing the hydrogen content caused the relationship between the runner height and the position parameter of the UTS (Figure 4c) and % elongation (Figure 5c) to be sharper, and vice versa.
Generally, it was obvious that the use of thin runners as well as the application of casting procedures that tended to minimize the hydrogen content of the casting had a significant effect on the enhancement of the Weibull moduli and position parameters of both tensile properties. As shown in Figures 2, 3, 4 and 5, the properties of the castings produced in experiment 1, where low hydrogen content castings were produced using thin runners, were the highest among all castings. This indicates that the casting properties have been improved, and the variability among them has been reduced.
Using the experimental data, a factorial analysis based on the analysis of variance (ANOVA) statistical approach was executed to determine the standardized effects of the studied parameters (the runner thickness and hydrogen content of the casting) and their interaction on the Weibull modulus and position parameter of both the UTS and % elongation. Table 4 summarizes the list of factors and their interaction, as well as the effect of each factor and/or interaction. The effect is the change in the response as the factor changes from the "-1" level to the "+1" level. In other words, the effect of a given factor A is the difference between the mean values of the response at the "+1" and "-1" levels of A. A positive value of the effect denotes an influence that favors the response, whereas a negative sign signifies a converse effect of the parameter on the studied response. 45
It is clearly seen that both the runner thickness and hydrogen content have antagonistic effects on the four outputs evaluated in this study (see Table 4). The effect of the hydrogen content on the responses was always higher than the effect of the runner thickness, by a factor ranging from 1.5 to 2.7. This is a clear indication of the more significant influence of the former on the reproducibility of aluminum castings.
Bifilm defects were detected at the specimens' fracture surfaces from the four experiments carried out in this work. Typical examples of such defects from experiments 1 and 4 are shown in Figures 6a and 7a, respectively. Results of EDX examination of the suspected oxide films, given in Figures 6b and 7b, respectively, confirmed that spinel films were present at these surfaces. It was shown that the areas of oxide fragments detected at the fracture surface of specimens from experiment 1 were much smaller than those detected in experiment 4. This is suggested to be a result of the significantly lower hydrogen content of the former experiment due to the application of degassing as well as the use of thin runners that seemed to minimize the oxide film entrainment during mold filling.
Discussion
Former research works demonstrated that the use of the badly designed gating system presented in Figure 1, with a runner of ≥25 mm height, was associated with the formation and entrainment of a substantial amount of bifilm defects. 32 They also advocated that, due to the lack of bonding between the inner (dry) sides of a bifilm, the hydrogen rejected during solidification passes into the defect and easily expands it like a balloon, creating a hydrogen pore in the final casting. 28,46,47 In the present work, in experiment 4, the poor mold design was deliberately used while casting practices were applied to produce a casting with high hydrogen content. This was anticipated to violate the critical ingate velocity and, accordingly, result in copious entrainment of oxide films.
Also, the relatively high hydrogen content of the castings in this experiment (about 0.24 cm 3 /100g Al) was expected to increase the hydrogen ingress into the bifilms and increase their sizes. This could be readily recognized by comparing the areas covered with oxide layers on the fracture surfaces of test bars presented in Figures 6 and 7, corresponding to castings from experiments 1 and 4, respectively. This resulted in a substantial drop in the tensile properties of the casting produced in experiment 4 (a position parameter of UTS and % elongation of 87 MPa and 4.4%, respectively) and also widened their spread (a Weibull modulus of UTS and % elongation of 4.2 and 2.7, respectively). This could be easily inferred from the results shown in Figures 2, 3, 4 and 5 as well as Table 3, which showed that the castings from experiment 4 experienced the worst properties among all experiments.
The DoE results, presented in Table 4, indicated that increasing the runner thickness from 10 to 25 mm had a negative effect on the Weibull modulus and position parameter of the UTS of about -6 and -19 MPa, respectively, and on the Weibull modulus and position parameter of the % elongation of about -2.4 and -1.2%, respectively. It might be argued that reducing the runner height strengthens the ability of the advancing meniscus to remain coherent and makes splashing and disintegration of the metal front during its passage through the runner more difficult. This would permit a calmer and smoother flow of the melt inside the mold, reduce the ingate velocity, minimize oxide film formation and, in turn, improve the tensile properties.
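For reference, the main effect of a factor in a two-level factorial design of this kind is simply the mean response at the factor's high level minus the mean response at its low level. The sketch below illustrates that arithmetic with invented response values; it is not the study's data or analysis code.

```python
# Main-effect calculation for a 2x2 factorial design
# (e.g. runner thickness x hydrogen level).
# The response values below are illustrative placeholders, not data from this study.

def main_effect(responses, factor):
    """responses: dict mapping (level_A, level_B) in {-1,+1}^2 to a response.
    Returns the main effect of factor 0 (A) or factor 1 (B)."""
    high = [y for levels, y in responses.items() if levels[factor] == +1]
    low = [y for levels, y in responses.items() if levels[factor] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

# Hypothetical Weibull-modulus responses at the four factor combinations
y = {(-1, -1): 10.0, (+1, -1): 14.0, (-1, +1): 20.0, (+1, +1): 24.0}

effect_a = main_effect(y, 0)  # 4.0 by construction
effect_b = main_effect(y, 1)  # 10.0 by construction
```

With real data, each corner value would be the mean over the replicated castings at that factor combination.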
This corroborates the results obtained by Green and Campbell, 44,48 who achieved a considerable improvement in the Weibull moduli of the tensile properties of Al-7Si-Mg castings, of about 350%, by applying a turbulence-free filling system designed to prevent oxide film entrainment. Furthermore, the hydrogen level of the solidified casting was found to significantly affect the tensile properties. As indicated in Table 4, the hydrogen content of the casting from experiment 1 was considerably smaller than that of the casting from experiment 4. Accordingly, the castings from experiment 1 showed a remarkable enhancement of the Weibull moduli of the UTS and % elongation, by about 360% and 270%, respectively, compared to those in experiment 4. A less pronounced increase in the position parameters, of about 80% and 70% for UTS and % elongation, respectively, was also obtained due to the reduced hydrogen content and the use of a thin runner. See Table 3 and Figures 2, 3, 4 and 5.
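The Weibull moduli and position parameters compared here are conventionally estimated by linearizing the two-parameter Weibull distribution, F = 1 - exp[-(sigma/sigma0)^m], so that ln(-ln(1-F)) is linear in ln(sigma) with slope m. A minimal sketch of such a fit, assuming median-rank plotting positions (the paper's exact fitting procedure is not specified here):

```python
import math

def weibull_fit(strengths):
    """Estimate Weibull modulus m and position parameter sigma0 by least
    squares on the linearized CDF: ln(-ln(1-F)) = m*ln(sigma) - m*ln(sigma0).
    Median-rank plotting positions F_i = (i - 0.3)/(n + 0.4), i = 1..n."""
    s = sorted(strengths)
    n = len(s)
    x = [math.log(v) for v in s]
    y = [math.log(-math.log(1.0 - (i + 0.7) / (n + 0.4))) for i in range(n)]
    xbar, ybar = sum(x) / n, sum(y) / n
    m = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
    sigma0 = math.exp(xbar - ybar / m)  # intercept = -m*ln(sigma0)
    return m, sigma0
```

A higher fitted m means a narrower spread of strengths, which is why the Weibull modulus is used above as the measure of reliability.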
It should be noted that the use of extra thin runners might have an adverse effect on the filling regime. The reduction in the runner cross-sectional area below a certain value might increase the ingate velocity and violate the critical velocity considerations. Another point is that for the thick runner another factor might have contributed to the formation of bifilms (rather than the violation of critical velocity), which was the increase in air entrainment during the flow of the melt through the relatively larger runner. This would require further investigation by carefully studying the effect of runner height (with values ranging from 5 to 25 mm for example) on the filling behavior and recording the ingate velocity corresponding to each runner height.
The implication of the current findings is that careful adjustment of the runner height can significantly reduce the production of bifilm defects. Moreover, reducing the casting hydrogen content minimizes the amount of gas that diffuses into the entrained bifilms, decreasing the size of the defects. Thus, appropriate treatment procedures for both the melt and the sand mold would benefit the production of castings with minimum hydrogen content, and the use of a mold with a thin runner would encourage a more quiescent mold filling regime. These considerations would allow a casting producer to reduce both the number and the size of the bifilms in the melt and consequently obtain an Al cast alloy with improved mechanical properties.
Conclusions
1. The detection of bifilms at the fracture surfaces of the majority of tensile samples examined indicates the deleterious influence of these inclusions on the mechanical properties of Al cast alloys.
2. Holding the sand molds under a partial vacuum for some time prior to pouring can cause the mold to lose most of the solvent of the resin bonding the sand grains, and therefore minimize the amount of hydrogen picked up by the melt from the mold walls.
3. With a more careful and quiescent mold filling practice, the amount of entrained bifilm defects (and accordingly oxide-related hydrogen-containing porosity) is significantly decreased in the final casting.
4. Factorial analysis revealed that reducing the hydrogen level of the A356 cast alloy had remarkable positive effects of about 9 and 5, respectively, on the UTS and % elongation Weibull moduli, which was about double the corresponding effects of decreasing the runner thickness.
5. DoE results also indicated that the hydrogen level was more influential on the UTS and % elongation position parameters, with effects of 52 MPa and 2%, respectively, which was also higher than the corresponding effects of decreasing the runner thickness, by factors of 1.7 and 0.6, respectively.
6. The optimized casting conditions, involving the implementation of 10-mm-thick runners and the reduced hydrogen content of 0.12 cm 3 /100g Al, caused a considerable improvement of the Weibull moduli of the UTS and % elongation by about 360% and 73%, respectively.
Funding
This work has not received any funding.
Conflict of interest
The authors declare no conflict of interest.
Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Thionylchloride catalyzed aldol condensation: Synthesis, spectral correlation and antibacterial activities of some 3,5-dichloro-2-hydroxyphenyl chalcones
A series of substituted styryl 3,5-dichloro-2-hydroxyphenyl ketones [1-(3,5-dichloro-2-hydroxyphenyl)-3-phenylprop-2-en-1-ones] was synthesized using a thionyl chloride assisted Crossed-Aldol reaction. The yields of the chalcones were more than 80%. The synthesized chalcones were characterized by analytical and spectroscopic data, from which the group frequencies were correlated with Hammett substituent constants and the F and R parameters. From the results of the statistical analysis, the effects of the substituents are discussed. The antibacterial activities of these chalcones were evaluated using the Bauer-Kirby method.
INTRODUCTION
Chalcones are α,β-unsaturated ketones that possess a methylene structural moiety and belong to an important class of biomolecules. Many alkyl-alkyl, alkyl-aryl, and aryl-aryl chalcones have been synthesized [1] and extracted from natural plants [2] by organic chemists. Various methods are available for synthesizing chalcones, such as the Aldol, Crossed-Aldol, Claisen-Schmidt, and Knoevenegel condensations, as well as greener methods: grinding of reactants, solvent-free conditions, and oxide nanoparticles with microwave heating. Microwave-assisted solvent-free Aldol and Crossed-Aldol condensations [3][4][5] are also useful for the synthesis of carbonyl compounds. Owing to rotation about the C-C single bond [6] between the carbonyl and alkene carbons, chalcones exist as E s-cis, E s-trans, Z s-cis, and Z s-trans conformers. These structural conformers of chalcones have been confirmed by NMR and IR spectroscopy.
General
All chemicals used were purchased from the Sigma-Aldrich chemical company, Bangalore. Melting points of all chalcones were determined in open glass capillaries on a Suntex melting point apparatus and are uncorrected. Ultraviolet spectra of the synthesized chalcones were recorded using an ELICO double-beam BL222 Bio-Spectrophotometer. Infrared spectra (KBr, 4000-400 cm -1 ) were recorded on an AVATAR-300 Fourier transform spectrophotometer. A BRUKER 500 MHz NMR spectrometer was used to record 1 H and 13 C spectra in CDCl 3 solvent with TMS as internal standard.
Synthesis of chalcones
A mixture of 3,5-dichloro-2-hydroxyacetophenone (100 mmol), the substituted benzaldehyde (100 mmol), 15 mL of diethyl ether, and thionyl chloride (100 mmol) was stirred vigorously at room temperature for 30 minutes (Scheme 1). After complete conversion of the ketone, as monitored by TLC, the mixture was allowed to stand for 20 minutes. The spent reagents were removed by filtration. The filtrate was washed with distilled water, and the product was recrystallized from absolute ethanol, dried well, and kept in a desiccator.
RESULTS AND DISCUSSION
In our organic chemistry research laboratory, we attempted to synthesize aryl chalcone derivatives by the Crossed-Aldol condensation of aryl methyl ketones and benzaldehydes bearing either electron-withdrawing or electron-donating substituents, in the presence of the strongly acidic catalyst thionyl chloride in diethyl ether, without any added acid, base, or salt and at ambient temperature. Accordingly, the chalcone derivatives were synthesized by the reaction of 100 mmol of the aryl methyl ketone, 100 mmol of the substituted benzaldehyde, and 100 mmol of thionyl chloride in 15 mL of ether at room temperature (Scheme 1). During the course of this reaction, the acidic thionyl chloride catalyzes the Aldol reaction between the aryl ketone and the aldehyde, and elimination of water gives the chalcone. The yields of the chalcones in this reaction are more than 80%. The proposed general mechanism of this reaction is given in Fig. 1. We further investigated this reaction with equimolar quantities of 3,5-dichloro-2-hydroxyacetophenone and benzaldehyde (entry 27), obtaining a yield of 83%. The physical constants, yields, and mass spectral data are presented in Table 1. We also studied the effect of the solvent on this Aldol condensation by observing the yield of the products. The solvents ethanol, methanol, dichloromethane, dimethylformamide, and water were used for the Aldol reaction of 3,5-dichloro-2-hydroxyacetophenone with benzaldehyde, giving chalcone yields of 73%, 68%, 65%, 66%, and 65%, respectively. The same reaction carried out under reflux conditions showed no improvement in the yield of the products. Thus the aim of this synthetic method was achieved, with yields of more than 82% of aryl chalcones from the condensation of 3,5-dichloro-2-hydroxyacetophenone and benzaldehydes in the presence of SOCl 2 /Et 2 O at room temperature.
The ultraviolet, infrared, and NMR spectral data of the previously unreported chalcones, the substituted 3,5-dichloro-2-hydroxyphenyl ketones, are summarized below (entries 27-36). In the present investigation, the spectral linearity of the chalcones has been studied by evaluating the effect of the substituents on the UV absorption maxima. The assigned group frequencies of all chalcones, such as the carbonyl stretches νCO, the deformation modes of the vinyl part (CH out-of-plane, CH in-plane, CH=CH and >C=C< out-of-plane, cm -1 ), the vinyl hydrogen signals from the IR spectra, and the chemical shifts δ (ppm) of H α , H β , C α , C β and CO from the 1 H and 13 C NMR spectra, have been correlated with various substituent constants.
1. Ultra violet spectral study
The UV spectra of all synthesized chalcones were recorded on a SHIMADZU-1650 spectrometer (λ max , nm) in spectral-grade methanol. The measured absorption maxima (λ max , nm) of these chalcones are presented in Table 2. These values were correlated with the Hammett substituent constants and the F and R parameters using single and multiple linear regression analysis. For the Hammett correlation involving the group frequencies and absorption maxima, the form of the Hammett equation employed is λ = ρσ + λ 0 , where λ 0 is the absorption maximum of the parent member of the series.
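In its standard single-parameter form, the Hammett correlation is a straight-line fit of the observed quantity against the substituent constant, so the reported ρ values and correlation quality follow from ordinary least squares. A minimal sketch, using hypothetical σ and λmax values rather than those of Tables 2 and 3:

```python
def hammett_fit(sigma, lam):
    """Least-squares slope (rho), intercept (lambda_0), and correlation r
    for the single-parameter Hammett relation lam = rho*sigma + lambda_0."""
    n = len(sigma)
    sbar = sum(sigma) / n
    lbar = sum(lam) / n
    sxx = sum((s - sbar) ** 2 for s in sigma)
    syy = sum((v - lbar) ** 2 for v in lam)
    sxy = sum((s - sbar) * (v - lbar) for s, v in zip(sigma, lam))
    rho = sxy / sxx
    lam0 = lbar - rho * sbar
    r = sxy / (sxx * syy) ** 0.5
    return rho, lam0, r

# Hypothetical values constructed so that lambda_max = 320 - 5*sigma exactly,
# giving rho = -5 (a negative rho, as reported in the text) and r = -1.
sigma = [-0.27, 0.0, 0.23, 0.45, 0.78]
lam = [320.0 - 5.0 * s for s in sigma]
```

The magnitude of r is what decides whether a correlation is judged "satisfactory" or "poor" in the discussion that follows.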
The results of the statistical analysis of these values with the Hammett substituent constants are presented in Table 3. From Table 3, the Hammett substituent constants σ, σ + , σ I , σ R and the F and R values gave poor correlations with λ max . All constants gave negative ρ values. This is attributed to the weak influence of the substituents on the absorption and to the resonance-conjugative structure shown in Fig. 2. Multiple regression analysis of these frequencies for all ketones with the inductive, resonance and Swain-Lupton [22] constants produced satisfactory correlations, as evident in equations 2 and 3.
2. IR spectral study
The carbonyl stretching frequencies (cm -1 ) of the s-cis and s-trans isomers of the present study are presented in Table 2. The stretching frequencies for carbonyl absorption are assigned based on the assignments made by Hays and Timmons [23] for s-cis and s-trans conformers at 1690 and 1670 cm -1 , respectively. As anticipated, the lowest carbonyl frequency is observed in both conformers when the strongest electron-donating group is present in the phenyl ring, while the highest frequency is noted with the strongest electron-withdrawing group. A similar trend in absorption was noted earlier by Perjessy and Hrnciar [24], whose investigations on chalcones demonstrated that s-trans conformers transmit substituent effects more effectively than s-cis conformers, for the reason stated earlier. The difference in carbonyl frequencies between the s-cis and s-trans conformers is higher in this study than the difference observed by Silver and Boykin [25] between similar conformers in phenyl styryl ketones. These data have been correlated with the Hammett substituent constants and the Swain-Lupton constants [22]. In this correlation the form of the Hammett equation employed is ν = ρσ + ν 0 , where ν is the carbonyl frequency of the substituted system and ν 0 is the corresponding quantity of the unsubstituted system; σ is a Hammett substituent constant, which in principle is characteristic of the substituent, and ρ is a reaction constant, which depends upon the nature of the reaction. The Hammett equation is one of the important tools for studying linear free-energy relationships, and it has been widely used in studies of the chemical reactivity of substituted aromatic systems.
From Table 2, the s-cis conformers gave satisfactory correlations with the Hammett σ, σ + , and σ I constants, whereas for the s-trans conformers the νC=O correlation fails with the Hammett parameters. All correlations gave positive ρ values, implying that a normal substituent effect operates in all systems.
The correlations of the CH in-plane and out-of-plane modes with the Hammett σ constants failed. The CH in-plane modes gave negative ρ values in all correlations.
A satisfactory correlation was obtained for the CH=CH out-of-plane modes with the Hammett σ R constants, whereas all correlations failed for the C=C out-of-plane modes, and the OH stretches likewise failed to correlate with the Hammett substituent constants and the F and R parameters. This is due to the inability of the polar, resonance, and inductive substituent constants to predict the reactivity of these frequencies, together with the resonance-conjugative structure shown in Fig. 2. Some of the individual single-parameter correlations failed with the Hammett substituent constants and the F and R parameters, while multi-regression analysis with the Swain-Lupton [22] constants proved worthwhile; the resulting equations are shown in 5-18.
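The Swain-Lupton multi-regression referred to here conventionally takes the form ν = ν0 + f·F + r·R, i.e. a least-squares fit with two explanatory variables, solvable through the 3x3 normal equations. A sketch under that assumed form, with invented coefficients and substituent values for illustration:

```python
def swain_lupton_fit(F, R, nu):
    """Solve the normal equations of nu = nu0 + f*F + r*R by Gaussian
    elimination; returns [nu0, f, r]."""
    n = len(nu)
    cols = [[1.0] * n, list(F), list(R)]  # design-matrix columns: 1, F, R
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    b = [sum(ci * y for ci, y in zip(cols[i], nu)) for i in range(3)]
    # Forward elimination with partial pivoting
    for k in range(3):
        p = max(range(k, 3), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, 3):
            m = A[i][k] / A[k][k]
            for j in range(k, 3):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

# Invented data lying exactly on nu = 1670 + 12*F - 8*R
F = [0.0, 0.1, 0.4, 0.5, 0.7]
R = [0.0, -0.2, 0.1, -0.5, 0.3]
nu = [1670.0 + 12.0 * Fi - 8.0 * Ri for Fi, Ri in zip(F, R)]
```

The fitted f and r weights then indicate the relative importance of field/inductive versus resonance contributions for each group frequency.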
CONCLUSION
We have developed an efficient Crossed-Aldol condensation for the synthesis of chalcones using a thionyl chloride catalyst. The yield of the reaction is more than 80%. The purities of the synthesized chalcones were checked by their physical constants and analytical and spectral data. The spectroscopic data of the chalcones were correlated with the Hammett substituent constants and the F and R parameters. The antibacterial activities of all synthesized chalcones have been studied using the Bauer-Kirby method.
New Uniform Motion and Fermi–Walker Derivative of Normal Magnetic Biharmonic Particles in Heisenberg Space
In the present paper, we first discuss normal biharmonic magnetic particles in the Heisenberg space. We describe new uniform motions and their properties in the Heisenberg space. Moreover, we obtain a new uniform motion of the Fermi–Walker derivative of normal magnetic biharmonic particles in the Heisenberg space. Finally, we investigate the uniformly accelerated motion (UAM), the unchanged direction motion (UDM), and the uniformly circular motion (UCM) of moving normal magnetic biharmonic particles in Heisenberg space. 53C80; 83A05
Introduction
In relativistic physics, mathematical description of the motion of the particle is given by its kinematics. Kinematic features of a particle moving through a continuous, differentiable curve or geometric features of the curve itself in space are mostly described by the moving orthogonal frame such as Frenet-Serret frame, parallel frame, rotation minimizing frame, etc. Generally, the factors affecting this motion are not discussed with the exception of projectiles and falling bodies. These studies have been recently improved by considering the quantities that influence the motion, i.e., mass and force. Thus, dynamics of the motion of the particle can be discussed by using the mathematical description and geometric characterization in a given space [1,2].
The description of UAM in relativity has been reviewed, and the concept of a UAM observer in standard space time has been investigated in depth in [3]. The approach may be regarded as Lorentzian, offering a different construction of a stationary regular space. The trajectories of UAM are shown to be the prolongations onto the space time of certain integral curves of a new vector field defined on a suitable fiber bundle over the space time.
They thus found a new geometric approach ensuring that an inextensible UAM observer does not disappear in a finite proper time. The investigation of these motions is of technological and physical interest because they correspond to the orbits of certain artificial satellites, planets, or stars.
The notion of the UAM was analyzed in detail by giving its novel geometric characterization by Fuente and Romero [4]. The description of the unchanged direction motion (UDM) was presented by extending the UAM by Fuente, Romero, and Torres [5]. The intrinsic definition of the uniformly circular motion (UCM) was given by Fuente, Romero, and Torres as a particular case of a planar motion [5].
In practice, essential features of the Landau-Hall structure are obtained among the additional solutions of the Lorentz force formulation. Hence, this also indicates that magnetic particles can be used to solve a variational problem [6][7][8][9].
As can be seen in the literature, the major settings taken into account have been magnetic curves in Riemannian spaces and in Riemannian surfaces of constant sectional curvature, followed successively by settings of less simple curvature, different signatures, and higher dimensions [10][11][12][13].
Research on magnetic particles has concentrated on a moving charged particle, usually free of any kind of exterior force in its motion, in an associated magnetic field [14][15][16][17]. On the other hand, it is realistic to presume that there may be exterior forces affecting the behavior of the particle, including gravitational force, frictional force, normal force, etc. Motivated by this point, we study the new uniform motion of velocity magnetic biharmonic particles and some vector fields with the Fermi-Walker derivative in Heisenberg space. In [18], we already characterized the frictional magnetic curves on a 3-dimensional Riemannian manifold by providing a straightforward exposition of the physical modeling of special magnetic trajectories. Using Riemannian geometry and standard methods of differential geometry, we aim to investigate other significant magnetic trajectories on the 3D Riemannian manifold.
The Heisenberg Group and Magnetic Particles
The 3-dimensional Heisenberg group can be described as R 3 endowed with the following multiplication: A basis of left-invariant vector fields is given by The only non-trivial bracket relation is [e 1 , e 2 ] = e 3 .
We construct a Riemannian metric on it. The Levi-Civita connection in the Heisenberg group is determined by the Koszul formula; using the Lie bracket relations we obtain ∇ e 1 e 1 = ∇ e 2 e 2 = ∇ e 3 e 3 = 0, A magnetic field B defined on a manifold (M n , g) is a 2-form whose Lorentz force is the field φ given by From the Levi-Civita connection, a magnetic particle ζ satisfies Also [2], the Lorentz force φ can be presented by Let α be a regular biharmonic particle and B a magnetic field in Heisenberg space. We call the particle α a normal biharmonic magnetic particle if the normal field of the particle satisfies the following Lorentz equation
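Because the explicit group law and frame formulas are elided above, a concrete way to check the stated bracket relation [e1, e2] = e3 is the standard matrix model of the Heisenberg group (unipotent upper-triangular 3x3 matrices), identifying e1, e2, e3 with the elementary matrices E12, E23, E13 and the Lie bracket with the matrix commutator. This identification is one common convention and may differ from the paper's:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(3)] for i in range(3)]

def bracket(A, B):
    """Lie bracket of the matrix Lie algebra: [A, B] = AB - BA."""
    return mat_sub(mat_mul(A, B), mat_mul(B, A))

# Assumed identification of the left-invariant frame with elementary matrices:
# e1 <-> E12, e2 <-> E23, e3 <-> E13
E12 = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
E23 = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]
E13 = [[0, 0, 1], [0, 0, 0], [0, 0, 0]]
```

In this model bracket(E12, E23) returns E13 while the brackets of E13 with the other two vanish, matching the statement that [e1, e2] = e3 is the only non-trivial relation.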
Uniform Motion for Normal Biharmonic Magnetic Particles
In this part, we characterize the uniform motion of moving charged particles that are unit-speed velocity magnetic biharmonic particles. We obtain necessary and sufficient conditions that must be satisfied by the biharmonic particle in terms of the Frenet curvatures of the world line of the magnetic particles. We present the following definitions of UDM, UAM, and UCM [19,20].
where R is any vector field along the particle, h is the metric, T is the tangent field, and ∇ is the derivative operator [20].
This yields the following theorem.
Using parallelism, we obtain the following lemma.
Proof. From the Fermi-Walker derivative, we have the following.
This completes the proof.
Uniformly Accelerated Motion (UAM)
In this subsection, we characterize UAM in Heisenberg space.
By using the Fermi-Walker derivative and the following equation we have the above system.
This gives us the following equation. The φ (N) observes a UAM iff By using the Fermi-Walker derivative and the following equation, we have the above system.
The φ (B) observes a UAM iff From the Fermi-Walker derivative and the following equation, we have the above system. Figure 1 demonstrates the magnetic trajectories of the N-magnetic particle with UAM.
Unchanged Direction Motion (UDM)
In this subsection, we characterize UDM in Heisenberg space.
where r 1 , r 2 are constants.
The φ (N) observes a UDM iff where r 3 , r 4 are constants.
The φ (B) observes a UDM iff where r 5 , r 6 are constants. Figure 2 demonstrates the magnetic trajectories of N-magnetic particle with UDM.
Conclusions
In this work, we investigate a special type of magnetic trajectory, one corresponding to a moving charged particle in an associated magnetic field in Heisenberg space. This study differs from former studies in the literature since it is set in the Heisenberg space. We consider the uniformly accelerated motion (UAM), the unchanged direction motion (UDM), and the uniformly circular motion (UCM) of moving normal magnetic biharmonic particles in Heisenberg space.
In future studies, we will investigate the physical implications of an external force on a charged particle by obtaining different trajectories in different space time structures, such as Heisenberg space time, de Sitter space time, anti-de Sitter space time, etc. Finally, this study leads toward the classification of the magnetic trajectories associated with moving binormal biharmonic magnetic particles in de Sitter space.
Conflicts of Interest:
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
The role of radial electric fields in linear and nonlinear gyrokinetic full radius simulations
The pivotal role played by radial electric fields in the development of turbulence associated with anomalous transport is examined by means of global gyrokinetic simulations. It is shown that the stabilizing effect of E×B flows on ion temperature gradient (ITG) modes is quadratic in the shearing rate amplitude. For a given shearing rate it leads to an increase in the critical gradient. The electric fields (zonal flows) self-generated by ITG modes interact in a nonlinear way and it is shown that a saturated level of both the zonal flow and ITG turbulence is reached in the absence of any collisional mechanism being included in the model. The quality of the global nonlinear simulations is verified by the energy conservation which is allowed by the inclusion of nonlinear parallel dynamics. This demonstrates the absence of spurious damping of numerical origin and thus confirms the nonlinear character of zonal flow saturation mechanism.
Introduction
The development of turbulence from underlying micro-instabilities is one of the mechanisms most widely held responsible for anomalous transport in magnetically confined plasmas. Various types of drift-wave like instabilities have been considered, e.g. ion temperature gradient (ITG) modes, trapped electron modes (TEM), trapped ion modes (TIM), electron temperature gradient (ETG) modes and kinetic ballooning modes (KBM, or Alfvén-ITG). Fluid, gyrofluid, or gyrokinetic theories have been developed, most of them within the electrostatic approximation. Various approaches, differing in their degree of sophistication, have been used to solve the models, ranging from local dispersion relations to ballooning approximations to flux tubes to full radius computations.
While linear theory is relatively well established for most of the instabilities cited above, the nonlinear evolution of these modes is a much more difficult problem. The challenge is not only of a technical nature but also to understand the physics mechanisms at play. Several theoretical works [1]- [16] have shown that one of the pivotal ingredients of the physics is the existence of radial electric fields that are self-generated by the turbulence. These are the E × B 'zonal flows' (ZF). One mechanism for the ZF generation is by modulational instability of the ITG modes. The importance of the ZF is in that they can have a stabilizing effect on the ITG modes. The mechanism usually invoked is that the ZF, if they are sheared, can tear apart the turbulent eddies of the ITG, hence the alias 'sheared E × B flows'. Therefore a nonlinear feedback loop links the ITG modes amplitude and the ZF amplitude.
Since ZF have k ∥ = 0 they are not subject to Landau damping. The question of ZF damping is crucial [5] because it affects the ZF amplitude, which in turn determines the ITG turbulence level and therefore the anomalous heat flux. Various linear or nonlinear mechanisms for the damping of ZF have been considered. Some models have examined Kelvin-Helmholtz instabilities of the ZF [10], while some others consider collisionality [7,8]. In nonlinear numerical simulations it is therefore important to verify that the discretization does not introduce spurious damping or drive mechanisms. This issue is addressed in this paper.
Radial electric fields can also be imposed by external means, e.g. creating sheared toroidal rotation or with an applied electrode bias [17,18]. These external electric fields could play an important role in the formation of transport barriers. There is a generally observed correlation between the appearance of radial electric fields, reduction of turbulence and improved energy confinement [19]. Experimental evidence of the self-regulating role of ZF has recently been observed in both tokamaks [20,21] and stellarators [22].
In this paper we examine two aspects of the role of E × B flows. First, considering the radial electric field as an equilibrium quantity, we consider ITG, TIM and TEM instabilities. We show that an applied E × B flow is generally stabilizing for ITG modes with a quadratic dependence on the shearing rate, whereas E × B flows can destabilize trapped particle modes. Second, we show in a nonlinear full radius gyrokinetic simulation how the system reaches a quasi-stationary state with finite level of both ITG and ZF amplitudes. The energy conservation property is checked and shows the absence of spurious damping or drive of numerical origin. Since our model does not include any linear damping of ZF (Landau or collisions) this points to the essentially nonlinear mechanism for the saturation of ZF.
Gyrokinetic global model
We consider low β magnetic configurations with prescribed equilibrium profiles of safety factor q(ψ), density n 0 (ψ), electron and ion temperatures T e (ψ) and T i (ψ), to which can also be added an electrostatic potential Φ 0 (ψ). The symmetry of the configuration is either axisymmetric or helical. All profiles are taken as function of the poloidal magnetic flux ψ (respectively helical flux). The radial coordinate is defined as s = ψ/ψ s , where ψ s is the edge poloidal flux (respectively helical flux).
The gyrokinetic model is used to describe electrostatic perturbations that satisfy the usual ordering ω/Ω ∼ k ∥ /k ⊥ ∼ eδφ/T e ∼ ρ L /L n ∼ ρ L /L T ∼ O(ε g ), where ρ L is the ion Larmor radius, Ω is the ion cyclotron frequency, L −1 n = |∇ ln n 0 | and L −1 T = |∇ ln T 0 |. Writing the distribution function as f = f 0 + δf and the electrostatic potential as Φ 0 + δφ, the equations read [23], after linearization: where the brackets indicate Larmor averaging. We consider small Mach numbers of the E × B flow and thus terms of order (v E /v thi ) 2 and B (v E /v thi ) have been neglected. The system of equations is closed by the quasi-neutrality condition Without loss of generality the perturbations are written as where θ is the poloidal angle, ϕ is the toroidal angle and χ is a straight field line poloidal coordinate. Taking advantage of the fact that perturbations have k ∥ ≪ k ⊥ , the parameters s 0 , m 0 and ω 0 are chosen such that the transformed quantities δf and δφ have much smoother poloidal and time dependences. This proves to considerably improve the computational accuracy [24]. For electrons the Larmor radius is neglected. Passing electrons are assumed to respond adiabatically. Non-adiabatic trapped electrons are considered as drift-kinetic.
For nonlinear simulations in this paper we shall neglect magnetic shear and magnetic curvature effects. Electrons are assumed to respond adiabatically along the magnetic field lines. The corresponding equations are where n i is the gyro-averaged ion density and φ̄ is the magnetic surface average of the potential. Note that the parallel nonlinearity is retained in our model. This implies that energy conservation is satisfied: The turbulence-driven average radial energy flux is While the nonlinearly driven flux is the most interesting physical quantity in the context of anomalous transport studies, the energy conservation property of the equations can be used as a strong test of the quality of the numerical simulation [25]. If satisfied, it shows that there is no spurious damping or drive mechanism of numerical origin. The equations are discretized using quasi-particles (gyrocentre tracers) and finite elements on a magnetic coordinate system [24]. In [25] several techniques are described to improve the quality of the numerical scheme. In particular it is shown that energy conservation can be satisfied with this PIC-δf scheme.
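As a diagnostic, the energy-conservation test described above amounts to monitoring the relative drift of the total (field plus kinetic) energy over the run; any drift beyond round-off tolerance signals spurious numerical damping or drive. A hedged sketch of such a check (the traces below are illustrative, not output of the code of [24,25]):

```python
def relative_energy_drift(field_energy, kinetic_energy):
    """Maximum relative deviation of the total energy from its initial value,
    given time series of field and kinetic energies sampled at the same steps."""
    total = [f + k for f, k in zip(field_energy, kinetic_energy)]
    e0 = total[0]
    return max(abs(e - e0) for e in total) / abs(e0)

# Illustrative traces: energy is exchanged between field and particles
# while the total stays constant, so the drift is ~0.
field = [1.0, 1.2, 0.9, 1.1]
kinetic = [4.0, 3.8, 4.1, 3.9]
drift = relative_energy_drift(field, kinetic)
```

In a real PIC-δf run one would accumulate both energies each time step and flag the simulation when the drift exceeds a chosen tolerance.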
The stabilization of ITG modes by the shearing effect of E × B flows is often expressed by the criterion |ω E×B | > γ 0 , where ω E×B is the shearing rate and γ 0 is the maximum linear growth rate in the absence of flow. The shearing rate is best described by the expression [3] ω E×B = ∆ψ ∆ϕ in which ∆ψ/|∇ψ| and R∆ϕ are the radial and toroidal correlation lengths of turbulence, respectively. Note that strictly speaking the turbulence suppression criterion [3] is when |ω E×B | is comparable to the inverse turbulence correlation time. For application purposes, however, this is often replaced by the linear growth rate. The last expression in equation (15) is valid in the circular, axisymmetric, large aspect ratio limit and ρ is the minor radius. It shows that the shearing rate is not simply the shear in the poloidal E × B velocity but there is a contribution from the magnetic shear combined with the value of the poloidal E × B velocity.
Applied E × B flows
In this section we focus on the way applied E × B flows can influence the ITG growth rate using a model with adiabatic electrons. The question of the non-adiabatic trapped electron response will be examined in the following section. It will be shown that for toroidal-ITG, slab-ITG and helical-ITG modes, the stabilizing effect of the flows is essentially quadratic in the shearing rate. However, marginal stability is reached when |ω E×B | is comparable to γ 0 within a factor of about 2. In [26] a detailed analysis of this effect is made in both axisymmetric and helically symmetric configurations.
To show the generality of this behaviour let us consider three different magnetic configurations.
(1) A circular cross-section tokamak of aspect ratio 5.5 with a q profile given by q(s) = 1.25 + 3s². (2) A helically symmetric heliac configuration with an elongation of 2 and a shearless q profile. (3) A cylindrical configuration with constant axial magnetic field.
In what follows we shall refer to these configurations as 'tokamak', 'heliac' and 'cylinder' for the sake of simplicity. In all these plasmas we consider T_e = T_i profiles, with a/L_T = 1.3 in the heliac and a/L_T = 4 in the cylinder. The toroidal mode number n (respectively, the helical mode number and the longitudinal wavenumber) is chosen to correspond to the most unstable ITG mode of each type. The respective average values of k_θ ρ_Li are k_θ ρ_Li ≈ 0.5 for the tokamak and heliac cases, and k_θ ρ_Li ≈ 0.8 for the cylinder case. These values are slightly modified by the presence of the applied radial electric fields examined in this paper.
Three different types of profiles of radial electric field are considered.
In these profiles, B_c is the magnetic field on the magnetic axis, a is the average minor radius and M, k_x and Δ_x are dimensionless parameters. While profiles (a) and (b) are very global, profiles (c) resemble the ZF self-generated by the turbulence, as will be shown in section 5. It should be noted that there is nevertheless a fundamental difference between applied E × B flows, which are treated as equilibrium quantities in this section, and the ZF, which are fluctuating quantities. Figure 1 shows that for all the considered cases there is a quadratic dependence of the resulting ITG growth rate on the E × B shearing rate. Note that throughout this paper the definition used for the shearing rate is that of equation (15), and although it was established in [3] in the context of turbulence decorrelation by ZF, it will be shown below that the criterion for ITG stabilization in the presence of applied (equilibrium) E × B flows can also be expressed with the same definition of the shearing rate. The circles correspond to the tokamak case while the diamonds are results for the heliac case. The filled circles have shearless v_E profiles (type (a) defined above), while for the three other cases linear profiles (type (b) defined above) have been specified. The curve with filled diamonds corresponds to a helical-ITG mode while the open diamonds are slab-like ITG modes in the heliac. For each of these four cases a parabolic fit of the global gyrokinetic results is shown with dashed curves.
We conclude that the effect of the shearing rate of radial electric fields on ITG modes is essentially quadratic, and that this quadratic nature is a generic feature. As a matter of fact all our simulations of ITG modes in the presence of E × B flows show this behaviour when trapped particle effects do not dominate the instability drive. Trapped particle effects seem to modify this simple behaviour and an example will be shown in the next section.
Looking at the results of figure 1, an asymmetry with the sign of the shearing rate appears. This asymmetry is due to the fact that the eigenmode in the absence of E × B flow is not up-down symmetric. With one sign of the shearing rate the mode is first straightened so that the radial structure of the mode is aligned in the most unfavourable grad-B drift direction, and the growth rate is therefore maximized; increasing the shearing rate further tilts the radial structure and stabilizes the mode, by a mechanism similar to that shown in [27,28]. For all the cases shown in figure 1 the ITG modes are stabilized when the shearing rate is approximately equal to the growth rate without flow: |ω_E×B^crit| ≈ γ_0 therefore seems to hold, at least within a factor of 2. This should not mislead us into assuming that the ITG growth rate in the presence of flow is diminished linearly as γ_0 − |ω_E×B|: as we have just shown, the ITG growth rate in the presence of flows depends quadratically on ω_E×B.
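The parabolic fits used for the dashed curves can be reproduced with a standard least-squares polynomial fit. The sketch below uses synthetic data with an assumed γ_0 and curvature (our own values, not the paper's) to show how the fitted intercept recovers γ_0 and how the zero crossing gives a |ω_crit| comparable to γ_0 within a factor of about 2:

```python
import numpy as np

# Hypothetical data: growth rate vs applied shearing rate (units a/c_s),
# mimicking the quadratic trend gamma ≈ gamma0 - c * omega^2.
omega = np.linspace(-0.15, 0.15, 11)
gamma0, c = 0.20, 4.0
gamma = gamma0 - c * omega**2

# Degree-2 least-squares fit, as used for the dashed curves in figure 1.
coeffs = np.polyfit(omega, gamma, 2)   # [a2, a1, a0]
gamma_fit = np.polyval(coeffs, omega)

assert abs(coeffs[2] - gamma0) < 1e-8  # intercept recovers gamma0
assert abs(coeffs[0] + c) < 1e-6       # curvature recovers -c

# Marginal stability where the parabola crosses zero:
omega_crit = np.sqrt(gamma0 / c)       # comparable to gamma0 within ~2x here
```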
Neglecting now the effects of magnetic curvature and magnetic shear, we consider a cylindrical configuration with constant axial magnetic field. We set a/L_T = 4 and η_i = 5 and study the effect of the radial wavelength of the E × B flows on ITG mode stability. The results (figure 2) show that a very global profile of the shearing rate (ω_E×B ∼ ρ/a; open squares in figure 2) is most effective at stabilizing the ITG. For profiles given by equation (16) the effect decreases with the radial wavenumber k_x of the E × B flow. The filled triangles in figure 2 are for k_x = 5, Δ_x = 0.3 and the filled squares are for k_x = 10, Δ_x = 0.15.
The dashed curves in figure 2 are parabolic fits to the data for absolute values of the shearing rate smaller than 0.15: there is extremely good agreement with the results of the global gyrokinetic simulations, once more showing the generality of the quadratic effect of the shearing rate on ITG modes. For large values of the shearing rate, however, we observe a deviation from this quadratic behaviour. A detailed analysis of the stabilizing effect of the E × B shearing rate has been made for the case k_x = 10. Figure 3 shows the contribution of the E × B flow to the particle-wave rate of power exchange, ∫ q_i v_E · δE f d⁶z, plotted as a function of the shearing rate. The quadratic stabilizing effect of the shearing rate is well evidenced for small enough values. The stabilizing effect is seen to saturate for large values of the shearing rate (|ω_E×B|(a/c_s) ≳ 0.2).
In order to better understand the reason for this behaviour, figure 4 shows the radial profile of the shearing rate for the case k_x = 10, Δ_x = 0.15, ω_E×B(a/c_s) = 0.1, together with the radial position of the maximum ITG mode amplitude as a function of the applied shearing rate: a clear radial shift occurs, pushing the mode away from the position of maximum shearing rate. It also appears that this radial shift saturates. It is therefore interesting to plot the ITG growth rate not as a function of the applied shearing rate at a fixed reference position but as a function of the shearing rate at the actual position of maximum mode amplitude. The result is shown in figure 5; the dashed curve is a parabolic fit to the data. It shows that, to a good part, the deviation from a parabolic dependence of the growth rate on the shearing rate can be attributed to the radial shift of the ITG mode towards regions of smaller shearing rate.
Keeping the shape of the E × B profile with k_x = 10, we performed a double parameter scan in the ITG drive and in the E × B shearing rate amplitude. The growth rates for various values of a/L_T are shown in figure 6 as a function of the shearing rate. Except for large values of the shearing rate, the dependence of the growth rate is quadratic. In figure 6 negative growth rates are also shown. The question of damped modes is important in relation to the process of nonlinear saturation because they act as an energy sink. In [29] a spectral approach was used to investigate the stable ballooning modes, and it was shown that, in addition to normal modes having an exponential decay, there is a continuum mode with a 1/t² asymptotic behaviour. For our results with negative γ in figure 6 an exponential decay of the mode amplitude was observed in the simulations, indicating that these are normal modes. The continuum mode is either absent, or may appear only once the amplitude has dropped below the noise level inherent to the time-evolving PIC δf method used. We note in figure 6 a smooth continuation between the unstable and stable sides; a similar smooth continuation was also obtained in [29], albeit in a different parameter scan, for different types of modes (we do not have ballooning modes in figure 6) and with a different method.
The ITG mode growth rates are plotted in figure 7 versus a/L_T for different values of the E × B shearing rate. There is clearly an upshift of the critical gradient with increasing |ω_E×B|, and the value of the upshift depends on the shearing rate. For all values of the shearing rate the ITG growth rates increase linearly with a/L_T above the marginal value, with some levelling off at very high growth rates. We note that the growth rate depends almost exclusively on (a/L_T) − (a/L_T)_crit. This is compatible with the results of nonlinear simulations [9] which have shown that self-generated E × B flows can virtually suppress the turbulence when the temperature gradient is slightly above the critical value in the absence of flow. These nonlinear simulations also show that the turbulence suppression mechanism works only up to an upshifted critical gradient value. The critical gradient for marginal ITG stability is shown in figure 8 as a function of the E × B shearing rate at s = s_0 = 0.5 (top). The critical gradient upshift is found to be proportional to the square of the shearing rate for small values of the shearing rate. Then, due to the saturation of the E × B stabilization (see figure 3), the upshift deviates noticeably from this dependence. As noted earlier (see figures 4 and 5), this is mainly due to the fact that the E × B flow pushes the radial position of maximum ITG amplitude, s_max, towards lower shearing rate regions. When plotted against ω_E×B at s = s_max (figure 8, bottom), the quadratic dependence of the critical gradient is more evident.
Applied E × B flows: trapped particle effects
In this section we address the question of the role of trapped ions and trapped electrons in the presence of E × B flows. In the previous section the modes under study were toroidal-ITG, helical-ITG or slab-like ITG. We now consider a case for which the most unstable mode, in the absence of radial electric field, is a TIM [30]. We include in our model the non-adiabatic response of trapped electrons which are modelled as drift kinetic. The configuration is a tokamak with the following parameters: R 0 = 1.5 m, a = 0.5 m, B 0 = 1 T, hydrogen ions, T i = T e , T 0 = 5 keV, a/L T = 3.33, a/L n = 0.33, s 0 = 0.7, with a q profile such that q(s 0 ) = 1.5 and the magnetic shear at s = s 0 is unity.
We consider an applied radial electric field with a shearless v_E profile, v_E = M v_thi ρ/ρ_0, where M is the Mach number of the E × B flow at s = s_0, and study the instabilities as a function of M. (Note that although this is a shearless v_E profile the shearing rate is not zero, see equation (15), because of the finite value of v_E and the magnetic shear.) The results of our simulations (figures 9-11) show that the TIM is first stabilized, but then another mode is destabilized to growth rates even higher than in the case without flow. The frequency of this mode, of opposite sign, shows that it rotates in the electron diamagnetic direction. We have checked that this mode is absent if the trapped electron response is neglected; we conclude that it is a TEM. The TEM and TIM have very similar eigenmode structures, see figure 10. Both modes have maximum amplitude in the unfavourable magnetic curvature region and a ballooning-like appearance, which is, by the way, also the case for toroidal-ITG modes. Comparing the TIM and the TEM, which are the most unstable modes at different Mach numbers of the E × B flow, we notice a shearing of the radially extended structures ('fingers') and a small outward shift in the radial position of maximum amplitude; the mode frequency shows the same behaviour as the growth rate. Note that the critical temperature gradient of the TEM with flow is lower than in the case without flow. We conclude that the inclusion of E × B flows in this case brings an overall destabilization, and that this effect is essentially due to trapped electron dynamics.
Nonlinear interaction of ITG and zonal flows
Turning now to the global nonlinear gyrokinetic model, equations (7)-(10), we focus our attention on the role of the ZF in the saturation mechanism of the turbulence. As mentioned in the introduction, the ITG modes and the ZF modes (m = 0, n = 0) form a coupled nonlinear system. Figure 12 shows the time evolution of the perturbed gyrokinetic ion density for some selected Fourier mode components. In particular, the Fourier component corresponding to the linearly most unstable ITG mode (m = 24, n = 1) is plotted in red, whereas the ZF contribution (m = 0, n = 0) is plotted in green. The perturbation contains a broad Fourier spectrum of modes, which are not shown here for the sake of clarity of the figure. As expected, the early stage of the simulation is characterized by the exponential growth of the linearly most unstable ITG mode. Then the ZF amplitude starts to pick up, with an instantaneous growth rate that exceeds the ITG linear growth rate by a factor of up to about 2. This phase thus corresponds to the nonlinear generation of ZF by ITG modes. At the time of maximal ITG amplitude, t = 1.2 × 10⁻⁴ s, the ZF amplitude is high enough to have a stabilizing effect on the ITG modes, in agreement with the studies presented in section 3. This reduces the ITG amplitude. The nonlinear evolution is also characterized by a significant broadening of the spectrum of ITG modes: the linearly most unstable ITG mode is no longer the dominant mode. On a slow timescale, t = 1.2-4 × 10⁻⁴ s, the ITG amplitude decreases. For t > 4 × 10⁻⁴ s both ITG and ZF amplitudes are nearly constant. Figure 13 (top) shows that the average turbulence-driven energy flux, equation (14), also reaches a quasi-steady state, after a phase of rapid growth followed by saturation. In figure 13 (bottom) we show the energy balance relation, Δ(E_f + E_k)/E_f, see equations (11)-(13), as a function of time.
The energy conservation is satisfied to within 20% of the field energy for the whole duration of the simulation. There is no indication of divergence and no numerical noise accumulation. In particular, in the nonlinear, quasi-steady-state phase of the time evolution, energy is conserved to better than 10%. Satisfying this property is an important verification of the quality of the numerical simulation. However, the implication is also crucial for the nature of the physics processes: in our model the ZF are linearly completely undamped, neither by Landau damping nor by collisions. The energy conservation indicates that there is no spurious damping of numerical origin. But it is well known [31] that in the absence of any ZF damping the dynamic evolution should exhibit one pulse of turbulence followed by ZF saturation at finite amplitude while the turbulence level is completely quenched. This is clearly not what we observe in our gyrokinetic simulations. Therefore, one is left to explain the existence of a quasi-steady state with finite amplitudes of both ITG modes and ZF through an essentially nonlinear ZF saturation mechanism. The evolution of the ZF radial profile during the fast growth phase (t < 1.2 × 10⁻⁴ s) corresponds to a shearing rate profile that is maximum at s = 0.3, which is also the position of the ITG mode maximum amplitude. The region of finite shearing rate covers the radial extent of the ITG, i.e. s ≈ 0.2-0.4. During the nonlinear phase of the simulation (t > 1.2 × 10⁻⁴ s) there is a definite broadening of the region with finite shearing rate, s ≈ 0.1-0.55. This broadening corresponds to that of the ITG fluctuations, which are seen to expand in the radial direction as compared to the linear eigenmodes. As an illustration of a typical ZF profile we show in figure 14 the instantaneous profile of the ZF shearing rate ω_E×B(s) at time t = 3.5 × 10⁻⁴ s. Note that the position of maximum shearing rate fluctuates back and forth around s = 0.3.
The value of the maximum shearing rate is also fluctuating around a value close to the linear growth rate of the most unstable ITG mode (the dashed line in figure 14).
The quasi-stationary phase of the simulation is therefore characterized by nearly constant volume-average amplitudes, but locally fluctuating, of both ITG turbulence and ZF. The ITG and ZF thus form a self-regulated dynamical system, and while the ZF does not contribute directly to the anomalous heat flux it is instrumental in bringing the ITG amplitude to a finite level [1]- [10].
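The coexistence of finite ITG and ZF amplitudes in the absence of any linear ZF damping can be caricatured by a predator-prey model with a nonlinear ZF saturation term. The sketch below is our own schematic reduction, not the gyrokinetic system: E stands for turbulence intensity, U for zonal-flow energy, and the coefficients are arbitrary. With a nonlinear damping term -ν U², the system settles into a steady state with finite E and U, qualitatively like the quasi-stationary phase seen in the simulations.

```python
# Schematic predator-prey sketch (illustrative only):
#   dE/dt = (gamma - alpha*U) * E      (ZF shearing suppresses turbulence)
#   dU/dt = (alpha*E - nu*U) * U       (ZF driven by E, saturated by -nu*U^2)
gamma, alpha, nu = 1.0, 1.0, 0.5
E, U = 0.1, 0.1
dt = 1e-3
for _ in range(200000):          # integrate to t = 200 with explicit Euler
    dE = (gamma - alpha * U) * E
    dU = (alpha * E - nu * U) * U
    E += dt * dE
    U += dt * dU

# Analytic fixed point: U* = gamma/alpha, E* = nu*gamma/alpha**2
assert abs(U - gamma / alpha) < 1e-2
assert abs(E - nu * gamma / alpha**2) < 1e-2
```

The fixed point has both amplitudes finite: without the -ν U² term the same model would instead quench the turbulence completely after one burst, which is the standard expectation quoted from [31].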
Conclusions
The full radius gyrokinetic simulations presented in this paper prove to be a useful tool for understanding the effects of applied E × B flows on ITG modes and the complex mechanisms involved in the interactions of ITG modes with ZF. To sum up the main results obtained:
• The effect of applied E × B flows on the growth rate of ITG modes is quadratic in the shearing rate (see figures 1-6). Note that this is in line with fluctuation-flow models [32,33].
• For large and localized shearing rates the stabilizing effect seems to saturate (see figure 3), largely because the E × B flow appears to push the ITG mode to radial positions where the shearing rate is lower (see figures 4 and 5).
• The behaviour of the ITG growth rate as a function of the ion temperature gradient (see figure 7) for different values of the E × B shearing rate yields an upshift of the critical temperature gradient that is quadratic in the shearing rate (see figure 8).
• The inclusion of trapped particle effects can lead to more complex behaviour: sometimes the effect of sheared E × B flows is destabilizing (see figures 9-11).
• Energy-conserving nonlinear full radius gyrokinetic simulations (figures 12-14), in which a quasi-stationary state is reached with finite values of both ITG turbulence and ZF in the absence of any linear damping of the ZF, confirm the existence of a nonlinear saturation mechanism of the ZF. Note that this is also in line with fluctuation-flow models [31]-[33].
In this paper we have considered separately the equilibrium E × B flows and the self-generated E × B flows (ZF). The next step will be to consider the presence of both; in other words, it will be interesting to study the dynamical evolution of ZF and ITG modes in the presence of an externally applied E × B flow. This could be of interest for the study of transport barrier formation.
Spatio-temporal weight Tai Chi motion feature extraction based on deep network cross-layer feature fusion
Tai Chi is a valuable exercise for human health, and research on Tai Chi helps improve people's exercise level. Traditional Tai Chi motion feature extraction suffers from low efficiency. We therefore propose a spatio-temporal weight Tai Chi motion feature extraction method based on deep network cross-layer feature fusion. From the selected spatio-temporal motion samples, the corresponding spatio-temporal motion key frames are extracted and output as static images. The initial motion images are preprocessed by moving object detection and image enhancement. A traditional convolutional neural network extracts features from shallow to deep layers and builds a classifier for image classification, which tends to ignore the shallow features. Based on the AlexNet network, a CL-AlexNet network is proposed. Batch normalization (BN) is used for data normalization. A cross-layer connection structure is introduced and a sensitivity analysis is performed. The Inception module is embedded for multi-scale deep feature extraction, fusing deep and shallow features. A spatio-temporal weight adaptive interpolation method is used to reduce edge detection errors. From the edge features and the spatio-temporal motion features, the method realizes motion feature extraction and outputs the extraction results. Compared with state-of-the-art feature extraction algorithms, the experimental results show that the proposed algorithm extracts more effective features, with a recognition rate exceeding 90%. It can be used as guidance and evidence for Tai Chi training.
Introduction
As the essence of Chinese martial arts, Tai Chi is a national intangible cultural heritage. Studies have shown that Tai Chi can not only help people reduce blood pressure [1], enhance the functional level of the immune system, relieve physical stress and improve the quality of sleep [2], but also enhance muscle strength, improve flexibility and prevent falls [3,4]. As a result, more and more people are attracted to it.
Gesture motion recognition has always been a hot research topic in the field of computer vision, with important academic value in many fields such as video surveillance, motion analysis, sports events and medical diagnosis [5][6][7]. Recognizing human posture and motion requires appropriate algorithms and techniques. Motion recognition methods based on spatio-temporal feature extraction and on motion trajectory analysis are currently the most frequently used. In order to improve the recognition accuracy of human motion posture behaviour (Tai Chi), this paper optimizes the spatio-temporal feature extraction method for motion.
Spatio-temporal weight feature extraction combines computer vision and image processing techniques. Computer vision is used to extract the relevant information of human spatio-temporal posture motion and to determine whether each point of a motion image belongs to an image feature [8]. By dividing the points in the image into different subsets that form continuous curves or regions, the feature extraction results of human posture motion can be obtained. In practice, human motion recognition results are obtained by comparing the extracted features with the information in a standard database. Traditional spatio-temporal weight posture motion feature extraction algorithms include regular grids [9], image content analysis [10] and Mel-frequency cepstral coefficients (MFCC) [11]. Because human posture movements change rapidly and behaviours are diverse, implementing a feature extraction algorithm is difficult. In addition, objective factors such as lighting and viewing angle also affect the accuracy of the spatio-temporal weight motion feature extraction results.
In order to solve the above problems, the idea of deep network cross-layer feature fusion is introduced on the basis of the traditional motion feature extraction algorithm. The main contributions are as follows:
1) On the basis of the traditional extraction steps, the improved AlexNet network is introduced to collect Tai Chi motion images, and its processing pipeline is followed to detect motion feature objects, improving the integrity and accuracy of the extraction results.
2) The collected motion images are processed, thresholds and features are calculated from the data, and the matching blocks to be used are divided.
3) According to the weighted matching, motion feature fusion is completed and the spatio-temporal weight Tai Chi motion feature extraction algorithm is realized, which indirectly improves the recognition accuracy of spatio-temporal weight Tai Chi motion.
4) The weight of the Tai Chi motion is calculated to obtain the fusion results of multi-scale motion feature extraction.
The structure of this paper is as follows. Section 2 introduces the proposed Tai Chi motion feature extraction in detail. Section 3 presents the experiments and analysis. Section 4 concludes the paper.
Extracting the spatio-temporal motion key frame
Before extracting the spatio-temporal motion key frame, the corresponding spatio-temporal sample needs to be selected first. Monitoring equipment is installed in Tai Chi sports venues, and the recorded video files are the selected spatio-temporal samples. The horizontal spatio-temporal slices of the shot are extracted from the selected motion video samples [12], and the spatio-temporal slices of the video are clustered. After clustering, motion video segments that are discontinuous in time may nevertheless be grouped together. After spatio-temporal slicing and clustering, the motion samples are collected to form the corresponding sub-shots, so a key frame can be extracted as the object image with motion features according to preset rules. The constraint that the extracted spatio-temporal Tai Chi motion key frames need to satisfy is given by equation (1), where s represents the range that can be selected by the motion video block v in the spatio-temporal sequence, and k represents the number of objects containing video frames in the current video sequence. Constraint (1) ensures the integrity of the extracted key frame content.
The extracted spatio-temporal motion key frame is output in the form of a static image. Through moving object detection, image enhancement, morphological processing, image normalization and other steps, the Tai Chi motion image preprocessing is completed.
Moving object detection
The key problem of spatio-temporal weight gesture motion feature extraction is to detect high-quality images of the moving human object. This process involves background subtraction, image extraction and other techniques [13,14]. The specific moving object detection process is shown in figure 1.
Substituting the background model of equation (2) into equation (3), the foreground moving image of the video image can be obtained. Finally, the foreground image is cropped to get the moving object [15]. Moving object detection and processing provide the foundation for extracting the spatio-temporal weight motion features.
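As an illustration of this step, the sketch below implements a minimal background-subtraction pipeline with a temporal-median background model and a fixed threshold (our own simplification; the paper's exact background model, equations (2)-(3), is not reproduced):

```python
import numpy as np

# Synthetic video: 20 frames of a static scene with pixel noise, plus a
# bright 10x10 "moving object" appearing in the last frame.
rng = np.random.default_rng(0)
frames = rng.integers(90, 110, size=(20, 48, 64)).astype(float)
frames[-1, 10:20, 20:30] += 80.0

# Background = per-pixel temporal median of the earlier frames.
background = np.median(frames[:-1], axis=0)

# Foreground mask = thresholded absolute difference with the background.
diff = np.abs(frames[-1] - background)
mask = diff > 40.0

# Crop the moving object with its bounding box.
ys, xs = np.nonzero(mask)
crop = frames[-1][ys.min():ys.max() + 1, xs.min():xs.max() + 1]
assert mask.sum() == 100       # exactly the 10x10 object region
assert crop.shape == (10, 10)
```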
Motion image enhancement processing
Enhancement processing of moving images mainly includes the following two steps. Step 1: take the foreground image of the collected image as the processing object and denoise it, to prevent noise from affecting the sharpness of the moving image [16].
Step 2: use a filter to enhance the moving image. Assuming that the noise-reduction filter of the moving image is h(x, y), convolving the noisy image with h(x, y) yields the image after noise elimination; the noise-elimination process can be described by equation (4). A Gabor filter is chosen to sharpen and enhance the image, as it achieves the best joint resolution in the spatial and frequency domains.
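A minimal version of the denoising convolution g = f * h can be sketched as follows (we use a 3×3 Gaussian-like kernel for illustration; the Gabor sharpening stage is not reproduced here):

```python
import numpy as np

def convolve2d(f, h):
    """'Valid' 2-D convolution of image f with kernel h."""
    kh, kw = h.shape
    H, W = f.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    hr = h[::-1, ::-1]  # flip the kernel: true convolution, not correlation
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(f[i:i + kh, j:j + kw] * hr)
    return out

# 3x3 Gaussian-like smoothing kernel (weights sum to 1).
gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0

noisy = np.ones((8, 8)) * 50.0
noisy[4, 4] = 250.0                 # a single impulse-noise pixel
clean = convolve2d(noisy, gauss)
assert clean.max() < 125.0          # the impulse is strongly attenuated
```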
Image normalization
In motion images, the position of the human body region changes constantly during movement, so the images must be normalized to a uniform size, with the body's movement area placed at the same central position so that the body is aligned across all images; this facilitates the subsequent extraction of the corresponding image features from the Tai Chi motion images. First, the human body edges in the video sequence images are detected (equation (5)). The image is then cropped to a fixed size, ensuring that the complete human movement area is retained in the trimmed image.
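The crop-and-centre normalization can be sketched as below (illustrative only: we centre a binary silhouette on a fixed-size canvas and assume the cropped region fits the canvas; a real pipeline would also rescale):

```python
import numpy as np

def normalize_silhouette(mask, out_h=64, out_w=64):
    """Crop the body's bounding box and place it centred on a fixed-size
    canvas, so the body occupies the same position in every frame.
    Assumes the crop fits the canvas (no rescaling in this sketch)."""
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    canvas = np.zeros((out_h, out_w), dtype=mask.dtype)
    top = (out_h - crop.shape[0]) // 2
    left = (out_w - crop.shape[1]) // 2
    canvas[top:top + crop.shape[0], left:left + crop.shape[1]] = crop
    return canvas

sil = np.zeros((120, 160), dtype=np.uint8)
sil[30:70, 100:120] = 1             # a 40x20 "body" region, off-centre
norm = normalize_silhouette(sil)
assert norm.shape == (64, 64)
assert norm.sum() == 40 * 20        # all body pixels preserved, now centred
```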
Cross-layer feature fusion convolutional neural network
The AlexNet network proposed by Krizhevsky et al. in 2012 triggered a boom in deep neural network-based image processing. The network consists of five convolutional layers and three fully connected layers. It was successfully trained on about 1.2 million images of 1000 categories, using ReLU as the activation function, multi-GPU parallel computing, local response normalization, overlapping pooling and a Dropout layer to coordinate network performance; with 60 million parameters it achieved a 17% top-5 error rate on the ILSVRC2012 dataset. After AlexNet, various deep neural network structures were put forward. The GoogLeNet network uses global pooling and the Inception module to cluster sparse matrices into dense sub-matrices, improving computing performance and optimizing parameters; the network has 22 layers and uses three loss outputs to cope with the gradient problems caused by network depth. VggNet-16 likewise shows that network depth is key to the excellent performance of such algorithms.
EAI Endorsed Transactions on Scalable Information Systems, 10 2021 - 01 2022 | Volume 9 | Issue 34 | e6
In this paper, we mainly study improvements to the AlexNet network for Tai Chi motion image recognition. The performance of the GoogLeNet and VggNet-16 deep networks on Tai Chi image feature classification is compared for correlation analysis.
New network design
This paper proposes a new network based on the original AlexNet network. It consists of an input layer, four convolutional layers (each followed by a pooling layer), one Inception module, a cross-layer connection structure, two fully connected layers (followed by a Softmax loss function) and an output layer. It uses BN to replace Local Response Normalization (LRN); the second pooling layer is cross-connected to the fully connected layer, where its output is fused with the deep features extracted by the backbone network, and the result finally feeds the classifier.
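The layer sizes in such a network follow the standard convolution output-size formula out = ⌊(n + 2p − k)/s⌋ + 1. The sketch below tracks it through a hypothetical layer stack (our own kernel/stride/padding values, not the actual entries of Table 1) for a 128×128 input:

```python
def out_size(n, k, s, p=0):
    """Spatial output size of a conv/pool layer:
    input size n, kernel k, stride s, padding p."""
    return (n + 2 * p - k) // s + 1

# Hypothetical stand-in for the first layers of Table 1:
# (name, kernel, stride, padding)
layers = [
    ("conv1", 11, 4, 0),
    ("pool1", 3, 2, 0),
    ("conv2", 5, 1, 2),
    ("pool2", 3, 2, 0),
]

n = 128  # the new network takes 128x128 input images
for name, k, s, p in layers:
    n = out_size(n, k, s, p)
    print(name, n)   # 30, 14, 14, 6 for this hypothetical stack
assert n == 6
```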
The new AlexNet network structure is shown in figure 2. Table 1 lists the specific parameters of the new AlexNet network, including the type, convolution kernel size, stride and output size of each network layer. After convolutional feature extraction, the original AlexNet network is normalized by LRN: lateral suppression is performed on the neurons adjacent to activated neurons to achieve local suppression and improve the model's generalization ability. BN, in contrast, effectively accelerates model convergence, prevents single samples from being frequently selected during batch training and prevents "gradient dispersion", while the dropout layer and L2 regularization parameters are abandoned [17]. In the new AlexNet network, the Inception-V1 module from the GoogLeNet network is introduced to extract deep features of Tai Chi images before the fully connected layers. The structure of the Inception-V1 module is shown in figure 3; it connects in parallel four branches with convolution kernels of different sizes. The first branch performs a 1×1 convolution on the upper-layer input. The second branch applies a 1×1 convolution to the previous layer, followed by a 3×3 convolution.
The third branch applies a 1×1 convolution to the upper layer, followed by a 5×5 convolution; this continuous feature transformation broadens the dimension of the feature expression. The fourth branch is a 3×3 max pooling that compresses the perceptual information. Finally, the four filtering branches are concatenated. The deeper the Inception modules are stacked, the greater the efficiency [18]. A traditional convolutional neural network extracts features from shallow to deep layers, processes the features through classifiers and outputs probabilities for the different classes. As the network depth grows, this process cannot effectively fuse the low-level and high-level features into the feature classifier. In this paper, the cross-layer connection idea proposed in DeepID [19] is introduced: the second pooling layer is connected to the fully connected layer for feature fusion. The network first extracts layer features from 128×128 input images. In the equations that follow, j represents a positive integer not greater than the output third-dimension number j(i) of the i-th hidden layer; the loss function is J(w); δ^l12 and δ^l11 are the feedback errors of the output layer and the fully connected layer, respectively; ∘ denotes the Hadamard product; up(·) is the up-sampling operation; ⊕ represents the outer convolution operation; and W^12 is the weight matrix between the output layer and the fully connected layer. The new AlexNet network uses the gradient descent algorithm [20] to update the weights and biases, given the training set D, momentum M and learning rate lr. The specific algorithm process is shown in figure 4.
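The weight update itself is the standard momentum gradient-descent step. The sketch below is a generic illustration with our own notation (velocity v ← M·v − lr·∇J, weight w ← w + v), not the paper's exact training loop:

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr=0.01, M=0.9):
    """One momentum gradient-descent update: returns (new_w, new_v)."""
    v = M * v - lr * grad
    return w + v, v

# Minimize f(w) = 0.5 * ||w||^2, whose gradient is w; the iterates
# should converge to the minimizer w = 0.
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(500):
    w, v = sgd_momentum_step(w, v, grad=w)
assert np.linalg.norm(w) < 1e-6
```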
Adaptive interpolation of spatiotemporal weights
Adaptive interpolation with spatio-temporal weights can effectively reduce the interpolation errors caused by motion-estimation errors and inaccurate edge detection, and has the advantage of automatically fusing edge-adaptive field interpolation [21,22]. Firstly, the absolute differences between the pixels before and after the moving picture element are calculated using the spatio-temporal weights, and the corresponding weight coefficients are derived. Then a weighted average of the adjacent pixels is taken to obtain the estimate of the point to be interpolated. In the final expression for the adaptive spatio-temporal weighted pixel P, P' is the pixel value after interpolation, and X(i,j)−1,t and X(i,j)+1,t denote the two adjacent pixels of the current field.
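The idea can be sketched for a single pixel. Since the paper's exact expression for P' is not reproduced in the text, the weights below are assumed to be inversely proportional to the local absolute differences, so that the more reliable (less changing) direction dominates the average:

```python
# Illustrative sketch of spatio-temporal weighted interpolation for one pixel.
# The weighting scheme (inverse absolute difference) is an assumption, not
# the paper's exact formula.

def interpolate_pixel(above, below, prev_t, next_t, eps=1e-6):
    spatial = 0.5 * (above + below)      # average of current-field neighbours
    temporal = 0.5 * (prev_t + next_t)   # average of the adjacent fields
    d_spatial = abs(above - below)       # large difference -> less reliable
    d_temporal = abs(prev_t - next_t)
    w_spatial = 1.0 / (d_spatial + eps)
    w_temporal = 1.0 / (d_temporal + eps)
    return (w_spatial * spatial + w_temporal * temporal) / (w_spatial + w_temporal)
```

In a static region the temporal neighbours agree, so their weight dominates and the interpolated value follows them; across a moving edge the spatial average takes over instead.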
Here z is the change of the invariant matrix of the human body over one period.
Temporal and spatial features of motion
The temporal and spatial features of Tai Chi posture movement include the skeletal features of the human body and the joint angles of the limbs [23]. Under the condition of constant topological structure, the outer pixels of the gait image are stripped layer by layer by iteration, yielding a skeleton of single-pixel width; this skeleton is the extracted limb-joint feature. The extraction results for the temporal and spatial features of the moving skeleton are shown in figure 5. The joint angles of the human body are expressed in coordinate form, and the rotation angle of each limb joint at different times is calculated and arranged in chronological order. The temporal and spatial variation of human movement is then analyzed under the corresponding skeleton model. As figure 5 shows, the displacement of the limb joints during actual movement is small, so the spatio-temporal features of the motion can be represented directly by the joint-angle features [24].
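Computing a joint angle from coordinates can be sketched as follows; the three-point form (e.g. shoulder-elbow-wrist) is a common convention and is assumed here rather than taken from the paper:

```python
import math

# Hedged sketch: the angle at the middle joint b formed by the two limb
# segments b->a and b->c, from 2D skeleton coordinates.

def joint_angle(a, b, c):
    """Angle in degrees at joint b for the point triple a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp against rounding
    return math.degrees(math.acos(cos_t))
```

Evaluating this per frame and listing the results in chronological order gives exactly the kind of joint-angle time series described above.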
Motion features fusion
The motion feature fusion is realized using the calculated spatio-temporal weights. The specific fusion process is shown in figure 6.
Figure 6. Process of motion feature fusion
As figure 6 shows, the reliability of the quantized matching values differs between features during fusion. Therefore, according to the distribution of the spatial and temporal weights, the fusion of motion features is realized at the feature layer, the data layer and the decision layer. The data of each video sequence are analyzed step by step, and the data threshold is obtained from the key frames to extract the area and joint-angle results. The motion feature fusion is then completed according to the weighted matching. The proposed algorithm is summarized in figure 7.
(Figure 7 flowchart labels: rule default; video temporal and spatial slice processing by clustering.)
Experimental analysis and results
The experimental platform of this paper is the Ubuntu 16.04 operating system, the Caffe deep learning framework, the Python 2.7 interface language, a GTX 2080Ti GPU, an Intel Core i7-7820X CPU @ 3.60 GHz × 16, and 64 GB of memory. The initial learning rate is set to 0.001 with "step" decay, and the multi-class cross-entropy loss function is used. The comparison results with SVM and the traditional AlexNet are shown in figure 8.
Figure 8. Classification results
Comparing the results shows that the new AlexNet network constructed in this study significantly improves the average classification accuracy, average precision, average recall and average comprehensive index F1 compared with SVM [25] and the traditional AlexNet network [26]. In particular, the difference in recall indicates that the new AlexNet network has high classification accuracy, an outstanding classification effect for Tai Chi images, and strong expressive power.
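The macro-averaged indicators used in this comparison can be computed from per-class counts; the sketch below uses standard definitions (the paper's exact averaging convention is not stated, so macro averaging is an assumption):

```python
# Hedged sketch of the evaluation indicators: macro-averaged precision,
# recall and F1 computed from per-class true/false positives and false
# negatives.

def macro_metrics(tp, fp, fn):
    """tp, fp, fn: per-class lists of counts; returns (precision, recall, f1)."""
    precisions, recalls, f1s = [], [], []
    for t, p, n in zip(tp, fp, fn):
        prec = t / (t + p) if t + p else 0.0
        rec = t / (t + n) if t + n else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    k = len(tp)
    return sum(precisions) / k, sum(recalls) / k, sum(f1s) / k

# Two-class toy example with made-up counts:
p, r, f = macro_metrics(tp=[8, 5], fp=[2, 1], fn=[1, 4])
```

Macro averaging weights every class equally, which is why a per-class recall gap shows up clearly in the averaged indicator.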
Cross-layer connection analysis
It is necessary to discuss how different cross-connection configurations affect network performance. The new AlexNet network introduces a cross-connection structure to fuse deep and shallow features, and the cross-connection terminal is fixed at the unchanged fully connected layer. A reliability analysis is therefore carried out on the front segment of the network by changing the initial end of the cross-connection to h2 and h6, respectively. Training is performed under the same conditions and compared with the test results of the new AlexNet network, as shown in Table 2. According to the test results, the h4 hidden layer has clear advantages as the initial end of the cross-connection: compared with h2 and h6, the average classification accuracy is improved by 3.62% and 2.72%, respectively, and the other evaluation indicators also show obvious advantages. The feature map output by the h2 layer retains obvious edge information and overlaps strongly with the original input image. The output features of the h3 layer are more abstract than those of the previous layer but still retain some specific contour edges, while the h4 layer output features are already highly abstract. It can therefore be concluded that, compared with the new AlexNet without a cross-connection structure, the classification accuracy of the cross-connection network is significantly improved. However, when the shallow features output by the h2 layer are fused with the deep features at the connection terminal, specific features are overemphasized and contribute too little to the classification result. The feature output of the h6 layer is too abstract, which leads to parameter redundancy in the feature fusion and improves the classification accuracy less than the h4 layer does. Therefore, this paper chooses the h4 layer as the initial end of the cross-connection, which is the optimal choice.
The comparison of experimental results and the visualization of the training process verify that the new AlexNet achieves higher classification accuracy in image classification than the conventional SVM and AlexNet methods, without the need to manually extract image features. The sensitivity analysis and the visualization of the intermediate process verify the reliability of the cross-connection structure.
In order to verify the effectiveness of the AlexNet network in the detection and classification of Tai Chi motion features, this paper conducts experiments with the AlexNet network, the GoogLeNet network [27] with the Inception module, and the VGGNet-16 network [28] on the same motion data set. During model training, the control-variable method is adopted: the initial learning rate is set to 0.001 with "step" decay, and the same loss function, optimization function, maximum number of iterations (10,000) and parameter update method (momentum + SGD) are used for the different networks. The experimental results are shown in Table 3, demonstrating the effectiveness of CNNs in automatic feature extraction.
Comparison with other methods
In this subsection, we select the matching degree of feature extraction (MD), the weighted matching elasticity (WME) and the multi-scale motion feature fusion degree (MFD) as evaluation indicators.
After the AlexNet environment is formed, the algorithm in this paper performs object detection, moving-image enhancement and normalization, and then obtains the extraction environment that best matches the motion features. Therefore, the MD is set as one of the experimental indicators; in its calculation formula, x(t) represents the normalized processing result.
In the process of motion feature fusion, the key frames must be calculated to obtain the data threshold and to extract the area and joint angle. The WME is then obtained by dividing matching blocks. The WME reflects the elasticity of the weighted matching, which in turn affects the recognition accuracy.
In order to realize the fusion of the extracted motion features at the feature layer, the data layer and the decision layer, the multi-scale motion feature fusion degree (MFD) of the three layers is compared. In the calculation formula of the MFD, L is the general feature to be fused and M is the scale optimization degree.
The WME results are shown in figure 9. Under the limit of 25 iterations, the elasticity curve of the proposed method fluctuates considerably, but it is superior to the other methods in the distribution of matching blocks. This shows that the proposed method can complete motion feature fusion based on weighted matching and realize the extraction of spatio-temporal weight posture motion features, which provides a basis for the recognition of Tai Chi spatio-temporal weight posture motion.
Figure 9. Test results of WME by different methods
The test results for the MFD are shown in figure 10. The multi-scale motion feature fusion results of the proposed method are stronger than those of the TSF, CORR-OMP and SEMG methods at the feature layer, data layer and decision layer, respectively. Therefore, the proposed method not only achieves higher efficiency than the traditional methods, but also obtains more accurate extraction results for the spatio-temporal weight posture motion features.
Conclusion
This paper combines a cross-connection architecture and the Inception module to propose a new AlexNet network that realizes an optimal weight design relative to the traditional algorithm. At the same time, BN is used for data normalization, and the gradient descent algorithm is used for optimization, which accelerates network convergence and avoids the gradient problem. By analyzing the temporal and spatial weights of the moving objects, the low extraction efficiency of traditional algorithms is overcome. The motion feature fusion is completed according to weighted matching, which solves the problems of poor extraction efficiency and low recognition accuracy of traditional feature-extraction algorithms for Tai Chi motion, and provides a reference for related research in this field. However, the sample-collection environment selected in the experiments is relatively simple, and each sample contains only one moving object, whereas real identification environments are complex and contain many interference factors. Precise object positioning will therefore be an important research direction, and in the future we will apply this work to practical engineering applications. | 5,147.8 | 2018-07-13T00:00:00.000 | [
"Physics"
] |
On the way to remote sensing of alpha radiation: radioluminescence of pitchblende samples
In the framework of the project RemoteALPHA, an optical scanning system for remote sensing of alpha emitters using radioluminescence is being developed. After the feasibility of the technique was proven, current work aims at improving the sensitivity for detection of low surface activities. As calibration standard, pitchblende minerals were prepared. Their surface count rate of 80 Bq cm-2 to 105 Bq cm-2 was measured by alpha-track-detection and alpha-spectroscopy. Subsequently, radioluminescence measurements were performed in a sealed chamber filled with different gas atmospheres. The radioluminescence signal was measured in UVC and UVA spectral ranges for all samples.
Introduction
Since the usage of radioactivity in the civil or military sector began, radiation detection and protection have become very important [1]. Wide-area radiation monitoring is of high importance to public health and safety following the release of radioactive material from a nuclear facility due to an accident or even an attack. If contamination of the environment occurs, a fast and safe detection method is needed to perform protective measures based on knowledge of the contamination level and the kind of radiation. While gamma-emitting nuclides are routinely screened by remote sensing techniques using helicopters or drones, contaminations by purely alpha-emitting radionuclides are harder to detect due to the short range of only a few centimetres of alpha particles in air [2,3]. This requires ground-based measurement, which is a time-consuming procedure and entails a considerable risk of exposure of the emergency team not only to ionizing radiation but also to potentially hazardous materials. The composition of the surrounding gas atmosphere has a significant influence on the radioluminescence signal, mainly due to quenching effects of water vapour and oxygen [6,11]. Sand et al. measured a six times higher amount of photons in pure nitrogen compared to ambient air [6]. The signal increase at the 254 nm wavelength is even higher if a small amount of nitric oxide (NO) is added to the pure nitrogen atmosphere [10,12]. Kerst et al. [10] showed that the intensity of the signal is maximized for 50 ppm NO diluted in pure nitrogen; the light yield is then about 25 times higher than that of the 337 nm line of nitrogen. In very pure nitrogen, even a low ppm level of NO provides a significant increase in the UVC light yield, and the limit of detection in UVC with 1 ppm NO is around 20 times lower than in pure nitrogen [2]. The use of small amounts of NO thus enables measurements of low-activity samples. Krasniqi et al.
[12] were able to measure an extended uranium source with an activity of 330 Bq by using an NO amount of 3 ppm. Nitric oxide is an oxidizing gas that forms acids in the respiratory system. If human exposure is unavoidable, the exposure limit of 2 ppm over an 8-hour time-weighted average, recommended by the Scientific Committee on Occupational Exposure Limits for nitrogen monoxide, should not be exceeded [13]. It will therefore be a challenge to use this approach in the environment.
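The 8-hour time-weighted average (TWA) behind that exposure limit is simple arithmetic; the sketch below shows the standard TWA computation, with any unexposed remainder of the shift counted at 0 ppm:

```python
# Hedged illustration of the 8-hour time-weighted average used for the
# 2 ppm NO occupational exposure limit: TWA = sum(c_i * t_i) / 8 h.

def twa_8h(exposures):
    """exposures: list of (concentration_ppm, duration_h), totalling <= 8 h."""
    total_h = sum(t for _, t in exposures)
    if total_h > 8.0:
        raise ValueError("durations exceed the 8 h reference period")
    return sum(c * t for c, t in exposures) / 8.0

# e.g. 3 ppm for 2 h plus 1 ppm for 4 h, rest of the shift unexposed:
# (3*2 + 1*4) / 8 = 1.25 ppm, which stays below the 2 ppm limit.
```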
For the detection of the radioluminescence signal, research groups have used different optical systems, some of which are listed by Crompton et al. [14]. Measurements were implemented under various conditions, with different nuclides, activities and optical components. Baschenko [15] used a 37 MBq 239Pu source and measured the emitted luminescence with a monochromator and a PMT, whereas Giakos [16] used the same radionuclide with double the activity but detected and mapped the radioluminescence photons with an ICCD camera.
Up to now, no radioluminescence measurements have been performed on environmental samples such as pitchblende. Pitchblende is a naturally occurring radioactive mineral that contains mainly uranium dioxide and its radioactive daughters in equilibrium. Most of the daughters are alpha emitters, which makes it a suitable material for our experiments. The utilization of environmental samples for radioluminescence measurements in the lab is a first step towards the goal of measuring contaminations on environmental surfaces. Due to the low specific alpha activity of pitchblende, it is also well suited to exploring the detection limits of the optical system used.
Preparation
Pitchblende-bearing ores from different locations with a comparatively high uranium content were cut into 5 mm thick slices with a micro-waterjet. The resulting surface is flat but not polished. The shape of the samples is irregular, and their surface sizes range from about one to 15 cm2. The samples are listed in Sup. Table 1. A stone from Puy de Dôme in France was cut into the samples L, M, A and B. Sample S is from Uranium City in Canada. The remaining samples E, F, G, J and K are fractions of a single piece of ore originating from Wölsendorf in Germany. Sample J is the only polished sample. The small samples A, B, E, F, G and S are grouped into one sample, referred to in the following as sample Mix.
Alpha-track
For alpha-track detection, plastic detectors of the CR-39 type from TASL were used. After an exposure time of one to five minutes to alpha radiation from the pitchblende samples, they were etched in 6 mol l-1 sodium hydroxide at 80 °C for three hours. The tracks were counted with a microscope (Nikon LV-DAF) and the software ImageJ.
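The ImageJ-style counting (threshold, then count dark regions) can be sketched in a few lines; the flood-fill approach below is a generic stand-in for whatever particle-analysis routine was actually used:

```python
from collections import deque

# Hedged sketch of track counting: threshold the microscope image, then
# count 4-connected dark regions, each region being one alpha track.

def count_tracks(image, threshold):
    """image: 2D list of grey values; pixels below `threshold` are track pixels."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    tracks = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] < threshold and not seen[r][c]:
                tracks += 1                      # new track found; flood-fill it
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] < threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return tracks

# Two separate dark spots on a bright background:
img = [[255, 10, 255, 255],
       [255, 10, 255, 12],
       [255, 255, 255, 12]]
n = count_tracks(img, threshold=128)
```

Choosing the threshold is exactly the manual adjustment step described for ImageJ below: too high merges background noise into tracks, too low splits faint tracks apart.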
Grid ionization chamber
The grid ionization chamber from MABsolutions was filled with P10, a gas mixture of 90% argon and 10% methane, at a pressure of 1.025 bar. The accelerator voltage was set to 1600 V. Between the different measurements the chamber was flushed with air to remove radon from the previous sample. Measurement duration varied between 3000 and 14,000 s.
Optical system
The optical system, consisting of a lens, a filter and a photomultiplier tube (PMT), is mounted on a goniometer so that it can be moved to scan the sample area (Fig. 1). For collecting the UV photons, a 240 mm quartz lens was used. Measurements were performed for both UVC and UVA photons, with a different PMT chosen for each spectral range. For UVC, a Hamamatsu H11870-09 with a spectral response between 185 and 320 nm was used, and the 254 nm emission peak was selected by a Semrock FF01-260/16-25 filter. For UVA, a Hamamatsu H10682-210 PMT with a spectral response between 230 and 700 nm was used together with two filters, a Semrock FF01-340/12-25 and an Edmund Optics 337/10, to select photons of the 337 nm emission peak. As reported by Sand et al. [6], the photon count rate strongly depends on the distance between source and detector; it was therefore kept constant at 2 m for all measurements and samples. The detection of the radioluminescence signal at distances larger than 2 m will be addressed in future experiments. At this distance, the UVA detector has a field of view (FOV) of 19 mm in diameter, whereas the UVC detector has a FOV of 50 mm in diameter. The smaller the FOV, the better the spatial resolution of the reconstructed image of the scanned source. Therefore, the measured size of the radioluminescence glow in the UVA scan is much closer to the real size of the glow than the measured size from the UVC scan.
The samples were placed into a measurement chamber with a quartz window, which can be pumped and filled with any gas. The measurements in the UVA region were done in air and in an artificial atmosphere of pure nitrogen with 10 ppm nitric oxide (NO). Measurements in UVC were only made in the artificial atmosphere. The measurement time per pixel depends on the light yield detected from the sample. In the artificial atmosphere, the light yield is expected to be high. Therefore, the time per pixel was set to 0.2 s in UVC and 2 s in UVA ( Table 1). The UVA-measurements in air needed a longer measurement time of up to 20 s per pixel because of the missing scintillation effect of NO and the strong quenching of oxygen and water vapour.
Image processing
The recorded radioluminescence signal requires post-processing, since information about sample location and signal intensity is hard to see in the raw data shown in Fig. 2a. The scan was smoothed by grouping neighbouring pixels, so that the information hidden in the noise becomes accessible. Figure 2 shows the smoothing process from the raw data (a) via 3-pixel smoothing (b) to 9-pixel smoothing (e), which is used for all measurements. In (e), the radioactive source is clearly distinguishable from the background. The signal at the scan edges (Fig. 2c, d, e) originates from reflections off the gas chamber and surrounding objects. The background and the maximum photon count rate can be calculated from the processed scan. The resulting image (Fig. 2e) is described in more detail in Fig. 6f.
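The pixel-grouping smoothing can be sketched as a simple n × n mean filter; the exact grouping used for figure 2 is not spelled out in the text, so a box mean with edge-truncated windows is assumed:

```python
# Hedged sketch of the scan smoothing: each pixel is replaced by the mean
# of an n x n neighbourhood (n = 3 or 9 in figure 2), pulling the signal
# out of the photon-counting noise.

def smooth(scan, n):
    half = n // 2
    rows, cols = len(scan), len(scan[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [scan[y][x]
                    for y in range(max(0, r - half), min(rows, r + half + 1))
                    for x in range(max(0, c - half), min(cols, c + half + 1))]
            out[r][c] = sum(vals) / len(vals)  # window truncated at the edges
    return out
```

Larger windows suppress more noise but also spread localized signal, which is why the glow in the heavily smoothed scans appears broader than the source itself.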
Results and discussion
The alpha-track-detection method shows the distribution of alpha emitters on a sample. It is therefore especially suitable for finding small radioactive sources or contaminations on a sample, for example so-called hot particles in soil samples [17,18]. In this work, alpha-track detection was used for analysing the homogeneity of the alpha-emitter distribution on the pitchblende samples. Samples chosen for radioluminescence measurements should yield a high amount of alpha emitters per area, since the optical system is still under development and improvement; other groups used commercially available sources of several kBq to MBq to detect the radioluminescence signal [2,14]. Accordingly, it is necessary to start radioluminescence measurements on environmental samples with properties close to standard large-area sources.

(Fig. 1 caption: The optical scanning system. The optical system is mounted at a distance of 2 m from the sample. Depending on the kind of measurement, the gas chamber can be filled with air or with gas mixtures such as N2 and 10 ppm NO. The PMT and the corresponding set of filters are changed for measurements in UVA or UVC.)

The acquired microscope images from the alpha-track detectors show black spots representing the track of each alpha particle (Fig. 3a). By adjusting the threshold settings of the software ImageJ, the tracks become clearly visible (Fig. 3b). From the processed image, the number of tracks was counted, allowing an estimation of the surface activity. In the example of sample L (Fig. 3), 548 tracks per minute were detected. This representative image is one of 260 images taken from this sample, each with a size of 2.49 mm by 1.87 mm. The large image of sample L is shown in Fig. 4, with the edges of the sample marked by a bold line. It shows a homogeneous distribution of tracks except for a small region at the bottom of the sample beneath the thin line, where only a few tracks were detected, indicating waste rock not containing uranium. The same applies to sample A above the thin line and sample E outside the thin lines (Fig. 4). All other samples have a homogeneous distribution of alpha emitters over the entire surface area, as shown in Fig. 4.

Furthermore, alpha-spectroscopy was performed using a grid ionisation chamber (GIC). Since pitchblende is a solid material with a homogeneous distribution of uranium inside the bulk, self-absorption of alpha radiation takes place. This self-absorption causes the pronounced low-energy tails of the alpha-particle energies in the spectra of Fig. 5. Uranium and its solid daughters are identified by their maximum alpha-energy edges; their peaks are not as narrow as, for example, the 222Rn peak, due to the absorption process. Each peak belongs to a nuclide of a decay chain of uranium-238 or uranium-235. Radon, as a noble gas, is able to leave the solid material, which decreases self-absorption. Additionally, the efficiency for detection of a radioactive gas in a gas chamber is twice as high as for a solid surface sample. Therefore, radon is represented as peaks in the spectra. Radon daughters show a mixed behaviour, as they may contribute from the solid material with self-absorption or from the gas without self-absorption.

Three different radioluminescence measurements were performed for three different pitchblende samples to investigate the influence of the wavelength and the measurement atmosphere on the count rate. The variation of the surface count rate between the samples is very small; thus, the samples were expected to generate similar photon count rates under the same measurement conditions. Since the surface activity of the pitchblende samples is low, the first radioluminescence measurements were done in the N2 + 10 ppm NO atmosphere for maximum sensitivity. For both wavelengths, UV photons were measurable, and the radioactive source is clearly distinguishable from the background. Measurements of UVC in air were not performed because the UVC light yield in air is too low to be detected against the background. The detection of UVA in air was possible but required a much longer acquisition time per pixel, because the signal-to-background ratio is below 0.4.
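The conversion from counted tracks to a surface count rate is plain arithmetic and can be illustrated with the representative image of sample L (548 tracks per minute on a 2.49 mm × 1.87 mm field of view). Note that a single field of view may differ from the sample-averaged density, and only particles emitted into the upper hemisphere reach the detector, so the total surface activity is at least twice the detected rate:

```python
# Illustration of converting counted alpha tracks to a surface count rate.
# Numbers are from the representative image of sample L; the variable names
# are ours, not the paper's.

tracks_per_min = 548
area_cm2 = 0.249 * 0.187                 # field of view, 2.49 mm x 1.87 mm

count_rate = tracks_per_min / 60.0       # detected tracks per second
rate_per_cm2 = count_rate / area_cm2     # detected rate per cm^2 (~196 s^-1 cm^-2)
activity_lower_bound = 2 * rate_per_cm2  # 2-pi detection -> 4-pi activity
```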
From these spectra, the surface count rate of the sample could be estimated. Since only alpha particles emitted from the surface into the upper hemisphere are detectable, the calculated surface count rate is at most half of the total surface activity of the sample. The results of this calculation are shown in Table 2. All samples have a detectable alpha count rate between 1 kBq and 1.5 kBq.
Although having a high impact on the measurements with the GIC, the influence of radon on the radioluminescence measurements is negligible. The small amount of emerging gas is directly diluted in the gas chamber, and its alpha particles contribute little to the total radioluminescence signal compared to the signal produced by the alpha particles of the solid pitchblende sample. The samples were in the chamber during the measurements, which lasted a maximum of eleven hours for the longest measurement; this time was not long enough to accumulate radon in the chamber.

(Fig. 4 caption fragment: ... homogeneously distributed over the entire surface area. Areas containing much uranium and its daughters are labelled with "U".)

(Fig. 5 caption fragment: The spectra show a large amount of low-energy alpha particles due to self-absorption. Numbers identify nuclides of the 238U decay chain, whereas letters identify nuclides of the 235U decay chain.)

As mentioned before, the presence of small amounts of NO shifts the emission spectrum of the gas towards UVC, since NO emits mainly at wavelengths shorter than 300 nm [10]. The background for the measurements at 337 nm is around 3.5 ± 0.2 photons per second and pixel. It is about twice the background at 254 nm, due to a higher contribution of scattered photons from lamps and electronics in this spectral band as well as a higher background of the UVA-PMT. In pure nitrogen, the UVA signal is increased by at least a factor of six compared to ambient air, in perfect agreement with the results of Sand et al. [6] (Table 3). This is due to the strong quenching of the excited nitrogen by oxygen and water vapour in air. Nevertheless, these measurements of UVA in air are very important, because the optical detection systems are to be used in the environment, where air is the only available medium and control of the atmospheric conditions is not possible.

The measurements at the 254 nm wavelength (UVC) in the artificial atmosphere have the lowest background rate of 1.5 ± 0.7 photons per second and pixel on average. Additionally, the maximum photon count rate is between 126 ± 33 and 163 ± 38 photons per second and pixel (Table 4). This leads to a signal-to-background ratio up to 50 times higher than in the comparable measurements in UVA.

The photon count rates for the different samples are relatively similar, as the samples have approximately the same surface activity. However, their surface areas are very different and range from 6.6 cm2 (sample L) to 18.1 cm2 (sample Mix). The difference in sample size leads to a variation in activity per cm2: sample L, as the smallest sample, has the highest density of alpha emitters of 156 cm-2, whereas sample Mix has only 83 cm-2. This results in a higher photon count rate per pixel for sample L, although its surface activity is smaller than that of sample Mix. For all samples, the highest count rate, over 100 counts per second and pixel, was measured in UVC. In comparison to UVC, the UVA measurements have a significantly lower maximum photon count rate but show the same behaviour, with sample Mix having the lowest count rate. The different surface areas are visible not only in the intensity but also in the size of the radioluminescence glow (Fig. 6): the more extended the source, the larger the glow. The glow is also larger for the UVC measurements, since the field of view of the UVC detector is bigger than that of the UVA detector.

(Fig. 6 caption fragment: The background rate is higher than for measurements in UVC. Using the artificial atmosphere (middle row) leads to a three times higher signal than measuring in air (lower row). All samples have a similar surface activity and a similar photon count rate.)

These first attempts, however, serve as a proof of principle and as a starting point for further improvements. The locations and even the shapes of the pitchblende samples were well recognizable on the scan (Fig. 7). The background-corrected maximum photon count rate is 1.4 photons per pixel and second.
Some reflections from background photons disturb the bottom of the scan but had no influence on locating the samples. With this experiment, we were able to prove the feasibility of measuring environmental samples of low activity in air in the UVA spectral range.
Conclusion
The aim of this study was to prove the feasibility of detecting low activities in environmental samples by measuring their radioluminescence with the optical detection system. Before the radioluminescence was measured, all samples were characterized with conventional alpha-detection methods; their surface activity was determined from their GIC spectra and ranges from 80 Bq cm-2 to 105 Bq cm-2. The optical system was used for two different wavelengths, 254 nm in the UVC band of the electromagnetic spectrum and 337 nm in the UVA band. The maximum count rate per pixel obtained in the artificial atmosphere was about 50 times higher in UVC than in UVA, due to the difference in the scintillation strength of nitric oxide at the two wavelengths. The use of an artificial atmosphere of nitrogen with 10 ppm nitric oxide provided a significant increase of the radioluminescence signal for all pitchblende samples compared to air.
Depending on the intensity of the radioluminescence, the acquisition time per pixel varied between the different measurements but was consistent for all samples within one measurement. Measurements in air needed the longest acquisition time per pixel (10 to 20 s), which directly affects the measurement duration: the scanning of all samples in air took 64 h for an area of 30 by 40 cm.
In this work, pitchblende samples of a low specific alpha activity were measured from a distance of 2 m. Since the optical detection system developed by RemoteALPHA is meant to be used for radiological emergencies with very high alpha contaminations, our results show that it is possible to detect them. High alpha activities will lower the acquisition time per pixel, which will speed up the measurements. Compared to conventional time-consuming measurements by hand, the optical detection system will scan an area much faster, while it will not expose its operators to the risk of ionizing radiation or potentially hazardous materials, fire and debris.
Therefore, the UVC signal is nearly circular, with the highest intensity in the middle of the radioluminescence glow, and contains no further information about the geometry of the samples. In Fig. 6b, the UVA signal in the N2 + NO atmosphere is shown. This scan reveals information about the location of the highest density of alpha emitters on the sample. This is in contrast to Fig. 6a, where the scan of the same sample is shown but made using the UVC-PMT with its larger FOV; that scan contains only information about the location of the source, not about its alpha-emitter distribution. The low signal-to-background ratio (Table 3) leads to blurring of the UVA scans (Fig. 6). Higher photon count rates at the scan edges (Fig. 6f) do not indicate the presence of alpha emitters but rather originate from UVA-photon reflections off surrounding objects. However, in spite of these artefacts, the location of the pitchblende samples is clearly identifiable from the radioluminescence signal in all cases.
The results of the different alpha detection methods are compared in Table 5.
The photon count rates for one kind of measurement show a variation between the three samples of one order of magnitude or less. Due to the missing scintillation effect of nitric oxide, the measurement in air has not only the lowest maximum count rate but, self-evidently, also the lowest total amount of radioluminescence photons per second. Within each kind of measurement, the ordering of the samples from highest to lowest differs between the maximum photon count rate and the total count rate. There are two possible reasons for this. First, the samples have very similar surface activities, which leads to an expected similar photon count rate for all samples. Second, pitchblende is a naturally occurring mineral with a possibly heterogeneous distribution of alpha emitters; areas with a high density of alpha emitters increase the maximum count rate but may not affect the total count rate. Additionally, the large background in the UVA spectral range, almost as large as the maximum count rate, leads to a greater uncertainty in the background-corrected total photon count rate.

Finally, an experiment under realistic conditions was performed. All samples were placed on a total area of 30 by 40 cm, arranged at equal distances from each other, for measurements in the UVA spectral range in air. The measurement took 64 h, which is an admittedly very long time span for a proposed remote sensing method.
| 5,257 | 2022-09-22T00:00:00.000 | [
"Physics",
"Environmental Science",
"Materials Science"
] |
Parameter-dependent unitary transformation approach for quantum Rabi model
The quantum Rabi model is solved exactly by employing a parameter-dependent unitary transformation method in both the occupation number representation and the Bargmann space. Analytical expressions for the complete energy spectrum, consisting of two doubly degenerate sub-energy spectra, are presented over the whole range of the physical parameters. Each energy level is determined by a parameter in the unitary transformation, which obeys a highly nonlinear equation. The corresponding eigenfunction is a convergent infinite series in the physical parameters. Because of level crossings between neighboring eigenstates at certain values of the physical parameters, these degeneracies could lead to novel physical phenomena in two-level systems with light-matter interaction.
The quantum Rabi model has the Hamiltonian H = ωa†a + g(a† + a)σx + λσz + ǫσx, where σx and σz are the Pauli matrices for the two-level system with level splitting 2λ, a† and a are the creation and annihilation operators for the single bosonic mode with frequency ω, the light-matter interaction is controlled by the coupling parameter g, and the last term ǫσx is a driving term that leads to tunnelling between the two levels. The competition between g and ω distinguishes the different experimental regimes. When g/ω is small, applying the rotating-wave approximation reduces the Rabi model (1) with ǫ = 0 to the so-called Jaynes-Cummings model [15], which covers most experimental regimes. Because the Jaynes-Cummings model is integrable, its analytical solution is easy to derive. With increasing g/ω, the ultrastrong coupling regime (0.1 ≲ g/ω ≲ 1.0) [12] or the deep strong coupling regime (g/ω ≳ 1.0) [9] is reached, where the Jaynes-Cummings model is invalid and cannot be used to investigate the interaction between light and matter. Recently these regimes have attracted rapidly growing interest due to their fundamental characteristics and potential applications in quantum devices [11][12][13][14].
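For reference, the Hamiltonian quoted inline above can be displayed together with the rotating-wave reduction to the Jaynes-Cummings model mentioned in the text. The σ± convention below is a standard choice supplied here, not taken from the source:

```latex
% The Hamiltonian (1) as given in the text, displayed:
H = \omega\, a^{\dagger}a + \lambda\,\sigma_z + g\,(a^{\dagger}+a)\,\sigma_x + \epsilon\,\sigma_x .
% Writing \sigma_x = \sigma_+ + \sigma_- with \sigma_\pm = (\sigma_x \pm i\sigma_y)/2
% and, for \epsilon = 0, dropping the counter-rotating terms a^{\dagger}\sigma_+ and
% a\,\sigma_- (the rotating-wave approximation, valid for small g/\omega) gives the
% Jaynes--Cummings Hamiltonian:
H_{\mathrm{JC}} = \omega\, a^{\dagger}a + \lambda\,\sigma_z + g\,(a^{\dagger}\sigma_- + a\,\sigma_+).
```

The dropped terms a†σ+ and aσ− do not conserve the excitation number, which is why the Jaynes-Cummings model is integrable while the full Rabi model is not.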
Although the Hamiltonian (1) has a simple form, obtaining its correct analytical solution, which is essential for accurately exploring the light-matter interaction from weak to extremely strong coupling, has remained elusive. In Ref. [16], Braak presented an analytical solution of the Rabi model (1) using the representation of bosonic operators in the Bargmann space of analytic functions. That energy spectrum consists of two parts, the regular and the exceptional spectrum. However, such a spectrum structure is incorrect due to a derivation error in solving the time-independent Schrödinger equation in the positive- and negative-parity parts (see APPENDIX).
In this article, we exactly diagonalize the Hamiltonian (1) using the parameter-dependent unitary transformation technique in both the occupation number representation and the Bargmann space. Such a direct and powerful approach has been used successfully to solve the complex two-dimensional electron gas in the presence of both Rashba and Dresselhaus spin-orbit interactions under a perpendicular magnetic field [17,18].
II. OCCUPATION NUMBER REPRESENTATION
The two-component eigenstate |n, s⟩ of the Hamiltonian (1) for the nth energy level with quantum number s has the general form given in Eq. (2), where the 2 × 2 matrix is unitary, s = ±1 labels the two components under the level quantum number n, A_ns is the normalization factor, ∆_ns is a real parameter to be determined below by requiring the coefficients α^{ns}_m and β^{ns}_m to be nonzero, and φ_m is the eigenstate of the mth level in the occupation number representation, i.e. a†φ_m = √(m+1) φ_{m+1}, aφ_m = √m φ_{m−1} and ⟨φ_{m′}|φ_m⟩ = δ_{mm′}. When m → +∞, α^{ns}_m = β^{ns}_m = 0. Substituting |n, s⟩ into the eigen-equation H|n, s⟩ = E_ns|n, s⟩ and setting the coefficients of each φ_m to zero, we obtain a coupled system of infinitely many homogeneous linear equations, Eqs. (3) and (4), for α^{ns}_m and β^{ns}_m, where m = 0, 1, 2, …, ∞ and α^{ns}_m = β^{ns}_m ≡ 0 for m < 0.
A. Sub-energy spectrum I

In order to obtain the analytical solution of the Hamiltonian (1) in the whole parameter space, we first impose the conditions (5) and (6), which come from the vanishing of the two terms involving α^{ns}_{n+1} and β^{ns}_n in Eq. (3) with m = n + 1 and Eq. (4) with m = n, respectively. This choice is based on the exact solution of the Hamiltonian (1) for the nth energy level with quantum number s when g = 0. We find that the non-zero eigenfunction associated with the eigenvalue E_ns is fixed uniquely by letting [2λ∆_ns + ǫ(1 − ∆²_ns)]β^{ns}_n − 2g∆_ns √(n+1) α^{ns}_{n+1} = 0, (7) or [2λ∆_ns + ǫ(1 − ∆²_ns)]α^{ns}_{n+1} + 2g∆_ns √(n+1) β^{ns}_n = 0. [Figure: the low-lying energy levels of the energy spectrum (13), in units of ω, as a function of the coupling parameter g at different λ with ǫ = 0; solid lines denote n = 0, 1, …, 5 with s = 1, dashed lines n = 1, 2, …, 5 with s = −1.]
We solve the homogeneous linear equations (5) and (6) for α^{ns}_{n+1} and β^{ns}_n by requiring the coefficient determinant to vanish. The eigenvalue for the nth eigenstate with quantum number s then has the analytical expression (9). Note that the quasiparticle energy E_ns must be larger than zero. From Eqs. (6) and (7), or Eqs. (5) and (8), the parameter ∆_ns is determined by the highly nonlinear equation (10) or (11). After careful analysis, we find that Eq. (10) with s = −1 (1) coincides with Eq. (11) with s = 1 (−1). In other words, ∆_ns is independent of the quantum number s, i.e. ∆_{n,1} ≡ ∆_{n,−1}, which leads to Ξ_{n,1} ≡ Ξ_{n,−1}. So we obtain Eq. (12), where σ = ±1. It is easy to see from Eq. (12) that the analytical solution (9) is physical if and only if ∆_ns → 0 when ǫ → 0; otherwise Ξ_ns ≡ σω/2, which does not hold for arbitrary λ and g. When ǫ = 0, ∆_ns = 0 according to Eq. (12), so the eigenvalue (9) reduces to the simple formula (13) in the absence of the driving term. Evidently, the eigenvalue (13) exhibits crossings between neighboring eigenstates. With increasing λ, the energy levels with s = 1 (−1) become higher (lower), and these crossing points move toward the origin.
For the eigenstate associated with sub-energy spectrum I, from Eq. (6) we obtain Eq. (14), where β^{ns}_n is an arbitrary constant and can be set to 1, and the coefficients α^{ns}_m and β^{ns}_m are uniquely determined by the recursion relation (15) for m = 0, 1, 2, …, n and by Eq. (16) for m = n + 1, n + 2, …, +∞. Here we have also defined the matrix appearing in these relations, with I the 2 × 2 unit matrix. From the recursion equation (15), we can see that α^{ns}_{m−1} and β^{ns}_{m−1} (m = 1, 2, …, n) are linear functions of α^{ns}_n and β^{ns}_{n+1}, which are obtained by solving Eq. (15) with m = 0.
B. Sub-energy spectrum II

For the nth eigenstate with s in sub-energy spectrum II, we obtain the analogous expression, where α^{ns}_n is an arbitrary constant and is set to 1. The other coefficients α^{ns}_i and β^{ns}_i obey the same recursion relations (15) and (16) as in sub-energy spectrum I.
III. THE BARGMANN SPACE
In this section, we reinvestigate the eigenvalue problem for the Hamiltonian (1) in the Bargmann space [16], where the bosonic creation and annihilation operators are expressed in terms of a complex variable z. In this representation, the state Ψ(z) is normalized according to Eq. (29). We assume that the two-component eigenstate of the Hamiltonian (28) for the nth energy level with quantum number s has the general form (30), where s = ±1 and ∆_ns is a real parameter in the unitary matrix, to be determined below by requiring the coefficients A^{ns}_i and B^{ns}_i to be nonzero. When i → +∞, A^{ns}_i → 0 and B^{ns}_i → 0, so that Ψ_ns is finite for any z in the Bargmann space. Substituting the eigenfunction (30) into the eigen-equation HΨ_ns = E_ns Ψ_ns and requiring the coefficients of z^i to vanish, we obtain the infinite system of homogeneous linear equations (31) and (32) in the variables A^{ns}_i and B^{ns}_i, where i = 0, 1, 2, …, ∞ and A^{ns}_m = B^{ns}_m ≡ 0 for m < 0. Eqs. (31) and (32) can also be solved exactly by the same procedure used in the occupation number representation in Section II.
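The displayed transformation equations did not survive extraction. The standard Bargmann-space correspondence, a textbook result supplied here for completeness rather than taken from the source, reads:

```latex
% Bargmann-space representation: states are analytic functions \Psi(z), with
a^{\dagger} \;\longrightarrow\; z, \qquad a \;\longrightarrow\; \frac{d}{dz},
% which preserves the commutator [a, a^{\dagger}] = 1. The inner product
% (the normalization condition referred to as (29)) is
\langle \Phi \,|\, \Psi \rangle
  = \frac{1}{\pi}\int \overline{\Phi(z)}\,\Psi(z)\, e^{-|z|^{2}}\, d^{2}z ,
% so the monomials z^{n}/\sqrt{n!} form an orthonormal basis, the images of the
% Fock states \varphi_n of Section II.
```

Under this map, the eigenvalue problem becomes a system of conditions on the Taylor coefficients of Ψ(z), which is why requiring the coefficient of each z^i to vanish in Eqs. (31) and (32) is the Bargmann-space analogue of projecting onto each φ_m.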
A. Sub-energy spectrum I

Following the procedure used in the occupation number representation, we impose the conditions (33) and (34), which come from the vanishing of the two terms involving A^{ns}_{n+1} and B^{ns}_n in Eq. (31) with i = n + 1 and Eq. (32) with i = n, respectively. The non-zero eigenfunction associated with the eigenvalue E_ns is then fixed uniquely by requiring [2λ∆_ns + ǫ(1 − ∆²_ns)]B^{ns}_n − 2g∆_ns(n + 1)A^{ns}_{n+1} = 0, (35) or 2g∆_ns B^{ns}_n + [2λ∆_ns + ǫ(1 − ∆²_ns)]A^{ns}_{n+1} = 0.
Solving the homogeneous linear equations (33) and (34) for A^{ns}_{n+1} and B^{ns}_n yields the eigenvalue expression. For the eigenstate of sub-energy spectrum I, from Eq. (34) we obtain the coefficients, where B^{ns}_n is a constant to be determined by the normalization condition (29); the coefficients A^{ns}_i and B^{ns}_i, proportional to B^{ns}_n, are obtained from the recursion relations (41) for i = 0, 1, 2, …, n and (42) for i = n + 1, n + 2, …, +∞.

B. Sub-energy spectrum II

For sub-energy spectrum II we instead impose the conditions originating in the vanishing of the two terms involving A^{ns}_n and B^{ns}_{n+1} in Eq. (31) with i = n and Eq. (32) with i = n + 1, respectively. The corresponding eigenfunction is uniquely determined by this condition. Solving Eqs. (43) and (44), we obtain an eigenvalue consistent with the eigenvalue (22) in the occupation number representation. Here ∆_ns satisfies the corresponding nonlinear equation, and the coefficient relation (50), [(1 + ∆²_ns)(E_ns − nω) − λ(1 − ∆²_ns) + 2ǫ∆_ns] A^{ns}_n / [g(1 − ∆²_ns)(n + 1)], involves A^{ns}_n, a constant to be determined by the normalization condition (29). The other coefficients A^{ns}_i and B^{ns}_i, proportional to A^{ns}_n, satisfy the same recursion relations (41) and (42) as in sub-energy spectrum I.
In order to compare with the energy spectrum of the Rabi model presented by Braak, here we employ the physical parameters of Ref. [16]. Figs. 5 and 6 exhibit the low-lying energy levels of sub-energy spectra I and II as a function of g at λ = 0.4ω, ǫ = 0 and at λ = 0.7ω, ǫ = 0.2ω, respectively. We can see that the energy spectrum possesses level crossings between neighboring eigenstates, which is dramatically different from that of Ref. [16]. It is expected that such degeneracies at certain values of the physical parameters could produce novel physical phenomena in two-level systems with light-matter interaction, similar to the two-dimensional electron gas with spin-orbit interaction under a perpendicular magnetic field [19][20][21].
IV. SUMMARY
We have exactly solved the quantum Rabi model (1) in both the occupation number representation and the Bargmann space. The complete energy spectrum comprises the two doubly degenerate sub-energy spectra I and II. This exact solution can help us understand the light-matter interaction more deeply, especially in the strong coupling regimes. Because the analytical expressions for the eigenvalue E_ns in the occupation number representation are completely identical to those in the Bargmann space, we conclude that this exact solution of the quantum Rabi model is correct.
"Physics"
] |
Role of Maximum Entropy and Citizen Science to Study Habitat Suitability of Jacobin Cuckoo in Different Climate Change Scenarios
Recent advancements in spatial modelling and mapping methods have opened up new horizons for monitoring the migration of bird species, which has been altered by climate change. The rise of citizen science has also aided the collection of spatiotemporal data with associated attributes. The biodiversity data from citizen observatories can be employed in machine learning algorithms for predicting suitable environmental conditions for species' survival and their future migration behaviours. In this study, environmental variables that affect bird migration were analysed, and habitat suitability was assessed to understand species' responses under different climate change scenarios. The Jacobin cuckoo (Clamator jacobinus) was selected as the subject species, since its arrival in India has traditionally been considered a sign of the start of the Indian monsoon season. For suitability predictions in current and future scenarios, maximum entropy (Maxent) modelling was carried out with environmental variables and species occurrences observed in India and Africa. Prior to modelling, a correlation test was performed on the environmental variables (bioclimatic variables, minimum temperature, maximum temperature, precipitation, wind and elevation). The results showed that precipitation-related variables played a significant role in suitability, and the reclassified habitat suitability maps indicated that the suitable areas of India and Africa might decrease under the future climatic scenarios (SSP1-2.6, SSP2-4.5, SSP3-7.0 and SSP5-8.5) of 2030 and 2050. In addition, the suitable and unsuitable areas were calculated (in km²) to observe the subtle changes in the ecosystem. Such climate change studies can support biodiversity research and improve the agricultural economy.
Introduction
Rapid climate change has directly affected the spatial distribution of species and communities in ecosystems, and characterising these distributions is essential for understanding ecosystem functions and processes. Species movement in response to climatic shifts can be projected with species distribution models (SDMs), which provide an empirical way to assess climatic impacts on species' habitats (for example, reference [1]). A habitat is defined as a particular location where species live and reproduce, with certain characteristics, behaviours, interactions and population patterns [2]. The favourability of a habitat for the survival of a species is called habitat suitability [3], and it is important in ecological research: habitat suitability modelling can support conservation and protection plans. Several studies have investigated the suitability of species' habitats using maximum entropy (Maxent) [4] to evaluate species ranges from geolocation data; for example, references [5][6][7][8][9][10][11][12]. The Maxent model has gained popularity in the literature on modelling species' spatial distributions, and related studies have received over 5000 citations in the Web of Science Core Collection, mostly from distribution modellers (ca. >60%) [13]. In recent decades, numerous studies have examined the suitability of species' habitats using the maximum entropy method to evaluate species ranges from presence-only data [14][15][16][17][18][19]. Such studies help to derive useful guidelines on the parameterisation of the model, such as the minimum sample size and data requirements, the selection of random samples from voluminous datasets and the determination of the subsampling process for range predictions per species and per sample size [14][15][16][18][19].
Although such data-intensive modelling approaches can help in identifying the major factors behind species range expansion [20], the occurrences of species, which make a significant contribution to the model, should preferably be recorded across spatial (species range) and temporal (time of observation) contexts. However, it is intrinsically problematic and costly to record geographically varied near-real-time observations, because such activities require continuous monitoring of species movements, for which the animals are tagged with tracking devices without being harmed. Real-time or near-real-time observations collected through a volunteering approach could therefore help in quantifying species fitness at a large spatial scale and in detecting changes in climatic patterns. Species properties can also be obtained from floras, the literature, herbaria and museums as historical data for modelling the habitat suitability of a species [21]. However, the main challenge remains spatial uncertainty, which may stem from incorrect geotagging or wrong datum information [21,22]. Until recently, species data were collected and recorded as textual descriptions in the form of names and places [22], and digitising this textual information also causes substantial errors, introducing spatial uncertainty on the order of several kilometres [23]. Various techniques have been developed to estimate and document the location uncertainty of species' occurrence records in order to eliminate large errors prior to suitability modelling [22,24].
Trained and untrained volunteers have aided data collection through citizen science approaches, which can provide robust and rigorous data with qualitative and quantitative attributes. Data collected using citizen science approaches have been applied to ecological niche models in recent years to mitigate gaps in the quantity and quality of data, which has also improved the approximation of the metric of interest [25]. Species distribution modelling for a particular species requires a sufficient number of occurrences distributed across its extent [26][27][28]. Citizen science is a broad concept that can be understood in different forms [29], from highly systematic protocols to opportunistic surveys with no sampling design [29,30].
In this paper, an ML-based maximum entropy (Maxent) algorithm was applied to Jacobin cuckoo (pied cuckoo or pied-crested cuckoo) occurrences together with environmental variables to evaluate the potential habitat suitability in Africa and in India. For the modelling procedure, the birds' occurrences were first divided into three time periods: June-September, which refers to India's southwest monsoon; October-December, which refers to India's northeast monsoon and Africa's wet season; and January-May, since most parts of India have winter and summer seasons in these months. This approach was also used to predict the change in monsoon patterns by modelling how this bird's favourable habitats will shift under different climate change scenarios. For this future prediction modelling, the current suitability model obtained with the existing environmental variables and species occurrences was projected onto future climate models to observe the probable climatic changes in 2030 (an average of 2021-2040) and 2050 (an average of 2041-2060). In addition, the areas of suitable and unsuitable sites were calculated to analyse the increase or decrease in the ecological system in response to changes in the monsoon patterns. The first Indian monsoon climate change study in terms of Jacobin cuckoo migration was performed by Singh and Saran [31], in which the geographic occurrences of the Jacobin cuckoo with 19 current bioclimatic variables were modelled using the ML-based Maxent model in R. This trained model was then projected with future bioclimatic variables under the RCP8.5 scenario of the Coupled Model Intercomparison Project (CMIP5) to assess the predicted changes in the suitability of pied cuckoo habitats by 2050. Specifically, the current and future bioclimatic variables were used at a resolution of 2.5 arc-min (~4.5 km at the equator) of latitude and longitude.
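The study ran Maxent in R. As an illustrative sketch only, presence-only suitability fitting can be approximated by a presence-versus-background logistic regression, a common stand-in for Maxent's penalised maximum-entropy fit. All data below are synthetic and the setup is an assumption, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for environmental values extracted at presence points
# (e.g. GBIF occurrences) and at random background points.
X_pres = rng.normal(1.0, 1.0, size=(200, 4))   # climate at occupied cells
X_bg = rng.normal(0.0, 1.0, size=(1000, 4))    # climate at available cells
X = np.vstack([X_pres, X_bg])
y = np.concatenate([np.ones(200), np.zeros(1000)])

# Presence-vs-background logistic regression trained by plain gradient
# descent, with a small L2 penalty playing the role of Maxent regularisation.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad_w = X.T @ (p - y) / len(y) + 1e-3 * w
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

def suitability(env):
    """Relative suitability in [0, 1] for rows of environmental values."""
    return 1.0 / (1.0 + np.exp(-(env @ w + b)))
```

Applied to a raster of environmental layers cell by cell, `suitability` yields the kind of continuous surface that is then reclassified into suitable/unsuitable maps.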
The above-mentioned study indicated that the major environmental variables affecting the suitability of the Jacobin cuckoo were isothermality (16.8%), the mean temperature of the warmest quarter (15.7%), annual precipitation, precipitation of the warmest quarter (13.6%) and precipitation of the wettest month (11.3%) during the Indian summer monsoon season, i.e., June-October. As per the current suitability predictions, the states of southern India (Andhra Pradesh, Goa, Karnataka, Kerala, Maharashtra and Tamil Nadu) and northern India (Uttarakhand and Himachal Pradesh) showed high as well as medium habitat suitability, and the western states (e.g., Gujarat) displayed medium suitability; southern Africa was found unsuitable for this bird, because a dry and hot climate is experienced there in the June-October months, which does not provide a favourable habitat. However, according to the future suitability prediction, the Jacobin cuckoo's range could contract in all parts of India except the southern parts of Tamil Nadu, due to increased greenhouse gas emissions and a decrease in precipitation of the warmest quarter. In addition, the quantiles (5% and 95%) of the relevant environmental variables were calculated to observe the changes in climate between now and 2050 with respect to the Indian monsoon seasons.
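The suitable/unsuitable area bookkeeping mentioned above can be sketched as follows. The threshold, the example raster and the per-cell area are assumptions for illustration; note that using a constant cell area ignores the shrinking of 2.5 arc-min cells with latitude, which a real analysis would correct for:

```python
import numpy as np

def suitability_areas(suitability, cell_area_km2, threshold=0.5):
    """Reclassify a suitability raster and report suitable/unsuitable area in km^2.

    suitability: 2-D array of model output in [0, 1]; NaN marks cells outside
    the study region. cell_area_km2: area of one grid cell (at 2.5 arc-min,
    roughly 4.6 km x 4.6 km at the equator).
    """
    valid = ~np.isnan(suitability)
    suitable = valid & (suitability >= threshold)
    return {
        "suitable_km2": float(suitable.sum() * cell_area_km2),
        "unsuitable_km2": float((valid & ~suitable).sum() * cell_area_km2),
    }

# Toy 2x2 raster with one masked (out-of-region) cell.
current = np.array([[0.9, 0.2], [0.6, np.nan]])
areas = suitability_areas(current, cell_area_km2=21.0)
```

Running the same function on current and projected rasters gives the per-scenario area changes reported in such studies.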
Citizen Science as a Biodiversity Research Method
The act of engaging volunteers in scientific tasks has proliferated in the past few decades, offering participants increasingly substantial opportunities to apply advanced approaches and draw meaningful insights from their collected data. The practice of effectively utilising crowdsourcing, along with the Internet and mobile applications, over large geographic regions is known as citizen science. Citizen science "is a process where concerned citizens, government agencies, industry, academia, community groups, and local institutions collaborate to monitor, track and respond to issues of common community concern" [32] and "where citizens and stakeholders are included in the management of resources" [33,34]. Citizen science involves both professionals and non-professionals participating in both scientific thinking and data collection [33,35], with the support of technological advancements such as smartphones with location services, cameras, accelerometers, etc. [36]. However, based on its nature of engagement and utility in diverse domains, citizen science may have different conceptual definitions and meanings.
According to the nature of engagement, the galaxy of citizen science is categorised into four levels: crowdsourcing, distributed intelligence, participatory science and extreme citizen science [37]. Crowdsourcing is the most basic level, where the general public contributes to science by processing and analysing collected data. The next level is distributed intelligence, in which citizens learn new skills before becoming involved in data collection and interpretation activities. The third level is participatory science, where citizens are involved with research groups in defining problems and collecting data. The last level is extreme citizen science, where citizens have full control to define problems, collect data and perform analyses on it.
The above classification scheme can be demonstrated with the example of Project PigeonWatch, one of the citizen science projects of the Cornell Lab of Ornithology (CLO) and the National Audubon Society, which engages volunteers of all ages and professions throughout the world to collect hands-on data for studying and analysing pigeon colour variations. On the basis of the above checklist, this project can be characterised as an "investigation" project, and the approach utilised is "crowdsourcing".
Amidst the various citizen science projects, 72% relate to the discipline of biology [62], and owing to this dominance, studies of the diversity and distribution of species [63] drive the growing need for biodiversity monitoring, conservation planning and ecological research. Many citizen science programs have run over spans of years or even decades and are still being carried out to study the patterns of nature on a large spatial scale by collecting data on different locations and habitats of species. Collecting such information on species' locations, habitats and other related attributes [63][64][65] by enlisting the public in scientific activities is now considered best practice. It is not necessarily true that scientific output benefits only from robust strategies and inferences in highly recognised, peer-reviewed publications; rather, gathering information through public participation can be a better source of scientific information for answering specific questions [66,67]. Much credit may be given to Cornell University's Lab of Ornithology, which laid the foundation for volunteer participation in biodiversity observation, monitoring and research [52]. However, many other organisations and research groups have designed citizen science programs to collect geographically well-distributed and dense data with rigorous spatial sampling, such as species mapping through the Indian Bioresource Information Network (IBIN) portal [68], bioblitzes [69], the shell polymorphism survey [70], the water quality survey [71] and breeding bird surveys [72]. Such diverse datasets compel the aggregation of observation data from different sources for conventional research, but the major concerns even after data aggregation are data quality [73] and the techniques for combining diverse datasets into different schemas [74].
Therefore, appropriate planning is required for managing the integration of voluminous datasets into a uniform schema, with data quality check infrastructure for handling observational biases, "false absences" arising from inadequate sightings [75] and uneven data distributions [76]. These challenges were addressed by a global concerted effort [77] that began in 2004 and has now resulted in the largest single gateway to observation-based datasets, known as the Global Biodiversity Information Facility (GBIF).
The GBIF is an intergovernmental organisation that provides "an Internet accessible, interoperable network of biodiversity databases and information technology tools" [78] as a "cornerstone resource" [79], with a "mission to make the world's biodiversity data freely and universally available via the Internet" [79]. Currently, the GBIF portal provides open access to more than 160 million biodiversity occurrence and taxon records from 1641 institutions and volunteered survey data from around the globe. The GBIF has therefore become an authoritative repository in which various organisations and institutes share large quantities of quality data, essential for modelling and decision-making purposes. Edwards et al. [77] performed a spatial validation of the third-largest flowering plant family, the Leguminosae, using its taxa and distribution data from the GBIF portal to evaluate the quality and coverage of its geographic occurrences. Similar reviews by Graham et al. [21] and Suarez and Tsutsui [80] describe additional uses of museum specimen data, which have facilitated biodiversity policy and decision-making processes [80]. Among various other advantages, GBIF data can be used for biodiversity assessments [81], taxonomic revisions [82], compiling red lists of threatened species [83] and habitat suitability modelling [31,[84][85][86][87]]. The latter is one of the prominent examples of climate change studies in which citizen science-based observations from the GBIF are increasingly used [88][89][90][91][92][93][94][95]. In this paper, different climate change scenarios combined with the GBIF's observed occurrences of the monsoon-heralding bird, Clamator jacobinus, are modelled using the Maxent approach to study the contemporary and future habitat suitability of this bird, so that variations in the Indian monsoon season can be examined.
The Jacobin (Pied) Cuckoo Species
As per Indian belief, the arrival of this partially migratory bird, the Jacobin cuckoo (Clamator jacobinus) (Figure 1), also known as "Chatak" in India, heralds the onset of the Indian monsoon [96]. During the summer, the bird flies from Africa to India for breeding, crossing the Arabian Sea and the Indian Ocean, as shown in Figure 2. The Jacobin cuckoo belongs to the cuckoo order of small terrestrial birds, with soft black-and-white plumage, long wings and a distinctive crest on the head, and it quenches its thirst with raindrops. The species is a brood parasite, i.e., instead of making its own nest, it lays its eggs in the nests of other birds, particularly the jungle babbler (Turdoides striata). This arboreal bird mostly sits in tall trees but often forages for food in low bushes and, occasionally, on the ground. It prefers well-wooded areas, forests and bushes in semi-arid regions. As widely known, the Jacobin cuckoo maintains its presence in India in two ways: one population is sighted as a year-round resident in the southern part of the country, while another is sighted in the central and northern parts of India, arriving with the summer monsoon winds from just before the monsoon until early winter, i.e., May-August. The Jacobin cuckoo was chosen because its arrival time is directly linked to the monsoon: by popular account, it drinks only falling raindrops and does not use any other water source, such as collected rainwater or rivers, to quench its thirst.
The Species Distribution Data and the Preprocessing
The distribution data were obtained from the GBIF repository, which collated geographic records of this bird from surveys, museums, human observations and other data sources. Occurrences recorded through "human observation" were then selected, because this research focused on demonstrating the use of citizen science data for habitat suitability modelling [97]. The institutes/organisations contributing this bird's data in the "human observation" category through various citizen science programs are the Cornell Lab of Ornithology, FitzPatrick Institute of African Ornithology, South African National Biodiversity Institute, iNaturalist.org, Observation.org, Xeno-canto Foundation for Nature Sounds, naturgucker.de, Kenya Wildlife Service, India Biodiversity Portal and A Rocha Kenya. However, the dataset contained repeated latitude and longitude values, as well as null values (NA: not available). Using a data cleaning algorithm in R, records with NA values and duplicated locations were removed. Since Jacobin cuckoos are known for their close association with the onset of the monsoon season in India, the compiled GPS records of human observations from 1991 to 2020 were divided into the following monthly sets:
i. Based on the period of the southwest monsoon season, which typically lasts from June to September, the geographic occurrences of the Jacobin cuckoo were filtered for these months from 1991 to 2020. In this period, the whole country receives more than 75% of its rainfall [98].
ii. The second input set was filtered using the months of the northeast monsoon season, i.e., October-December. This monsoon season, also known as the post-monsoon or winter monsoon season, brings the country about 60% of its annual rainfall in the coastal areas and about 40% in the interior areas [99]. Additionally, the rainy season in Africa starts in October and lasts until April-June, when conditions there are mostly suitable for residency.
iii. The third and final set contains the data of the months January-May, which denote the mid-rainy period in Africa and the end of the rainy season in India, respectively.
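The cleaning and seasonal splitting described above were done in R in the study. An equivalent sketch in Python/pandas, using a toy stand-in for the GBIF export (the column names follow GBIF's Darwin Core conventions; the values are invented), might look like:

```python
import pandas as pd

# Toy stand-in for the GBIF "human observation" export; a real file carries
# many more columns (basisOfRecord, year, species, etc.).
records = pd.DataFrame({
    "decimalLatitude":  [12.97, 12.97, None, 28.61, -1.29],
    "decimalLongitude": [77.59, 77.59, 36.82, 77.21, 36.82],
    "month":            [7, 7, 11, 10, 3],
})

# Drop rows with missing coordinates, then duplicated locations.
clean = (records
         .dropna(subset=["decimalLatitude", "decimalLongitude"])
         .drop_duplicates(subset=["decimalLatitude", "decimalLongitude"]))

# Split into the three seasonal sets used for modelling.
southwest_monsoon = clean[clean["month"].between(6, 9)]    # Jun-Sep
northeast_monsoon = clean[clean["month"].between(10, 12)]  # Oct-Dec
dry_to_prerains   = clean[clean["month"].between(1, 5)]    # Jan-May
```

Each of the three frames is then paired with the corresponding environmental layers before model fitting.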
Hence, the abovementioned sets of geographic occurrences were combined with environmental datasets to understand their potential suitability ranges, environmental parameters and altered climatic variations in different climate change scenarios.
Selection of Environmental Variables
This section discusses the selection of the environmental data that are assumed to ecologically influence mobile species such as birds, and particularly the Jacobin cuckoo's distribution. These include the bioclimatic variables, minimum temperature, maximum temperature, precipitation, elevation and wind, at a spatial resolution of 2.5 arc-min, from WorldClim. For present climatic conditions, the bioclimatic variables, averaged over the years 1970-2000, were obtained from WorldClim version 2.1, the latest version of the climate data, launched in January 2020 [100]. Since modelling was carried out for three different time periods, the climatic variables precipitation, wind, minimum temperature and maximum temperature were extracted and screened for each of the three sets.
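The correlation screening of these variables, mentioned in the abstract, can be sketched as below. The 0.8 cutoff, the greedy keep-the-first rule and the toy variables are assumptions for illustration, not the study's actual choices:

```python
import numpy as np
import pandas as pd

def drop_correlated(df, threshold=0.8):
    """Greedily drop one variable from each highly correlated pair (|r| > threshold)."""
    corr = df.corr().abs()
    keep = list(df.columns)
    for i, a in enumerate(df.columns):
        for b in df.columns[i + 1:]:
            if a in keep and b in keep and corr.loc[a, b] > threshold:
                keep.remove(b)  # keep the first-listed variable of the pair
    return df[keep]

rng = np.random.default_rng(1)
t_min = rng.normal(size=200)
env = pd.DataFrame({
    "tmin": t_min,
    "tmax": t_min + rng.normal(scale=0.1, size=200),  # nearly collinear with tmin
    "precip": rng.normal(size=200),
})
selected = drop_correlated(env, threshold=0.8)
```

Removing near-collinear predictors in this way reduces redundancy before the Maxent fit, which is the stated purpose of the correlation test.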
For the future sets, 19 bioclimatic variables (Table 1) for the near-future (2021-2040) and remote-future (2041-2060) projections of the species distribution maps at 2.5 arc-min were obtained from WorldClim [101]. For the future climate scenarios, CMIP6 climate data of the CNRM-ESM2-1 [102] global climate model (GCM) for four Shared Socioeconomic Pathways (SSPs), namely SSP1-2.6, SSP2-4.5, SSP3-7.0 and SSP5-8.5, were obtained from WorldClim's database, spatially downscaled and calibrated to reduce bias. The 2013 IPCC (Intergovernmental Panel on Climate Change) fifth assessment report (AR5) generated climate models from CMIP5, and the 2021 IPCC sixth assessment report (AR6) presented CMIP6 with 10 Earth system models (ESMs) [103]. CNRM-ESM2-1 is one of these ESMs and contains interactive earth system components such as aerosols, atmospheric chemistry and the land and ocean carbon cycles. According to Carbon Brief, CMIP6 includes sufficient data to analyse future emission scenarios, such as past and future warming and climate sensitivity, beyond what was available in CMIP5. The IPCC AR5 introduced four Representative Concentration Pathways (RCPs) that examined future greenhouse gas emissions in different climate change scenarios: RCP2.6, RCP4.5, RCP6.0 and RCP8.5. These scenarios were updated with the Shared Socioeconomic Pathway (SSP) scenarios in CMIP6: SSP1-2.6, SSP2-4.5, SSP3-7.0, SSP4-6.0 and SSP5-8.5. SSP1 is a world of sustainability-focused growth and equality. SSP2 is known as the "middle of the road", where historical patterns are followed; SSP3 lies in the middle of the range of the baseline outcomes produced by ESMs; SSP4 is a more optimistic world that nevertheless fails to ordain any climate policies; and SSP5 depicts the worst-case scenario. These SSPs can examine demographic and economic factors, as well as how societal choices will affect greenhouse gas emissions.
In the RCPs, by contrast, socioeconomic factors are not included; only pathways are set to examine greenhouse gas concentrations and the amount of warming that could occur by the end of the century. In this paper, the SSPs 1-2.6, 2-4.5, 3-7.0 and 5-8.5 were used.
Screening of Environmental Variables
A correlation test between the environmental variables of each of the three seasonal sets was carried out to retain the ecologically relevant variables for the species' suitability. Spearman's correlation coefficients [104] were applied to the variable sets; variables with a Spearman correlation < 0.7 were considered not highly correlated. Then, the Variance Inflation Factor (VIF) was calculated in R for each remaining variable using the vifcor function of the R package usdm [105], and environmental variables with VIF values > 3 were eliminated, because smaller VIF values indicate lower multicollinearity. The resulting VIF values were all < 3 [106], and therefore, no further variables were eliminated. This correlation test among the environmental variables was performed for India and Africa separately, and the screened environmental variables were then used in the Maxent model to predict the habitat suitability in the abovementioned study areas.
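The VIF screening step can be illustrated with a minimal NumPy implementation (a sketch of the quantity the `vifcor` function computes, not the usdm package's code): each variable's VIF is 1/(1 - R²), where R² comes from regressing that variable on all the others.

```python
import numpy as np

def vif(X):
    # X: (n_samples, n_vars) matrix of environmental variables.
    # VIF_j = 1 / (1 - R^2_j), with R^2_j from regressing column j
    # on the remaining columns (with an intercept).
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out
```

Variables whose VIF exceeds the chosen cutoff (3 in this study) would then be dropped before modelling.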
The Spatial Distribution Model with Maxent
ML-based Maxent modelling [107] is the most popular and well-established habitat suitability modelling approach [108][109][110][111][112][113][114][115][116]; it predicts probable distributions based on species occurrences and environmental variables. The advantage of using Maxent is that it uses presence-only data and provides a predictive map within the study area. It works on the principle of maximum entropy, estimating the probability distribution of the species' habitat that is as close to uniform as possible, subject to the constraint that each feature has the same mean value in the approximated distribution as over the species occurrences.
In this study, the maximum entropy algorithm was run with the bird's occurrences and the screened predictor variables to predict potential suitable habitats and to analyse the relative importance of the different bioclimatic factors at each occurrence point for the Jacobin cuckoo. This method was applied to all three sets of time periods, so that the habitat suitability analysis could be used to test the belief that the Jacobin cuckoo is the harbinger of the Indian monsoon, and to analyse the suitable climates and range of this bird in India, as well as in Africa, during the selected periods. The jackknife test was applied to recognise the importance of the environmental variables. The species occurrences were split into training (75% of the total occurrences) and test (25% of the total occurrences) data for the models' calibration and assessment, respectively. Response curves, the jackknife test and the feature types linear, quadratic, product, threshold and hinge were set as true parameters in the habitat suitability model. The other model parameters were used as follows:
i. "replicates = 10" sets the number of replicate runs the model executes for cross-validation, bootstrapping or sampling-with-replacement runs;
ii. "lq2lqptthreshold = 80" is the number of samples at which the product and threshold features start being used;
iii. "l2lqthreshold = 10" is the number of samples at which the quadratic features start being used; and
iv. "hingethreshold = 15" is the number of samples at which the hinge features start being used.
The predictive performance of the generated model was then assessed by calculating the Area Under the Curve (AUC) of the receiver operating characteristic (ROC) plot, which ranges from 0.5 (no discrimination) to 1 (perfect discrimination) [116]. Evaluating the model's predictive performance with the AUC involves setting thresholds on the model's predictions to generate various levels of the false positive rate and then assessing the true positive rate as a function of the false positive rate. Here, the false positive rate refers to predicting a presence at places where the species is absent, and the true positive rate is the successful prediction of a presence. An AUC from 0.7 to 0.8 is acceptable, from 0.8 to 0.9 is excellent and above 0.9 is outstanding performance [117]. The dominant environmental variables in determining the species' probable distribution were assessed through the jackknife test (also called "leave-one-out"), which gives the permutation importance of the environmental variables [110]. The species response curves were generated by the model to examine how the likelihood of species occurrence responds to variations in the changing environmental conditions. Then, the future climatic variations (2021-2040 and 2041-2060) were also modelled to estimate how the species will respond to changes in ecological systems, as their favourable habitats will shift under different climate change scenarios (i.e., SSP1-2.6, SSP2-4.5, SSP3-7.0 and SSP5-8.5).
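The AUC described here can also be computed directly, without tracing the ROC curve, as the probability that a randomly chosen presence outscores a randomly chosen absence (the Mann-Whitney statistic). A minimal NumPy sketch (illustrative, not Maxent's internal code):

```python
import numpy as np

def auc(labels, scores):
    # AUC as the Mann-Whitney statistic: the probability that a randomly
    # chosen presence (label 1) scores higher than a randomly chosen
    # absence (label 0); ties count as half.
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))
```

A value of 0.5 corresponds to no discrimination (random scoring) and 1.0 to perfect separation of presences from absences.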
The predicted habitat suitability maps were then reclassified into convenient classes representing the threshold limits that differentiate unsuitable from suitable habitats. The reclassified classes were: unsuitable conditions below a lower threshold, and suitable conditions further categorised into three classes: low, medium and highly suitable. This thresholding helps in interpreting the ecological significance by identifying areas that are at least as suitable as those where the species has been recorded.
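The reclassification of a continuous suitability map into unsuitable/low/medium/high classes can be sketched with `numpy.digitize`; the threshold values below are illustrative placeholders, not the thresholds used in the paper.

```python
import numpy as np

def reclassify(suitability, thresholds=(0.2, 0.4, 0.6)):
    # Map continuous suitability values in [0, 1] to classes:
    # 0 = unsuitable (below the lower threshold),
    # 1 = low, 2 = medium, 3 = highly suitable.
    # The threshold values are illustrative assumptions.
    return np.digitize(suitability, thresholds)
```

Applied to a raster of Maxent outputs, this yields the discrete class map that is then colour-coded in the suitability figures.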
Selection of Environmental Variables
To detect correlations among the environmental variables, the Spearman's correlation coefficient threshold was set to 0.7, and then the vifcor function was applied. From this assessment, some variables showed intercorrelation and were eliminated unless their VIF values were below 3. This procedure was applied to the minimum temperature, maximum temperature, precipitation, elevation, wind and bioclimatic variables. The final set of selected variables, given in Table 2, was then used to predict the suitability of the Jacobin cuckoo.
Performance Evaluation Results of the Maxent Model
After executing the Maxent model on the species' occurrences and environmental variables, its predictive accuracy was evaluated by using AUC plots. As shown in Table 2, environmental variables were selected for India and Africa separately in three different time periods; therefore, the Maxent model was executed by separating the species occurrences of India and Africa into three different time periods. Additionally, the minimum temperature, maximum temperature and precipitation used in the Maxent model were taken according to these three time periods.
The AUC values for the Pied cuckoo's suitability prediction model, given in Table 3, show that the model's predictions were very good, so that it could effectively predict the species distribution under the current and future climate scenarios.
Variable Importance and Contribution
Tables 4 and 5 depict the heuristic estimate of the percentage contribution and the permutation importance of the environmental variables used in the Maxent model for three different time periods with species occurrence data from India and Africa. These two tables helped us to interpret the most influential environmental variables that played a significant role in the Jacobin cuckoo's habitat suitability in India and Africa. It is observed in Table 4 that bio2, bio3, bio14, bio15, bio18, bio19 and wind are common to all three time periods in India, whereas, in Africa (Table 5), bio14 and bio19 are common to all three time periods. In India, wind and precipitation play a minor role, whereas, in Africa, wind and elevation make major contributions to the suitability modelling. Therefore, it can be concluded that the environmental variables related to precipitation play a significant role in the distribution of the Jacobin cuckoo and are essential to its potential suitable habitats.
Predicted Habitat Suitability Map of the Jacobin Cuckoo
Using the influential bioclimatic factors in the species distribution, the habitat suitability prediction was performed under the current and future climatic scenarios to estimate the changes in the ecological systems and how the species will respond to the changes in different climatic variations. This section discusses the spatial characteristics of the utilised distribution data, i.e., India's southwest and northeast monsoon seasons and Africa's rainy season, especially in Southern Africa.
Current Habitat Suitability June-September
The months of June-September are known as India's southwest monsoon period, in which all of India receives more than 75% of its rainfall [118], whereas these months bring the dry season in Central, South and East Africa. Therefore, the recorded occurrences of June-September (separately for India and Africa) with screened environmental variables were supplied to the Maxent model, which resulted in the dominant bioclimatic variables, as well as the prediction of the current habitat suitability of this bird. After evaluating the model's performance, the output was then used to project the future habitat suitability of the Jacobin cuckoo under different climate change scenarios.
The species habitat suitability map shown in Figure 3 depicts the areas covered with grey colour as having no suitability for this bird, whereas the yellow and brown colours represent good and low habitat suitability, respectively. The species occurrences are plotted as red points on the map, and the habitat suitability ranges can be read from the probability scale, which depicts the bird's residency. Accordingly, Africa has very low suitability during the June-September period because of its dry season, which might not provide a favourable climate for Pied cuckoos. However, the model predicted good and high suitability for these birds across India as compared to Africa, because India receives 75% of its rainfall in this period; this could be one of the main reasons for their partial migration to India, where suitable climatic conditions are available. The results shown in Figure 3 were computed on occurrences from 1991 to 2020 and the climatic variables (tmin, tmax, prec and wind) of the June-September months.
In India, the major suitability predictions under the current scenario can be seen in Northern, Western and Southern India, while Eastern, Central and North-eastern India showed very low or no suitability. The model predicted an average suitability range for the Jacobin cuckoo in the Indian states of Uttarakhand and Uttar Pradesh and in Madhya Pradesh in Central India. The reason behind its migration to these parts of India could be their wet character, owing to high rainfall and a large number of rivers. A good suitability range was predicted in Western India, such as in Gujarat (sites near the Gulf of Kachchh and the Gulf of Khambhat) and in Maharashtra (sites bordering the Arabian Sea), and also in Southern India, such as in the Western Ghats. The highest suitability for this bird was predicted in a few areas of South India, in Southern Tamil Nadu: Rameswaram, Dhanushkodi and Thoothukudi. Therefore, there is a probability that this bird prefers the wettest sites, which receive the highest amounts of rainfall. In Africa, no habitat suitability was predicted for the Jacobin cuckoo, because this time period is the dry season, which might force the Jacobin cuckoo's migration to India. The change in climate, particularly in the Indian southwest monsoon patterns, was analysed with respect to the Jacobin cuckoo over the past 30 years by separating the datasets into two subsets: 1991-2005 and 2006-2020. For this, the climatic variables were taken for these yearly subsets, and the results are illustrated in Figures 4 and 5, respectively. In Figure 4, no suitability is predicted in Africa in 1991-2005 during June-September because of its dry weather, which is unsuitable for the bird; in India, adequate habitat suitability can be seen in the eastern and northern parts but not the southern and western parts, reflecting the bird's migration to India in search of a wet climate.
When the suitability results of Figure 4 are compared with Figure 5, a decline is observed in the Indian monsoon rainfall in the north and east over the past 15 years in June-September. Additionally, this climate change analysis depended entirely on sightings of the Jacobin cuckoo, so one of the reasons for the lower suitability in Southern and Western India could be fewer sightings of the bird due to limited awareness of crowdsourcing. As per Google Trends search records from 2004 to the present (Figure 6), public interest in the term "crowdsourcing" in India started in 2007.
October-December and January-May
The months of October, November and December are known for the northeast monsoon, or winter monsoon, in India, as the winds blow from the northeast to the southwest of India; these months also mark the beginning of the rainy and wet season in Africa, which typically lasts until April. As such, this set of months covers the monsoons of two different regions. Therefore, the results discussed in this section are divided into two sub-sections: one for October-December and another for January-May.
The northeast monsoon season from October to December in India brings rain mainly to the coastal regions of Andhra Pradesh, Kerala, Puducherry, Rayalaseema, South Karnataka and Tamil Nadu. Compared to the southwest monsoon, this monsoon period delivers only 11% of India's annual rainfall, but in Tamil Nadu, it provides almost half of the annual rainfall. The habitat suitability map given in Figure 7 shows that, in India, the highest habitat suitability of the Jacobin cuckoo is predicted in Tamil Nadu, good suitability in Andhra Pradesh and Karnataka and low suitability in Kerala. Considering that the model's predictions of the bird's habitat suitability are correlated with the patterns of the northeast monsoon (i.e., in October, November and December), they can be linked to the belief that the bird's movements are closely tied to the monsoon rains of the northeast monsoon period.
After verifying and exploring the links between the Indian monsoon's arrival and the bird's sightings, the same analysis was carried out to investigate the major climatic factors that could drive the bird's return journey to Africa. Figures 7 and 8 show that, when the rainy and wet season starts in October, the Pied cuckoos, residents of Africa, might return to their native lands and reside there until April. Accordingly, sightings of the Jacobin cuckoo were observed in the provinces of South Africa, except the Western Cape. The highest suitability (green colour) is predicted in the coastal areas of the Eastern Cape and KwaZulu-Natal Provinces. Therefore, this research on the habitat suitability of the Jacobin cuckoo supports the correlation between sightings of the Jacobin cuckoo and the arrival of the monsoon season in India, and it also lends context to the ancestral tales or traditional beliefs that Jacobin cuckoos have the magical ability of summoning rain wherever they go, across the Indian subcontinent as well as in Africa.
Predicted Future Suitability under Different Climate Change Scenarios
This section relates to the modelling of the future habitat suitability of the Jacobin cuckoo using the existing Maxent model, occurrence data and environmental layers, projected with the future environmental variables of the years 2030 and 2050. In addition, the probable increases or decreases in suitable and unsuitable habitats between the current and future years were estimated for the sites occupied by the species, which can be used in further climate change studies.
June-September
The resulting suitability maps, generated using the selected environmental variables, are given in Figure 9. The figure depicts that the probable habitat suitability conditions of the Jacobin cuckoo are relatively moderate under SSPs 2.6, 4.5 and 7.0 of 2030. Compared to the current predictions in Figure 5, declines are observed in the different future climatic scenarios, particularly in SSP 8.5, in which the pixels of good suitability disappear. This might be due to the estimated higher CO2 emissions and the increase in global warming. Table 6 shows a decline in suitable areas and an increase in unsuitable areas as compared to the current ones.
October-December and January-May
This section discusses how the distribution of potential habitats will shift under different climate change scenarios. According to the model's future predictions for the months of October-December in India (Figure 10) and Africa (Figure 11), the habitat suitability of the Jacobin cuckoo is closely related to the current scenario (Figure 7) under SSPs 2.6 and 4.5 of 2030. However, under scenarios 7.0 and 8.5 of 2030, and under all climatic scenarios in 2050, the probability of occurrence of the Jacobin cuckoo is predicted to be quite low compared with the current one. This can be observed in Tables 7 and 8, which show the increase in unsuitable and decrease in suitable areas of the Jacobin cuckoo in India and Africa, respectively. Further future suitability predictions for the January-May months of 2030 and 2050 through the Maxent model are shown in Figures 12 and 13 for India and Africa, respectively, which indicate that the bird's suitability might become low in 2050 under all climate scenarios.
Such a decline in habitat suitability of the bird during these months indicates that, in the future, India, as well as Southern Africa, might receive less rain and more dryness, which will result in a decline of the Jacobin cuckoo's suitability in India (Table 9) and Africa (Table 10). Although incongruities may exist between various climate modelling approaches [119], the strategy of assessing the current suitability and predicting the future changes in the distributions of diverse species, which are influenced by different climatic patterns, is still recognised as an important research area.
Discussions
The study presented here analysed the habitat suitability of the Jacobin cuckoo in different seasons, with particular reference to India, using the species' occurrences (1991-2020) in the ML-based Maxent model with environmental variables. The occurrences were obtained from the GBIF database, a repository composed of data from public institutions, e.g., museums, and citizen observations. The Maxent model achieved high AUC values, which denotes that the model is accurate and performs excellently. The results obtained using the Maxent method for predicting the potential suitability of the Jacobin cuckoo differ across the three seasons of India, i.e., June-September (the southwest monsoon), October-December (the northeast monsoon) and January-May (winter and summer). The important environmental variables affecting its habitat suitability are the precipitation of the driest month, precipitation seasonality, precipitation of the warmest quarter, mean temperature of the wettest quarter and wind. The model predictions showed that the species' suitability followed the pattern of both Indian monsoon seasons, i.e., southwest and northeast. Therefore, based on the results, the bird's migration can be linked with monsoons in the assessed regions, India and Africa. Furthermore, in order to examine India's southwest monsoon season, the datasets were divided into two subsets, 1991-2005 and 2006-2020, and the Maxent model was then executed with the environmental data. From the results, it was notable that the monsoon patterns started declining after 1991 in a few regions of the northern and eastern parts of India during the June-September period, which might be because of anthropogenic activities, deforestation, etc. In Africa, however, the climatic conditions are suitable for this bird's residency from October until April.
When the rainy and wet season in Africa ends, the birds start migrating to different parts of the world where they find more favourable climatic conditions. The future suitability of the Pied cuckoo was modelled here with a full set of climatic conditions under four scenarios (SSPs), 2.6, 4.5, 7.0 and 8.5, for 2030 (averaged for 2021-2040) and 2050 (averaged for 2041-2060), using the results of the current suitability and the projected bioclimatic variables. As per the future predictions carried out in this study, the potentially suitable climatic distribution will shrink in the future (2050 < 2030 < current) under the different climate change scenarios, indicating that there could be a change in the monsoon season in India, as well as in Africa, which will result in less suitability for the Jacobin cuckoo. Such a direct link between this bird and the monsoon season helps in critically analysing likely climate change impacts, identifying which environmental variables play an influential role in its suitability, and understanding its migratory movements.
Conclusions
This study concluded that ecological systems will be altered by climate change, and the favourable habitats of species will shift under different climate change scenarios. The present study demonstrated the modelling and prediction of these shifts by using citizen observations, which provided the set of data required to apply robust ML models. Thus, the use of citizen science methods was essential for enabling such an analysis. Future suitability modelling using the CMIP6 future datasets revealed that the wettest, high-precipitation climates might decline while warm and dry climates may expand.
The wettest season and precipitation are major elements in the Jacobin cuckoo's distribution, and various collaborative programs are required to maintain the suitability of migratory birds like the Jacobin cuckoo under such changing and unpredictable potential warming of the Earth. However, the predicted changes are based only on climatic factors and do not account for the distribution of human-occupied land use, such as urban settlements, or for the species' dispersal ability.
Author Contributions: Conceptualisation and methodology, Priyanka Singh and Sameer Saran; investigation, validation, formal analysis, writing-original draft preparation, data curation and visualisation, Priyanka Singh; supervision, project administration and resources, Sameer Saran; and writing-review and editing, Sultan Kocaman. All authors have read and agreed to the published version of the manuscript.
Comparative Monte Carlo Analysis of Background Estimation Algorithms for Unmanned Aerial Vehicle Detection
Background estimation algorithms are important in UAV (Unmanned Aerial Vehicle) vision tracking systems. Incorrect selection of an algorithm and its parameters leads to false detections that must be filtered out by the object tracking algorithm, even if there is only one UAV within the visibility range. This paper shows that, with the use of genetic optimization, it is possible to select an algorithm and its parameters automatically. Background estimation algorithms (CNT (CouNT), GMG (Godbehere-Matsukawa-Goldberg), GSOC (Google Summer of Code 2017), MOG (Mixture of Gaussians), KNN (K-Nearest Neighbor-based Background/Foreground Segmentation Algorithm), MOG2 (Mixture of Gaussians version 2) and MEDIAN) and a reference thresholding algorithm were tested. Monte Carlo studies were carried out showing the advantages of the MOG2 algorithm for UAV detection. An empirical sensitivity analysis was presented that rejected the MEDIAN algorithm.
Introduction
Tracking systems are widely used in airspace surveillance applications [1], both military and civilian. Advances in UAV (Unmanned Aerial Vehicle) technology and very low costs have increased the number of objects in the airspace, while users often do not apply safety rules. This leads to breaches of airspace-sharing security rules, physical threats to ground facilities, and breaches of people's privacy. The list of specific problems is very long and depends on legal, social or religious conditions that are specific to a given country or region. In practice, the use of UAVs should be agreed upon with the landowner, who may apply their own rules. Not only might the flight of a UAV pose a problem when using the airspace, but taking pictures and filming, including capturing infrared or near-infrared images, is regulated separately.
As UAVs are an extremely attractive platform for conducting terrorist activities, the problem of controlling the airspace is extremely topical. UAVs are considered effective asymmetric weapons. Even simple UAVs can carry explosives, potentially being used for suicide attacks as well as for terrain reconnaissance in support of more traditional terrorist activities. Attacking large-scale infrastructure is simple both in manual (supervised) and automatic control modes, with fully implemented take-off, flight and attack phases.
UAV detection and tracking can be very difficult depending on many factors related to UAV size, flight characteristics, environment, and airspace surveillance method used, including the algorithms used. There are several methods that enable UAV detection and tracking [2]. The most effective method is the use of an active radar, which is used continuously for civilian objects (for example, airports). It is possible to use mobile radars that can monitor the airspace at temporary large human gatherings. Unfortunately, due to the method of operation, coverage of a very large area with multiple radars is difficult or impossible due to bandwidth sharing. The use of a radar to protect, for example, a private property, is problematic due to the radar's signal emission and for legal reasons. Another method is listening to typical bands used for communication with UAVs [3]. This method is passive, but it provides only the detection of a potential object, without the possibility of tracking, and in the case of more objects in a certain area, it is difficult to separate and classify the objects. It is also possible to use passive radar and noncooperative radio transmitters (DAB (Digital Audio Broadcasting), LTE (Long-Term Evolution), and DVB-T (Digital Video Broadcasting-Terrestrial)) [4] or acoustic sensors [5]. Another method is the use of vision systems operating in the visible or near infrared spectra and even using thermal imaging.
The use of vision methods is attractive due to their passive mode of operation and potentially low cost, but it comes with various drawbacks. A radar provides the distance to the UAV, while a single camera only gives the direction of the UAV. In practice, determining the distance requires the deployment of several cameras using triangulation to estimate the distance to the UAV. The most important disadvantage of vision systems is their sensitivity to weather conditions. In rain, fog or snow, their operation is practically impossible, but these are also conditions in which the chance of a UAV flight is small, in particular a flight controlled by an operator using the UAV's camera for orientation in the airspace. Cameras also have to work day and night, which means that images with a wide dynamic range must be acquired.
The size of the UAV in the image depends on the distance from the camera as well as the resolution of the sensor. In practice, due to the amount of processed data and the number of cameras, it is more practical to use wide-angle lenses. The result is that the size of the UAV in the image can range from a few pixels to a single pixel or less (a sub-pixel object). Tracking small objects is difficult mainly due to low contrast between the UAV and the background and the noise level in the image. The contrast determines whether an algorithm can separate the UAV from the background; backgrounds are usually variable, and the lighting of the UAV may be variable as well. The way the UAV is painted also affects its visibility (aircraft camouflage can be used). The noise in the image results from random variation in the background, the noise of the image sensor itself, and the A/D (Analog-to-Digital) processing chain. UAV position estimation from a single image may be very difficult or impossible, but the use of advanced detection and tracking algorithms enables this type of task in a wide range of applications.
Typically, tracking systems are divided into four groups depending on the number of objects being tracked (single or multi-target) and the number of sensors (single sensor or multiple sensors) [1]. Most tracking systems use an architecture with detection, tracking, and assignment of results to paths (trajectories). Systems of this type are not effective in detecting objects whose signal is close to or below the background noise. For applications that require tracking of ultralow-signal objects, the TBD (Track-Before-Detect) approach is used [1,6]. In both solutions (conventional and TBD), the purpose of detection is to obtain binary information about the observed object. For conventional tracking systems, input images are thresholded, with one state being a potential pixel belonging to an object and the other state being a potential pixel belonging to the background. In TBD systems, thresholding refers to an estimated state (trajectory), where one state denotes potential detection of the object's trajectory, and the opposite denotes no detection for a specific hypothetical object trajectory. Conventional detection systems can be optimized to track a single object, whereas TBD systems are usually multi-object tracking systems. In practice, due to interferences, it is necessary to use multi-object tracking systems. For UAV vision tracking systems, this is necessary due to background noise.
The implementation of a UAV video tracking system for monitoring a specific area requires the placement of cameras, usually close to the ground surface and pointing slightly upwards. With this orientation, background disturbances can include moving clouds, flying birds, and the movement of trees and their leaves caused by the wind. Disturbances of this type necessitate image preprocessing (background estimation) in order to suppress such changes, together with multi-object tracking algorithms. Even when no UAV is within range, there are usually disturbances that, without tracking algorithms, could be interpreted as UAVs. Tracking algorithms can eliminate this type of false detection by using a motion model.
Contribution and Content of the Paper
Since background estimation is the first stage of a tracking system, the effectiveness of the entire system depends on the choice of that algorithm and its parameters. This article describes a method of selecting the background estimation algorithm using optimization, so that the best background estimation algorithm and parameters can be chosen for a specific image database. The results may depend on the weather conditions in which a given system operates, and they may differ depending on the geographic location and height above sea level. For this reason, the use of an optimization algorithm is essential for effective deployment of the system.
The main contributions of the paper are as follows:
• Preparation of a publicly available video sequence database.
• A proposal to use a genetic algorithm to select the parameters of a background estimation algorithm for UAV tracking.
• Implementation of a distributed optimization system (the C++ code is available as open source).
• Evaluation of background estimation algorithms (Monte Carlo method) depending on the noise level between the background and the UAV.

Empirical estimation of the background estimation quality is crucial for the selection of background removal algorithms, as conventional tracking and TBD algorithms work better when the contrast between the UAV and the background is as high as possible.
A video sequence database was used in order to test the quality of background estimation. This database was used in the process of selecting the parameters of the background estimation algorithms, with the simultaneous use of data augmentation. Details about the database and augmentation are presented in Section 2. The proposed method for evaluation of background estimation algorithms and selection of their parameters is shown in Section 3. The results are presented in Section 4, and the discussion is provided in Section 5. Final conclusions and future work are considered in Section 6.
Related Works
There is a large body of research on tracking algorithms and on assessing their quality. For the most part, zero-mean background noise is assumed, which simplifies the analysis. Background estimation algorithms are the basis of tracking algorithms not only for large objects but also for small ones. These algorithms depend on the signal acquisition methods and are used in video systems [7][8][9][10][11][12].
The algorithms used in this article are listed in Table 1; they are part of the OpenCV library. These algorithms are often used in various applications, but new ones are constantly being developed [18][19][20][21][22][23]. There are many reasons for this: sometimes new algorithms are adapted to a particular application, and often incorporating an element absent from previous algorithms yields a better one. Despite the progress in this field, this proliferation creates several problems, such as quality assessment and selection for a specific application. The repository of [24] is a very large collection of algorithms, currently containing 43 of them. The source code is available under the MIT (Massachusetts Institute of Technology) license, and the library is available free of charge to all users, academic and commercial. This type of aggregation of algorithms allows for meta-optimization consisting of the automatic selection of algorithms for a given application.
Image processing methods using deep learning, in particular Convolutional Neural Networks (CNNs), are also used for background estimation with 2D and 3D data [25,26]. Fast R-CNN has been proposed for UAV detection [27], and the FG-BR Network has been applied to the analysis of road traffic [28].
Background estimation algorithms using machine learning open up new horizons for applications because their effectiveness is the result of the training patterns used rather than heuristics, creating a knowledge base for the algorithm. One of the active research topics using CNNs is the detection of drones and distinguishing them from birds in images [29,30]. UAV images can also be obtained using thermovision, which makes the UAVs much easier to distinguish from the background [31]. These publications consider relatively simple cases because the drones are either large in the image or their signal relative to the background (SNR-Signal-to-Noise Ratio) is high (SNR » 1). This type of assumption allows the use of CNNs with simple architectures as well as transfer learning. In the case of small objects, with UAV signals close to the background and strong noise, further work on dedicated CNNs is necessary. In particular, the use of pooling layers in typical architectures reduces the precision of position estimation, which is unacceptable especially for TBD algorithms.
There is a lot of work on comparing background estimation algorithms, but it should be noted that such comparisons require empirical evaluation; thus, the results depend on the video sequences or individual images used [32]. Evaluation of the algorithms can be performed with real videos or synthetically generated sequences. For synthetically generated video sequences, various models can be used, allowing the experiments to be controlled to a much greater degree than with real data [33]. In [33], for example, the algorithm database of [24] was used. Background estimation algorithms can be tuned manually [33] by selecting the coefficients or automatically by using an optimization algorithm.
Some works focus on specific difficult cases related to changes in lighting, reflections, or camouflage [34]. Sometimes, evaluation of the algorithms uses additional criteria, such as CPU computing requirements or the amount of memory needed [33]. There are also papers describing the databases used to evaluate background estimation algorithms [35]. Meta-analyses related to the use of background estimation algorithms constitute an interesting source of knowledge about the approaches of various researchers [36].
The optimization of background estimation algorithms with the use of genetic algorithms was considered in [37,38]. The task of optimizing a serial connection of algorithms in the context of assembly line balancing using PSO (Particle Swarm Optimization) is considered in [39]. The difference between the expected image and the image processed by the background estimator was used as the optimization criterion for a genetic algorithm in [40].
Data
In order to analyze the quality of the background estimation algorithms, a video sequence database was created with an emphasis on cloud diversity. Each background video sequence was combined with a UAV image that followed a given trajectory, and noise was added. As the position of the UAV was known, the quality of the background estimation could be determined for different values of the background noise.
Using information about the color of the background and the drone is an interesting option, but this research mainly considered grayscale images. This is due to the method of color registration in typical digital cameras. The Bayer color filter array causes the sampling resolution of the individual R, G, and B channels to be smaller than the sensor resolution. In particular, for objects one pixel in size, the signal may disappear completely; for example, a red drone may fall on the color filter array in such a way that it produces no signal if its projection hits only green or blue pixels. Grayscale cameras do not have this problem. The second problem is signal loss: for a given pixel, filtration in the Bayer matrix discards most of the light, since only light corresponding to the respective R, G, or B component is transmitted. For example, a white drone on a black background will be observed as a single red, green, or blue pixel. In the case of a grayscale camera, the entire signal (light) excites the pixel. The third problem is the infrared-cut filter embedded in the optical system, which is usually a bandpass filter transmitting wavelengths from red to blue; in practice, its transmittance is not ideal, so there is additional light (signal) loss. The fourth problem relates to sensor technology. Silicon sensors are most sensitive in the near infrared, which means that a grayscale camera without an infrared filter gives the strongest signal.
In this work, various color cameras were used to record the backgrounds, and the recordings were converted to grayscale images.
Database of Video Sequences
There are many types of clouds, and their appearance additionally depends on the solar azimuth and elevation angles. The height of the observation site (camera placement) also affects the visibility of the clouds; for example, in mountainous regions, it is possible to observe the clouds from above. This article uses a publicly available database developed by the authors (https://github.com/sanczopl/CloudDataset, accessed on 5 January 2021). The recordings were made with different cameras and different settings. Exemplary image frames for different sequences are shown in Figure 1. The video sequences were recorded at 30 fps; the resolutions were 1920 × 1080 and 1280 × 720, and the duration was 30 s (900 frames). In this article, the grayscale video sequences were analyzed.
Augmentation Process
The use of real UAV sequences makes analysis of the algorithms difficult because a very large number of video sequences would have to be collected for various conditions, which limits the ability to test algorithms. It is more effective to use sequences that combine a UAV image, a background, and noise. This makes it possible to control the trajectory of the UAV and the level of image noise. The additive Gaussian noise is controlled by the σ (standard deviation) parameter. The image-merging process is shown in Figure 2.
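The merging step can be sketched as follows. The function name, the compositing scheme, and all pixel values here are illustrative assumptions; the paper's own augmentation pipeline is the one shown in its Figure 2.

```python
import numpy as np

def augment_frame(background, uav_template, position, sigma, rng):
    """Composite a small UAV template onto a background frame at a known
    position, then add Gaussian noise with standard deviation sigma."""
    frame = background.astype(float).copy()
    r, c = position
    h, w = uav_template.shape
    frame[r:r + h, c:c + w] = uav_template             # paste the UAV image
    frame += rng.normal(0.0, sigma, size=frame.shape)  # additive Gaussian noise
    return np.clip(frame, 0, 255)                      # keep 8-bit value range

rng = np.random.default_rng(42)
background = np.full((64, 64), 128.0)   # stand-in for a recorded cloud frame
uav = np.full((2, 2), 30.0)             # small dark UAV, 2 x 2 pixels
noisy = augment_frame(background, uav, (10, 20), sigma=5.0, rng=rng)
```

Because the paste position is chosen by the caller, the ground-truth UAV location is known exactly, which is what makes the later quality evaluation possible.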
Background estimation algorithms may use the information of only a single image frame to compute the background. Algorithms of this type are relatively simple or rely on a rich database of images. A more typical solution is the use of algorithms that employ a sequence of a few or a dozen image frames for background estimation, which means that they can adapt to various conditions and are not limited by a database.
A certain disadvantage of using previous image frames is the creation of transient effects. In algorithm studies, this can be controlled by introducing the UAV into the scene only after a number of frames have elapsed, which ensures that the algorithm is initialized correctly. This article assumes that the UAV arrives at frame no. 300. Real tracking systems do not require this kind of additional assumption because they work continuously.
Method
The two main problems in the use of background estimation algorithms are selection of the right algorithm and selection of its parameters. Selection of the algorithm can be made by independent benchmarking, as done in the next section. Choosing the best set of parameters is a task for the optimization algorithm. In this work, there are two cascade-connected algorithms: the background estimation algorithm and the thresholding algorithm. Optimization therefore concerns all parameters of both algorithms. It would also be possible to let the optimizer replace one background estimation algorithm with another, with the associated change in the number and meaning of the parameters; however, this option was not used, so that the algorithms could be compared.
As the reference images (ground truth) are known, it is possible to use criteria based on a comparison with the estimated images. A genetic algorithm was used as the optimization algorithm, and a metric (fitness value) was proposed.
Background Estimation Algorithms
Background estimation algorithms allow the background to be removed from the image through a subtraction operation. This reduces biases that affect subsequent signal processing algorithms. In the case of thresholding, the chance of a false target detection due to positive bias decreases, and the chance of missing a target due to negative bias also decreases. In TBD algorithms, the falsely accumulated signal value is reduced. Background estimation, together with subtraction of the estimated background from the current frame, provides background suppression. The tracked UAV is then easier to detect, although the image is still disturbed by acquisition noise. Depending on the background estimation algorithm and the scene dynamics (moving clouds and lighting changes), additional disturbances appear, and their reduction is important for the tracking system. Limiting the number of false detections reduces the computational complexity of multiple-target tracking algorithms.
The analysis of the relationship between the tracking algorithm and the background estimation algorithm is computationally complex; therefore, this research focused on the correctness of single UAV detection and reduction of tracking algorithm artifacts at different levels of image sensor noise.
The considered background estimation algorithms are listed in Table 1. Additionally, thresholding alone, without background estimation, was used as the reference algorithm. Algorithm parameters and their ranges are specific to a given method (Table 1); therefore, in the process of selecting the optimal algorithm, it is practically impossible to compare these parameters directly between the algorithms.
Background Estimation Pipeline
Dedicated optimization software was developed to implement the pipeline shown in Figure 3. OpenCV 4.4.0 was used for image processing. The background estimation algorithms also come from this library (https://docs.opencv.org/4.4.0/d7/df6/classcv_1_1BackgroundSubtractor.html, accessed on 5 January 2021). Algorithms requiring a GPGPU (General-Purpose GPU) with CUDA (Compute Unified Device Architecture) support were not considered. It is important to use an SSD (Solid-State Drive) to store the video sequences, as they are read frequently. The software was written in C++ so that other algorithms can be added. Evaluation of an algorithm's effectiveness also depends on its noise immunity; for this purpose, the Monte Carlo approach was used, thanks to which data could be processed independently. This analysis used distributed computing with 14 computers (8 cores per processor). JSON files were used to store the configuration of individual algorithms, including the ranges of parameter variability and the variable types (integer, float, double, and bool). The configuration controlling the genetic algorithm was also saved in this format. The full pipeline was used in the selection phase of the background estimation algorithm, its parameters, and the threshold value. In the normal operation phase, the selected algorithm was applied with the parameters of the background estimation algorithm and the threshold fixed. This means that the block "selection of background subtraction algorithm" is fixed and is not controlled by the optimization algorithm. The input images were 8-bit (grayscale), and the image after thresholding was binary.
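The estimation-plus-thresholding stage of the pipeline can be sketched with a simple running-median estimator. This is only a minimal stand-in written for illustration (the MEDIAN entry in Table 1 is of this general type, though the paper uses the OpenCV implementations, which differ in detail); the class name, history length, and threshold below are assumptions.

```python
import numpy as np
from collections import deque

class MedianBackground:
    """Running-median background estimate over the last `history` frames,
    followed by subtraction and fixed thresholding to a binary mask."""

    def __init__(self, history=10, threshold=20.0):
        self.history = deque(maxlen=history)
        self.threshold = threshold

    def apply(self, frame):
        frame = frame.astype(float)
        if self.history:
            background = np.median(np.stack(list(self.history)), axis=0)
        else:
            background = np.zeros_like(frame)  # no history yet: transient phase
        self.history.append(frame)
        # Absolute difference, then binarization: 1 = candidate UAV pixel.
        return (np.abs(frame - background) > self.threshold).astype(np.uint8)

est = MedianBackground(history=5, threshold=15.0)
static = np.full((8, 8), 100.0)
for _ in range(5):           # initialize on a static background (see frame 300
    mask = est.apply(static)  # convention for avoiding transients)
moving = static.copy()
moving[3, 4] = 180.0         # a bright UAV pixel appears
mask = est.apply(moving)     # binary output: only the UAV pixel survives
```

In the optimized pipeline, the estimator choice, its parameters, and the threshold value are exactly the quantities the genetic algorithm searches over.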
Metrics Used for the Evaluation of Algorithms
Since background estimation allows the background to be subtracted from the video sequence, ideally the difference contains only the UAV image. The Ground Truth (GT) sequence is thus a synthetically generated sequence containing only the UAV images. Either the arithmetic difference of images (in the sense of absolute values) or the logical difference can be used to test the algorithms. The arithmetic difference takes into account the level of background estimation, for example, the degree of shadow estimation, while the binary variant is well suited for detection analysis. In the binary representation, the value "1" corresponds to the position of the UAV and "0" represents the background for a given pixel. By processing all pixels of the image, the values of TP (True Positive), TN (True Negative), FP (False Positive), and FN (False Negative) can be determined, which are the basic indicators of the quality of the entire system, including background estimation and thresholding. The values TP, TN, FP, and FN can be considered separate criteria for evaluating an algorithm's quality. For the optimization process, however, a single value of the objective function is necessary. For this reason, a heuristic metric, "fitness", has been proposed. Four individual metrics are determined from these counts, all taking values in the range 0-1; finally, a fitness taking values in the range 0-4 is determined from them.
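The metric stage can be sketched as follows. The paper's exact four formulas are not reproduced here; this sketch assumes the standard rates TPR, TNR, PPV, and NPV, each in [0, 1], summed so that a perfect result scores 4, which is consistent with the stated ranges but should be treated as an illustrative reconstruction.

```python
import numpy as np

def fitness(estimated, ground_truth):
    """Compare a binary detection mask against the ground-truth UAV mask
    and combine four rate metrics into a single fitness in [0, 4]."""
    est = estimated.astype(bool)
    gt = ground_truth.astype(bool)
    tp = np.sum(est & gt)     # UAV pixels correctly detected
    tn = np.sum(~est & ~gt)   # background pixels correctly rejected
    fp = np.sum(est & ~gt)    # background pixels falsely detected
    fn = np.sum(~est & gt)    # UAV pixels missed

    def rate(num, den):
        return num / den if den else 1.0  # empty class counts as perfect

    tpr = rate(tp, tp + fn)   # sensitivity
    tnr = rate(tn, tn + fp)   # specificity
    ppv = rate(tp, tp + fp)   # precision
    npv = rate(tn, tn + fn)   # negative predictive value
    return tpr + tnr + ppv + npv

gt = np.zeros((8, 8), dtype=bool)
gt[3, 4] = True               # single ground-truth UAV pixel
perfect = fitness(gt, gt)     # identical masks give the maximum fitness of 4
```

Whatever the exact formulas, the key property is the one used by the optimizer: identical output and ground-truth masks (FN = 0 and FP = 0) maximize the fitness.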
Optimization Algorithm
The individual algorithms were optimized independently of each other. Optimization was based on selection of the parameters of the background estimators and the threshold values. A hybrid algorithm using genetic operators and gradient optimization was used [41,42]. Genetic algorithms are easy to implement, and because they do not use a gradient, they allow a search for the global minimum. The block diagram of the algorithm is shown in Figure 4. The optimization is constrained, as it operates within the known coefficient ranges (Table 1). A pseudorandom number generator was used to initialize the values for a particular background estimation algorithm and the threshold value. This enabled the determination of the binary strings describing the 30 individuals of the initial population and the computation of a first set of fitness values.
The elitist selection block is responsible for selecting the best and worst individuals. If the best individual is weaker than the best in the previous population, the weakest individual in the current population is replaced with the best in the previous population. The best current individual remains the same. If the best individual in the current population is better than the best individual in the previous population, it becomes the best current one.
The select block performs fitness-proportionate selection, also known as roulette-wheel selection. The genetic algorithm uses two operators, crossover and mutation, to change the parameters [43,44]. There are many variants of crossover; the current implementation uses k-point crossover. For the mutation operator, a value change was used as in evolutionary programming [41], because the range of values for a given algorithm is known.
There are two conditions for terminating the optimization process: obtaining identical binary images at the output (FN = 0 and FP = 0) or reaching the maximum number of iterations (20,000).
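The optimization loop described above can be sketched in a few dozen lines. This is a simplified illustration, not the paper's optimizer: the elitism scheme is condensed, the gradient component of the hybrid algorithm is omitted, and the toy bit-counting objective merely stands in for the image-difference fitness; parameter values such as the mutation probability are assumptions.

```python
import random

def genetic_optimize(fitness_fn, n_bits, pop_size=30, k_points=2,
                     p_mut=0.02, max_iter=200, seed=1):
    """Genetic algorithm with roulette-wheel selection, k-point crossover,
    bit mutation, and elitism, over binary strings of length n_bits."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def roulette(scores):
        # Fitness-proportionate (roulette-wheel) selection.
        pick = rng.uniform(0, sum(scores))
        acc = 0.0
        for ind, s in zip(pop, scores):
            acc += s
            if acc >= pick:
                return ind
        return pop[-1]

    def crossover(a, b):
        # k-point crossover: alternate segments of the two parents.
        cuts = sorted(rng.sample(range(1, n_bits), k_points))
        child, src, prev = [], 0, 0
        for cut in cuts + [n_bits]:
            child += (a if src == 0 else b)[prev:cut]
            src ^= 1
            prev = cut
        return child

    best = max(pop, key=fitness_fn)
    for _ in range(max_iter):
        scores = [fitness_fn(ind) for ind in pop]
        new_pop = [max(pop, key=fitness_fn)]   # elitism: carry the best over
        while len(new_pop) < pop_size:
            child = crossover(roulette(scores), roulette(scores))
            child = [b ^ (rng.random() < p_mut) for b in child]  # bit mutation
            new_pop.append(child)
        pop = new_pop
        best = max(pop + [best], key=fitness_fn)
        if fitness_fn(best) == n_bits:         # early termination at optimum
            break
    return best

# Toy objective: maximize the number of 1 bits in a 16-bit string.
best = genetic_optimize(sum, n_bits=16)
```

In the paper's setting, the binary string encodes the estimator parameters and the threshold value, and the fitness is the image-comparison metric defined above.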
One of the problems in the optimization process is the imbalance between the number of UAV pixels and the number of background pixels in the image. Without correction, a local minimum appears that is very difficult to escape. Introducing a correction balances the error values related to the background and to the UAV image. The values of FP and TP are corrected using formulas parameterized by objectSize, the UAV size in pixels.
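The balancing idea can be illustrated with one plausible scheme; this is an assumption for illustration only, not necessarily the authors' exact correction formulas. The intent is that a tiny UAV and a huge background contribute error terms of comparable magnitude.

```python
def balanced_counts(tp, fp, object_size, image_size):
    """Normalize object-related and background-related counts so that the
    UAV (objectSize pixels) and the background (the remaining pixels)
    contribute comparable error magnitudes. Illustrative scheme only."""
    background_size = image_size - object_size
    tp_corrected = tp / object_size       # fraction of the UAV recovered
    fp_corrected = fp / background_size   # fraction of background misfired
    return tp_corrected, fp_corrected

# A 300 x 300 frame with a 9-pixel UAV: 8 of 9 UAV pixels detected and
# 90 background pixels falsely detected.
tp_c, fp_c = balanced_counts(tp=8, fp=90, object_size=9, image_size=300 * 300)
```

Without such a normalization, an estimator that deletes the UAV entirely can still score well simply because the background dominates the pixel count, which is exactly the local minimum described above.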
Results
The Monte Carlo test allows unbiased estimators to be obtained [45,46] for a given algorithm, with the assumed noise value. The generated random video sequences constitute the input of the algorithm whose parameters are optimized. Monte Carlo tests were performed for 8 algorithms, with 20 noise values each, executed 10 times (1600 cases).
Many computers are required for the computations, and the Monte Carlo calculations can be parallelized. For the calculations, 14 computers with AMD Ryzen 7 processors (8 cores, 16 threads each) were used. A single optimization case was computed by one thread and took approximately 2 h.
In the following subsections, exemplary results for the fitness value and the impact of the parameter values of the background estimation algorithms are presented. The most important are the Monte Carlo results for the different algorithms at different noise values, which allow their operating quality to be assessed.
Exemplary Fitness Values
Several examples of synthetically generated images are shown in Figure 5, together with the specific fitness values they obtained. In these examples, the UAV is shaped like a plus sign for simplicity. Extreme cases are also included, such as a UAV with no noise, and no noise with no UAV.
Influence of Example Algorithm Parameters on the Detection Process
The problem of choosing the background estimation algorithm is tied to the choice of its parameters. Their selection affects the quality of the estimation, but the effect depends on the type of data, which makes manual selection by trial and error practically impossible. These parameters, although nominally independent of each other, can interact in complicated ways, which complicates the search for the optimal configuration. Due to the large number of background estimation algorithms, selection of the optimal variant is a serious challenge. The influence of selected parameters on one video sequence is shown in Figures 6 and 7.
Monte Carlo Analysis of Algorithms
The main computational goal in this article was to determine the fitness value for individual algorithms for different noise values.
In order to compare the results for the different algorithms, the random image sequences were the same. The number of repetitions (20) was selected experimentally due to the long processing time. In the case of poor convergence of the genetic optimization processes or too few repetitions, the results in Figure 8 are very noisy. For other applications of the method, the degree of convergence must be tested each time in order to evaluate the algorithms.
Algorithms that better estimate the background without removing the UAV from the image have a fitness value closer to 4. Figure 8 shows the fitness averages for eight configurations, including no background estimation algorithm (NONE) as a reference. Selecting the background estimation algorithm for a given noise level may introduce the problem of overfitting to that noise level. To reduce this type of bias in the results, fitness values with increased noise (+10 and +20) are also shown for each noise level. This makes it possible to assess the sensitivity of the estimation algorithm and the selected parameters to increased noise levels that did not occur during learning (optimization). Learning of the algorithm, through the selection of its parameters, is always limited by the training base used. This is particularly important when real (empirical) data are used, such as the video sequences in this case. By adding noise within a certain range, the data can be augmented, which corresponds to generalization in a learned system. This is a standard procedure to obtain an approximator rather than an interpolator that merely fits the data. The use of noise is also valuable for testing the resulting system. In the testing process, increased noise levels of +10 and +20 were arbitrarily selected. If a given algorithm and its parameters are well matched, there should be no deterioration of the results for test noise increased by +10 or +20. Where the results do deteriorate, another, less sensitive algorithm should be chosen.
Quantitative Results and Sensitivity
One method of evaluating the algorithms is to determine the average fitness over the noise range. Although the noise values ranged from 0 to 100, due to the sensitivity analysis the average was limited to the range 0 to 80. The mean values are presented in Table 2.
Computational Cost
The estimated computational cost was determined for a video sequence containing 1000 frames. The calculations were performed for 100 cases, and the mean value was determined. An image size of 300 × 300 pixels (grayscale) was assumed. As modern processors perform frequency scaling (changing frequency depending on load), scaling was deliberately turned off. Additionally, the video sequences were placed in a RAM disk so that reading them from the SSD (Solid-State Drive) did not affect the result. This corresponds to the typical situation in which consecutive video frames are delivered from the camera. The values given in Table 3 are estimates, because better results can be obtained with different code optimization methods.
A computer with an Intel i7-9750H @ 4 GHz processor was used for the estimation. The computations were allocated to one processor core.
Discussion
The most important computational result is the set of fitness curves for the different types of background estimation algorithms (Figure 8). By analyzing their shapes and values, the quality of the individual algorithms can be compared. As the optimization process was time-limited, different results could be obtained due to slow optimization convergence. However, experiments with different but similar noise values show that the curves are mostly smooth, which means that convergence is acceptable. Only in some cases are there larger jumps in fitness values, which may require additional, longer empirical analysis.
Influence of Example Algorithm Parameters
The simple case of a UAV and two clouds shown in Figures 6 and 7 demonstrates that the selection of algorithms and their parameter values should not be accidental. It is not possible to present all relations between the several parameters of a given algorithm, so only a few examples for selected parameters are shown.
In the case of the MEDIAN algorithm, too high a history delta value can lead to a lack of cloud elimination. For the MOG (Mixture of Gaussians) algorithm, the history parameter determines the balance between cloud edge detection and noise associated with cloud value changes. In the MOG2 (Mixture of Gaussians version 2) and KNN (K-Nearest Neighbors-based Background/Foreground Segmentation) algorithms, too low a value of the Var threshold parameter leads to the detection of cloud edges and noise in the cloud area. The behavior is similar in the GMG (Godbehere-Matsukawa-Goldberg) algorithm for the decision threshold parameter. In the case of the GSOC algorithm and the hits threshold parameter, the number of detections can be highly variable depending on the image frame. The pair of pixel stability parameters can lead to the determination not so much of the detection as of the UAV trajectory. The other algorithms have similarly shaped curves, of which three parts are important: the fitness level for small noise values, the starting point of the curve's descent, and the fitness level for large noise values. The MOG and GSOC algorithms have better properties at higher noise values than the others, but at the cost of lower quality at low noise levels. The MEDIAN, MOG2, and NONE algorithms have very good properties for small noise values; however, as the noise value increases, they are worse than MOG and GSOC. The threshold-only algorithm NONE is inferior to MOG2 and may be rejected.
Monte Carlo Analysis
In the case of low noise, MOG2 is the best choice, although the MEDIAN algorithm is also interesting. The assessment of which one has better properties is carried out in the next subsection.
Sensitivity Analysis
Without the analysis of sensitivity to noise changes, the MEDIAN algorithm could be considered the best. Adding more noise to the video sequence than was used during training changes the characteristics of the algorithms. This is most evident for the algorithms that are poor at low noise (CNT, KNN, GMG, and MOG). In the case of MEDIAN, the change is very large despite its very good properties at both low and high noise.
This experiment shows that, without empirical analysis of the sensitivity of algorithms, their evaluation or selection for a specific application can be very wrong. Using many different noise values for the algorithm testing process in one test can also be problematic due to averaging of the results.
Quantitative Results and Sensitivity
Using this quantitative criterion, it can be seen (Table 2) that the apparently best algorithm is MEDIAN (column +0). When an algorithm more robust to increased noise that did not occur during parameter selection is required, MOG2 or GSOC is a better solution (columns +10 and +20).
Thresholding and Tracking Approach
In this approach, we consider the selection of the background estimator for the tracking system, not the tracking system as a whole. Thresholding serves to define the criteria for selecting an algorithm and its coefficients. The selected algorithm, along with the coefficients, can then be used in the tracking system. In conventional tracking systems, the selected thresholding is retained and the data are passed to a tracking algorithm such as a Kalman filter (Figure 9). In TBD systems, thresholding may be used, but it will reduce the tracking ability. A much better solution is to omit thresholding so that the raw data (the output of the background subtraction algorithm) are processed by the TBD algorithm, in accordance with the track-before-detect concept.
Computational Cost
As expected, the simplest algorithm (NONE), in which the thresholding operation is performed without background estimation, is the fastest. The second fastest, but approximately 3 times slower, is the CNT algorithm. Next is the MOG2 algorithm, which is about 25 times slower than NONE.
The MOG2 and GSOC algorithms were identified as among the most effective. This means that MOG2 is optimal in terms of both quality and computation time, as GSOC is very slow, about 10 times slower than MOG2. The MEDIAN algorithm has a computational cost similar to MOG2, and the two can be treated as comparable; however, MEDIAN has worse properties due to its sensitivity to increased noise. For this reason, MOG2 seems to be the optimal solution. This assessment is, however, subject to implementation uncertainty because, depending on the optimization of the source code and the compiler used, the results may differ.
A serious problem with most of the considered background estimation algorithms is the computational cost. For a camera recording at 25 fps, the time available to process one frame is 40 ms, and for a camera working at 100 fps, it is only 10 ms. The values extrapolated for a single 1 Mpix frame show that this budget is usually exceeded (Table 3). Background estimation algorithms are suitable for parallel processing, so implementation of a real-time system is possible, but effective code optimization is very important. An alternative is a hardware-based implementation, such as FPGAs (Field-Programmable Gate Arrays).
Real-Time Adaptation Possibility
There are at least three strategies for adapting algorithms (changing parameters) and selecting them for a real-time system.
The first strategy is offline. Video sequences are recorded and then subjected to the optimization processes presented in this article, which means that a suitable sample of video sequences is needed. If the background is fairly stable, which depends on the weather conditions, correct selection results can be obtained quickly, so an image processing system for UAV detection can acquire a good-quality configuration. For a highly variable background, the outcome depends on the degree of background change, which in turn depends on the region, climate, and season.
Given the scale of the calculations, the data can be processed in the cloud. At the current state of computing technology, the typical resources of an embedded device implementing a smart camera are too limited to perform such an operation.
The second strategy is online adaptation of the algorithms (real-time adaptation), which is possible and analogous to classical adaptation algorithms used, for example, in adaptive filtering. The problem is analyzing how quickly the algorithms converge to background changes. It should be noted that on a sunny day with rapidly moving clouds at low altitude, lighting changes are rapid, sometimes occurring in less than one second. This strategy seems very attractive but raises a problem related to the stability of UAV detection.
A third possible strategy is a combination of the two. The algorithm can be selected once, and the range of parameter changes can be determined from previously recorded video sequences; parameter adaptation can then be performed in real time. It is also possible to narrow the parameter range using an index that classifies the image. In this case, a classifier can be developed that determines the type of scene (for example, clear sky, cloudy sky, or clouds at low altitude) and thereby narrows the range of optimization parameters or even selects them directly. Such a system is feasible to implement, although it is a very complex task.
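A minimal sketch of the classifier-driven narrowing in the third strategy might look as follows. The scene classes, the parameter name, and the ranges are all hypothetical placeholders for illustration, not values derived from this work.

```python
# Hypothetical mapping from a coarse scene class to a narrowed search
# range for one background-model parameter (here a generic "history"
# length); class names and ranges are illustrative only.
PARAM_RANGES = {
    "clear_sky":  (200, 500),   # stable background: long history is safe
    "cloudy_sky": (100, 300),
    "low_clouds": (20, 100),    # fast background changes: short history
}

FULL_RANGE = (20, 500)

def narrowed_range(scene_class: str) -> tuple[int, int]:
    """Return the parameter range the optimizer should search for the
    given scene class, falling back to the full range when unknown."""
    return PARAM_RANGES.get(scene_class, FULL_RANGE)

lo, hi = narrowed_range("low_clouds")  # the optimizer now searches [20, 100]
```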
The presented adaptation strategies are not considered in this paper due to their complexity and scale of calculations.
Optimizing Background Estimation in Other Applications
The detection of small objects in images is not a problem exclusive to UAV detection. An example is the reverse configuration, in which imagery captured by a UAV is analyzed for objects on the ground. Detection and tracking of vehicles, boats, and people is often carried out using thermal imaging systems, where the background can be very complex [47]; this can serve human search-and-rescue or surveillance purposes. The publication [48] shows an example of swimmer detection. The problem of background estimation concerns not only vision systems but also radar imaging [49,50]. The detection of small objects is also very important in medicine [51].
The proposed method is universal and can significantly improve the quality of other systems. Of course, the selected algorithms and parameters may differ from those obtained here for UAVs.
Final Conclusions and Further Work
MOG turned out to be the best algorithm overall: it performs very well at both low and high noise levels and is robust to increasing noise. Considering the results in Figure 8, a hybrid solution that switches between two algorithms depending on noise is worth considering: using image noise level estimation, MOG2 can be applied at low noise levels and MOG at higher ones. Taking the execution speed of the individual algorithms into account, however, MOG2 remains the most attractive single choice.
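The switching rule suggested above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the noise estimator is a crude first-difference statistic, and the threshold of 5.0 grey levels is an arbitrary placeholder that would need tuning on real sequences.

```python
import numpy as np

def estimate_noise(frame: np.ndarray) -> float:
    """Crude noise estimate: standard deviation of horizontal first
    differences of a grayscale frame. For i.i.d. Gaussian noise on a
    smooth background this is roughly sqrt(2) * sigma, so we rescale."""
    d = np.diff(frame.astype(np.float64), axis=1)
    return float(d.std() / np.sqrt(2.0))

def select_algorithm(frame: np.ndarray, threshold: float = 5.0) -> str:
    """Hybrid rule: MOG2 at low noise, MOG at high noise.
    The threshold is a placeholder, not a tuned value."""
    return "MOG2" if estimate_noise(frame) < threshold else "MOG"

# Synthetic check: a flat frame vs. the same frame with sigma = 20 noise.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)
noisy = clean + rng.normal(0.0, 20.0, size=clean.shape)
```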
The optimization method used in this article is an offline process; a high computational cost is incurred for the one-time selection of an algorithm and calculation of its coefficients. An alternative is an adaptation process that selects the coefficients according to current conditions. In that case, the approach proposed in this article can be used to select the algorithm whose coefficients are then computed online.
The proposed optimization solution can be applied to other types of tracked objects, not only UAVs. This work did not consider tracking algorithms that additionally implement a data filtration process; including them in the overall optimization process, as well as in the initial processing of images, will be the subject of further work.
Mathematical Indispensability and Arguments from Design
The recognition of striking regularities in the physical world plays a major role in the justification of hypotheses and the development of new theories both in the natural sciences and in philosophy. However, while scientists consider only strictly natural hypotheses as explanations for such regularities, philosophers also explore meta-natural hypotheses. One example is mathematical realism, which proposes the existence of abstract mathematical entities as an explanation for the applicability of mathematics in the sciences. Another example is theism, which offers the existence of a supernatural being as an explanation for the design-like appearance of the physical cosmos. Although all meta-natural hypotheses defy empirical testing, there is a strong intuition that some of them are more warranted than others. The goal of this paper is to sharpen this intuition into a clear criterion for the (in)admissibility of meta-natural explanations for empirical facts. Drawing on recent debates about the indispensability of mathematics and teleological arguments for the existence of God, I argue that a meta-natural explanation is admissible just in case the explanation refers to an entity that, though not itself causally efficacious, guarantees the instantiation of a causally efficacious entity that is an actual cause of the regularity.
rules from complex patterns, lends justification to new hypotheses by relating our past experience to new problems in order to demonstrate parallels with already accepted hypotheses, and provides a basis for the unification of theories about distinct phenomena. 1 When scientists observe unexplained, striking regularities, they begin to investigate possible explanations. However, not all logically possible explanations count as legitimate objects of scientific study. For example, in order to explain the spiral-shaped arrangement of seeds in sunflower heads, a biologist would not consider the hypothesis that the seeds were arranged by beauty-loving angels. Rather, natural scientists investigate only strictly natural hypotheses. 2 Philosophers are different in this respect. The investigation of striking order, patterns, and correlations also plays a major role in philosophy, but philosophers don't restrict their attention to empirical hypotheses. Rather, they also develop 'meta-natural' explanations, i.e. explanations that transcend the realm of scientific investigation. For example, moral realists posit the existence of abstract, non-observable moral entities (such as reasons or values), which some even describe as having quasi-causal powers. Typically, meta-natural hypotheses come into play when empirical methods offer no suitable way of investigating a striking order, pattern, or correlation (below I introduce four classical cases). Such meta-natural hypotheses are then judged according to theoretical criteria such as explanatory power, internal consistency, coherence with accepted philosophical views, etc.
However, no matter how well they meet such theoretical criteria, not all meta-natural hypotheses are considered prima facie equally legitimate. Some appear to enjoy more initial credibility than others. For example, no matter how much explanatory power the hypothesis of theism has, many philosophers do not consider the existence of God a legitimate explanation for striking empirical regularities. By contrast, many philosophers and mathematicians are perfectly happy to subscribe to some form of mathematical realism, i.e. the view that mathematical statements are about mind-independent mathematical truths and have objective, determinate truth-values, even if this evidently implies some kind of abstract, 'non-natural' mathematical ontology. This seems like a double-standard. After all, given that both the existence of God and the existence of numbers defy empirical confirmation, both hypotheses ought to be considered prima facie equally legitimate. 1 See Bartha (2010) for a thorough investigation of the formal characteristics of analogical reasoning. For an excellent discussion of the role of analogical reasoning in major breakthroughs in 18th and 19th century physics, see Steiner (1989, 1998). 2 What counts as natural is not always clear and may change over time. For example, when Newton formulated his theory of gravity, which explains both terrestrial and celestial motions in terms of an unobservable force acting at a distance across empty space, Leibniz condemned this as positing an 'occult quality,' an illegitimate meta-natural explanation. From today's point of view, calling gravity a meta-natural posit seems absurd. However, now there are a number of new scientific hypotheses (say, the multiverse) whose status as natural or meta-natural hypothesis is unclear. Thus, the boundaries between the natural and the meta-natural may be fluid, yet as will become clear below, this paper is concerned with clear-cut cases of meta-natural posits only.
In fact, however, there is a strong intuition that some meta-natural explanations are prima facie more warranted than others. The way this intuition is usually substantiated is by reference to the particular features of the entities in question and the problems these features raise in a particular context. For example, most metaphysicians reject Lewisian modal realism, according to which there exist infinitely many concrete yet causally disconnected possible worlds. The reason they reject this view is of course not that the view failed empirical testing. Rather, besides its counterintuitiveness and its vastly inflated ontology, modal realism raises a number of issues that are problematic within the greater context of the philosophy of modality, for example Kaplan's paradox 3 or the problem of island universes. 4 The goal of this paper is to sharpen the intuition that some meta-natural hypotheses are more warranted than others into a clear criterion for the (in)admissibility of meta-natural explanations of empirical facts. I adopt the criterion that a meta-natural explanation is admissible just in case the explanation refers to an entity that, though not itself causally efficacious, guarantees the instantiation of a causally efficacious entity that is an actual cause of the regularity. I show that only the theism hypothesis, but not the multiverse and the chance hypotheses meets the basic criteria for a program explanation.
Meta-Natural Explanations of Empirical Regularities
Here are four examples of meta-natural hypotheses that have been offered in a variety of philosophical contexts in order to explain particular kinds of order, patterns, or correlations observable in the physical world. I have sorted the examples in descending order of intuitive 'acceptability:' Mathematical entities: Ancient Greek philosophers like Pythagoras, Plato, and Euclid transformed mathematics (hitherto nothing but a tool for the solution of practical problems) into an abstract science whose clarity, precision, and rigour soon became a standard for all other sciences. However, this abstract conception of mathematics, according to which mathematical properties supervene on mathematical rather than physical entities, also raises profound philosophical questions. For example, how can a purely abstract system be applicable to the empirical world (and be so incredibly successful at that)? As Wigner puts it: 'the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious' (Wigner 1960, p. 2). One way to explain this correlation between our mathematical and empirical facts is to endorse 'mathematical Platonism,' a philosophical view at the core of which stands the meta-natural hypothesis that there exist irreducibly abstract, mind-independent mathematical entities. Plato (1997) argues that the correlation between abstract entities ('Forms' or 'Ideas') and their physical counterparts is due to 'participation:' all particulars falling under a certain predicate 'participate' in the Form denoted by that predicate.
More recently, the view that Platonism best explains the applicability of mathematics to science has been discussed in the context of 'enhanced indispensability arguments' (EIAs), whose central claims are (a) that we should be ontologically committed to all parts of a scientific theory that make a genuine explanatory contribution to the explanation of a physical phenomenon, and (b) that there are cases where mathematical entities play precisely such a role. 5 As Baker (2011, p. 266) writes: The challenge to explain the explanatory effectiveness of mathematics in science is one that can only be adequately met by a realist view of mathematics such as platonism. This is especially true if we combine the indispensability issue with the explanation issue and ask why mathematics plays an indispensable explanatory role in science. Compare this with a corresponding question for theoretical concrete entities. Why are electrons explanatorily indispensable? The natural response here is: because electrons exist! To try [n.b.: as nominalists do] to explain a phenomenon by reference to acknowledged fictions leads to a situation of rational instability. Thus considering the question of why mathematical entities play an explanatory role in science may provide the best route to a compelling defence of mathematical platonism.
Moral entities and properties: Metaethicists, too, have offered meta-natural hypotheses to explain empirical regularities. For instance, robust moral realism, i.e. the view according to which there exist objective, mind-independent moral entities (e.g. reasons and values) and properties (e.g. rightness or wrongness), has been suggested as an explanation for the empirical fact that people tend to take moral matters seriously, or more specifically, that in the face of moral disagreement, they tend to reason and act in ways analogous to the ones we use to resolve disagreements about strictly empirical facts. 6 Possible worlds and propositions: As mentioned above, meta-natural explanations of empirical regularities also feature in metaphysics. The most notorious examples are possible worlds (and propositions, possible worlds' sidekicks: propositions are frequently defined as sets of possible worlds, and possible worlds are even more frequently defined in terms of sets of propositions). In order to 5 Cases of allegedly indispensable mathematical explanations in science include the Magicicada, which will be discussed below; the falling pattern of sticks thrown into the air (Lipton 2004); the crossing of bridges at Königsberg (Pincock 2007); the geometrical properties of Minkowski space-time as exemplified in the bending of light near massive bodies, and in the Lorentz-FitzGerald contraction of moving bodies in special relativity (Colyvan 2001); the location of the Kirkwood gaps (Colyvan 2010); the hexagonal shape of honeycomb cells (Lyon and Colyvan 2008; Lyon 2012); the spiral arrangement of sunflower seeds (Lyon 2012); and Plateau's laws for soap films (Lyon 2012; Pincock 2015). 6 See, for example, Enoch (2011), p. 23f.
explain the fact that our reasoning about what is and isn't possible is so extraordinarily useful across a wide range of different contexts, modal realists have proposed the hypothesis that the modal statements employed in such reasoning are true of concrete physical worlds that are spatiotemporally and causally disconnected from our actual world, but in all other respects just like our actual world. 7 God: The oldest (and least accepted by Western philosophers) meta-natural hypothesis is theism, which posits the existence of a cosmic creator, conventionally referred to as 'God.' Different versions of this hypothesis have been offered for thousands of years as an explanation for various kinds of regularities in the empirical world. Such 'arguments from design,' or 'teleological arguments,' begin with a premise that points out some type of order observable in the physical cosmos, argue that this type of order would not exist had it not been intentionally created, and end with a conclusion that proclaims the (likely) existence of an intelligent being or 'designer' who created the physical cosmos. 8 Despite the fact that the kinds of order or regularity that are relevant for arguments from design have been a topic of philosophical discussion for over two thousand years, 9 and despite the fact that many of the phenomena once thought to cry out for metanatural explanation can now be explained in purely scientific terms, 10 and despite the fact that no formulation of the design argument has yet been found that is accepted by all participants in the debate, 11 design arguments continue to persist, and even philosophers who would reject theistic conclusions of any sort admit that the intricate functional organisation of our cosmos is striking.
Generally speaking, though, (Western) philosophers are much more comfortable positing the existence of mathematical entities or truths than they are with positing the existence of God. Is there any domain-independent fact, i.e. a fact independent of the particular objections occurring in each of the individual debates, that can account for our different intuitions regarding meta-natural hypotheses?
In order to answer this question, I now turn to the current debate about mathematical realism and arguments from explanatory indispensability cashed out in terms of the concept of 'program explanation.' The concept of program explanation illuminates how mathematics can play a genuinely explanatory role in our best scientific theories. Drawing an analogy with the case of theism, I argue that positing the existence of God as a meta-natural explanation for cosmic fine-tuning is a prima facie admissible hypothesis. However, I also argue that much more work needs to be done in order to show that the God-hypothesis can play an indispensable programming role analogous to the mathematical case, and even if this work could be done, only a very thin, deflated notion of God could be shown to be warranted (perhaps even only a designer, who might as well be a demon). But this is an important result nevertheless, given that it shows us what fine-tuning arguments can potentially achieve (i.e. establish the existence of a designer) and what they can never achieve (establish the existence of a God with the classical divine attributes).
Mathematical Explanations of Empirical Facts
For some time now, debates about mathematical realism have focused on issues concerning how best to understand mathematical explanation. At the centre of these discussions is the question whether, at least in some specific cases, mathematics plays an explanatorily indispensable role in the scientific explanation of particular empirical phenomena. Mathematical realists argue that the scientific explanations of those phenomena contain mathematical elements whose role in the scientific explanation cannot be reduced to a mere representation of empirical regularities. Here are three examples: Honeycombs: Bees build their honeycombs out of hexagonal cells. This striking fact calls for explanation. Darwin argued that minimising the amount of energy and wax used for the construction of honeycombs generates an evolutionary advantage, such that the bees who are most efficient in the use of energy and wax will be selected (Darwin 1859). In 1999, Thomas Hales proved that 'a hexagonal grid is the most efficient way to divide a Euclidean plane into regions of equal area with least total perimeter' (Hales 2001, p. 4). Cicadas: Certain types of North American cicada, Magicicadas, have prime-numbered life cycles. They emerge from the ground every 13 or 17 years. This striking fact calls for explanation. Due to a number of ecological constraints, for example the need to minimise intersection with periodic predators with different cycle periods, having a prime-numbered life-cycle is advantageous for the Magicicada (Baker 2005, 2009; Goles et al. 2001, p. 33). The reason for this is the number-theoretic fact that prime numbers maximise their lowest common multiple relative to all lower numbers. 12 Sunflower seeds: Sunflowers arrange their seeds in a striking spiral pattern. The explanation for this is again a combination of evolutionary and mathematical facts. Fitting as many seeds as possible into the circular flowerhead constitutes an evolutionary advantage.
Sunflowers grow their seeds at the centre of the flowerhead, with new seeds pushing older ones outwards. Whenever a new seed develops, it does so at some angle of rotation from the older one. Given this way of growing seeds, the optimal rotation angle, i.e. the rotation angle at which the highest number of seeds can be fitted into the flowerhead, can be made mathematically precise: it is an irrational fraction of 360 degrees, roughly 137.5 degrees, which is the complement of 360ϕ mod 360 (where ϕ is the Golden Ratio; Lyon 2012, p. 4).
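Under the stated definition, the angle can be computed directly. The short check below also confirms the equivalent closed form 360/ϕ², which follows from the identity ϕ² = ϕ + 1.

```python
import math

# The Golden Ratio and the 'golden angle' cited above:
# the complement of (360 * phi) mod 360 degrees.
phi = (1 + math.sqrt(5)) / 2                    # ~1.618
golden_angle = 360.0 - (360.0 * phi) % 360.0    # ~137.5 degrees

# Equivalent closed form: since phi**2 = phi + 1, the same angle is 360/phi**2.
alt = 360.0 / phi**2
```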
What distinguishes those examples from other scientific theories featuring mathematics is that the mathematical part of the explanation plays an explanatory role of its own, i.e. it constitutes part of the explanation of the empirical phenomenon. Moreover, it plays this role indispensably, i.e. there is no way of explaining the empirical phenomenon without reference to the respective mathematical facts.
The fact that mathematics is explanatorily indispensable to some of our best scientific theories can then be argued to support mathematical realism: if we believe that we should be ontologically committed to all and only those parts of a scientific theory that are explanatorily indispensable, i.e. that contribute to the explanation of the physical phenomenon the scientific theory aims to explain, then the fact that mathematical entities play this role in some-perhaps many-scientific theories commits us to the belief in the existence of mathematical entities, or at least, renders the belief in the existence of those entities pro tanto admissible. 13 The question now is: Could it be possible for the meta-natural posit conventionally referred to as 'God' to play an analogously indispensable role in the explanation of empirical regularities, such that, once we recognise this role, we are committed to believe in the existence of this entity? In order to answer this question, we need to develop a more precise idea of what exactly makes the explanations in the above example mathematical, i.e. how exactly the mathematics featuring in the above examples explains the empirical facts in question.
Explanations Featuring Mathematics
The natural sciences are full of mathematics: it is an indispensable tool for the formulation of theories about the physical world. In many of those theories, the mathematics involved plays a merely representational role, for example by representing quantities. However, the mathematical elements involved in the examples above are themselves an integral part of the explanation: Hales' theorem explains why bees build their honeycombs out of hexagonal cells; the number-theoretic properties of prime numbers explain the length of the life-cycles of Magicicadas; and the irrational number equivalent to the Golden Ratio explains the elegant arrangement of sunflower seeds. How exactly is it possible for a purely mathematical, i.e. causally inert fact to explain a strictly empirical regularity? To answer this question, let's look at two kinds of explanation of empirical regularities that involve mathematics without it playing an explanatory role.
Empirical Instantiations of Mathematical Truths
It is a theorem of mathematics that the sum of the internal angles of any triangle in a Euclidean space is 180 degrees. This mathematical truth is instantiated in every physically existing triangle in a Euclidean space (setting aside unavoidable discrepancies between mathematical ideals and corresponding physical approximations). However, it does not explain anything about why some particular physical triangle, for example a set square, exists; such an explanation would involve some story about how a chain of physical events led to the creation of a physical triangle.
Mathematical Representations of Empirical Facts
Einstein's mass-energy equivalence E = mc² states that the energy E of any entity with mass is equivalent to its mass m multiplied by the speed of light c squared. Clearly, this equation is formulated with the help of mathematical language ('=', '²'). However, the variables and constants featuring in it represent physical quantities: an object's mass (m) and energy (E) as well as the speed with which light travels (c); the equation employs mathematical language for the sole purpose of representing physical relations in a convenient and economical way. Hence, even though it clearly features mathematics, the mathematical elements do not contribute to the explanation of the relations holding between the physical properties and objects.
Mathematical Explanations of Empirical Facts
So how exactly is it possible for a purely mathematical fact to explain an empirical regularity? What distinguishes the honeycomb, the Magicicada, and the sunflower examples from mere instantiations of mathematical truths in the physical world on the one hand, and from explanations using mathematical language for purely representational purposes on the other? A theory introduced by Jackson and Pettit (1990) in the context of causal explanations, and later applied to mathematical explanations by Lyon (2012) provides a plausible answer: Mathematical properties featuring in mathematical explanations of empirical phenomena, though not causally efficacious, are causally relevant to the empirical phenomena in question by programming for the instantiation of the phenomenon. This needs some unpacking.
Let's begin with Jackson and Pettit. The central idea behind their account of 'program explanations' is that causal explanations of physical phenomena need not involve reference to causally efficacious properties.
A causally efficacious property with regard to an effect [i.e. a physical phenomenon] is a property in virtue of whose instantiation, at least in part, the effect occurs; the instance of the property helps to produce the effect and does so because it is an instance of that property... A property F is not causally efficacious in the production of an effect e if these three conditions are fulfilled together.
(i) there is a distinct property G such that F is efficacious in the production of e only if G is efficacious in its production [co-instantiated];
(ii) the F-instance does not help to produce the G-instance in the sense in which the G-instance, if G is efficacious, helps to produce e; they are not sequential causal factors [non-sequential];
(iii) the F-instance does not combine with the G-instance, directly or via further effects, to help in the same sense to produce e (nor of course, vice versa): they are not coordinate causal factors [non-coordinate]. (Jackson and Pettit 1990, p. 108)
Causally efficacious properties thus contribute directly to the production of the physical phenomenon in question. For example, since an object of constant mass accelerates in proportion to the force applied to it, a causal explanation of the speed of a tennis ball at a given time t will refer to the force F acting on the ball when it was hit. The ball's mass and the force acting on it are causally efficacious properties because they contribute directly to the production of the physical phenomenon in question, i.e. the speed of a tennis ball at a given time t. Now, it is plausible to assume that causal explanations of physical phenomena must only invoke causally relevant (as opposed to causally irrelevant) properties. However, as Jackson and Pettit show, it is implausible to assume that causally relevant properties are necessarily causally efficacious properties. The structure of their argument is as follows:
1. Assume that all causal explanations invoke only causally relevant properties.
2. Assume that all causally relevant properties are causally efficacious.
3. Causal explanations thus invoke only causally efficacious properties.
4. Causally efficacious properties contribute directly to the production of X; they do not fulfil conditions (i), (ii), and (iii).
5. The properties invoked in explanations of physical phenomena in basic science, i.e. physics (e.g. 'having mass X' or 'having positive charge'), do not fulfil conditions (i), (ii), and (iii); hence, they are causally efficacious.
6. Explanations of physical phenomena in the special sciences, e.g. sociology, psychology, or biology, invoke properties like 'the property of a group that it is cohesive; of a mental state that it is the belief that p; of a biological trait that it maximizes inclusive fitness' (Jackson and Pettit 1990, p. 112).
7. The properties invoked in explanations of physical phenomena in the special sciences are not causally efficacious because they fulfil conditions (i), (ii), and (iii).
8. Conclusion: Explanations of physical phenomena in the special sciences are not causal explanations.
The conclusion that causal explanations are only to be found in basic science but not in the special sciences is, of course, absurd. The most plausible and straightforward way to avoid this conclusion is to resist premise (2), i.e. the assumption that causally relevant properties must be causally efficacious. In order to resist that assumption, what needs to be shown is that it is possible for a causally inert property to be causally relevant to the production of a physical phenomenon. Jackson and Pettit use the following example to demonstrate this possibility.
Question: What explains that a piece of uranium emits radiation over a certain period? Answer: The property of the uranium that some of its atoms were decaying.
Note that this answer involves existential quantification over some of the uranium's atoms, though not over such and such particular atoms. The property invoked in the explanation of the radioactivity of a piece of uranium is thus an abstract, higher-order property. However, higher-order properties fulfil conditions (i)-(iii): they are co-instantiated with other, causally efficacious properties, yet they relate to those in a non-sequential and non-coordinate manner. Higher-order properties are thus not causally efficacious. If the property of the uranium that some of its atoms were decaying was efficacious, it was so only because the lower-order property that such and such particular atoms were decaying was efficacious. However, the higher-order property of the uranium that some of its atoms were decaying is the one that features in the causal explanation of the radioactivity of uranium. Thus, the higher-order property, though causally inert, is clearly causally relevant. How is this possible? Here is Jackson and Pettit's answer: Although not efficacious itself, the [higher-order] property was such that its realization ensured that there was an efficacious property in the offing: the property, we may presume, involving such and such particular atoms. The realization of the higher-order property did not produce the radiation in the manner of the lower-order. But it meant that there would be a suitably efficacious property available, perhaps that involving such and such particular atoms, perhaps one involving others. And so the property was causally relevant to the radiation, under a perfectly ordinary sense of relevance, though it was not efficacious. It did not do any work in producing the radiation (it was perfectly inert), but it had the relevance of ensuring that there would be some property there to exercise the efficacy required. (Jackson and Pettit 1990, p. 114)
The higher-order property thus ensures, without being itself part of the productive process leading to the empirical phenomenon, that the crucial, physically productive property is realised and the empirical phenomenon occurs. As a description of the relationship between such a property and an effect, Jackson and Pettit choose the metaphorical term 'programming', which evokes the analogy with a computer program ensuring that certain events will occur, even though all of the actual physical work of producing them goes on at a lower, mechanical level. Their theory thus carves out an important distinction between two ways in which properties can play an explanatory role in empirical theories: 'It appears then that there are at least two distinct ways in which a property can be causally relevant: through being efficacious in the production of whatever is in question, or through programming for the presence of an efficacious property.' (Jackson and Pettit 1990, p. 115) This distinction enables us to explain what makes explanations in the special sciences causally relevant, even though they are not causally efficacious: higher-order properties, for example 'maximizing inclusive fitness,' program for particular lower-order properties, for example 'having strong teeth', which in turn contribute to the survival of individual species. In fact, it is plausible to assume that, perhaps with the exception of physics, most of the explanations scientists offer for empirical phenomena are 'program' rather than 'process' explanations.
As Lyon (2012) observes, the distinction between two kinds of causally relevant properties applies not only to physical, but also to mathematical explanations of empirical phenomena. Consider Putnam's classic peg-hole example (Putnam 1975, pp. 295ff). We imagine a wooden board with two holes, one circular with a diameter of one inch, the other square with a side-length of one inch. What explains the fact that a cubical peg measuring 15/16ths of an inch on each side will fit through the square hole but not the round hole? The answer to this question will most certainly invoke mathematical properties. For example, we might say that any peg with a side-length of 15/16ths of an inch is too large for any hole with a one-inch diameter.
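The arithmetic behind this claim is elementary (worked out here for illustration, not taken from the original): the side of the peg fits the square hole, while the diagonal of the peg's face exceeds the diameter of the round hole.

```latex
% Square peg of side 15/16 inch; square hole of side 1 inch;
% round hole of diameter 1 inch.
\[
\tfrac{15}{16} < 1
\quad\text{(peg passes the square hole)},
\qquad
\tfrac{15}{16}\sqrt{2} \approx 1.33 > 1
\quad\text{(peg cannot pass the round hole)}.
\]
```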
Strictly speaking, the properties that are efficacious in causing the peg to bump into the board rather than pass through the hole are the peg's micro-physical properties, such as its spatiotemporal coordinates, the forces acting on the bodies, their molecular structure, fundamental components, etc. However, there is a strong sense in which the peg's micro-physical properties provide only part of the explanation of the peg's failure to pass through the hole. A full explanation would also mention the peg's and the board's geometrical properties. Using Jackson's and Pettit's terminology, the micro-physical properties of peg and board are causally relevant to the peg's failure to pass through the board by being efficacious in the production of the 'bump;' the geometrical properties of peg and board are causally relevant because they program for the presence of the relevant micro-physical properties.
The honeycombs, the Magicicadas, and the sunflower seeds can also be analysed in this way: the explanations of the respective empirical regularities (hexagonal cell shapes, prime-numbered life-cycles, Golden Ratio rotation angles) involve a purely mathematical element as well as a causal element that works on the micro-physical level.
What distinguishes empirical instantiations of mathematical truths as well as mathematical representations of empirical facts from mathematical explanations is thus the way in which the mathematical element of an hypothesis contributes to the explanation of the empirical phenomenon in question, i.e. by playing a programming role. Through this role, it is possible for a purely mathematical fact to be causally relevant in the explanation of an empirical regularity. And if, at least sometimes, mathematics plays the programming role indispensably, it is admissible-perhaps even necessary-to draw ontological conclusions from this fact, i.e. to posit the existence of mathematical entities (Baker 2005;2009;Colyvan 2001).
In the final part of the paper, we will now turn to the case of theism and see how the programming account fares there.
Theistic Explanations of Empirical Facts
The question we raised earlier was: Could it be possible for the meta-natural posit conventionally referred to as 'God' to play an analogously indispensable programming role in the explanation of empirical regularities, such that, once we recognise this role, it is prima facie legitimate for us to believe in the existence of such an entity?
Theistic explanations of empirical regularities as they feature in arguments from design can be plausibly understood as attempts to establish precisely that. One of the earliest arguments from design can be found in Cicero's The Nature of the Gods: If the first sight of the universe happened to throw [philosophers] into confusion, once they observed its measured, steady movements, and noted that all its parts were governed by established order and unchangeable regularity, they ought to have realised that in this divine dwelling in the heavens was one who was not merely a resident but also a ruler, controller, and so to say the architect of this great structural project. (Cicero 1998, II.90, p. 79; see also Jantzen 2014, p. 37) Since Cicero's times, countless variations of the argument from design have been suggested, but their main structure is always roughly like this:
1. We perceive regularities (of some striking kind) in the physical world.
2. There is no plausible way of explaining these independent of deliberate intent.
3. Deliberate intent implies a designer.
4. The designer is God.
Arguments with this structure can be attacked in different ways, depending on which premise is considered implausible. However, premise 2 is the one that has come under attack most frequently, viz. whenever scientists developed purely natural explanations for phenomena that, at one point, seemed to call for supernatural explanations. And this makes sense, of course: The more empirically tractable allegedly design-like properties are, the less we accept them as actual marks of design and purpose. Despite the many successes of empirical science, however, attempts to account for striking regularities in theistic terms never completely vanished from the philosophical landscape. In fact, some versions of the design argument, most notably the argument from cosmic fine-tuning, have drawn a lot of attention lately and have developed the argument in great detail. 14 I will now briefly outline the main structure of fine-tuning arguments. I will then apply the concept of 'program' explanation in order to investigate whether the designer-hypothesis plays an indispensable programming-role in those arguments, such that recognition of this role makes it prima facie legitimate to posit the existence of God.
Fine-tuning arguments cash out premise 1, i.e. the observation that there are striking regularities in the empirical world, in terms of (a) the fine-tuning of the physical cosmos, more precisely, the fact that the universe appears to be functionally organised in a way that makes life possible, and (b) the improbability of the physical constants falling exactly into the range required for the development of life.
For example, if the cosmological constant Λ, the parameter representing the expansion rate of the universe, were only slightly smaller than it is, then the universe would have collapsed back onto itself shortly after the Big Bang. If Λ were only slightly greater than it is, stars could not have developed. And since stars are the only known sources in the universe capable of producing large quantities of the elements on which all living organisms crucially depend (oxygen, carbon, hydrogen, etc.), life without stars would arguably not be possible. Taking into consideration all of the fine-tuning examples relevant to the formation of stars, the chance of stars existing in the universe has been estimated by theoretical physicists to be 1 in 10^229. 15 There are various other examples of physical constants, such as gravitation or strong interaction, being just as life needs them to be. 16 However, the point on which all fine-tuning examples converge is the strikingly low probability of the universe being exactly as life needs it to be.
Naturally, fine-tuning arguments have inspired a number of counter-arguments. Some have appealed to an 'anthropic principle' to argue that the existence of life in our universe is not at all surprising (if our universe was not life-permitting, there would be nobody to wonder about the fact that the universe is life-permitting); others have argued that mathematical probability distributions are undefined over an infinitely large space of possible outcomes (i.e. possible universes). 17 It has also been argued that there is no reason to believe that science will not find a natural explanation for fine-tuning, just as it managed to find natural explanations for other striking regularities in the past. 18 Finally, some have suggested that the universe is in fact a multiverse, consisting of vastly many or even infinitely many universes, which would increase the probability of there being one life-permitting universe significantly. 19 I will not enter these specific debates. Rather, I am interested in the question whether or not there is an in principle reason to be sceptical about the legitimacy of theistic explanations of empirical facts. To answer this question, I will now investigate whether God can be argued to play an indispensable programming role in the explanation of cosmic fine-tuning that is analogous to the role of mathematics in the explanation of other empirical regularities. If a case can be made that God does play such a role, then it seems at least prima facie legitimate to posit the existence of God as an explanation of cosmic fine-tuning. However, if such a case cannot be made, then this could be argued to constitute an in principle reason to be sceptical about theistic explanations of empirical regularities.
15 See Smolin (1999, p. 45): 'In my opinion, a probability this tiny is not something we can let go unexplained. Luck will certainly not do here; we need some rational explanation of how something this unlikely turned out to be the case.'
Importantly, though, Smolin does not think that the required rational explanation will feature an intelligent creator (Huberman 2006, p. 282).
16 Cf. Collins (2003).
17 e.g. McGrew et al. (2001).
18 e.g. Harnik et al. (2006).
19 e.g. Kraay (2014).
Does God 'program' Cosmic Fine-Tuning?
Recall the examples of mathematical explanations introduced above. What distinguishes the honeycombs, the Magicicadas, and the sunflower seeds is the genuinely explanatory contribution of the mathematics featuring in the explanation. For simplicity, let's stick to one of the examples, say, the honeycombs.
On the face of it, there are two elements to the explanation of the phenomenon of hexagonal honeycombs. The first element is purely causal: every cell in a honeycomb exists because one or several bees extracted wax from their abdomens, manipulated it with their antennae, mandibles, or legs, and finally built the cell from it. In other words, every honeycomb exists due to a causal chain of physical events leading up from a group of bees manipulating wax to the finished honeycomb. However, the purely causal explanation leaves a crucial question open, namely, the question why the cells of the honeycomb have their striking hexagonal shape. In order to answer this question, the explanation needs to be supplemented.
The second element of the explanation, then, is purely mathematical: the most efficient way to divide a plane into regions of equal area with least total perimeter is by dividing it into regular hexagons. The geometrical properties of hexagons thus add the information needed (in addition to Darwinian explanations concerning the survival of the fittest etc.) to explain the striking shape of honeycomb cells, which is missing in the purely causal explanation. 20

Let's now look at the case of fine-tuning arguments for theism. The phenomenon calling for explanation is the precise attunement of the physical constants, such that life becomes possible. For every 'ordinary' physical event (a hurricane, a supernova, an atomic fission) there arguably exists a purely causal explanation involving a chain of prior physical events leading up to it. The theory of the evolution of species by random mutation and natural selection can even account for much of the intricate functional organisation of organisms on an individual as well as on a collective level. However, the physical phenomenon of, say, the cosmological constant Λ having the value it has cannot be explained by a purely causal story; it is a brute fact. Again, the purely causal explanation leaves a crucial question open, namely, why Λ has precisely the value necessary for the existence of life. In this case, the gap in the explanation cannot be filled mathematically; there is no theorem that can account for the value of Λ. Nevertheless, without an explanation of the value of Λ, our explanation of the apparent fine-tuning of the physical cosmos is incomplete.
Recall that in the case of the honeycombs, a mathematical theorem is put forth as a meta-natural explanation in order to complement the purely causal explanation; the argument from the explanatory indispensability of mathematics to our best scientific theories is then argued to support mathematical realism.
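To make the mathematical element of the honeycomb case concrete, here is the comparison behind the Honeycomb Theorem for the three regular tilings of the plane (the numerical values are computed for this illustration, not taken from the original):

```latex
% Perimeter of a single cell of unit area, for the three regular tilings
% of the plane (equilateral triangles, squares, regular hexagons):
\[
P_{\mathrm{tri}} = \frac{6}{3^{1/4}} \approx 4.56, \qquad
P_{\mathrm{sq}} = 4, \qquad
P_{\mathrm{hex}} = 2\sqrt{2\sqrt{3}} \approx 3.72 .
\]
% Among regular tilings, the hexagon thus encloses unit area with the least
% perimeter; Hales' Honeycomb Theorem (1999) extends this optimality to all
% partitions of the plane into regions of equal area.
```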
At this point, two questions arise:
1. Is there a meta-natural hypothesis that can explain the value of Λ and thus complete our explanation of the apparent fine-tuning of the physical cosmos?
2. Could such an hypothesis be argued to be explanatorily indispensable to our best scientific theories of the apparent fine-tuning of the physical cosmos?
The answer to question 1 is, of course, yes. There are numerous possible meta-natural hypotheses that could be offered as an explanation of the value of Λ (angels, demons, aliens), but the three most serious competitors are that the value of Λ was intended by a designer capable of bringing Λ about, that chance brought it about against the odds, or that our universe is only one of infinitely many spatiotemporally disconnected universes in which Λ happens to have a value conducive to life.
Let's now turn to question 2: Could one of the three hypotheses be argued to be explanatorily indispensable to our best theories of fine-tuning? Recall the distinction between process and program explanations introduced above. Process explanations operate on the purely causal level by giving exact accounts of the empirical causes that led up to a particular physical event. A program explanation, on the other hand, is an explanation that refers to properties or entities that are themselves not causally efficacious, but that ensure the instantiation of a causally efficacious property or entity that is an actual cause of the explanandum. Which one of the three hypotheses, the designer-hypothesis, the chance-hypothesis, and the multiverse hypothesis, could be argued to play a programming role in the required sense?
Chance could have brought about any possible value for Λ. In particular, it could have brought about a whole range of values for Λ that would have made life impossible. Thus, the chance-hypothesis does not guarantee the value of Λ and cannot be argued to play a programming role analogous to the mathematical case.
The multiverse hypothesis, on the other hand, does guarantee the existence of at least one universe in which the value of Λ is identical with the one instantiated in our actual world. This is because the multiverse hypothesis guarantees the existence of all possible universes. However, the multiverse hypothesis is not an explanation that answers our initial question (Why is our universe life-permitting?), but one that answers a slightly different question (Why is there any universe that is life-permitting?). Moreover, in order to answer the question about the unexplained fact concerning Λ (Why does Λ have the value it needs to have for life to be possible?), the multiverse hypothesis posits yet another unexplained fact, i.e. the existence of infinitely many universes. Finally, the multiverse hypothesis does not explain the value of Λ in terms of properties or entities that are themselves not causally efficacious. Rather, it explains the value of Λ by positing more causally efficacious 'stuff', i.e. infinitely many physical universes. Hence, the multiverse hypothesis also fails to meet the criteria for a program explanation.
So we are left with the designer hypothesis, and it seems evident that this hypothesis does indeed play a programming role. A designer, let's call her 'God', who intended Λ to have the value it has and who is capable of bringing the value of Λ about would ensure the instantiation of the precise value Λ has. God, understood as an entity that is itself not causally efficacious, 21 would thus guarantee the instantiation of the causally efficacious property in question, the value of Λ, which, in turn, is an actual cause of the explanandum, the existence of life in our universe.
Conclusion
I have argued that meta-natural explanations of empirical facts are admissible just in case the explanation refers to an entity that, though not itself causally efficacious, guarantees the instantiation of a causally efficacious entity that is an actual cause of the regularity. I have then argued that only the theism hypothesis, but not the multiverse and the chance hypotheses, meets this basic admissibility criterion. I would now like to conclude this paper with a comment on why I believe that theism still fares somewhat worse as an explanatory hypothesis than the analogous case of mathematical realism I considered.
The reason is that the 'God-hypothesis' is extremely vague -too vague, perhaps, to play the programming role in a satisfactory way. In particular, it is far from being as precise as a mathematical theorem. Consider the mathematical case. What cries out for explanation in the honeycomb scenario is the peculiar shape of the honeycomb cells. Once we have identified their mathematical properties, we can (as it were) 'read off' the Honeycomb Theorem from the shape of the cells. Exactly the same holds in the cases of the Magicicadas and the sunflower seeds, although it is of course different mathematical facts doing the explanatory work there. In all three cases, however, ontological commitment to the mathematical properties and entities at work in a specific physical scenario can be used, with some additional argument, in order to ground a universal ontological commitment to mathematical entities.
It is not at all clear that the same holds of the theistic analogue. Consider the case of fine-tuning. What cries out for explanation in the example described above is the fact that the value of Λ is within the range it has to be for the formation of stars to be possible, which, in turn, is a precondition for the existence of living organisms.
However, unlike in the honeycomb setting, it is not the case that there is any particular physical scenario from which we can 'read off' a theistic proposition that programs the value of Λ in that particular scenario and that can be used to ground a 'universal' ontological commitment to God. Rather, we take a brute fact and offer an explanatory hypothesis about why that fact obtains. Yet there is no connection between the brute fact and the explanatory hypothesis that allows us to 'read off' any particular theistic proposition or 'theorem'. In other words, it is not clear which theistic proposition(s) exactly are doing the explanatory work.
Irem Kurtsal, Sam Lebens, Alan Love, Arash Naraghi, Sajjad Rizvi, Emil Salim, Aaron Segal, Josef Stern, Karl Svozil, Shira Weiss, and Karen Zwier for many helpful comments and discussions. Work on this project has been supported by the John Templeton Foundation and the European Commission, H2020-MSCA-IF-2018, grant agreement number 846522.
Funding Open Access funding enabled and organized by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Synthesis and characterization of amine-functionalized sugarcane bagasse fiber magnetic nanoparticle biocomposites
Sugarcane bagasse is one of the by-products of the sugar industry and contains 60% cellulose. Cellulose can be used as a matrix for biocomposites. The purpose of this research was to produce amine-functionalized sugarcane bagasse fiber magnetic nanoparticle biocomposites (SBB). The SBB were produced from sugarcane bagasse (SB) by a solvothermal reaction. The SB was dried and blended to a small size (±60 mesh), then lignin was removed with 1% NaOH (w/v) through delignification. The biocomposites were made by adding delignified SB (SB-D) into a solution of ethylene glycol, FeCl3·6H2O, and hexamethylenediamine (HMDA), which was then heated for 6 h at 200 °C. HMDA, as the amine source, was applied at different volumes (5, 7, and 9 mL). The surface of the biocomposites was covered by magnetic nanoparticles along the SB-D, which contained about 17.78 mmol/g of amine. The Fe content of SBB was 98.34%, with specific peaks for magnetite at 36°, 43°, and 57° as measured by X-ray diffraction (XRD). Fourier transform infrared (FT-IR) spectroscopy identified the N–H bending vibration on SBB at 1640 cm−1. The iron content and the amine groups on the surface may provide a high adsorption capacity for a wide range of biological pollutants.
Researchers have been interested in plant cellulose fibers because they are sustainable, natural, and environmentally friendly. Moreover, compared to conventional synthetic fibers, cellulose fibers offer other advantages: they are easy to process, low in cost and energy consumption, lightweight, have excellent specific strength, are harmless to the environment, and can be renewed and recycled [3]. Cellulose in sugarcane bagasse is coated by lignin, which makes for a strong structure. For further use as an adsorbent, lignin can hinder the ability of cellulose to bind metal ions. Delignification is a process for lignin removal. The delignification treatment used here is a chemical treatment with NaOH solution, which can damage the structure of lignin and the crystalline and amorphous parts, and cause swelling of the cellulose [4]. Meanwhile, a biocomposite is a composite material consisting of natural polymers or biofibers (natural fibers) as a reinforcement that can be degraded.
Numerous investigations on the development of sugarcane bagasse have been reported, including the production of microcrystalline cellulose and nanocrystals [5], ethanol [2], biofuel [6], and resin [7]. Especially in relation to biocomposite materials, many researchers have used sugarcane bagasse as a matrix to form cardanol-formaldehyde composites [8], polyester matrices [9], and polyethylene matrices [10]. The combination of sugarcane bagasse with magnetic nanoparticles has the potential to be developed into biocomposites.
The solvothermal method, with a one-step surface modification by an amine group, has been used to synthesize cellulose-based biocomposites with magnetic nanoparticles [11,12]. However, there has been no research developing modified magnetic nanoparticles based on sugarcane bagasse fibers by a one-step process; thus, further process optimization is necessary to improve the surface functionalization and stability of the biocomposites. This research focuses on sugarcane bagasse fibers functionalized by an amine group on magnetic nanoparticle biocomposites. The present work investigates sugarcane bagasse magnetic nanoparticle biocomposites, optimized for amine content and made through a one-step solvothermal method. The surface morphology, iron content, crystalline structure, and functional groups of the biocomposites are also investigated.
Delignification of sugarcane bagasse
Sugarcane bagasse (SB) was washed with distilled water, dried for 3 hours at 80 °C in an oven, and milled into an SB powder that could pass a 60-mesh sieve. The delignification was carried out by adding NaOH solution (1% w/v) and SB powder (45% w/v) into a flask and keeping the mixture for 2 hours at 80 °C until the lignin was removed. The flask and its contents were cooled to room temperature for 2 hours, then the samples were filtered with filter paper. Distilled water was used to wash the samples until the pH of the filtrate became neutral. Finally, the delignified SB fibers (SB-D) were dried in an oven for 6 hours at 80 °C.
Synthesis of sugarcane bagasse fiber with magnetic nanoparticle biocomposites
The solvothermal reaction is one method for synthesizing SB biocomposites with magnetic nanoparticles. First, 1.6 g of anhydrous sodium acetate and 0.8 g of iron trichloride hexahydrate were dissolved, together with the sugarcane bagasse fibers (SBF, 0.5 g), in 24 mL of ethylene glycol under vigorous stirring at 50 °C. The amine-functionalized surface of the magnetic nanoparticles was synthesized by adapting the procedure of Wang et al. [13]. Hexamethylenediamine (HMDA) was added in volumes of 5, 7, or 9 mL; upon its addition, the solution turned dark orange. Next, the solvothermal reaction was conducted for 6 hours at 200 °C and the product was cooled to room temperature. The SB biocomposites (SBB) were collected from the solution with an external magnet. Afterwards, the SBB were rinsed three times each with deionized water and then ethanol, and were kept in deionized water for subsequent use. This synthesis produced three types of biocomposites, SBB-5, SBB-7, and SBB-9, corresponding to the addition of 5, 7, and 9 mL of HMDA, respectively.
Characterization
The investigation of the structural morphology of SB, SBF, SBB-5, SBB-7, and SBB-9 was conducted using field-emission scanning electron microscopy (FE-SEM, JEOL JSM-6500 LV). XRF measurement was performed on an energy-dispersive X-ray fluorescence spectrometer with the operating voltage and current kept at 20 kV and 77 µA. The surface functional groups on SB, SBF, SBB-5, SBB-7, and SBB-9 were identified by Fourier transform infrared spectrometry (Bio-Rad, Digilab FTS-3500). X-ray diffraction was performed on a Philips X'Pert diffractometer using copper K-alpha (CuKα) radiation. The crystallinity was calculated by equation 1, CrI (%) = ((I002 − Iam)/I002) × 100, where CrI is the crystallinity index (%), I002 is the intensity of the crystalline part, and Iam is the intensity of the amorphous part.
Analysis
The retro-titration method was used to determine the amine content of the samples [14]. 50 mg of sample was dropped into 25 mL of 0.01 M HCl. The mixture was shaken for 2 hours at room temperature. After centrifugation, the supernatant (5 mL) was titrated with 0.01 N NaOH. The amine concentration was calculated by equation 2.
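Equation 2 itself is not reproduced in this excerpt, so the following sketch assumes the usual back-titration logic: the HCl consumed by the sample equals the total HCl added minus the residual HCl inferred from titrating the 5 mL aliquot, scaled back to the full 25 mL, per gram of sample. The NaOH titration volume used in the example is illustrative, not a measured value from the paper.

```python
# Hypothetical back-titration calculation for amine content (mmol/g).
# Assumed logic: amine = (total HCl - residual HCl) / sample mass,
# where residual HCl is measured on a 5 mL aliquot of the 25 mL supernatant.

def amine_content(v_hcl_ml=25.0, c_hcl=0.01,       # HCl added to the sample
                  v_naoh_ml=0.0, c_naoh=0.01,      # NaOH used in the titration
                  v_aliquot_ml=5.0, mass_g=0.050):
    """Return amine content in mmol per gram of sample."""
    n_hcl = v_hcl_ml * c_hcl                                   # total mmol HCl
    n_residual = v_naoh_ml * c_naoh * (v_hcl_ml / v_aliquot_ml)  # mmol HCl left
    return (n_hcl - n_residual) / mass_g

# Example with an illustrative titration volume of 3.22 mL of 0.01 N NaOH:
print(round(amine_content(v_naoh_ml=3.22), 2))
```

With these illustrative inputs the function returns 1.78 mmol/g; the real inputs would come from the titrations described above.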
Results and discussions
The lignin removal from the sugarcane bagasse was confirmed by the morphological structure and color. The original color of the sugarcane fibers was cream and changed to darker (closer to grey) after the delignification process (figure 1). Based on the FE-SEM observation, SB looked slightly damaged due to the grinding process (figure 1a), but the lignin was still bound to cellulose and hemicellulose, which were arranged in one direction so that the sugarcane fibers looked flat and intact. The contact area and porosity of the material could be increased by grinding. Meanwhile, the delignification process caused damage to SB-D from the surface to the inside. Consequently, the lignocellulose structure bonds began to open and the structure became irregular. Rodrigues et al. [7] observed that unmodified sugarcane bagasse fibers had a large amount of extractives. After the delignification, the extractives on the surface of the fibers were removed. Delignification with NaOH induced the decomposition of the hemicellulose, lignin, and silica contained in the SB-D (figure 1b). The delignification process also reduced the fiber size: the average diameter of the bagasse fiber (25 µm) was lower than that of the raw bagasse. The fiber had a smooth surface after the removal of lignin, hemicellulose, and pectin [15]. XRF analysis showed a 31.5% decrease of silica in SB-D after treatment. SB contained 31.5% Fe, which then significantly increased up to 98.7% due to the formation of magnetic nanoparticles. The Fe content may benefit the material as an adsorbent by enhancing its sorption capacity and reactivity toward a wide range of biological pollutants [16].
The morphological structure of the sugarcane bagasse fiber magnetic nanoparticle biocomposites with different concentrations of HMDA is shown in figure 2. The addition of 5, 7, and 9 mL of HMDA produced biocomposites with the same Fe content of around 98.70%. The magnetic nanoparticles were formed on the fiber surface. Hexamethylenediamine played a key role in diminishing the magnetic particle size during growth in the solvothermal reaction. The amine contents in the biocomposites for the addition of 5, 7, and 9 mL of HMDA were 3.21, 17.78, and 3.83 mmol/g, respectively. These different amine contents could be related to the morphological structure of the biocomposites. The magnetic nanoparticles formed with the addition of 5 and 9 mL of HMDA tended to aggregate and were not distributed over the fiber surface (figures 2a and 2c). In comparison, with 7 mL of HMDA the magnetic nanoparticles were more clearly distributed over the fiber surface (figure 2b). This is why the amine contents on the magnetic surfaces differed. On the other hand, the different concentrations of HMDA in this research did not have a big impact on the particle size. The results obtained were in line with previous research on magnetite formation by the solvothermal method [13,17,18].
XRD analysis was carried out to determine the cellulose crystal structure contained in the samples and the crystallinity index (CrI) of the sugarcane bagasse fibers before and after the delignification. Cellulose crystals could be identified by the dominant peaks at angles 2θ between 20° and 40°. Cellulose fiber is composed of several million microfibrils. These microfibrils are divided into two different parts: the amorphous part, formed from flexible masses of cellulose chains, and the crystalline part, made of cellulose chains with strong bonds in a rigid linear arrangement. The crystalline part can be isolated to produce high-quality microcrystalline cellulose. Crystallinity is a parameter that determines the strength of the fibers; hence the product can be influenced by the crystal structure of cellulose. The sugarcane bagasse structure before and after the treatment still had components in amorphous form (hemicellulose and lignin) and crystalline form (cellulose). The characteristic peaks for SB, which contained cellulose fibers, were identified at 2θ (°) = 16.78 for the amorphous form and 21.69 for the crystalline form (figure 3). Table 1 shows the increase of the SB-D CrI value after the delignification from 37.919% to 51.537%, a relative increase in crystallinity of 35.91%. This finding was similar to the alkaline treatment of sugarcane bagasse, which gave a crystallinity index of about 63.15% [15]. It could also be seen from the peak intensity of SB-D, which was sharper compared to SB. The treatment of SB with NaOH could change the structure of amorphous cellulose to crystalline cellulose due to the loss of hemicellulose and lignin after the delignification.
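The relative increase quoted above follows directly from the Table 1 values; a minimal check of the arithmetic:

```python
# Crystallinity index values before and after delignification (Table 1),
# and the relative increase quoted in the text.

cri_sb, cri_sbd = 37.919, 51.537            # CrI (%) for SB and SB-D
relative_increase = (cri_sbd - cri_sb) / cri_sb * 100
print(f"{relative_increase:.2f}%")           # ~35.91%, as stated in the text
```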
XRD analysis was also used to identify the formation of magnetic nanoparticles in the biocomposites, as peaks clearly appeared at 36°, 43°, and 57°. These specific peaks were identified as magnetite (Fe3O4), in agreement with the crystalline magnetite standard pattern (JCPDS card 39-0664). Figure 3 shows the different crystalline structures in the biocomposites compared to SB and SBF because of the presence of magnetic nanoparticles on the SB-D surface. The peaks for Fe3O4 particles were not found in either SB or SBF. For the different additions of 5, 7, and 9 mL of HMDA, all biocomposites had the same peak positions due to the formation of Fe3O4. In addition, SBB-7 had the highest intensity at 36°, probably because of the distribution of magnetic nanoparticles on the surface of the SBF. This was also confirmed by the FE-SEM images for all biocomposites.
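As a rough cross-check added here (the wavelength is the standard CuKα1 value, assumed rather than stated in the paper, and the peak positions are the rounded values quoted above), Bragg's law converts the 2θ peak positions to lattice d-spacings:

```python
import math

# Convert the reported 2-theta peaks (36, 43, 57 degrees; CuK-alpha source)
# to lattice d-spacings via Bragg's law: lambda = 2 d sin(theta).
# lambda = 1.5406 angstrom is the standard CuK-alpha1 value (assumed here).

WAVELENGTH = 1.5406  # angstrom

def d_spacing(two_theta_deg):
    theta = math.radians(two_theta_deg / 2)
    return WAVELENGTH / (2 * math.sin(theta))

for peak in (36, 43, 57):
    print(f"2theta = {peak} deg -> d = {d_spacing(peak):.3f} angstrom")
```

The resulting spacings (roughly 2.49, 2.10, and 1.61 Å) are close to the magnetite reflections tabulated in the JCPDS reference pattern cited above, consistent with the rounded peak positions.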
FT-IR spectrum of SB, SB-D, SBB-5, SBB-7, and SBB-9 are shown in figure 4. This analysis was used to detect the functional groups contained in the samples. C-H stretching vibrations as the bonds in SB and SB-D were detected at peak 2900 cm -1 . The peak at 1640 cm -1 for N-H bending vibration detected the modification of the amine group on the biocomposites. Based on the calculation of amine | 2,744 | 2021-01-01T00:00:00.000 | [
Loops and polarization in strong-field QED
In a previous paper we showed how higher-order strong-field-QED processes in long laser pulses can be approximated by multiplying sequences of "strong-field Mueller matrices". We obtained expressions that are valid for arbitrary field shape and polarization. In this paper we derive practical approximations of these Mueller matrices in the locally-constant- and the locally-monochromatic-field regimes. We allow for arbitrary laser polarization as well as arbitrarily polarized initial and final particles. The spin and polarization can also change due to loop contributions (the mass operator for electrons and the polarization operator for photons). We derive Mueller matrices for these as well.
Moreover, even if one does not measure the spin/polarization of the initial and final particles, one still has to sum over the spin/polarization of the intermediate particles in order to obtain the full approximation of the probabilities for higher-order processes. (We use units with m_e = 1 and absorb e into the field, eE → E.) For trident and double Compton scattering in a constant field and for the probability summed/averaged over the spin/polarization of initial and final particles, it was
shown in [4,5,9,17,22] how to perform the spin sums for intermediate particles. For example, the LCF version of the two-step part of trident is obtained by summing the incoherent product of nonlinear Compton scattering and Breit-Wheeler pair production over two orthogonal polarization vectors of the intermediate photon, rather than summing/averaging before multiplying. Note that on the probability level one cannot simply sum over an arbitrary spin/polarization basis, but at least in LCF there is a basis which does give the correct result. In [49] we showed that for a 0 ∼ 1 and fields that do not have linear polarization, one in general does not have such simple sums. It is of course always true that one can sum over any basis on the amplitude level, but on the probability level this gives in general a double sum, where the spin from the amplitude does not have to be the same as the spin from its complex conjugate. In LCF (summed over all the external spins/polarizations) there is a basis where the off-diagonal terms vanish. That is also the case for a 0 ∼ 1 if the field has linear polarization. In the general case, where there is no simple basis for which the off-diagonal terms vanish, we have found a way to treat these double spin sums by expressing spin/polarization in terms of Stokes vectors and spin transitions in terms of strong-field-QED Mueller matrices [49]. Thus, in [23,49] we showed how to obtain approximations of general higher-order tree processes using the O(α) Mueller matrices as building blocks. This generalizes the LCF approximation to fields with intermediate intensities a 0 ∼ 1, arbitrary field polarization and field shape, and for arbitrarily polarized initial and final particles.
In addition to LCF, another case for which one can expect to find simple results is for a circularly polarized field with long pulse length, where one can use a locally monochromatic field (LMF) approximation [50][51][52]. Since our gluing approximation is valid for long pulses, it is therefore natural to derive LMF approximations of all the Mueller matrices.
In addition to the tree processes, nonlinear Compton and Breit-Wheeler, loop diagrams can also contribute to the changes in spin and polarization [43,53,54]. Here we will derive Mueller matrices for these loop contributions and study their role in the gluing/incoherent-product approach.
arXiv:2012.12701v1 [hep-ph] 23 Dec 2020
So, the aims of this paper are:
• Derive LCF and LMF approximations for all components of the Mueller matrices of all O(α) processes.
• Derive the full Mueller matrices for the loop contributions to e − → e − and γ → γ (at O(α)). These include both diagonal and off-diagonal terms, related to e.g. spin flip and spin rotation, respectively.
• Show that, despite the vanishing contribution to spin flip at O(α), the O(α) Mueller matrices for the loops contain all the necessary information to approximate higher orders. We show in particular how to recover the exact spin-flip probability at O(α 2 ) from the product of two Mueller matrices, and the solution to the BMT equation and the Sokolov-Ternov effect from resummations of series of Mueller matrices.
This paper is organized as follows. In Sec. II we give definitions and summarize some results from [49]. In Sec. III we derive the LMF approximations of the Mueller matrices for nonlinear Compton scattering and nonlinear Breit-Wheeler pair production for a circularly polarized laser. In Sec. III A we show that this LMF approximation agrees well with the exact result for nonlinear trident. In Sec. IV we derive the LCF version of these Mueller matrices. In Sec. V we first present the general O(α) Mueller matrix for spin change due to the electron mass operator loop. In Sec. V B we consider a circularly polarized field in LMF. In Sec. V C we study the loop in LCF and combine it with the contribution from Compton scattering, in Sec. V D we consider the low-χ limit and recover literature results for the Sokolov-Ternov effect, and in Sec. V E we discuss what happens at larger χ. In Sec. V F we consider the low-energy limit and compare with the solution to the BMT equation. In Sec. V G we consider electrons with negligible recoil, which allows us to neglect Compton scattering and resum the Mueller-matrix series. In Sec. VI we derive the general O(α) Mueller matrix for polarization change due to the polarization-operator loop. We conclude in Sec. VII. There are several appendices where we collect most of the derivations.
For electrons the Stokes vector is given by where Σ = i{γ 2 γ 3 , γ 3 γ 1 , γ 1 γ 2 }, and similarly for positrons. Another, equivalent definition of n is via where and (cf. [41]) The probability of nonlinear Compton scattering by an electron or a positron, or nonlinear Breit-Wheeler pair production, can now be expressed as (cf. [62][63][64][65][66][67]) P = P + n γ ·P γ + n 1 ·P 1 + n 0 ·P 0 + n γ ·P γ1 ·n 1 + n γ ·P γ0 ·n 0 + n 1 ·P 10 ·n 0 + P γ10,ijk n γi n 1j n 0k , where n γ is the Stokes vector for the photon, and n 1,0 are the Stokes vectors for the fermions. Spin up and down along some direction n r (e.g. {0, 1, 0}) corresponds to n = ±n r . With (7) we can also study e.g. rotation from n r to some orthogonal spin. Similar expressions in QED without a background field can be found in [57][58][59][60][61], and [62][63][64] derived such representations for nonlinear Compton scattering and nonlinear Breit-Wheeler pair production. Our main focus here is how to use the P and P's in (7) as building blocks for higher-order processes. The vectors and matrices P are given by double φ integrals which depend on the longitudinal momenta but not on the spins and polarizations.
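The decomposition above is just a multilinear form in the three Stokes vectors; a minimal numerical sketch (illustrative numpy tensors, not the paper's actual P's) shows how averaging a final-state spin over a basis n 1 = ±n r removes exactly the terms linear in n 1 :

```python
import numpy as np

def probability(P, Pg, P1, P0, Pg1, Pg0, P10, Pg10, ng, n1, n0):
    """Multilinear Stokes decomposition of the first-order probability:
    scalar + linear + bilinear + trilinear terms in (ng, n1, n0)."""
    return (P
            + ng @ Pg + n1 @ P1 + n0 @ P0
            + ng @ Pg1 @ n1 + ng @ Pg0 @ n0 + n1 @ P10 @ n0
            + np.einsum('ijk,i,j,k->', Pg10, ng, n1, n0))

rng = np.random.default_rng(0)
P = 0.5
Pg, P1, P0 = rng.normal(size=(3, 3)) * 0.01        # three illustrative 3-vectors
Pg1, Pg0, P10 = rng.normal(size=(3, 3, 3)) * 0.01  # three illustrative 3x3 matrices
Pg10 = rng.normal(size=(3, 3, 3)) * 0.01           # illustrative rank-3 tensor
ng, n0 = np.array([0., 1., 0.]), np.array([0., 0., 1.])

# Averaging over a final-spin basis n1 = ±n_r kills every term linear in n1:
nr = np.array([1., 0., 0.])
avg = 0.5 * (probability(P, Pg, P1, P0, Pg1, Pg0, P10, Pg10, ng,  nr, n0)
           + probability(P, Pg, P1, P0, Pg1, Pg0, P10, Pg10, ng, -nr, n0))
direct = P + ng @ Pg + n0 @ P0 + ng @ Pg0 @ n0
assert np.isclose(avg, direct)
```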
In [49] we presented two equivalent ways of how to glue together a sequence of first-order building blocks, each of the form (7), to construct the "N-step" part of higher-order processes. In the "averaging" approach, we write where n is the number of particles for which there is a sum rather than an average over spin/polarization (this includes all the intermediate particles), m is an integer that prevents double counting due to identical particles in the final state, and P i gives (7) for step i (i.e. emission of a photon or pair production). The bracket "operator" ⟨...⟩ is defined by (for each n separately) ⟨1⟩ = 1, ⟨n⟩ = 0 and ⟨nn⟩ = 1, where 1 is the unit matrix in 3D. The first two formulas are just what one would expect by averaging over any basis n = ±n r with arbitrary n r . The third formula is the nontrivial one, since clearly the basis average (1/2) Σ n=±nr nn = n r n r cannot be equal to 1 for any basis. The reason that one can nevertheless sum over a certain basis in the LCF case or for linear polarization is due to vanishing elements of the vectors and matrices that form products with the matrix n r n r , i.e. the nonzero elements in n r n r − 1 would multiply zeroes, and then it does not matter whether one uses 1 or n r n r . However, for the general case we need ⟨nn⟩ = 1.
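The nontrivial part of the bracket prescription can be checked directly: summing nn over a single basis ±n r gives a rank-1 projector, not the 3D unit matrix. A small numpy sketch (illustrative, with an arbitrary basis direction):

```python
import numpy as np

nr = np.array([0.0, 1.0, 0.0])                       # an arbitrary spin basis direction
basis_sum = np.outer(nr, nr) + np.outer(-nr, -nr)    # sum of nn over n = ±n_r

# Half the basis sum is the rank-1 projector n_r n_r, not the 3D unit matrix:
assert np.allclose(0.5 * basis_sum, np.outer(nr, nr))
assert not np.allclose(0.5 * basis_sum, np.eye(3))

# The linear terms do average out over the same basis, <n> = 0:
assert np.allclose(nr + (-nr), np.zeros(3))
```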
In the second approach we replace the ⟨...⟩ operator with Mueller matrices. For this we use 4D Stokes vectors N = {1, n}. The first-order probabilities can then be expressed as contractions of these Stokes vectors with M, where M is a 4 × 4 × 4 matrix and i, j, k = 1, ..., 4. The "N-step" can now be obtained by matrix multiplication. For example, if a photon is emitted at step m and decays at step n, then the sum over its polarization is included by contracting the corresponding Mueller-matrix indices of the two steps. A similar matrix approach exists for QED in the absence of a strong field [60]. One can also compare this with the use of Mueller matrices for the propagation of light in optics.
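A toy sketch of this bookkeeping (assumed unit-probability rotation steps, not the paper's actual Mueller matrices): with N = {1, n} and P = (1/2) N 1 · M · N 0 , intermediate spins are handled by matrix products, and the spin-flip probability emerges at second order in the rotation angle even though the flip vanishes at first order:

```python
import numpy as np

def mueller_rotation(theta):
    """Toy 4x4 Mueller matrix for N = {1, n}: a unit-probability step that
    rotates the Stokes vector by theta about e3 (illustrative only)."""
    c, s = np.cos(theta), np.sin(theta)
    M = np.eye(4)
    M[1:3, 1:3] = [[c, -s], [s, c]]
    return M

def step_prob(n1, M, n0):
    """P = (1/2) N1 . M . N0 with N = {1, n}."""
    N1, N0 = np.r_[1.0, n1], np.r_[1.0, n0]
    return 0.5 * N1 @ M @ N0

n0 = np.array([1.0, 0.0, 0.0])
M1, M2 = mueller_rotation(0.3), mueller_rotation(0.5)

# Gluing: the intermediate spin sum becomes a matrix product (one overall 1/2):
p_two_step = step_prob(n0, M2 @ M1, n0)
assert np.isclose(p_two_step, 0.5 * (1 + np.cos(0.8)))

# Flip probability n1 = -n0 is second order in the small rotation angle:
eps = 1e-3
p_flip = step_prob(-n0, mueller_rotation(eps), n0)
assert np.isclose(p_flip, eps**2 / 4, rtol=1e-4)
```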
In [49] we presented general results for the Mueller matrices of nonlinear Compton scattering and Breit-Wheeler pair production, which are valid for any polarization or field shape. The field could for example have a 0 ∼ 1 and elliptical polarization, or some sort of asymmetric structure. Of course, for this to give a good approximation one has to assume that the field is sufficiently long or intense. If the field is intense, i.e. if a 0 is sufficiently large, then it is useful to have a LCF approximation of these Mueller matrices, which we will derive in the following. If a 0 ∼ 1 one can find simple expressions for a circularly polarized field, which we now turn to.
III. THE LOCALLY-MONOCHROMATIC-FIELD APPROXIMATION
The expressions for P and M are given in [49]. In this section we will consider fields with a long pulse and circular polarization, which is a case where one can expect to find simpler results. So, we consider fields of the form where h(x) gives the pulse envelope; it could for example be a Gaussian pulse h(x) = e −x 2 , but we will keep it general. For large T one can obtain LMF approximations, as in [50][51][52] for first-order processes. Large T is experimentally relevant, and it also means that one can approximate higher-order processes with our gluing method even for a 0 ∼ 1 (in contrast to the standard LCF version of the N-step part). Since the building blocks in the gluing method are first order and since they all have similar structure, one can expect that parts of the calculation will be similar to [52]. However, here we calculate all terms that are needed for a general higher-order process.
In all terms we have two lightfront time integration variables, φ 1 and φ 2 . We change variables to σ = (φ 1 + φ 2 )/2 and θ = φ 2 − φ 1 and then to u = σ/T . The integrand can now be expanded to leading order in T . The exponent of each term is (before making any approximation) expressed solely in terms of the effective mass where In the LMF limit this becomes where a 0 (u) = a 0 h(u). The field enters the prefactor via where ∆ ij is given by We also use where σ are the Pauli matrices with a trivial third component added (recall e 3 · a(φ) = 0) In the LMF case we have and Note that the exponential part of the integrand has a u dependence given by (15), which is a smooth function and varies on the scale u ∼ 1. We see from (20) and (21) that some terms in the prefactor are proportional e.g. to sin(T u). For large T these terms oscillate rapidly and can be neglected. Consequently, several elements of the P vectors/matrices in (7) are negligible. But terms with e.g. X · V remain. We have in mind using the following first-order results as building blocks for higher-order processes. So, for example, the electron in the following photon-emission results could have emitted other photons before or itself been produced at an earlier step in the cascade. We use b 0 = kp to denote the longitudinal momentum of the original particle that entered the laser. All the other longitudinal momenta are expressed as ratios, s i = kp i /b 0 for fermions and q i = kl i /b 0 for photons.
For photon emission by an electron we find where s 0 and s 1 are the momentum ratios for the electron before and after emitting a photon with momentum ratio q 1 , where J i denote three integrals that can be expressed in terms of sums of Bessel functions as in (A9), (A10) and (A11), κ = (s 0 /s 1 ) + (s 1 /s 0 ), e 3 = {0, 0, 1} and ϵ 2 = {0, 1, 0} etc., 1 ∥ = e 3 e 3 and 1 ⊥ = 1 − 1 ∥ . e i , 1 ⊥ and 1 ∥ form dot products with the fermions' Stokes vectors, and ϵ i with the photon Stokes vector. Photon emission by a positron is described in general (i.e. not just in LMF) by the same expressions but with the replacements a → −a and n e → −n p . For pair production we have similar expressions, where κ = (s 2 /s 3 ) + (s 3 /s 2 ) and s 2 and s 3 are the longitudinal-momentum ratios of the electron and positron, respectively. The notation s 2,3 rather than e.g. s 0,1 is due to the comparison with trident, where s 0,1 would be used in the first, Compton step and s 2,3 for the second, pair-production step. But this is just notation and we are considering any sequence of photon emission and pair production, so at some later step we would have e.g. s n and s n+1 .
Note that if we sum over the spins of all the final-state fermions then effectively n → 0, and so any multiplication of fermion matrices (1 ⊥ , 1 ∥ and σ (3) i ) ends with a dot product with e 3 , coming e.g. from R 1 . Since σ (3) i · e 3 = 0 and 1 ∥ · e 3 = e 3 , the terms with 1 ⊥ and σ (3) i drop out and for the remaining terms the matrix multiplication becomes trivial. So, we see that in this case it is not necessary to have ⟨nn⟩ = 1 for intermediate fermions; it is enough to have ⟨nn⟩ = 1 ∥ . This is something that can be obtained with a single (rather than double) sum, corresponding to a basis with spin down and up along the laser propagation direction (−k̂ = −e 3 ).
For the photon part, note that the only terms that involve ϵ 1 and ϵ 3 are the ones that couple all three Stokes vectors, i.e. the terms in R C γ01 and R BW γ23 with σ (3) i , but, since we effectively have σ (3) i → 0 in the case of unpolarized fermions, this means that ϵ 1 and ϵ 3 also drop out. So, for the intermediate photons we again do not need ⟨nn⟩ = 1, but just ⟨nn⟩ ij = δ i2 δ j2 , which can be obtained with a single sum over polarization vectors with n = ±ϵ 2 . From (1) we see that this is, as expected, a basis of circular polarization.
Thus, for the probability summed over all final-state spins, there is a basis for the spin and polarization of intermediate particles which allows one to obtain the full result using single spin/polarization sums, i.e. a basis for which the off-diagonal terms in the double spin/polarization sums vanish. However, if one is interested in the spin of one of the particles in the final state, then one needs in general the full gluing method with ⟨nn⟩ = 1.
Since the above LMF approximations are exactly linear in the pulse length T , we can see explicitly the volume scaling T N of the N-step. Corrections to the N-step approximation have a subdominant scaling with respect to T . In comparison, the dominance of the N-step in the LCF case for large a 0 is due to the a N 0 scaling (with χ as independent). For example, the two-step part of trident scales as a 2 0 in LCF or T 2 in LMF. We have performed the oscillating θ integrals in terms of sums over Bessel functions, see (A9), (A10) and (A11). This has a huge numerical advantage, because these sums converge quickly. To obtain the spectrum we now only have the u integrals left, but these are relatively easy to perform numerically since their integrands are determined by the envelope function h(u), which has a simple shape (e.g. Gaussian e −u 2 ). The u integrals cannot be performed at this stage anyway, because when gluing together the above first-order results we should include step functions to ensure lightfront-time ordering u 1 < u 2 < ..., with u i corresponding to step i.
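The lightfront-time ordering u 1 < u 2 is what produces the T N volume scaling; for a constant integrand the ordered two-step integral gives T 2 /2, which is easy to verify numerically (SciPy assumed):

```python
from scipy.integrate import dblquad

# Ordered two-step volume: integrate 1 over 0 < u1 < u2 < T.
# dblquad integrates func(u1, u2) with u2 as the outer variable.
T = 5.0
vol, _ = dblquad(lambda u1, u2: 1.0, 0.0, T,
                 lambda u2: 0.0, lambda u2: u2)
assert abs(vol - T**2 / 2) < 1e-8   # lightfront ordering gives T^2/2
```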
A. Trident
In this section we will benchmark the LMF approximation with trident as an example. Here s 1 , s 2 , s 3 = 1 − s 1 − s 2 and q 1 = 1 − s 1 are the longitudinal momenta of the two electrons, the positron and the intermediate photon, respectively, divided by the initial longitudinal momentum b 0 = kp. Using either the gluing method with the LMF results presented above, or by applying the same LMF treatment directly to the exact expressions in [12] for the full probability, we find to leading order, where κ 1 = (1/s 1 ) + s 1 , κ 2 = (s 2 /s 3 ) + (s 3 /s 2 ), and J (i) is obtained from J in (A9), (A10) and (A11) by replacing u → u i and r → r i with r 1 = (1/s 1 ) − 1. In [49] we presented sections of the spectrum with s 1 = s 2 and s 2 = s 3 for several different values of a 0 and b 0 for a circularly polarized field, and there we showed that our full gluing approximation agrees well with the exact result. Here we compare the LMF approximation of the gluing/Mueller-matrix approximation with the exact result. In Fig. 1 we have chosen the a 0 and b 0 values from [49] that are closest to the parameter values that are planned for the LUXE experiment [56]. We can see that, even after approximating the full gluing approximation with its LMF approximation, we still have a very good agreement with the exact results. We can also see that this is in a regime where the LCF approximation of the gluing approximation is not great.
The LMF approximation looks indistinguishable from the full result in [49], but for higher energies b 0 one will start to see a difference between the full version of the two-step part and its LMF approximation. However, as seen in the plots in [49], for larger b 0 the one-step terms will also become non-negligible, which means that one will also start to see a difference between the full two-step part and the exact probability (two-step + one-step).
IV. LCF BUILDING BLOCKS
In this section we will obtain the LCF approximation of the Mueller matrices. This can be obtained from the large a 0 limit of the general expressions in [49]. As usual, the results are obtained by rescaling θ → θ/a 0 and expanding to leading order in 1/a 0 . All θ integrals can be expressed in terms of the Airy function Ai, its derivative Ai′, and the integral Ai 1 (ξ) = ∫ ξ ∞ dt Ai(t). For nonlinear Compton we find, where κ = (s 0 /s 1 ) + (s 1 /s 0 ), ξ = (r/χ(σ)) 2/3 with χ(σ) = |a′(σ)| b 0 being the local version of χ = a 0 b 0 , r = (1/s 1 ) − (1/s 0 ), Ê(σ) and B̂(σ) are unit vectors parallel to the local electric and magnetic fields, and the vectors only form dot products with themselves or with Stokes vectors for (initial or final) photons. In order to replace the constant vectors ϵ 1 and ϵ 3 with ones that are related to the local field polarization, we write Ê(σ) =: {cos Ω, sin Ω, 0}, B̂(σ) = {sin Ω, − cos Ω, 0}.
Then, a photon with linear polarization parallel to Ê(σ) corresponds to the following Stokes vector, and −ϵ E corresponds to polarization parallel to B̂. Diagonal linear polarization lying between Ê and B̂, i.e.
ϵ E (σ), ϵ EB (σ) and ϵ 2 form a local basis for linear parallel (or orthogonal), linear diagonal and circular photon polarization. Using these we can now also express the photonic parts of the P's in terms of the local direction of the field. For example, Ê · S · Ê = ϵ E , so ±R C γ corresponds to a photon emitted with polarization parallel to Ê or B̂. Since we also have (ϵ 2 ) ij = k̂ l ε ijl , we can write all terms in a frame-independent way.
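The three Airy ingredients Ai, Ai′ and Ai 1 are readily evaluated numerically; a minimal sketch with SciPy (assumed tooling), using the known identities Ai(0) ≈ 0.35503 and ∫ 0 ∞ Ai(t) dt = 1/3 as sanity checks:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

def Ai1(xi):
    """Ai_1(xi) = integral of Ai(t) from xi to infinity, the third
    ingredient (besides Ai and Ai') of the LCF expressions."""
    val, _ = quad(lambda t: airy(t)[0], xi, np.inf)
    return val

ai0, aip0, _, _ = airy(0.0)      # Ai(0) and Ai'(0)
assert np.isclose(ai0, 0.3550280538878172)   # Ai(0) = 3^(-2/3)/Gamma(2/3)
assert np.isclose(Ai1(0.0), 1.0 / 3.0)       # known identity
```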
When gluing together these first-order building blocks to approximate higher-order processes, one finds terms with e.g. B̂(σ 1 ) · B̂(σ 2 ) which, for a rotating field, range from 1 to −1, since σ 1 and σ 2 are not forced to be within the same formation length, i.e. they can be e.g. at different field maxima.
Spin and polarization of all three particles in nonlinear Compton and Breit-Wheeler have recently been studied in LCF in [44]. The spin and polarization states considered in [44] correspond to the ϵ 3 components for the photon and to the e 2 components for the fermions, for a field with a polarized in the x direction. We have checked that the corresponding components of our LCF expressions above agree with those in [44]. However, the full Mueller matrices contain additional nonzero elements. There are two reasons for this: 1) We allow the field to rotate. 2) We allow for arbitrary polarization of the initial and final particles.
Consider for example an electron that emits several photons, which do not decay into pairs. If we sum over the polarization states of all these photons, then we only need R, R 0 , R 1 and R 01 . If we either average and sum over the spins of the initial and final electron, or if we only consider initial and final electrons with e 3 · n = 0, then the 1 ∥ terms in R 01 drop out and the matrix multiplications reduce from 3 to 2 dimensions. If in addition the field has linear polarization and if we either average and sum over the spins of the initial and final electron or if we only consider initial and final electrons with Stokes vector parallel to the magnetic field, n = ±B̂, then the matrix multiplication reduces to a one-dimensional problem. In this case it is not necessary to have ⟨nn⟩ = 1 in the gluing approach; it is enough to have ⟨nn⟩ = B̂B̂, which one can achieve by simply summing over spin states for the intermediate electrons with n = ±B̂. So, if we sum (average) over all the spins/polarizations and if the field has linear polarization, then it is enough to know the probability for nonlinear Compton with initial and final spin parallel and antiparallel to the magnetic field. However, for the general case where the field is rotating, or if one is interested in the spin/polarization of initial and final particles, there are more relevant terms and we need to use ⟨nn⟩ = 1.
We consider again trident as an example and for simplicity we sum and average over all the external spins. Compton scattering and Breit-Wheeler pair production are glued together according to P glue = (2 4 /2) P C P BW + (1 ↔ 2) (cf. Eq. (44) in [49]). For a linearly polarized field with Ê = e 1 , the result only depends on σ 1 and σ 2 via χ(σ 1 ) and χ(σ 2 ). For a circularly polarized field with Ê = cos(σ)e 1 + sin(σ)e 2 , Ê · S · Ê gives oscillating terms proportional to sin(2σ), which will therefore tend to average out. So, although the field and therefore its polarization is locally constant, the two steps can occur at macroscopically separated σ 1 and σ 2 and therefore see a different polarization, which leads to a qualitative difference between linear and circular polarization.
Consider again trident in a linearly polarized field. We just saw that for the probability summed over all the external spins, we could replace the general gluing prescription ⟨nn⟩ = 1 with a sum over intermediate photons polarized with n γ = ±ϵ 3 , which corresponds to polarization 4-vectors with ϵ ⊥ = {1, 0} and {0, 1}, i.e. parallel and perpendicular to the field, as expected. Consider now instead an initial electron that was polarized along the laser propagation, n = k̂. We again sum over the spin of the final-state electrons, but we want to know the difference in the probability between a positron polarized up or down along k̂. The only term that contributes to this difference is k̂ · R BW γ3 · R C γ0 · k̂ and the relevant polarization states of the intermediate photon are n γ = ±ϵ 2 , which correspond to left- and right-handed circular polarization. So, for P(n 3 = k̂) − P(n 3 = −k̂) we also do not need the general gluing prescription ⟨nn⟩ = 1, but the two polarization states of the intermediate photon that we would have to sum over are n γ = ±ϵ 2 , while for P(n 3 = k̂) + P(n 3 = −k̂) we need n γ = ±ϵ 3 . So, even if we are in a regime where one can replace ⟨nn⟩ with single spin/polarization sums, it can still be that one needs to use different bases for different quantities. The general prescription ⟨nn⟩ = 1, on the other hand, works for all cases.
As noted in e.g. [34,35], when trying to find set-ups to produce polarized fermion beams one is faced with the problem that the field points in different directions during its oscillations; e.g. for a linearly polarized, almost monochromatic laser the magnetic field direction B̂ flips between e.g. e 2 and −e 2 , which means that the terms that could induce a polarization tend to average out when integrated over such a pulse. Note, though, that even if we drop all these terms we can still have nonzero matrix products: If we drop the terms proportional to B̂ (and S · B̂, which also involves the electric field direction), then the surviving terms simplify. If we also average/sum over all the external fermion spins, or only consider fermions polarized along k̂, then any sequence of 3 × 3 matrices for the fermion spin must start and end with k̂, e.g. k̂ · · · · · R C γ10 · R 10 · k̂. Since we have already dropped terms with k̂B̂, which could otherwise couple the e 3 with the e 1 and e 2 components, we see that also 1 ⊥ and σ (3) i drop out. So, the only 3 × 3 matrix that remains is 1 ∥ = k̂k̂. This means that the matrix multiplication reduces to a one-dimensional problem and one can simply replace the rule ⟨nn⟩ → 1 for fermions with a sum over two basis vectors n = ±k̂. Note that this special basis is not along the magnetic field; it is along the propagation direction of the laser. This is the same spin basis as the one in the previous section for a circularly polarized laser. For the photon polarization there does not seem to be a simple basis (that works for all terms), because both ϵ 2 and Ê · S · Ê remain. If no pairs are produced and if we sum over the polarization of the emitted photons, then the probability separates into two parts: R R ... R + n 1 · R 10 · R 10 ... · R 10 · n 0 , and if we average and sum over the spin of the initial and final electron then we only have R R ... R with no matrix multiplication or spin sums at all, which would therefore make the study of cascades much simpler.
V. MASS OPERATOR
So far, we have shown how to use the O(α) Mueller matrices for nonlinear Compton and Breit-Wheeler as building blocks for higher-order tree-level diagrams. Now we will derive the O(α) Mueller matrix for the electron mass loop (and later the photon polarization loop) and show how to use it as an additional building block for higher-order processes that include loops. That such loops can be important for the generation of polarized electron beams in circularly polarized monochromatic lasers in the perturbative regime has been explained in [68,69]. Here we will study a general, pulsed plane wave with arbitrary polarization and in the nonlinear regime. The mass operator is also needed [70] to derive the corrections to the Bargmann-Michel-Telegdi (BMT) equation [71] for determining the time evolution of the spin of electrons in storage rings, i.e. for the Sokolov-Ternov effect [72]. Spin effects due to the mass operator have also been studied in [53]. The possibility that the spin of an electron can flip due to the loop was recently studied in [43], where it was shown that this effect is O(α 2 ). To obtain the O(α) Mueller matrix we have to consider general spin transitions.
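The resummation idea used later for the Sokolov-Ternov limit can be previewed with a toy affine step (illustrative numbers, not the paper's matrices): if one step of the Mueller map sends n → A n + b with contracting A, repeated application converges to the resummed fixed point (1 − A)⁻¹ b:

```python
import numpy as np

# Toy probability-conserving step acting on N = {1, n}: the Stokes vector
# is mapped as n -> A n + b. With |eigenvalues of A| < 1 the series of
# Mueller matrices can be resummed. Numbers are illustrative only.
A = 0.9 * np.eye(3)
b = np.array([0.0, 0.0, 0.05])    # polarization gained per step

n = np.zeros(3)                   # start with an unpolarized electron
for _ in range(500):
    n = A @ n + b                 # repeated application of the Mueller map

n_star = np.linalg.solve(np.eye(3) - A, b)   # resummed fixed point
assert np.allclose(n, n_star)     # equilibrium Stokes vector along e3
```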
A. General results
We present the derivation of the loop in Appendix B. To zeroth order we have P (0) = (1/2)N (1) · N (0) . For the first order, we find for a general field, or equivalently, where r = (1/s) − 1, κ = (1/s) + s, q = 1 − s is the photon momentum fraction, D 1 = w 1 · w 2 , Y = w 2 − w 1 and X, V and σ are defined in (18) and (19). We can also write this as a 4D Mueller matrix with N = {1, n}, where 1 (4) is a 4D unit matrix, e 0 = {1, 0} (e.g. e 0 · N = 1), the difference between the 4D and the 3D versions only has off-diagonal terms and, as we will show below, it leads to rotation of the Stokes vector. The reason for pulling out a factor of 1/2 is because the sum over the spin of an intermediate electron state gives a factor of 2, e.g. gluing together two Mueller matrices gives (1/2)M · 21 · (1/2)M = (1/2)M · M, so any sequence of Mueller matrices will have an overall factor of 1/2. The first thing to note is that R L and R L 0 are exactly identical to the corresponding quantities in nonlinear Compton scattering [23,49] but with opposite overall sign. This has to be because, by summing over all possible final states, the probability has to be 1, i.e. the loop has to exactly cancel the probability of single nonlinear Compton scattering. Since this should happen regardless of which initial state one starts with, this means that R L and R L 0 should be the same as in the nonlinear Compton case. This cancellation is also what ensures that inclusive probabilities are infrared finite for unipolar fields [73,74] and is important for expectation values describing radiation reaction [75,76]. The second thing to note is that, since the zeroth order amplitude vanishes for two orthogonal spins, the loop cannot contribute to the spin flip at O(α) [43], i.e. P L = 0 for n 1 = −n 0 .
That this should hold for arbitrary initial spin n 0 and arbitrary field is ensured by the fact that
[Footnote 3: When we in this section compare with Compton scattering, we only compare with the terms that remain after summing over the polarization of the emitted photon, i.e. P C , P C 0 , P C 1 and P C 10 , and for these terms the sum over polarization just gives an overall factor of 2. To avoid having factors of 2 everywhere, we absorb it into the definition of these terms. So, when we in this section write e.g. P C , it should be understood that this includes an overall factor of 2 compared to e.g. [49].]
So, P L , P L 0 , P L 1 and the diagonal 1 ∥ part of P L 10 could have been guessed from our results in [23,49] for Compton scattering.
Of course, this does not mean that the loop is not important, because, as expected from unitarity, for n 1 ≠ −n 0 it is in general of the same order of magnitude as Compton scattering. Moreover, the loop also contains terms that cannot be obtained from Compton scattering. These are the off-diagonal terms in P L 10 . To obtain these off-diagonal elements of the Mueller matrix we need to consider general n 0 and n 1 ≠ ±n 0 .
We find, though, that there is a relation between the θ integrands for the loop's off-diagonal terms and its diagonal terms, and hence with Compton scattering, where ε ijk is the Levi-Civita tensor with ε 123 = 1, R L 0i = e i · R L 0 = −e i · R C 0 and there is a sum over i = 1, 2, 3. If we rewrite the θ integral in (67) as an integral over only θ > 0, then the off-diagonal loop terms are given by the imaginary part of a quantity whose real part gives P L 0 = −P C 0 . We always integrate over the transverse momenta, and we showed in [49] that it is possible to perform these integrals for each O(α) step separately before gluing them together. In contrast, the longitudinal momentum s integrals are in general intertwined. For example, if an electron emits two photons then the electron has a lower longitudinal momentum in the second step. So, for e.g. Compton scattering one cannot in general integrate the O(α) steps separately before one glues together sequences of them. However, for the loop we can of course always perform the s integral before inserting the loop into a cascade diagram. This means, for example, that even though P L (s) = − P C (s), these two terms might not cancel if they are inserted into a general cascade, because the total s integrand is different for Compton scattering: the later steps depend on how much of the longitudinal momentum was emitted, while the in- and outgoing momenta are the same in a loop step.
However, if we restrict to a single step (and integrate over s), and if we sum over the polarization of the emitted photon for the contribution from Compton scattering, i.e. we do not observe this photon, then the probability that the electron starts with Stokes vector n 0 and ends up with n 1 is given by P = (1/2)(1 + n 1 · n 0 ) + n 1 · (P L 1 + P C 1 ) + n 1 · (P L 10 + P C 10 ) · n 0 . This equation is exact, i.e. the contributions from P L and P C , and P L 0 and P C 0 , cancel in any regime. Note that P only depends on the initial spin n 0 via terms that also depend on the final spin n 1 , and Σ n1=±nr P = 1. In the following we will show that there is in general also a partial cancellation in the remaining terms. We will see below that in some regimes the P L 1 + P C 1 term is negligible, and then the only change is due to the P L 10 + P C 10 term. The off-diagonal terms of this matrix lead to a rotation of n, while the diagonal terms lead to a change of the degree of polarization. However, since the probability should not become negative or larger than 1 for n 1 = ±n 0 and n 2 0 = 1, these diagonal terms have to be negative. So, before the interaction the probability is equal to 1 and 0 for n 1 = ±n 0 (with n 2 0 = 1), but afterwards there is no direction n 1 that gives P = 1, and, hence, these negative diagonal elements lead to a lower degree of polarization. Thus, if one wants to increase the degree of polarization, then the P L 1 + P C 1 term should not be negligible. This can of course also be seen if we start with an unpolarized particle n 0 → 0 but want a polarized outgoing particle.
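The structure of this single-step probability can be checked numerically; a sketch with illustrative numbers (P 1 and D stand in for P L 1 + P C 1 and P L 10 + P C 10 ): both extra terms are odd in n 1 , so P(n 1 ) + P(−n 1 ) = 1, and negative diagonal elements of D cap the maximal probability below 1, i.e. they depolarize:

```python
import numpy as np

def prob(n1, n0, P1, D):
    """Single-step probability structure: P = (1/2)(1 + n1.n0) + n1.P1 + n1.D.n0.
    P1 and D are illustrative stand-ins, not the paper's actual tensors."""
    return 0.5 * (1.0 + n1 @ n0) + n1 @ P1 + n1 @ D @ n0

rng = np.random.default_rng(1)
n0 = np.array([1.0, 0.0, 0.0])
P1 = rng.normal(size=3) * 0.01
D = np.diag([-0.02, -0.03, -0.01])     # negative diagonal, as required

# Unitarity: the terms beyond 1/2 are odd in n1, so P(n1) + P(-n1) = 1:
nr = rng.normal(size=3); nr /= np.linalg.norm(nr)
assert np.isclose(prob(nr, n0, P1, D) + prob(-nr, n0, P1, D), 1.0)

# Depolarization: with P1 = 0, the maximum of P over n1 directions is
# 1/2 + |(1/2 * 1 + D) n0|, which is below 1 for negative diagonal D:
best = 0.5 + np.linalg.norm((0.5 * np.eye(3) + D) @ n0)
assert best < 1.0
```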
B. Circular polarization
If one starts with an unpolarized electron, then the spin of the outgoing electron is determined by R^L and R^L_1. R^L_1 is similar to, and tends to cancel parts of, the corresponding quantity in nonlinear Compton [23,49], but we see that in general they do not cancel each other exactly. However, there is a cancellation of the leading order of the soft-photon part (q ≪ 1, s ∼ 1). Moreover, as mentioned above, for a long pulse with circular polarization only the term proportional to X·V in R_1 contributes to leading order. Since this term is exactly the same as in the Compton case (but with opposite sign), to leading order in the pulse length the loop cancels the contribution from nonlinear Compton to P_1. Note that, while the first term in R_1 only gives a small contribution because it is linear in the field and therefore averages out upon performing the φ integrals, this is not the case for the X · V term. So, if one considers only the change in spin due to photon emission, then one would find a significant effect for electrons polarized along the laser propagation direction, n ∝ k̂. However, the resulting electron beam actually has a much lower polarization, because the loop cancels this effect to leading order. This cancellation for circularly polarized fields is expected from the monochromatic case at O(a_0²) in [68,69]. However, although P^L_1 and P^C_1 cancel, the terms that depend on both the initial and final Stokes vectors remain, so for circular polarization (79) reduces to P = (1/2)(1 + n_1 · n_0) + n_1 · (P^L_10 + P^C_10) · n_0.
[Fig. 2 caption: The integrated probability for a circularly polarized field in LMF, where P is given by (81); curves R_i(a_0(u), b_0). The plot shows R_⊥, R_∥ and R_2 at b_0 = 1/2.]
where P^C_10 is given by (27). For the 1 part of the loop contribution (71) we have the same integral as in P^C, so this part is given by minus (23). The Yk̂ − k̂Y part of (71) does not contribute to leading order. For the remaining part we have an expression in terms of Θ̃, where Θ is given by (15). We can see that this term can be important even without actually evaluating it, because the contribution from Compton scattering, P^C_10 in (27), is a diagonal matrix; so if we start with e.g. n_0 = {1, 0, 0} and calculate the probability that n_1 = {0, 1, 0}, then only the loop contributes (thanks to the σ^{(3)}_2 term). In Fig. 2 we plot these results for a circularly polarized field in LMF. We see that the diagonal terms in P^L_10 + P^C_10 are negative, as they have to be, as explained after (79). We are considering O(α) here, integrated over the longitudinal momentum of the emitted photon. As expected from the nonlinear Compton case [77], we can perform the longitudinal-momentum s integrals in terms of cosine and sine integrals, as explained in Appendix C 1. These s integrals can be performed for any field shape and polarization. For a circularly polarized field we can further approximate the effective mass, which appears in the argument of the cosine/sine integrals, as in (15), and then perform the θ integral numerically. It turns out that this route, i.e. performing the s integral analytically and then the θ integral numerically, is actually more convenient than first performing the θ integral analytically in terms of sums of Bessel functions and then the s integral numerically. The former approach also works for general fields (with the full effective mass, of course), while the latter only works for circularly polarized fields in LMF. However, when going beyond O(α) one might not be able to perform the s integrals, since they couple nontrivially because of the recoil due to photon emission.
C. Locally constant field approximation
For large a_0 we can rescale θ → θ/a_0 and expand to leading order in 1/a_0 with χ = a_0 b_0 kept fixed. This gives the LCF approximation. We find (85)–(87), where Ê and B̂ are the local electric- and magnetic-field directions as defined in (51). Given the above discussion about the exact expressions, (85) and (86) are of course exactly the same as (43) and (44), except for the opposite sign.
In (87), on the other hand, we find a term with Gi, which does not appear in any of the expressions in Sec. IV for the spin and polarization of nonlinear Compton. One might nevertheless have guessed this term from the general relation in (78), because Ai and Gi give the real and imaginary parts of the integral in (88). Note that this Êk̂ − k̂Ê term is the only one that couples Stokes vectors parallel to Ê and k̂. So, if we have a linearly polarized field and an initial Stokes vector parallel to the laser propagation direction, n_0 = ±k̂, then the loop is necessary for the probability that the final Stokes vector is parallel to the electric field, n_1 = ±Ê, while Compton scattering does not contribute to this.
The Gi function appears in the O(α²) results in [43] for spin flip. In order to compare with those results, consider N_0 = {1, n_0} and N_f = {1, −n_0} with n_0² = 1.
The Mueller matrix has the structure in (73) with P^L_0 ∝ B̂ and M^L_rot ∝ Êk̂ − k̂Ê. At O(α²) the probability is given by P^{L(2)}_flip = (1/2) N_f · (T_σ/2) M^L · M^L · N_0. We consider for simplicity a linearly polarized field; then the only terms that contribute to spin flip are the ones proportional to (e_0 B̂ + B̂ e_0)² = e_0 e_0 + B̂B̂ and (Êk̂ − k̂Ê)² = −(ÊÊ + k̂k̂). The lightfront time ordering is trivial and simply gives an overall factor of (T_σ/2) → 1/2. From the result one can compare directly with the spin-flip expressions in [43]. For what follows, it is natural to combine the diagonal part of R^L_10, R^L_1 = −R^C_1, with R^C_10 from Compton scattering (47) (which only has diagonal terms). Since κ − 2 = q²/s > 0, Ai_1 > 0 and Ai' < 0, these diagonal terms are all negative.
For an oscillating field, P^L_1 and P^C_1 tend to average out (each term separately) because B̂(σ) changes direction. It has been realized in recent literature that one can prevent this by choosing asymmetric fields, e.g. two-colored fields [34,35]. Here we will study the σ integrand as a function of χ(σ), so the following results are relevant for a general (e.g. asymmetric) field shape.
Thus, the probability terms in LCF can be expressed as in (91) and (92). We consider first the χ ≪ 1 and χ ≫ 1 expansions. A simple way to obtain these is to first calculate the Mellin transform with respect to χ, as explained in Appendix D.
For the first few terms of the low-energy expansion, note that the contributions from Compton scattering and the loop to J_1 cancel to leading order. We also see that the rotational term J_r is the only term that contributes at the overall leading order. For large χ, the appearance of fractional powers like χ^{1/3} can mean slow convergence [44,79], but with the Mellin transform it is in any case easy to obtain higher orders in these expansions. In Fig. 3 we have arbitrarily truncated the large-χ expansion at χ^{−2} for all terms (in some terms the correction is merely suppressed as O(χ^{−7/3})). From these expansions we see that J^C_1 and J^L_1 do not cancel each other beyond the small-χ limit. However, they continue to be of the same order of magnitude for arbitrary χ.
In the asymptotically large-χ limit we have J C 1 ∼ −6J L 1 , but around the maximum they are closer and J C 1 + J L 1 ∼ J C 1 /2. Thus, in the LCF regime we find that the loop is numerically important for any value of χ.
D. LCF + low-energy approximation
In the low-χ limit we can sum the α expansion explicitly. For this it is convenient to use 4D Mueller matrices. We can write the 3 × 3 matrix P_10 as a 4 × 4 matrix by setting P^{00}_{10} = P^{0i}_{10} = P^{i0}_{10} = 0 for i = 1, 2, 3. The 3D vector P_1 becomes a 4 × 4 matrix by replacing B̂ → B̂ê_0, where ê_0 = {1, 0, 0, 0} and ê_0 · B̂ = 0. We can then write the contribution from Compton scattering and the loop in terms of a 4D Mueller matrix as in (105); the explicit expressions simplify further for a linearly polarized field. According to the gluing method, one can approximate the O(α²) term by a product of two such matrices. In general the Mueller matrices would be connected also via the integrals over the longitudinal momenta, but here in the low-χ limit they are only connected via (lightfront) time ordering. So, the sum over all orders in α gives a time-ordered exponential (107), where T_σ stands for lightfront time ordering. Although we have derived these results with the goal of predicting what happens after the electron has left the pulse, i.e. for σ → ∞, we have written P(σ) as a function of finite σ, since this allows us to obtain a differential equation for n(σ), which might be simpler to solve even if one is only interested in σ → ∞. The first element is conserved, so N(σ) = {1, n(σ)}, and for the remaining 3D part we have (110). Eq. (110) agrees with Eq. (3.24) in [70], which describes the time evolution of the Stokes vector in a magnetic field for a high-energy particle. This is expected, since a general field appears as a crossed (plane-wave) field to a high-energy particle (χ can still be small even if γ ≫ 1). This is a nontrivial check of our gluing method as well as of many of its building blocks. It is also encouraging, since this is a regime where the dominant contribution comes from low-energy photons, which one might otherwise have expected to be challenging for a gluing/incoherent-product approach (cf. [80–82] for the breakdown of LCF for soft photons).
To understand this one should note that some problems due to soft (or infrared divergent) photons are either absent or can be expected to be less severe thanks to the inclusion of the loop, because of the soft-photon cancellation between the loop and Compton scattering.
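The time-ordered exponential above can be understood operationally as the continuum limit of a left-ordered product of infinitesimal Mueller-matrix factors. A minimal sketch follows; the 4 × 4 generator m is hypothetical (a rotation block plus one damping entry), and for a σ-independent generator the ordered product reduces to an ordinary matrix exponential.

```python
import numpy as np

def series_expm(M, terms=40):
    """Matrix exponential via truncated Taylor series (small matrices only)."""
    out = np.eye(len(M))
    term = np.eye(len(M))
    for i in range(1, terms):
        term = term @ M / i
        out = out + term
    return out

# Hypothetical constant generator in 4D Stokes space.
m = np.zeros((4, 4))
m[1, 2], m[2, 1] = 0.3, -0.3   # rotational (antisymmetric) block
m[3, 3] = -0.1                 # damping of one Stokes component

sigma, steps = 2.0, 4000
d = sigma / steps
prod = np.eye(4)
for _ in range(steps):
    prod = (np.eye(4) + d * m) @ prod   # later lightfront times act from the left

# For constant m the ordered product converges to exp(sigma * m).
assert np.allclose(prod, series_expm(sigma * m), atol=1e-3)
# The first Stokes component is conserved, N(σ) = {1, n(σ)}.
assert abs(prod[0, 0] - 1.0) < 1e-12
```

For a σ-dependent generator the same left-ordered product implements the T_σ ordering directly, which is the discrete analogue of the differential equation for n(σ).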
The solution to (110) for a constant field can be found in [70]; in particular, the solution for n_2 can be written down immediately. However, it might be difficult to obtain a simple differential equation away from the low-χ regime, because in addition to time ordering one then also has "(longitudinal) momentum ordering" due to the Compton-scattering steps. So, in Appendix E we instead take a step back and calculate (107) directly. For an initially polarized particle, the Ω terms lead to rotation. For an initially unpolarized particle, the probability to observe n in the final state is given by (112). The maximum probability is achieved with n = B̂. Since we have absorbed e into the definition of the background field, for e < 0 B̂ is actually antiparallel to the magnetic field, so electrons polarize antiparallel to the magnetic field, as is well known. While the integrand in the exponent is small, if the pulse is sufficiently long the exponential becomes small and one approaches the upper limit for the induced polarization of the electron beam, namely 8/(5√3) ≈ 0.92 [70,72]. However, the pulse would have to be very long to compensate for αχ³/b_0 ≪ 1 (and there is of course also the problem that the field polarization would in general oscillate).
The maximum polarization can be obtained directly from the Mueller matrix in (105) without finding the complete solution. One just has to notice that the corresponding Stokes vector is an eigenvector of the Mueller matrix with eigenvalue zero. This means that applying further Mueller matrices will not change this Stokes vector. We can also see from the differential equation (110) that this corresponds to dn/dσ = 0. Thus, 8/(5√3) represents the maximum degree of polarization.
E. LCF for larger χ
We have just shown that at leading order in χ ≪ 1 we can explicitly resum the α expansion and recover the results in [70]. The next question is how small χ has to be for these results to give a good approximation. In Fig. 4 we see that the relative error of the individual terms is already ∼ 10% at χ ∼ 0.01. So, one might expect significant corrections even if χ is quite small. At larger χ, Fig. 3 shows that the rotational term J_r decreases while the other terms first increase and then slowly decrease. So, while rotation is the dominant effect at χ ≪ 1, at larger χ one can expect rotation and damping to become of the same order of magnitude.
At larger χ one would in general expect it to be necessary to include the recoil of the electrons due to the emission of photons, i.e. radiation reaction. This would mean that we can no longer perform the integral over the longitudinal momentum of the emitted photon for each M^C separately. So, in general one would need a numerical treatment. However, while waiting for such numerical results, we can use the s-integrated results (91) and (92) anyway, hoping that they will at least give a decent idea of the scaling. We can in general find an eigenvector of the Mueller matrix m with eigenvalue zero, whose e_2 component points along the magnetic field. This suggests a maximum degree of polarization given by n_max. For χ ≪ 1 and χ ≫ 1 we have (114). The χ ≪ 1 limit has already been discussed. The χ ≫ 1 limit agrees with [83]. However, the exact result converges very slowly to this leading order, as can be seen from NLO/LO ∼ −2.4/χ^{1/3}. In Fig. 5 we show that n_max is a monotonically decreasing function of χ, so the low-χ/Sokolov-Ternov result is the overall maximum. It would be interesting to check these results against a numerical treatment that includes radiation reaction.
F. Low energy limit
In this subsection we will consider the low-energy limit, i.e. small b_0. Here we are interested in the probability integrated over all the momenta and summed over the polarization of the emitted photon (for the contribution from Compton scattering), so the starting point is (79) with the terms expressed in terms of cosine/sine integrals as in Appendix C 1. For P^C_10 and for the R^L_1 part of P^L_10 we obtain the leading order by expanding the integrand for large ϕ. We find that these two terms cancel to leading order. In the remaining part of P^L_10 we rescale θ → 2b_0 θ and then expand the integrand in b_0. Performing the resulting θ integral gives the leading-order result (omitting the argument of a(σ)), and if we choose the constant part of the potential so that a(−∞) = 0, then it takes a simple form. We also find that P^L_1 + P^C_1 vanishes to leading order. Hence, the only terms that remain in the low-energy/classical limit come from the loop. We see that the field has to be unipolar to have a nonzero change in the low-energy limit. So, from (79) we finally find P = (1/2)(1 + n_1 · [1 + δ] · n_0).
[Figure caption: As Fig. 3, without re-expanding the resulting ratio. The leading order in the χ ≫ 1 limit (shown in (114)) is larger than one for χ ≲ 40, so cannot be used here.]
From this we can read off the new spin state. Note that, since δ is an antisymmetric matrix, the new Stokes vector is a unit vector to the order in α at which we are working, i.e. n_f² = 1 + O(α²). Note also that the change in the Stokes vector is orthogonal to the initial vector, i.e. n_0 · δ · n_0 = 0, which holds for arbitrary n_0 because δ is antisymmetric.
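Both properties noted here, n_0 · δ · n_0 = 0 and unit length up to O(δ²), follow from antisymmetry alone, as the following sketch with a randomly generated antisymmetric δ illustrates (the magnitude 0.05 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 0.05 * rng.standard_normal((3, 3))
delta = d - d.T                          # antisymmetric, like delta in the text

n0 = rng.standard_normal(3)
n0 /= np.linalg.norm(n0)                 # unit initial Stokes vector
nf = (np.eye(3) + delta) @ n0            # new Stokes vector n_f = [1 + delta] n0

# The change is orthogonal to n0 (quadratic form of an antisymmetric matrix).
assert abs(n0 @ delta @ n0) < 1e-12
# |n_f|^2 = 1 + |delta n0|^2, i.e. a unit vector up to O(delta^2).
assert abs(nf @ nf - 1) <= np.linalg.norm(delta @ n0) ** 2 + 1e-12
```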
So far we have considered the probability at O(α). In [23,49] we showed how the Mueller matrices for nonlinear Compton and Breit-Wheeler can be glued together to form approximations of higher-order processes. We show in Appendix C that the obvious generalization of the gluing method to processes with loops is indeed correct. Hence, we should replace (no sum over j) n_j n_j → 2·1 for each intermediate Stokes vector n_j. In the low-energy limit (cf. (79)), at O(α²) the gluing prescription applied to the intermediate Stokes vector n_1 gives an expression where the restriction to σ_1 < σ_2 comes from demanding that the second step (either photon emission or a loop) happens after the first. Continuing in this way, we find the expansion of a σ-ordered exponential. So, in this low-energy limit we can resum all orders in α, and we find the probability to go from n_0 to n. In general one would only expect the gluing/product approach to give an approximation, but we will now show that this low-energy limit agrees exactly with the solution of the BMT equation [71] for the spin 4-vector α^µ, which is related to the Stokes vector as in (5); µ is the anomalous magnetic moment. The momentum p^µ in (6) should be replaced by the time-dependent momentum, and since in the classical limit this is to leading order given by π(φ), we now have a φ-dependent basis that cancels the first term on the right-hand side of (124); α^{(i)}_µ α^{(j)µ} = −δ_{ij} projects onto a single ṅ_i; α^{(i)} π = 0 cancels the last term in (124) (we have projected with the three vectors α^{(i)}; the last term is needed since π[F − µππF] = 0 means that there is zero overlap with π_µ); ḟ = kp df/dφ; and finally α^{(i)} F α^{(j)} = −kp[a_i k_j − k_i a_j]. With these, the BMT equation reduces to a simple rotation equation, and hence the solution is given by a rotation of n_0 = n(φ_0). This is exactly the vector (123) that gives the maximum probability (P = 1 in this case).
For linear polarization we have F(φ) = a'(φ)F̂, where F̂ = Êk̂ − k̂Ê is a constant matrix and Ê is, as before, a unit vector pointing in the electric-field direction. Since F̂² = −(ÊÊ + k̂k̂) and F̂³ = −F̂, we find (assuming a(φ_0) = 0) the explicit rotation, where B̂ is again a unit vector in the magnetic-field direction. For example, if we start with n_0 = k̂, then n = k̂ cos[µa] + Ê sin[µa]. Here we have considered the leading order of the low-energy expansion, where the degree of polarization is constant, n²(φ) = 1. At higher orders the degree of polarization can also change, as in (112) and [70] in the LCF regime.
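The algebra F̂² = −(ÊÊ + k̂k̂), F̂³ = −F̂ means that exp(µa F̂) rotates the Stokes vector in the Ê-k̂ plane. A minimal numerical check follows; the value of µa is arbitrary, and the matrix exponential is computed by a simple truncated series.

```python
import numpy as np

def series_expm(M, terms=40):
    """Matrix exponential via truncated Taylor series (small matrices only)."""
    out = np.eye(len(M))
    term = np.eye(len(M))
    for i in range(1, terms):
        term = term @ M / i
        out = out + term
    return out

E = np.array([1.0, 0.0, 0.0])   # electric-field direction (unit vector)
k = np.array([0.0, 0.0, 1.0])   # laser propagation direction (unit vector)
F = np.outer(E, k) - np.outer(k, E)   # F_hat = E k - k E

# The algebra stated in the text:
assert np.allclose(F @ F, -(np.outer(E, E) + np.outer(k, k)))
assert np.allclose(F @ F @ F, -F)

mu_a = 0.7                       # hypothetical value of mu * a(phi)
n = series_expm(mu_a * F) @ k    # start with n0 = k_hat
assert np.allclose(n, k * np.cos(mu_a) + E * np.sin(mu_a))
assert np.isclose(n @ n, 1.0)    # degree of polarization preserved at this order
```

This reproduces the quoted example n = k̂ cos[µa] + Ê sin[µa] and confirms that the rotation leaves n²(φ) = 1 at this order.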
G. Electrons with negligible recoil
We have now seen how the higher orders can be resummed into a time-ordered exponential in the low-energy limit. In general it is not possible to obtain such compact results because, after an electron has emitted a photon carrying a significant fraction of its longitudinal momentum, the second step effectively has a different b_0 (although we still use b_0 for the initial momentum). However, if we consider the probability that the final electron has a longitudinal momentum close to the initial one, then any photon that is emitted must be soft, and so all the longitudinal-momentum integrals for the Compton steps are restricted to small values of the photon momentum q. For a field with a(∞) = a(−∞) there is no IR divergence [73,74], and in [80,81] it has been shown explicitly that the longitudinal momentum spectrum, P^C(s_1 = s_0 − q_1) in our notation, has a finite, constant soft-photon limit q_1 → 0. So, the momentum integral in each Compton step has to leading order a constant integrand, and is therefore simply proportional to the length of the integration interval, which is smaller than (or equal to, if only one photon is emitted) the difference between the final and initial electron momentum, 1 − s_f ≪ 1. Hence, the contribution from Compton scattering can be made small by choosing the final electron momentum very close to the initial one. The contribution from loops, on the other hand, is not restricted at all by this, and we still have the same longitudinal-momentum integrals. Thus, in this limit of negligible recoil, we can neglect Compton scattering but still have a nontrivial spin effect due to the loop. This simplifies the calculation tremendously, as we can again resum the α expansion into a time-ordered exponential.
In terms of a 4D Mueller matrix we have (129). According to the gluing prescription, the O(α²) term is approximately given by a product of two such matrices, and similarly for higher orders, which gives (131). Things simplify further if we assume a linearly polarized field. With a(φ) = a(φ)e_1, the Mueller matrix in (73) reduces to (132), where we have used ⟨P^L⟩ = −⟨P^C⟩ and P^L_0 = −P^C_0 to write the loop contributions in terms of Compton terms. Eq. (133) can be seen as a generalization of Eq. (14) in [53] to arbitrary electron polarization (including an initially unpolarized electron). Considering spins that are not parallel or antiparallel, n_f ≠ ±n_0, also allows us to see an effect already at O(α). If the initial particle is unpolarized, N_0 = e_0, then we find (134), which means that the electron tends to become polarized parallel (or antiparallel) to e_2, i.e. to the magnetic field, as one might expect. Recall that in this fermion-loop section we have absorbed a factor of 2 into ⟨P^C⟩ and P^C_0 to account for a trivial factor of 2 coming from summing these photon-polarization-independent terms over the polarization of the emitted photon. So, 2⟨P^C⟩ ± 2P^C_0 gives the total probability of Compton scattering for an initial electron with polarization n = ±e_2, summed over the polarization of the emitted photon and the spin of the final-state electron, where the latter gives the (explicit) factor of 2. Note that (134) comes solely from the loop, but it is written in terms of the Compton-scattering probabilities in order to show that the induced electron polarization is a consequence of the fact that the probability to emit a photon is higher for n = sign(P_0)e_2, so in the forward direction there will be more electrons with n = −sign(P_0)e_2.
It might at first seem natural to find LCF approximations of the above results. However, this can be problematic, because in this section we have assumed that Compton scattering can be neglected, which is justified if the Compton spectrum is bounded for low photon momentum; in the LCF regime, however, the spectrum is IR divergent (see [73,74,80,81] for a comparison of the IR behavior in LCF and beyond LCF). Although this happens to be an integrable singularity, it might nevertheless lead to too large a contribution from soft photons, which would mean that Compton scattering is not negligible. The formation length might also become too large (compared to the pulse length) for the gluing approach (at least if the loop and Compton scattering are considered separately). However, the rotational term from the loop has no counterpart in Compton scattering, so it makes sense to consider it separately. The LCF approximation of this term is given by (87), and if we identify it with µa(φ) in the solution (128) to the BMT equation, then we find a field-dependent anomalous magnetic moment that agrees with the literature [33,43,70] (this is immediately clear by comparing with Eq. (25) in [43]). To leading order in χ ≪ 1 this reduces to the usual µ = α/(2π). Note, though, that the difference between µ(χ) and α/(2π) can be expected to be of the same order of magnitude as the other, nonrotational terms.
We can find very similar expressions for a circularly polarized field in the LMF approximation. Here the Mueller matrix can be written as where P L 0 and M L rot are obtained by matching with (73) (we use the same notation, but P L 0 and M L rot are of course different from the linear case (132)). This has exactly the same matrix form as in (132), except that e 2 ↔ e 3 , which means that the spin structure of the probability is obtained by making the replacement e 2 ↔ e 3 in (133) or (134). So, the rotational terms lead to spin rotation in the e 1 , e 2 plane, i.e. the plane that contains the rotating field polarization, and (e.g.) an unpolarized initial particle will tend to become polarized in the ±e 3 direction, i.e. parallel to the laser propagation. As mentioned above, the oscillation of a circularly rotating field does not lead to an averaging out of the induced spin polarization due to the loop or Compton scattering separately, in contrast to a linearly oscillating field. We saw above that there is nevertheless a cancellation between the loop and Compton scattering for such a field. However, in this subsection we are in a regime where Compton scattering is negligible, so it cannot cancel the loop contribution. Thus, if we select those electrons that have kept most of the initial momentum, then a circularly polarized field may lead to electron polarization.
VI. POLARIZATION OPERATOR
In this section we will consider the polarization dependence of the polarization operator. The photon polarization is given by (1). As in the fermion case, we describe the initial state by a wave packet, where the annihilation and creation operators satisfy the commutation relations in the lightfront gauge. The amplitude for an initial photon with momentum l^µ and polarization ε^µ to go to a final photon with the same momentum l^µ and polarization ε'^µ is given in terms of the evolution operator U. While the momentum is conserved, the polarization can change, and the probability for this is given by P = |M|². The leading-order amplitude M_0 is given by the same expression as for the mass operator (B9), with ρ_0 and λ_0 for the initial photon and ρ_1 and λ_1 for the final photon. The calculation of M_1 is similar to e.g. [54]; renormalization of the UV divergence leads to the subtraction of the field-independent part. So, we simply state the results. The probability takes the same form as for the mass operator, (66) and (67), where now r = 1/s + 1/(1−s), κ = s/(1−s) + (1−s)/s, S_k = δ_{k1}σ_1 + δ_{k3}σ_3, and ε_{ijk} is the Levi-Civita symbol with ε_{123} = 1. It is again easy to check that the loop contribution at O(α) vanishes for n_1 = −n_0, but it is in general nonzero. As expected, ⟨R^L⟩ and R^L_0 are, apart from the overall sign, exactly the same as in the nonlinear Breit-Wheeler case. As for the electron mass operator, this follows from the fact that the sum of the probabilities of all possible final states at O(α) has to be 1: summing these loop results over the final polarization must exactly cancel the probability of nonlinear Breit-Wheeler pair production summed over the spins of the electron-positron pair, and this should hold for arbitrary initial polarization. Then the diagonal R^L_1 term in R^L_10 and the fact that R^L_1 = R^L_0 ensure that the loop gives no contribution to polarization flip, n_1 = −n_0, at O(α).
So, one could have guessed these terms from the corresponding results for Breit-Wheeler pair production. The off-diagonal terms in R^L_10 cannot be obtained directly from P^BW. However, we can immediately see that we have the same relation between the off-diagonal terms, R^L_{10,ij} − ⟨R^L⟩δ_{ij}, and R^L_0 as in (76) for the electron mass-operator loop.
If no pairs are created, then we can resum the sum of products of Mueller matrices as in (131). Keeping only loop diagrams is less of a restriction in the case of polarization loops, because pair production is a threshold process that is exponentially suppressed at low energies, while in the electron case we have to e.g. restrict ourselves to electrons with negligible recoil to be able to neglect photon emission.
We write the first-order result in terms of a 4D Mueller matrix as in (129). The ⟨R^L⟩δ_{ij} term in R^L_10 combines with ⟨R^L⟩ to form a term proportional to the 4D unit matrix 1^{(4)}, which hence commutes with all the other contributions and can be separated from the time-ordered exponential. Since 2⟨R^L⟩ = −4⟨R^BW⟩, this part becomes e^{−4⟨P^BW⟩}, where 4⟨P^BW⟩ gives the probability of nonlinear Breit-Wheeler pair production, summed over the spins of the fermions and averaged over the polarization of the photon.
The remaining part of m simplifies for a linearly polarized field. With a = a(φ)e_1 we have (142) and (143), where w_1 = w_1 e_1. Hence, the Mueller matrix separates into three simple and mutually commuting matrices, m = ...(ê_0 ê_3 + ê_3 ê_0) + ...(ê_1 ê_2 − ê_2 ê_1) + ...1^{(4)}. (146) The time ordering becomes trivial, and we find the resummed result, where we have a factor of 4 in the first line because P^BW_γ gives the dependence of the Breit-Wheeler probability on the Stokes vector of the photon for definite spins of the electron and positron, so summing over their spins gives a trivial factor of 4 for this term. In particular, for an initially unpolarized photon, N_0 = ê_0, we find that it will tend to become polarized with n ∝ ê_3. Recall that +ê_3 and −ê_3 correspond, respectively, to a polarization 4-vector with ε_⊥ = {1, 0} and ε_⊥ = {0, 1}, i.e. parallel and perpendicular to the field polarization. However, note that the pair-production probability for a photon with n = ±ê_3 is given by 4⟨P^BW⟩ ∓ ν, so ν cannot be larger than 4⟨P^BW⟩. So, if tanh ν is not small, then e^{−4⟨P^BW⟩} will be significantly smaller than 1. In other words, the price for this induced polarization is that a significant fraction of the initial photons decay into pairs. In the LCF regime we find (84) with the corresponding R̂^L_10, while R̂^L and R̂^L_0 are simply obtained from the corresponding expressions for nonlinear Breit-Wheeler. As a curiosity, we note that the Scorer function Gi appears in R^L_10 both in the electron case (87) and in the polarization operator; one could again have expected this from the general relation in (78). To compare with the literature, we consider for simplicity a linearly polarized field with Ê = e_1. In the LCF regime, (154) is already written as the sum of three mutually commuting matrices, and the result agrees with Eq. (11) in [84] and Eq. (81) in [85].
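That the time ordering becomes trivial follows because the three pieces of m commute: the exponential of the sum then factorizes into a product of simple exponentials. A sketch with hypothetical coefficients (the structure of the two outer-product pieces is taken from (146); the numbers a, b, c are arbitrary):

```python
import numpy as np

def series_expm(M, terms=40):
    """Matrix exponential via truncated Taylor series (small matrices only)."""
    out = np.eye(len(M))
    term = np.eye(len(M))
    for i in range(1, terms):
        term = term @ M / i
        out = out + term
    return out

e = np.eye(4)   # Stokes basis vectors e_0 .. e_3
A = np.outer(e[0], e[3]) + np.outer(e[3], e[0])   # (e0 e3 + e3 e0) piece
B = np.outer(e[1], e[2]) - np.outer(e[2], e[1])   # (e1 e2 - e2 e1) piece
I4 = np.eye(4)                                    # unit-matrix piece

# The three pieces mutually commute (A @ B = B @ A = 0 here).
assert np.allclose(A @ B, B @ A)

a, b, c = 0.4, 0.9, -0.2                          # hypothetical coefficients
M = series_expm(a * A + b * B + c * I4)
# Exponential of the sum factorizes, so the time-ordered exponential is trivial.
assert np.allclose(M, series_expm(a * A) @ series_expm(b * B) @ series_expm(c * I4))
```

The 1^{(4)} piece commutes with everything by construction, which is why it can be pulled out as the overall factor e^{−4⟨P^BW⟩} described above.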
For low χ, ⟨P^BW⟩ and ν become exponentially suppressed, while ϕ is only power-law suppressed, and hence only the rotational part survives. The second term shows that the probability to flip polarization, n = ±ê_1 → ∓ê_1, is to leading order given by P_flip ≈ ϕ²/4. This agrees with Eq. (41) in [54] (for the factors of 2, note that n = ±ê_1 corresponds to a polarization 4-vector with ε_⊥ = (1/√2){1, ±1}). Note that we have obtained P_flip by gluing together the Mueller matrices for P^L = O(α) (two Mueller matrices for the leading order). Hence, P^L contains the information needed to obtain, via the gluing approach, the full flip probability at O(α²), even though (a single factor of) P^L vanishes for a flip. The flip probability has a quadratic scaling, P_flip ∼ (αb_0)². However, an important point made in [84] is that there are terms with linear scaling ∼ ϕ; these correspond to the off-diagonal terms of the Mueller matrix.
Thus, in the low-χ limit only rotational terms remain, i.e. the degree of polarization is constant. This can be compared with the low-energy limit of the propagation of an electron through the laser, where one also finds that only rotational terms contribute to leading order. However, in that case the non-rotational terms are only suppressed by a higher power (and give the Sokolov-Ternov effect), while for the propagation of a photon the nonrotational terms are exponentially suppressed.
We can also find simple results for a circularly polarized field in the LMF regime. From (35), and from the similarity between (142) and (143), we can immediately see that only the ε_{2ij} part of the rotational term remains.
So, we have essentially the same matrix structure as in the linear case; we just have to replace ê_1 → ê_3, ê_3 → ê_2 and ê_2 → ê_1. Thus, we find the analogous result, where ν = 2ê_2 · ⟨P^L_0⟩ and ϕ = 2ê_3 · P^L_10 · ê_1. In the low-energy limit, b_0 ≪ 1, the pair-production probability becomes exponentially suppressed, and consequently ⟨R^L⟩ and R^L_1 too become exponentially suppressed. In contrast, the sign(θ) part of R^L_10 only leads to a power-law suppression and is therefore much less suppressed. We obtain the leading order by rescaling θ → b_0 θ and performing the resulting θ and s integrals. We find (162), where ε_E corresponds to polarization parallel to the local electric field, see (53) (and the Levi-Civita tensor has a trivial zeroth component, ε_{ijk}N_k = ε_{ijk}n_k). Eq. (162) holds for arbitrary (e.g. elliptical) polarization of the background field. For a linearly polarized field we recover (157), which was obtained from the low-χ limit of the LCF approximation, while (162) holds even if a_0 is not large. The reason is that reducing b_0 or increasing a_0 both lead to a dominant contribution from small θ. We note again that we have a nonzero result already at O(α) because we are considering a general polarization transition. From (cos µ, sin µ) · σ_1 · (cos µ, sin µ) = sin(2µ) and (cos µ, sin µ) · σ_3 · (cos µ, sin µ) = cos(2µ) we see that the σ integral in (162) tends to average to zero for a circularly rotating field, but we can at least see that for this term to be nonzero the Stokes vector of either the initial or the final probe photon needs a nonzero ê_2 component, i.e. the probe photon should have a nonzero degree of circular polarization, as pointed out in [84] (in the LCF regime).
VII. CONCLUSIONS
We have studied the spin and polarization dependences in the O(α) processes in a plane wave (nonlinear/nonperturbative in the field), i.e. the tree processes e − → e − + γ (or e + → e + + γ) and γ → e − + e + , and the O(α) loop contributions to e − → e − and γ → γ (i.e. the cross-term between the zeroth- and first-order amplitude terms). We have allowed for arbitrary field polarization and arbitrary spin and polarization of the scattering particles. The dependence of the probability on the spin/polarization of any incoming or outgoing particle is expressed in terms of Stokes vectors N = {1, n} and Mueller matrices M.
We have calculated all elements of these Mueller matrices. These include diagonal and off-diagonal terms that describe e.g. spin flip in any direction and spin rotation. There are several reasons for considering completely general spin transitions: 1) The off-diagonal, rotational terms can be much larger than the non-rotational terms. This is the case e.g. for the spin precession of low-energy electrons, or for vacuum birefringence [84]. Thus, considering the full Mueller matrix can lead to a larger signal.
2) Even if one does not measure the spin/polarization of the initial and final particles, one has to consider spin sums of intermediate particles in order to approximate higher orders with sequences of first-order processes. A spin sum on the amplitude level becomes a double spin sum on the probability level, and one cannot always find a basis where these double sums reduce to single sums. In such cases we can still use the Mueller-matrix approach.
In [49] we derived the Mueller matrices for the tree processes in the most general case. Already at O(α 2 ), these general results give a huge simplification compared to an exact calculation. But there are important special cases where one can derive even simpler expressions. So, in this paper we have derived LMF and LCF approximations for the Mueller matrices. LMF and LCF are of course well-used methods, so some elements of these Mueller matrices correspond to quantities that have been obtained before, but expressed in different ways, i.e. not as Mueller matrices. Thus, in addition to providing the building blocks needed for general cascades, the full Mueller matrices also complement the literature by allowing completely general spin transitions.
Having these approximations of the Mueller matrices is of course useful in practice. For example, here we have shown that the LMF approximation agrees very well with the exact results for trident, which is encouraging for studying higher-order processes, for which an exact treatment would be impossible and a full Mueller-matrix approach potentially more time-consuming than necessary. However, even without a numerical evaluation, these approximate Mueller matrices also show us which spin/polarization states are important and under which conditions one could use single spin sums instead of the Mueller-matrix approach. Here we have shown that one spin basis may reduce the double sums to single sums for one part (e.g. the spin average), while a different basis may do the same for another part (e.g. the spin difference).
In this paper we have also derived the full Mueller matrices for the first-order loops e − → e − and γ → γ, P L = (1/2)N 1 · M L · N 0 . Since these come from the cross term between the zeroth- and first-order amplitudes, and since the zeroth order vanishes for two orthogonal spin states, one finds that P L vanishes for spin flip, n 1 = −n 0 (cf. [43]). However, M L is of course nonzero and in general P L is of the same order of magnitude as nonlinear Compton or Breit-Wheeler, as can be expected from unitarity. In fact, the loop contribution tends to cancel parts of nonlinear Compton, either partially or completely. Also, the loop contains off-diagonal/rotational terms that are not present in nonlinear Compton. And, importantly, we have shown that, although P L vanishes for spin flip, M L nevertheless contains all the spin/polarization information needed in order to approximate a general higher-order cascade process. For example, spin flip can be obtained from M L · M L or from higher-order products M L · M L . . . M L .
For photons that travel through the laser field without pair production, we have resummed the sum of products of M L into a time-ordered exponential of M L . We have found simple expressions for a general linearly polarized field, and in LCF we find agreement with the results in [84], which were obtained with a different approach. We have also found similar results for a circularly polarized field in LMF. For an electron traveling through the laser one in general has to consider photon emission and the loop. Due to radiation reaction, the product of Mueller matrices is not only time-ordered, but each factor also occurs at a different longitudinal momentum 7 , which makes the general case challenging for an analytical approach. However, for low-energy electrons we can to leading order neglect the recoil, which allows us to resum the series in M L + M C into a time-ordered exponential. We have found agreement with the solution to the BMT equation and with the extra terms [70] due to the Sokolov-Ternov effect.
These time-ordered resummations are nontrivial checks of the general gluing/Mueller-matrix approach, and clearly illustrate the importance of loops; indeed the nontrivial part of the BMT equation comes only from the loop. By restricting ourselves to final-state electrons that have lost only a negligible fraction of their longitudinal momenta, we have also been able to obtain time-ordered exponentials for higher-energy electrons. However, for the general (and potentially most important) cases one would need to resort to a numerical treatment. It would, in particular, be interesting to use the Mueller-matrix approach to study the generation of polarized particle beams due to the interaction with the laser, and to compare with other, numerical (PIC) approaches. 7 The transverse momenta can and have all been integrated at each step separately.
ACKNOWLEDGMENTS
G. T. thanks Anton Ilderton for useful comments on a draft of this paper.
Appendix A: Bessel functions in LMF
In this section we will show how to rewrite the θ integrals that appear in LMF for circular polarization. All components are expressed in terms of three integrals, where r = (1/s 1 ) − (1/s 0 ) for photon emission and r = (1/s 2 ) + (1/s 3 ) for pair production, and where Θ is given by (15). As expected from the literature (see e.g. [52,63,64]), we can perform the θ integrals in terms of Bessel functions. In order to use some well-known formulas for Bessel functions, we first have to simplify the θ dependence of the exponent. We do this by introducing new integrals over p 1 and p 2 . We rewrite 1 in the θ integrand as one of the following components 8 We choose the first and second components for terms in the integrand of J i proportional to 1/θ and 1/θ 2 , respectively. The point of doing this is that now the θ part of the exponent is much simpler, and we can use the Jacobi-Anger expansion [86], exp(ip 1 sin θ) = Σ m J m (p 1 ) exp(imθ), where J m is the Bessel function. All the odd orders vanish because they give antisymmetric p 1 integrals, so we replace m → 2n. Next we change to cylindrical integration variables, p 1 = p cos ν and p 2 = p sin ν. We have three different ν integrals, which can be performed using e.g. the tabulated integrals in [86], giving where the suppressed arguments on the right-hand side are p/2. The θ integrals are now trivial and give delta functions, which we use to perform the p integral. We can simplify the result using recurrence relations [86] between J n−1 , J n and J n+1 . We find and where the argument of the Bessel functions is This implies a minimum n, These three Bessel-function combinations also appear in [63,64]. To compare with the results in [63,64] for the spin/polarization structure, it is important to recall that we have integrated over all transverse momenta.
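As a sanity check on this step, the Jacobi-Anger expansion itself is easy to verify numerically. The following is a generic sketch (not tied to the paper's specific integrals), using SciPy's Bessel function jv; the truncation order nmax is an arbitrary choice:

```python
import numpy as np
from scipy.special import jv

def jacobi_anger_partial_sum(p, theta, nmax=40):
    """Partial sum of the Jacobi-Anger expansion
    exp(i p sin(theta)) = sum_m J_m(p) exp(i m theta)."""
    m = np.arange(-nmax, nmax + 1)
    return np.sum(jv(m, p) * np.exp(1j * m * theta))

p, theta = 1.3, 0.7
lhs = np.exp(1j * p * np.sin(theta))
rhs = jacobi_anger_partial_sum(p, theta)
print(abs(lhs - rhs))  # truncation error is negligible: J_m(p) decays rapidly in |m|
```

Since J_m(p) falls off factorially for |m| much larger than p, a modest truncation already reproduces the left-hand side to machine precision.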
So, our results can be compared with section 4.3 in [63], and we have checked that we have agreement for the terms there that have been written out explicitly, which correspond to our R C , R C 0 , R C 1 , R C γ , R C γ0 and R C 01 . So, although we have taken a rather different approach, using in particular spin and polarization bases that are common in lightfront quantization and which are especially convenient when dealing with plane-wave backgrounds, we can nevertheless compare with previous treatments of spin and polarization [63,64]. Spin and polarization effects in Compton scattering and Breit-Wheeler pair production in a circularly polarized laser have also been studied in [87].
Appendix B: Derivation of loop
As in [23] we use a basis that is common in the lightfront quantization formalism [88][89][90][91], (B2) A general spinor is given by which corresponds to a Stokes vector as in (2) and to the following mode operator where the mode operators are normalized according to {b r (q), b † r′ (q′)} = 2p − δ̄(q − q′)δ rr′ , where δ̄(...) = (2π) 3 δ −,⊥ (...). Although we will not consider any nontrivial wave-packet effects here, it is still convenient to start with an electron in an initial state given by a wave packet where dp = θ(p − )dp − d 2 p ⊥ /(2p − (2π) 3 ). The amplitude for the no-emission process is given by where p µ and p ′µ are the momenta of the initial and final electron, respectively, and U is the evolution operator. While the momentum is conserved, the spin can change.
The probability for this is given by With a sharply peaked wave packet, this simplifies to At zeroth order we have (note that M 0 = 0 for two orthogonal spins, e.g. for λ 1 = λ 0 and ρ 1 = ρ 0 + π) and where N (i) are the 4D Stokes vectors obtained by substituting ρ i and λ i into (2) and (10). So, at zeroth order the Mueller matrix is simply given by the identity matrix, as expected.
The calculation of O(α) is similar to the double nonlinear Compton case, as described in the appendix of [23]. One can use either the standard covariant approach or the lightfront-quantization approach. There are two terms in the amplitude. One comes from the instantaneous part of the lightfront Hamiltonian, and contributes to e.g. double Compton scattering. However, in this case, it only gives a background-field-independent term. Since the effect of renormalization is to subtract the field-independent part, only the non-instantaneous part of the lightfront Hamiltonian gives a nontrivial contribution 9 . Thus we find where l µ and P −,⊥ = (p−l) −,⊥ are the momenta of the intermediate photon and electron, respectively, L µν is given by (139), and the scalar and spinor parts of the Volkov solution are given by and For the first-order probability we have The zeroth-order amplitude can be expressed as and then we can express the spin dependence in terms of the Stokes vectors right from the start by using where Ω αβ := 1 2 {u ↑ū↑ + u ↓ū↓ , u ↓ū↑ + u ↑ū↓ , The spinors u are ordinary spinors with 4 elements (normalized as ūu = 2), but if we restrict to the 2D space spanned by the electron spinors then Ω acts as the vector of the Pauli matrices {1, σ 1 , σ 2 , σ 3 }. 9 For more details about this renormalization, see [92].
In simplifying P L = 2Re M 0M1 we use for example where in the second expression the integration contour is equivalent to θ → θ + iε with ε > 0. The reason for writing it like this rather than with factors of ∂Θ/∂θ as in [54] is that we want to compare with the results in [49] for nonlinear Compton and Breit-Wheeler.
Appendix C: Gluing together loops
In [49] we showed how to glue together the probabilities of nonlinear Compton and Breit-Wheeler pair production for tree-level diagrams. The outcome is that a higher-order diagram is obtained by multiplying the firstorder Mueller matrices. The obvious generalization to diagrams with loops is that the Mueller matrix describing the first-order loop contribution, i.e. 2ReM 0M1 , should also be multiplied in the same way. This is the case, but the proof is somewhat longer than the tree-level case. So, we will show this in this section.
For comparison, let us first recall how the Mueller-matrix multiplication emerges in tree-level diagrams. For such diagrams there are no coherent diagrams (in the sense made clear below), and an intermediate electron has a spin sum given by where the spin sums are over e.g. r =↑, ↓, A describes the steps that lead to this intermediate state and B describes all the subsequent steps. We can for any combination of the two spins r and r ′ write The double sum over r and r ′ corresponds to a single sum over 4 different Stokes vectors N. If we sum over r, r ′ =↑, ↓ then we have N = {1, 0, 0, ±1} and N = {0, 1, ±i, 0}. This gives This should be compared with the probability that the first steps, represented by A, lead to a final-state particle with a real Stokes vector N, which can be expressed as and the probability that an initial particle with a real N leads to the steps represented by B, i.e.
By comparing with (C3) we see that we should: express the probability of producing the intermediate state as if it were a final state with Stokes vector N as P = N · A, which gives A; express the probability of the subsequent steps happening as if the intermediate state were an initial state with Stokes vector N as P = N · B, which gives B; the probability for the whole process is then given by P = 2B · A. The factor of 2 can be seen as a consequence of the fact that there are two orthogonal spin states, but it should be noted that 2B · A comes from a double spin sum on the probability level, which cannot in general be expressed as a single spin sum. The fact that there are no other overall factors is shown in [49]. Since this factorization happens for all intermediate particles, the total probability can be expressed as a sequence of first-order Mueller matrices. If we are only interested in a single fermion line and if we sum over the polarization of the emitted photons, then it is convenient to write the probability of nonlinear Compton as P C = (1/2)N 1 · M C · N 0 , because then the factors of 1/2 cancel against the factors of 2 from the spin sum, and the total probability is simply given by (1/2)N f · M C · M C . . . M C · N 0 . Now we turn to loops. In lightfront-time-ordered perturbation theory the first-order amplitude is given by where H 1 is the non-instantaneous part of the lightfront Hamiltonian. There are of course several different loops at O(α 2 ), but here we focus only on the one that is expected to give the leading order for long pulses or intense fields, i.e. the one that can be thought of as ∼ M 1 M 1 .
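The reduction of the double spin sum to 2B · A can be checked numerically. The sketch below is a toy illustration with arbitrary real 4-vectors A and B; which of the two factors carries the complex conjugate is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=4), rng.normal(size=4)

# The four (generally complex) "Stokes vectors" over which the double
# spin sum runs: N = {1,0,0,+-1} for r = r', and N = {0,1,+-i,0} for r != r'.
stokes = np.array([[1, 0, 0, 1],
                   [1, 0, 0, -1],
                   [0, 1, 1j, 0],
                   [0, 1, -1j, 0]])

# Double spin sum on the probability level, one factor conjugated.
double_sum = sum((N @ A) * (np.conj(N) @ B) for N in stokes)
print(double_sum.real, 2 * (A @ B))  # the two values agree
```

The imaginary parts of the two off-diagonal terms cancel pairwise, and the real parts collect into exactly 2(A·B), which is the factorization used above.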
More precisely, this part is obtained by inserting the projection operator Σ r ∫ dP b † (P, r)|0⟩⟨0|b(P, r) between H 1 (x + 4 )H 1 (x + 3 ) and H 1 (x + 2 )H 1 (x + 1 ), which gives where the sum over r runs over any two orthogonal spin states, r 0 (r f ) is the arbitrary initial (final) spin state, and where the product M 1 M 1 has the following lightfront-time ordering. The initial time ordering θ(x + 3 − x + 2 ) already gives a separation into a second step that happens at a later lightfront time than the first step, but to leading order we can replace this by θ(σ 43 − σ 21 ), where σ ij = (φ i + φ j )/2, which treats φ 1 and φ 2 (and φ 3 and φ 4 ) symmetrically, and which allows us to perform the integrals over θ ij = φ i − φ j for each step separately.
To perform the matrix calculations it is convenient to express everything in a 2D space rather than with the 4D spinors. For this we write an arbitrary spinor as The Stokes vector is now N a = u * · σ a · u, where a = 0, ..., 3, σ 0 = 1 and σ 1,2,3 are the usual 2 × 2 Pauli matrices. Now we can write The higher-order terms can be expressed in a similar fashion, so we can resum them into a time-ordered exponential Using where i, j = 1, 2 and with a sum over a = 0, ..., 3, we can write the probability as where the Mueller matrix is given by where T̄ means anti-time-ordering. In order to simplify this we restrict the lightfront-time σ integrals from ∞ to σ and then we take the derivative with respect to σ, The idea is that this derivative should be given by the first-order Mueller matrix, which is obtained by expanding (C14) to first order in w ∝ α, Since any 2 × 2 matrix can be written as a sum of the four Pauli matrices, with coefficients obtained using tr σ a σ b = 2δ ab , we can write and substituting this into (C15) gives the desired result Thus, the total Mueller matrix is given by the time-ordered exponential of the first-order Mueller matrix, So far in this section we have considered the loop correction to the electron line. However, the corresponding calculations for the series of polarization loops for the photon line are basically the same. For example, instead of (B16) we have where where (1) µ and (2) µ are the two (lightfront-gauge) polarization vectors with µ . The rest of the calculation is the same, and therefore the conclusion is also the same, i.e. one should express the polarization dependence of the loop at O(α), 2Re M 0M1 , in terms of a Mueller matrix M and then higher orders can be approximated by a time-ordered product of a sequence of Mueller matrices.
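The role of the time ordering can be illustrated with a toy numerical example (generic 2 × 2 generators, not the paper's Mueller matrices): for a piecewise-constant generator the time-ordered exponential is an exact ordered product of matrix exponentials, a fine-grained ordered product of Euler steps converges to it, and the unordered exponential of the integral differs whenever the generators at different times do not commute:

```python
import numpy as np
from scipy.linalg import expm

# Two non-commuting generators acting on [0, 0.5) and [0.5, 1], respectively.
m1 = np.array([[0.0, 0.3], [-0.3, 0.0]])   # rotation-like piece
m2 = np.array([[0.1, 0.0], [0.4, -0.1]])   # piece that does not commute with m1

# Exact time-ordered exponential for a piecewise-constant generator:
# later times act from the left.
t_exp = expm(0.5 * m2) @ expm(0.5 * m1)

# An ordered product of small Euler steps approximates the same object.
steps = 4000
prod = np.eye(2)
for k in range(steps):
    sigma = (k + 0.5) / steps
    m = m1 if sigma < 0.5 else m2
    prod = (np.eye(2) + m / steps) @ prod

naive = expm(0.5 * (m1 + m2))  # ignores the ordering
print(np.max(np.abs(prod - t_exp)))   # small discretization error
print(np.max(np.abs(naive - t_exp)))  # ordering matters: noticeably larger
```

The discrepancy of the unordered exponential is governed by the commutator [m2, m1], which is exactly what the time ordering keeps track of.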
Since the Mueller matrix for the loop is constructed from the O(α) probability P L , and since P L = 0 for spin flip, it might not be obvious how the Mueller-matrix approach can describe spin flip. To explain this we consider O(α 2 ). For a general spin transition there are two contributions, which we can express as so the Mueller-matrix approach can handle spin flip even though the one-loop contribution P L(1) flip = 0. In fact, while the higher-order amplitudes have been approximated as M n ∼ M 1 . . . M 1 , M 1 is exact, so the Mueller-matrix approach actually gives the exact spin-flip probability at O(α 2 ). Note that, while M L contains all the information needed to describe spin flip, the converse is not true; knowing |M 1 | 2 is not enough to find the full Mueller matrix.
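This mechanism can be made concrete with a toy example (generic numbers, not the actual M L ): take a purely rotational loop matrix whose spin block is a small antisymmetric generator A. A single factor then gives a vanishing flip probability, n 1 · A · n 0 = 0 for n 1 = −n 0 , while the square contributes n 1 · A² · n 0 = |A n 0 |² > 0:

```python
import numpy as np

eps = 0.1  # small parameter playing the role of the O(alpha) loop
# Antisymmetric generator of a rotation about the x axis (spin block only).
A = eps * np.array([[0.0, 0.0, 0.0],
                    [0.0, 0.0, -1.0],
                    [0.0, 1.0, 0.0]])

n0 = np.array([0.0, 0.0, 1.0])  # initial spin direction
n1 = -n0                        # spin flip

one_loop = n1 @ A @ n0          # vanishes identically: A is antisymmetric
two_loop = n1 @ (A @ A) @ n0    # equals |A n0|^2 > 0
print(one_loop, two_loop)
```

The single rotational factor cannot connect n 0 to −n 0 , but the product of two factors can, mirroring how spin flip arises from M L · M L even though the one-loop flip probability vanishes.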
The final momentum integral
If no parameter is large or small we can in general not approximate the φ integrals as in e.g. LCF or LMF. However, just like in the nonlinear-Compton case [77], we can perform the last remaining momentum integral in terms of sine and cosine integrals for arbitrary pulse shape. In fact, in a cascade we would not be able to integrate the probability of Compton scattering over the longitudinal momentum before gluing together the steps, but each loop has an independent longitudinal momentum integral, which means that we can perform all the momentum integrals in the loop before gluing together. We find where and for Compton scattering we have R̂ C 10 = −Xk[S − ϕC] +kX ϕC − ϕ 2 (1 − ϕS) where Ci(ϕ) and si(ϕ) = Si(ϕ) − π/2 are cosine and sine integrals (see [78]) with argument ϕ = Θ/(2b 0 ).
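The sine and cosine integrals entering these expressions are available in SciPy; a minimal sketch with generic arguments (not the paper's ϕ = Θ/(2b 0 )):

```python
import numpy as np
from scipy.special import sici

phi = 2.0
Si, Ci = sici(phi)       # Si(x) = integral_0^x sin(t)/t dt; Ci(x) is the cosine integral
si = Si - np.pi / 2      # the shifted sine integral si(x) = Si(x) - pi/2
print(Si, Ci, si)

# Asymptotics: Si -> pi/2 (so si -> 0) and Ci -> 0 for large argument.
Si_large, Ci_large = sici(1e8)
print(Si_large - np.pi / 2, Ci_large)
```

Both asymptotic limits are reached with corrections of order cos(x)/x and sin(x)/x, consistent with the standard expansions in [78]-type references.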
Appendix D: Series expansions from the Mellin transform
A simple way to obtain the χ ≪ 1 and χ ≫ 1 expansions in LCF is to first calculate the Mellin transform [83,93] with respect to χ, defined by It turns out to be convenient to rescale the variable of the transform S → 2t. We first change variables in J(χ) from s 1 to ξ = (r/χ) 2/3 , r = (1/s 1 ) − 1. Then we change the order of integration, and first integrate over χ. This leads in general to a simpler ξ integral, which can also be performed explicitly. For these two integrals over χ and ξ to be convergent, one finds a condition on t of the form t 1 < Re t < t 2 , where t 1 and t 2 are two constants. For example, for J C 1 we find −1 < Re t < −1/6. The inverse is given by where the integration path γ starts at t 0 − i∞, ends at t 0 + i∞ and goes through the real axis in the interval t 1 < Re t < t 2 . For all terms we find that J̃ can be expressed explicitly in terms of Γ functions and csc(2πt) (which could also be written in terms of two Γ functions). For example, for J C 1 we have (D3) This means that it is simple to find the poles and the corresponding residues. All poles lie on the real axis, and we can deform the integration contour such that it encloses either t < t 1 counterclockwise, or t > t 2 clockwise; the small- and large-χ expansions are obtained from the first and second choice, respectively. In this way it is straightforward to obtain any number of terms in these expansions.
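The Mellin-transform machinery can be illustrated on a textbook pair rather than the paper's J(χ): the Mellin transform of e^(−x) is Γ(t), whose poles at t = −n reproduce the Taylor coefficients of e^(−x), mirroring how the poles of (D3) generate the χ expansions. A minimal numerical check of the forward transform:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def mellin(f, t):
    """Mellin transform: integral_0^inf x^(t-1) f(x) dx, for real t
    inside the convergence strip."""
    val, _ = quad(lambda x: x ** (t - 1) * f(x), 0, np.inf)
    return val

t = 2.5  # inside the strip Re t > 0 for f(x) = exp(-x)
print(mellin(lambda x: np.exp(-x), t), gamma(t))  # both equal Gamma(2.5)
```

Closing the inversion contour around the left-hand poles of Γ(t) then resums the residues into the small-x (here Taylor) series, which is the same contour-deformation step used above to obtain the small- and large-χ expansions.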
Appendix E: Solution in the LCF + χ ≪ 1 regime In this section we will calculate (107) directly without first turning it into a differential equation. Of course, in general we would also not be able to find an exact resummation (exact at the level of the gluing approach, that is), but we would have sums of sequences of Mueller matrices, so this calculation could still give some relevant insights. Let us first separate the total Mueller matrix into four parts. In this 4D space we have (Bê 0 ) 2 = Bê 0 · 1 ⊥ = Bê 0 · 1 = Bê 0 · (Êk −kÊ) = 0 , (E1) which means the Bê 0 part of m can only appear in the first step. In the 3D formulation, this means that a term with P 1 can only appear in the first step. Contrast this with the general case where one can have e.g. terms with (omitting all the arguments) (P 0 · P 10 . . . P 10 · P 1 )(P 0 · P 10 . . . P 10 · P 1 ), where a matrix multiplication can start at one step (with P 1 ) and then end (with P 0 ) at an intermediate step, and then a new sequence of matrix products can start at a later step (with a second factor of P 1 ). However, this is not possible here since, after integrating over all the momenta (which we can do independently at each step since we are in the low-χ regime where we can neglect radiation reaction), there is no P 0 (and no P ) term in the sum of the loop and Compton scattering. So, after a matrix product has started with a factor of P 1 or the initial Stokes vector n 0 it cannot end at any intermediate step, and since we have the same number of indices at each step (in contrast to a general cascade where the number of spin/polarization vectors increases with the production of particles) we find that P 1 can only appear in the first step. So, we have two different contributions: one with a factor of Bê 0 in the first step and the other with no factor of Bê 0 .
For the first contribution we have Bê 0 · N 0 ∝ B, which means that the rotation part, Êk −kÊ, drops out and we are left with a trivial matrix multiplication, where in the last line we have taken the limit σ → ∞ for an electron that has left the pulse. This part does not depend on the initial Stokes vector n 0 , and it is the only nontrivial contribution for an unpolarized initial particle, n 0 = 0. For the second contribution we first note that the BB part commutes with the rest of the Mueller matrix. For the rest of the Mueller matrix we write and then we choose the constant c such that m 2 r ∝ ÊÊ +kk, which gives c = −8/(9T ) and m r = Ω(−δÊÊ + δkk +Êk −kÊ), where δ = 1/(9T Ω) ∝ χ 2 ≪ 1. Hence, we have now separated the Mueller matrix (minus the Bê 0 part) into mutually commuting matrices, and, since we are assuming a linearly polarized field, BB, ÊÊ and kk also commute at different lightfront times, so the time ordering for these parts becomes unnecessary. So, if N 0 = {1, cB} with some constant −1 < c < 1, then However, if N 0 also has components along Ê or k, then we also need m r , and, at the moment, m r does not commute with itself at different lightfront times. So, let us for simplicity consider a constant field. Then, from m 2 r = −Ω(1 − δ 2 )(ÊÊ +kk) we see that the corresponding exponential separates into N p = cos ∆σΩ 1 − δ 2 (ÊÊ +kk) where we have used δ ≪ 1 (we have already neglected such small terms). This part depends on the initial n 0 , but for sufficiently long pulses we have N p → {1, 0}. If the initial particle is unpolarized, then we have N p = {1, 0} (even for an inhomogeneous field). These results for N u + N p agree of course with [70].
The Fungicide Chlorothalonil Changes the Amphibian Skin Microbiome: A Potential Factor Disrupting a Host Disease-Protective Trait
The skin microbiome is an important part of amphibian immune defenses and protects against pathogens such as the chytrid fungus Batrachochytrium dendrobatidis (Bd), which causes the skin disease chytridiomycosis. Alteration of the microbiome by anthropogenic factors, like pesticides, can impact this protective trait, disrupting its functionality. Chlorothalonil is a widely used fungicide that has been recognized as having an impact on amphibians, but so far, no studies have investigated its effects on amphibian microbial communities. In the present study, we used the amphibian Lithobates vibicarius from the montane forest of Costa Rica, which now appears to persist despite ongoing Bd-exposure, as an experimental model organism. We used 16S rRNA amplicon sequencing to investigate the effect of chlorothalonil on tadpoles’ skin microbiome. We found that exposure to chlorothalonil changes bacterial community composition, with more significant changes at a higher concentration. We also found that a larger number of bacteria were reduced on tadpoles’ skin when exposed to the higher concentration of chlorothalonil. We detected four presumed Bd-inhibitory bacteria being suppressed on tadpoles exposed to the fungicide. Our results suggest that exposure to a widely used fungicide could be impacting host-associated bacterial communities, potentially disrupting an amphibian protective trait against pathogens.
Introduction
Amphibians around the world are increasingly threatened by diseases caused by fungi, viruses, bacteria, and parasites [1,2]. In particular, the infectious skin disease chytridiomycosis is one of the main diseases impacting amphibian health [3,4]. This disease, caused by the chytrid fungal pathogens Batrachochytrium dendrobatidis (Bd) and B. salamandrivorans (Bsal), can cause mass die-offs in amphibian species [5,6]. The skin microbiome is considered one of the first lines of defense against pathogenic infections and can mediate disease susceptibility [7][8][9], suggesting it is an essential part of the amphibian's innate immune system. In amphibians, protection against pathogens has been linked to distinct characteristics of the skin bacterial communities, such as bacterial species richness, microbial community assemblage, and the presence and abundance of members in the bacterial communities capable of producing metabolites that suppress pathogen infections [10][11][12][13]. Our study species, Lithobates vibicarius, was formerly common throughout the mountain ranges of Tilarán, Cordillera Central, and Talamanca in Costa Rica and western Panama [37]. This species suffered population declines and disappearances across its entire range in the late 1990s and was considered possibly extinct. Six years after disappearing, this species was re-encountered at different sites in isolated populations in Costa Rica and is reproducing in high numbers [38]. The disease chytridiomycosis was one of the main drivers of declines for L. vibicarius, possibly in combination with habitat disturbance, pesticides, and climate change [38]. The skin bacterial community may provide protection against Bd in this species [39].
We hypothesized that exposure to chlorothalonil would change the skin bacterial communities of tadpoles. We predicted that tadpoles exposed to chlorothalonil would have different bacterial communities than those not exposed to chlorothalonil and that this change would be more emphasized at higher fungicide concentrations. Thus, we investigated changes in the relative abundance of members in the bacterial communities when exposed to varying fungicide concentrations. We identified the presence of putative Bd-inhibitory bacteria with differential abundance between treatments. The present study serves as a baseline to understand the effect of a toxic fungicide on the microbially mediated immune defenses of a montane tropical amphibian, as well as the potential risk faced by remaining populations of L. vibicarius in landscapes where chlorothalonil can potentially be present.
Tadpole Collection and Maintenance
In October 2017, we collected 80 tadpoles of L. vibicarius with similar developmental stage (Gosner stage 27–29), body size (mean ± standard deviation (SD): 41.5 ± 4.4 mm), and weight (0.71 ± 0.19 g) from a permanent lagoon in the Juan Castro Blanco National Park, Alajuela, Costa Rica. Due to the high abundance of tadpoles we observed in the lagoons of the Juan Castro Blanco National Park during our monitoring program, we concluded that the number of collected animals did not have an impact on this population. The study and ethical procedures were approved by the National Commission for the Biodiversity Management of Costa Rica (R-057-2019-OT-CONAGEBIO) and the Ministry of Environment and Energy of Costa Rica-National System of Conservation Areas (SINAC-ACAHN-PI-R-010-2017).
We captured the animals with nets and placed them in sterile plastic trays containing pond water. Animals were transported to a laboratory at the University of Costa Rica to carry out the chlorothalonil exposure experiment. All tadpoles were placed in an aquarium with filtered water and acclimatized to laboratory conditions for 8 days before starting the experiment. We consider an 8-day acclimation period adequate to acclimatize both host and microbiome to experimental conditions prior to any manipulation [40,41]. We know that the amphibian skin microbiome can change under captive conditions [42]; therefore, all tadpoles were kept under the same conditions to maintain the same initial microbiome baseline between treatments and reduce any potential effect on the results.
Chlorothalonil Exposure and Sampling
Following the acclimation period, we conducted an 8-day exposure experiment to investigate the effect of chlorothalonil on the skin microbiome of tadpoles. We established four treatments, which consisted of a negative control (filtered water), a solvent control (SC; methanol), and two concentrations of chlorothalonil: a low concentration (1 µg/L) and a high concentration (5 µg/L) (nominal concentrations). In Costa Rica, chlorothalonil has been detected in the environment (e.g., soil, air, and water), and concentrations above 11 µg/L have been reported in water bodies [18,26,43]. In addition, the concentrations of chlorothalonil used in this experiment are similar to demonstrably nonlethal concentrations used in another study using a species of the same genus, Lithobates taylori (E. Ballestero, unpublished data). Therefore, the exposure levels were chosen to reflect conditions that the species may experience in the wild. We prepared chlorothalonil exposure solutions by adding an aliquot of a stock solution to the exposure medium (filtered water). The stock solution (1061 µg/mL) was prepared from 97.5% pure chlorothalonil standard (Dr. Ehrenstorfer, Germany) dissolved in HPLC-grade 99.97% methanol (J.T. Baker, Phillipsburg, United States) and kept at 4 °C. Aliquots were taken using a microvolume syringe (SGE Analytical Science, Australia). The quantitative analyses of chlorothalonil were performed with solid-phase extraction (SPE) and gas chromatography-mass spectrometry (GC-MS) at the Instituto Regional de Estudios en Sustancias Tóxicas (IRET), Universidad Nacional, Costa Rica. The actual concentrations for the low-concentration and high-concentration treatments at the beginning of the experiment were 0.9 µg/L and 5.3 µg/L, respectively. We did not detect chlorothalonil in water samples at the end of the experiment (<0.1 µg/L).
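For orientation, the stock-solution aliquots implied by the reported numbers can be reproduced with a short calculation. This sketches the arithmetic only; it uses the nominal concentrations and the 800 mL exposure volume, and ignores the solvent-control matching and the measured-versus-nominal differences reported above:

```python
def aliquot_uL(target_ug_per_L, exposure_volume_L, stock_ug_per_mL):
    """Volume of stock solution (in microliters) needed to reach the
    nominal concentration in the exposure medium."""
    needed_ug = target_ug_per_L * exposure_volume_L
    return needed_ug / stock_ug_per_mL * 1000.0  # convert mL to uL

STOCK = 1061.0  # ug/mL, as reported for the chlorothalonil stock
VOLUME = 0.8    # L of filtered water per 1 L jar

print(round(aliquot_uL(1.0, VOLUME, STOCK), 2))  # low treatment:  ~0.75 uL
print(round(aliquot_uL(5.0, VOLUME, STOCK), 2))  # high treatment: ~3.77 uL
```

Such sub-5 µL volumes are consistent with the use of a microvolume syringe for the aliquots.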
In water, the half-life of chlorothalonil ranges from 0.18 to 8 days [44]. The amount of methanol added in the solvent control was the same as that used in the highest concentration of chlorothalonil. We measured temperature (°C), pH, and dissolved oxygen (mg/L) during the experiment (Supplemental Data Table S1).
Our experimental unit was one randomly chosen tadpole in a 1 L glass jar containing 800 mL of filtered water. We established 20 replicates per treatment (Table S2). Tadpoles were randomly assigned to one of the experimental treatments. We fed animals ad libitum with organic Spirulina on Days 0, 3, and 6.
On Day 8, we collected skin bacterial samples (skin swabs) from each animal. Swabbing consisted of moving a sterile rayon-tipped swab (Peel Pouch Dryswab™ Fine Tip) across the animal's skin. The swabbing protocol consisted of 12 strokes on each side (along body and tail), 12 strokes on the dorsal surface of the body, and 12 strokes on the mouth. We placed swabs in sterile vials with 300 µL of DNA/RNA Shield (Zymo Research). The tubes were transported to Ulm University, Germany, and stored at −20 °C until DNA extraction and sequencing. We used tricaine methanesulfonate (MS222) to euthanize all tadpoles at the end of the experiment.
DNA Extraction and 16S rRNA Gene Amplicon Sequencing
We extracted bacterial genomic DNA from swabs using the NucleoSpin Soil kit (Macherey-Nagel, Düren, Germany) following the manufacturer's protocol. We amplified the hypervariable V4 region of the 16S rRNA gene using the primers 515F (5′-GTGCCAGCMGCCGCGGTAA-3′) and 806R (5′-GGACTACHVGGGTWTCTAAT-3′). We followed the Fluidigm scheme (Access Array System for Illumina Sequencing Systems, Fluidigm Corporation), in which PCR and barcoding occur simultaneously. The PCR and barcoding (15 µL volume) were performed as described in Jiménez et al. [39]. Barcoded samples were purified using NucleoMag NGS Beads (Macherey-Nagel, Düren, Germany) and quantified with PicoGreen on a Tecan F200. Then, we pooled all samples to an equal amount of 12 ng of DNA and diluted the pool down to 6 nM. Finally, the pooled sample library was paired-end sequenced in a single run on an Illumina MiSeq platform at the Institute of Evolutionary Ecology and Conservation Genomics, Ulm University, Germany. Raw sequence data were deposited into the NCBI Repository, BioProject ID PRJNA703661.
Bioinformatics
The initial processing of the sequences was performed using QIIME 2 (version 2019.1) as described in Jiménez et al. [39]. For the dada2 analysis, we trimmed the first bases of each read to remove primers (-p-trim-left-f 23, -p-trim-left-r 20) and truncated forward and reverse reads to 200 bp due to decreasing average quality scores at the end of the sequences. We collapsed reads into amplicon sequence variants (ASVs) and assigned bacterial taxonomy using the Greengenes database (version 13_8) as reference (http://greengenes.lbl.gov; accessed on 30 November 2020). We removed sequences classified as chloroplast, mitochondria, archaea, eukaryota, and unclassified phylum. We built a phylogenetic tree of the bacterial ASVs for further diversity analyses using MAFFT [45] and FastTree 2 [46]. Then, we imported our data into the R environment version 3.6.3 (https://www.r-project.org/; accessed on 30 November 2020) for further processing of the sequences using the R package "phyloseq" [47]. We removed ASVs with fewer than 20 reads in the entire dataset and excluded samples with fewer than 9000 sequences. The resulting mean library size across individuals was 18,597 reads (range 9415-43,854). For alpha and beta diversity analyses, we rarefied the ASV table according to the sample with the lowest number of reads.
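The read-filtering and rarefaction steps described above can be sketched as follows (an illustrative Python stand-in for the phyloseq operations, not the authors' R code; the 20-read and 9000-read cutoffs are the paper's, everything else is our own):

```python
import numpy as np

def filter_asv_table(counts, min_asv_reads=20, min_sample_reads=9000):
    """Drop ASVs with fewer than min_asv_reads total reads, then drop
    samples with fewer than min_sample_reads; counts is samples x ASVs."""
    counts = counts[:, counts.sum(axis=0) >= min_asv_reads]
    return counts[counts.sum(axis=1) >= min_sample_reads, :]

def rarefy(counts, depth, rng=np.random.default_rng(0)):
    """Subsample each sample without replacement to a common depth
    (in the paper, the depth of the sample with the fewest reads)."""
    out = np.zeros_like(counts)
    for i, row in enumerate(counts):
        pool = np.repeat(np.arange(row.size), row)  # one entry per read
        picked = rng.choice(pool, size=depth, replace=False)
        out[i] = np.bincount(picked, minlength=row.size)
    return out
```

After rarefaction every sample has exactly `depth` reads, which puts alpha and beta diversity estimates on a common sampling footing.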
Statistical Analysis in R Environment
Because we used methanol as a carrier solvent, we first compared the negative control to the solvent control to detect statistical differences in alpha and beta diversity. We did not detect significant differences between negative and solvent controls for the three alpha diversity measures (ASV richness: p = 0.94, Shannon diversity index: p = 0.97, and PD: p = 0.95; Figure S1). However, we observed that beta diversity differed between negative and solvent controls based on the ASV presence-absence composition (unweighted UniFrac: R 2 = 0.05; p = 0.040) and the ASV abundance-weighted composition (Bray-Curtis: R 2 = 0.12; p = 0.001) (Figure S2). Thus, the solvent control was used as the basis of comparison in further analyses.
To investigate the effect of chlorothalonil treatments on the skin bacterial alpha diversity measures (ASV richness, Shannon diversity index, and phylogenetic diversity (PD)), we used Generalized Linear Models (GLMs) with Gaussian distribution. We log-transformed the alpha diversity measures prior to model fitting.
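A Gaussian GLM with a single treatment factor amounts to testing the between-treatment variance of the log-transformed diversity values against the within-treatment variance. A minimal sketch of that computation (our own, not the authors' R code) is:

```python
import numpy as np

def oneway_f_on_log(y, groups):
    """F statistic of a one-way Gaussian model (GLM with identity link)
    fitted to log-transformed alpha diversity values."""
    y = np.log(np.asarray(y, dtype=float))
    groups = np.asarray(groups)
    levels = np.unique(groups)
    grand = y.mean()
    ss_between = sum(
        (groups == g).sum() * (y[groups == g].mean() - grand) ** 2 for g in levels
    )
    ss_within = sum(
        ((y[groups == g] - y[groups == g].mean()) ** 2).sum() for g in levels
    )
    k, n = len(levels), len(y)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The F statistic is near zero when treatment groups share the same mean log-diversity and grows as group means separate; the p-value the paper reports comes from comparing this statistic to an F distribution.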
To examine the effect of chlorothalonil treatments on skin bacterial beta diversity, we calculated the unweighted UniFrac (based on ASV absence/presence) and Bray-Curtis dissimilarity (based on abundance pattern of ASV) metrics using the R package "phyloseq". We fitted permutational multivariate analyses of variance (PERMANOVAs) using the adonis function of the R package "vegan" [48] to statistically test the effect of fungicide treatments on both beta diversity metrics. We performed permutational pairwise post-hoc tests with a Bonferroni correction to evaluate statistical differences of bacterial beta diversity between chlorothalonil treatments and solvent control. Additionally, we quantified the extent of the difference between group centroids (treatments) as a measure of effect size by calculating Cohen's d and 95% CI with the R package "compute.es" [49]. We performed principal coordinate analysis (PCoA) to visualize the beta diversity distances between treatments.
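For illustration, the Bray-Curtis metric and the one-way PERMANOVA R² and permutation p-value used above can be reimplemented compactly (a simplified sketch of what vegan's adonis computes, not the actual package code):

```python
import numpy as np

def bray_curtis(X):
    """Pairwise Bray-Curtis dissimilarities for a samples x taxa count matrix."""
    n = X.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = np.abs(X[i] - X[j]).sum() / (X[i] + X[j]).sum()
    return D

def permanova(D, groups, n_perm=999, rng=np.random.default_rng(1)):
    """One-way PERMANOVA (Anderson 2001): returns R^2 and a permutation p-value."""
    groups = np.asarray(groups)
    n = len(groups)
    ss_total = (D[np.triu_indices(n, 1)] ** 2).sum() / n

    def ss_within(g):
        s = 0.0
        for lvl in np.unique(g):
            idx = np.flatnonzero(g == lvl)
            sub = D[np.ix_(idx, idx)]
            s += (sub[np.triu_indices(len(idx), 1)] ** 2).sum() / len(idx)
        return s

    r2 = 1.0 - ss_within(groups) / ss_total
    perm = [1.0 - ss_within(rng.permutation(groups)) / ss_total for _ in range(n_perm)]
    p = (1 + sum(r >= r2 for r in perm)) / (n_perm + 1)
    return r2, p
```

The p-value counts how often shuffled group labels explain at least as much of the distance structure as the observed labels, which is why well-separated treatment groups yield both a high R² and a small p.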
Then, to identify ASVs that were significantly suppressed or overabundant in the two fungicide treatments, we used a negative binomial model-based approach (exact binomial test generalized for overdispersed counts) using the R package "edgeR" [50]. We present only ASVs that differed significantly between fungicide treatments and solvent control (FDR-corrected p-values at p < 0.001). We used the unrarefied dataset and the Trimmed Mean of M-values (TMM) method for the normalization of samples. Additionally, we explored the presence of putative Bd-inhibitory bacteria being suppressed or overabundant from the results of the previous analysis. To identify the putative Bd-inhibitory ASVs, we queried our ASV sequences against a database of culturable anti-Bd bacteria identified from different amphibian species (Antifungal Isolates Database; [51]). We retained ASVs with a ≥99% sequence identity match to those in the mentioned database following the methods outlined by Muletz-Wolz et al. [52] with the software Geneious version 20.1.2.
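The ≥99% identity screen against the Antifungal Isolates Database can be illustrated with a toy matcher (a deliberate simplification: Geneious computes identity from pairwise alignments, whereas this sketch assumes pre-aligned, equal-length sequences):

```python
def percent_identity(a, b):
    """Ungapped percent identity between two equal-length sequences
    (a simplification of the alignment-based identity used in Geneious)."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

def is_putative_inhibitor(asv_seq, database_seqs, threshold=99.0):
    """Flag an ASV whose identity to any anti-Bd database sequence meets the cutoff."""
    return any(percent_identity(asv_seq, d) >= threshold for d in database_seqs)
```

On a 250 bp V4 amplicon, a 99% cutoff tolerates only two or three mismatches, which is why it is treated as evidence that the ASV likely represents the same strain-level taxon as a cultured Bd inhibitor.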
Chlorothalonil Disturbs the Skin Microbiome Beta Diversity
We did not detect a significant effect of chlorothalonil treatments on the three alpha diversity measures (ASV richness: p = 0.90, Shannon diversity index: p = 0.90, and PD: p = 0.33; Figure S1). The PERMANOVA models revealed a significant effect of treatments on the ASV presence-absence composition (unweighted UniFrac: R 2 = 0.09, p = 0.02, Figure 1a) and the ASV abundance-weighted composition (Bray-Curtis dissimilarity: R 2 = 0.09, p = 0.004, Figure 1b). Pairwise PERMANOVA tests indicated significant differences in the bacterial community composition between tadpoles kept in the solvent control and those in the high-concentration treatment, whereas communities in the solvent control and the low-concentration treatment were similar (Table 1).
Pairwise PERMANOVA tests and Cohen's d effect sizes showed that the R 2 and effect sizes between tadpoles of the solvent control and the high-concentration treatment were higher than between tadpoles of the solvent control and the low-concentration treatment, indicating a greater difference in the higher concentration of fungicide (Table 1). Furthermore, skin bacterial communities of tadpoles exposed to a high concentration of fungicide clustered more distantly from the solvent control treatment on the PCoA than those exposed to a low concentration of fungicide (Figure 1a,b).
Chlorothalonil Shifts Relative Abundance of Bacterial Strains
We detected that ASVs differed significantly between fungicide treatments and SC (Figure 2a,b). We found three ASVs suppressed when tadpoles were exposed to a low concentration of chlorothalonil, and 14 ASVs showed an increased abundance (Figure 2a). Tadpoles exposed to a high concentration of chlorothalonil showed 13 suppressed ASVs and seven overrepresented (Figure 2b). Tadpoles exposed to a high concentration had a higher number of bacterial taxa with reduced abundance than those exposed to a low concentration of fungicide. In both treatments, a low and high concentration of chlorothalonil, tadpoles showed a significant decrease in abundance for ASVs of the genera Sulfuricurvum and Janthinobacterium, whereas an abundance increment was observed for ASVs of the genera Nevskia, Flavobacterium, and Runella. We identified four putative Bd-inhibitory ASVs with significantly lower abundance in tadpoles exposed to chlorothalonil, one in tadpoles exposed to a low concentration and four in those exposed to a high concentration (Figure 2a,b). These putative Bd-inhibitory ASVs were assigned to the family Comamonadaceae and the genera Janthinobacterium, Acinetobacter, and Novosphingobium.
Discussion
The present study highlights the impact of chlorothalonil, a widely used fungicide, on the immunologically important skin microbial community of a threatened frog that persists despite ongoing exposure to Bd. We provide evidence that exposure to chlorothalonil changes the skin bacterial community of tadpoles of L. vibicarius and, even more importantly, that chlorothalonil can suppress putative Bd-inhibitory bacterial strains at high concentrations. These results raise new concerns and hypotheses that need to be addressed to have a broader understanding of the impact of fungicides on the protective relationship between skin microbiomes and amphibian hosts.
Our results show that the skin bacterial community differed when tadpoles were exposed to higher concentrations of chlorothalonil. The effect sizes on beta diversity indicated that differences in the bacterial communities of tadpoles increased as animals were exposed to higher concentrations of the fungicide. These findings indicate a change, and potential disruption, of skin bacterial communities as the concentration of chlorothalonil increases. Given that the microbiome can play a role in host health maintenance and that disruption of the natural range of microbial communities may lead to an increased incidence of disease in their hosts [15,53,54], our results suggest that exposure to chlorothalonil may increase susceptibility to diseases. Further work is needed to corroborate this hypothesis. A previous study found an alteration of the skin bacterial communities of tadpoles of Blanchard's cricket frog (Acris blanchardi, family Hylidae) exposed to an herbicide [55]. Together, these studies suggest that changes in bacterial communities occur with distinct types of pesticides and highlight the need to investigate how pesticide mixtures (for example, a mixture of fungicides and herbicides) may interact to impact the function of microbial communities of amphibians.
Previous studies indicate that microbial communities are among the first taxa to respond to chemical exposure [56][57][58]. Chlorothalonil could be interacting directly and/or indirectly with the skin bacteria of L. vibicarius tadpoles, altering their cutaneous bacterial communities. Microorganisms are functionally or nutritionally connected to each other, and changes in one component of a microbial community (e.g., the fungal community (mycobiome)) can influence the structure of the entire community [59][60][61]. Therefore, the fungicide chlorothalonil could be changing the skin mycobiome and thereby driving the observed changes in the bacterial communities. In this study, we did not evaluate the skin mycobiome, so further research investigating interactions between fungal and bacterial communities after chlorothalonil exposure will provide broader insight into the impact of this fungicide on host-associated microbial communities. In addition, chlorothalonil could be affecting tadpoles' physiology and endocrine and immune systems, as previously observed in different amphibian species [34], indirectly altering the host bacterial communities through different mechanisms. Here, we have not evaluated the host physiological mechanisms that could be changing these communities. However, we suspect that chlorothalonil could increase stress in exposed individuals, as observed in tadpoles of the Cuban tree frog Osteopilus septentrionalis (family Hylidae) [34], thus affecting the tadpoles' skin bacterial community and potentially weakening the host's immune defenses. It is also possible that chlorothalonil may have interfered with the host skin peptide secretions that act as a selective force controlling which microbes can grow on each host's skin. The properties of the skin peptides have been shown to be affected by exposure to the insecticide carbaryl in foothill yellow-legged frogs (Rana boylii, family Ranidae) [62].
These potential alterations to the skin properties by chlorothalonil may have disrupted the appropriate conditions for some bacteria to grow, consequently suppressing their abundance and altering host microbial communities. Further research is needed to investigate these potential mechanisms and will provide a better understanding of the impact of chlorothalonil on an amphibian immune defense trait.
We found different patterns in the relative abundances of bacterial taxa across chlorothalonil treatments. We also observed the suppression of some putative Bd-inhibitory bacterial strains (for example, strains of the genera Janthinobacterium and Acinetobacter) when exposed to chlorothalonil, particularly in the high-concentration treatment. These protective bacteria are known to be capable of producing metabolites that suppress Bd infections [10,11,63]. Together, this suggests that changes in bacterial abundances from chlorothalonil exposure could be disrupting the adequate production of defensive bacterial metabolites that facilitate disease resistance. Further, the suppression of some Bd-inhibitory bacteria provides evidence that chlorothalonil can interfere with these protective taxa, highlighting a potential risk of increased host susceptibility to chytrid infections in early and/or later life stages. This knowledge is relevant because a reduction in bacterial abundances could represent the loss of key bacterial species and functions linked to host health. Previous evidence suggests that amphibian larvae exposed to chlorothalonil have higher Bd intensity and greater Bd-induced mortality when challenged with Bd after metamorphosis [36]. Therefore, exposure to chlorothalonil in early life might alter the normal bacterial community that establishes a healthy and Bd-protective skin microbiome after metamorphosis, making exposed animals vulnerable to future Bd infections. Based on this information, it would be interesting to investigate whether this early-life disruption of bacterial abundances has lasting impacts on host Bd resistance later in life. It is also possible that opportunistic microbes and/or parasites tolerant to chlorothalonil increase their abundance, altering the natural microbial structure.
Addressing this gap in our knowledge will allow a better understanding of the development of the immune system and will provide information that will help prevent early disruption of host microbiomes to confer better protection against diseases, such as chytridiomycosis.
We observed some individuals in the high-concentration treatment showing reddish skin, suggesting some type of external dermatitis probably attributable to the fungicide exposure. This observation is worth highlighting because skin irritation has been linked to chlorothalonil exposure in animals [64]. Although it is merely an observation made during our experiment, it warrants further investigation, because chlorothalonil exposure has also been associated with skin irritation and contact dermatitis in humans [65][66][67]. Atopic dermatitis can also allow the colonization of certain types of bacteria that trigger immune responses, such as inflammation, that can worsen symptoms and jeopardize host health [68].
Our understanding of how pesticides influence the amphibian microbiome is still in its infancy [16]. Our study reveals the effect of the exposure to environmentally relevant concentrations of the fungicide chlorothalonil on the skin microbiomes of amphibians in the early-life stage, which may, in turn, impact the stability of host-microbe interactions and microbiome-fitness correlations. Further studies are desperately needed, so we can fully understand the interaction between pesticides, the disease-causing organisms, and how these effects scale up to play a role in amphibian disease dynamics.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/applmicrobiol1010004/s1, Figure S1: Alpha diversity metrics of skin bacterial communities of tadpoles exposed to chlorothalonil, Figure S2: Principal coordinate analysis (PCoA) of skin bacterial communities of tadpoles kept in water and solvent controls, Table S1: Physicochemical parameters measured in water and solvent controls, and chlorothalonil treatments.
Informed Consent Statement: Not applicable.
Data Availability Statement: All raw sequence data were deposited into NCBI Repository, BioProject ID PRJNA703661.
An optimization approach to determining the power of active filters
The coal grading plants in Vietnam extensively apply asynchronous motors with frequency control. They consume reactive power and non-sinusoidal current from the supply network. The non-sinusoidal currents and voltages lead to additional energy losses in electrical equipment, reduce its service life, and cause economic damage. Because of the low load power factor, companies with such plants pay penalties to power supply utilities. These problems can be solved by active filters. The paper suggests an optimization algorithm to calculate the power of an active filter that provides a load power factor corresponding to the normative documents and power quality indices corresponding to the standard requirements. The algorithm is used to calculate the power of the active filter for the coal grading plant owned by the company "Kua Ong-Vinakomin".
Introduction
Coal mining is one of the most important economic industries of Vietnam. Mines and quarries have coal grading plants. The manufacturing equipment of the plants is driven by asynchronous motors, primarily with frequency control. They are nonlinear loads for power supply systems and distort power quality. The voltage harmonic factors exceed the established standards [1]. Voltages and currents contain both harmonics and interharmonics. The load power factors are lower than the value specified by the normative documents [2].
The paper analyzes the power supply system of the coal grading plant, presents the results of the electric power measurements, formulates an optimization problem, describes an algorithm to determine the power of the active filter, and presents the calculated active filter power for the coal grading plant of the company "Kua Ong-Vinakomin".
Characteristic of the power supply system and loads of the coal grading plant
Coal is mined in the quarry and transported to the warehouse of the coal grading plant. Fig. 1 presents a scheme of the power supply of the plant. Electric power from the 22 kV substation buses of the power supply utility (node 6643) is supplied to the 0.4 kV network of the power supply system of the plant (node 4143) by a 1000 kVA step-down transformer that belongs to the coal grading plant. The distance between nodes 4143 and 45038 is 60 meters. The total length of the 0.4 kV electric network exceeds 12 km. The plant has two coal grading shops (shop No. 1 and shop No. 2), a shop of power and water supply, and a coal warehouse. In the warehouse, an excavator loads coal onto the conveyer, which delivers coal to the shops for grading coal pieces by size. The manufacturing equipment of the shops is driven by 58 asynchronous motors with capacities from 4 to 185 kW, primarily with frequency control. To assess the quality of voltage and current in the 0.4 kV network, the following indices and norms for their values were established in [1]: deviation of the voltage value δU ≤ ±5.0%; total harmonic distortion KU ≤ 6.5%; n-th harmonic factor of voltage KU(n) ≤ 3%; n-th harmonic factor of current KI(n) ≤ 12%. According to [2], the load power factor (cosφ) at the connection node of the plant to the supply network should comply with the condition cosφ ≥ 0.85. If this condition is not fulfilled, the company with the plant pays penalties to the power supply utility. The value of cosφ can be low because of large reactive power consumption by the electric motors of the plant, active power losses during transmission over the electric network, and losses caused by harmonics and interharmonics. The power quality and the value of cosφ at node 4143 were tested.
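The power quality indices defined above can be computed directly from a measured harmonic spectrum. A small sketch (our own, with illustrative numbers rather than the plant's measurements) is:

```python
import math

def total_harmonic_distortion(u1, higher_harmonics):
    """KU = 100 * sqrt(sum of U_n^2 for n >= 2) / U_1, in percent."""
    return 100.0 * math.sqrt(sum(u * u for u in higher_harmonics)) / u1

def harmonic_factor(un, u1):
    """KU(n) or KI(n): the n-th harmonic amplitude as a percentage of the fundamental."""
    return 100.0 * un / u1

def power_factor(p, q):
    """cos(phi) from active power P and reactive power Q (same units)."""
    return p / math.hypot(p, q)
```

For example, a 230 V fundamental with 10 V and 5 V higher harmonics gives KU ≈ 4.9%, within the 6.5% norm, while a single 6.9 V harmonic sits exactly at the 3% per-harmonic limit.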
Results of the electrical energy tests
The tests included measurements of the voltage and current quality indices and cosφ with the device PQ-Box150 [8] for 24 hours with a measurement interval of 1 second. Table 1 presents the measured values of δU, which correspond to the requirements of [1]. The table also shows the measured values of KU and their normative values from [1]. The measured values of KU exceed the norm by more than a factor of two; they are shown in bold type. Table 2 presents the measured and normative values of KU(n) and KI(n) for the harmonics that most frequently exceed the norms [1]. The measured values of KI(n) are not higher than the normative value. The measured values of KU(m) for interharmonics and the values of the interharmonic currents I(m)max (m is the number of the interharmonic) are given in Table 3. Normative values for them are not determined. The measured values of cosφ and its normative value from [2] are shown in Table 4. At phase B, the value of cosφ is lower than the norm. The problems caused by harmonics, interharmonics and low cosφ can be resolved using an active filter connected to the network in parallel with the nonlinear load.
Optimization problem to determine the power of the active filter
The power of the active filter is determined by the reactive power that it must generate to compensate for the reactive power of the load and by the apparent power required to eliminate harmonics and interharmonics of the current at the point of its connection to the network. This can be achieved by solving an optimization problem whose objective function is the minimum of the total active power losses in the power system network after installation of the active filter, summed over the fundamental frequency, the harmonics up to the highest harmonic number N, and the interharmonics up to the highest interharmonic number M. In this case, constraints (2) and (3) on the load power factor and the power quality indices must be met at every network node i. The optimization problem consists of three subproblems: 1) calculation of the apparent power of the active filter to provide the normative cosφ; 2) calculation of the apparent power of the active filter to eliminate current harmonics; 3) calculation of the apparent power of the active filter to eliminate current interharmonics.
The first subproblem is formulated as follows: minimize the total active power losses in the network at the fundamental frequency while fulfilling constraints (2) and (3). If the node for active filter installation is not assigned, all nodes of the network must be considered as candidates for the active filter installation. The algorithm must also consider the transformer's capabilities of voltage control on the lower side with the connected load. The suggested algorithm was developed on the basis of the software "SDO", which calculates electric network modes at the fundamental frequency [9,10]. The block diagram of the algorithm is presented in Fig. 2, where "d" is abbreviated from "desired". The value of tgφc is determined from cosφc, and the value of tgφd is calculated using constraint (2). Since the admissible value of cosφ lies within the interval cosφmin ≤ cosφ ≤ cosφmax, the reactive power of the active filter also lies within a corresponding interval, and the required phase angle φd should be calculated using cosφmin and cosφmax. Block 9: calculation of the total active power losses in the network on the basis of results obtained by the software "SDO".
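The reactive power the filter must supply in the first subproblem follows from the standard compensation relation Qf = P(tgφc − tgφd). A minimal sketch (our own; the quadratic combination of the three subproblem powers into a total apparent power is our assumption, not a formula stated in the paper) is:

```python
import math

def filter_reactive_power(p_load, cos_phi_actual, cos_phi_desired):
    """Reactive power the active filter must generate to raise the load
    power factor: Qf = P * (tg(phi_c) - tg(phi_d))."""
    tg = lambda cos_phi: math.tan(math.acos(cos_phi))
    return p_load * (tg(cos_phi_actual) - tg(cos_phi_desired))

def filter_apparent_power(q_f, s_harmonics, s_interharmonics):
    """Total apparent power of the filter, assuming the reactive, harmonic
    and interharmonic contributions combine quadratically (our assumption)."""
    return math.sqrt(q_f ** 2 + s_harmonics ** 2 + s_interharmonics ** 2)
```

For a 100 kW load, raising cosφ from 0.80 to the normative 0.85 requires roughly 13 kvar of compensation from the filter.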
Block 10: comparison of the active power losses of the preceding calculation step with the losses of the current step to determine the minimum losses. The management company of the coal grading plant has chosen node 45038 for installation of an active filter in the power supply system (Fig. 1). The active filter needed to eliminate current harmonics must be no less than 17 kVA in each phase (Table 7). Table 8 presents the values of the phase powers of interharmonics and the total powers of the three phases, calculated from the measured parameters. The apparent power of interharmonics over the three phases amounts to 313.7 VA.
Conclusions
The measurements have shown that the load current contains harmonics and interharmonics, the indices KU(n) and KU exceed the normative values, and cosφ is lower than the norm.
The total harmonic distortion reduction and the load power factor increase can be achieved by installing an active filter. The power of the active filter was determined by the developed optimization algorithm using the results of measurements of the network mode parameters.
The power of the active filter was determined for the coal grading plant of the company "Kua Ong-Vinakomin".
Entanglement asymmetry in CFT and its relation to non-topological defects
The entanglement asymmetry is an information based observable that quantifies the degree of symmetry breaking in a region of an extended quantum system. We investigate this measure in the ground state of one dimensional critical systems described by a CFT. Employing the correspondence between global symmetries and defects, the analysis of the entanglement asymmetry can be formulated in terms of partition functions on Riemann surfaces with multiple non-topological defect lines inserted at their branch cuts. For large subsystems, these partition functions are determined by the scaling dimension of the defects. This leads to our first main observation: at criticality, the entanglement asymmetry acquires a subleading contribution scaling as $\log \ell / \ell$ for large subsystem length $\ell$. Then, as an illustrative example, we consider the XY spin chain, which has a critical line described by the massless Majorana fermion theory and explicitly breaks the $U(1)$ symmetry associated with rotations about the $z$-axis. In this situation the corresponding defect is marginal. Leveraging conformal invariance, we relate the scaling dimension of these defects to the ground state energy of the massless Majorana fermion on a circle with equally-spaced point defects. We exploit this mapping to derive our second main result: the exact expression for the scaling dimension associated with $n$ defects of arbitrary strengths. Our result generalizes a known formula for the $n=1$ case derived in several previous works. We then use this exact scaling dimension to derive our third main result: the exact prefactor of the $\log \ell/\ell$ term in the asymmetry of the critical XY chain.
Introduction
Symmetries play a pivotal role in the foundations of modern physics. Their presence implies conservation laws that have deep consequences for the behavior of physical systems and enormously facilitate the resolution of many problems, which would otherwise remain open. As crucial as the existence of symmetries is their breaking, both explicit and spontaneous. Such breaking is responsible for a plethora of very important phenomena across different branches of physics. A relevant aspect that has received little attention so far is the quantification of how much a global symmetry is broken. Local order parameters have usually been employed to discern whether or not a quantum state respects a symmetry. However, they present the disadvantage that, while a non-zero value manifests that the symmetry is broken, the converse is not always true. Furthermore, in extended quantum systems, the question of measuring symmetry breaking is intrinsically tied to the choice of a specific subsystem. In fact, there may exist long-range correlations between the parts of the system that do not respect the symmetry and are not taken into account by any local order parameter.
In this context, an appealing idea is quantifying symmetry breaking by leveraging tools from the theory of entanglement, as they capture non-local correlations. A quantity based on the entanglement entropy and dubbed entanglement asymmetry has been recently introduced as a measure of how much a symmetry is broken in a subsystem. The entanglement asymmetry has proven to be a powerful instrument to identify novel physical phenomena. It has been applied to investigate the dynamical restoration of a U(1) symmetry from an initial state that breaks it after a quench to a Hamiltonian that respects the symmetry [1]. Surprisingly, the entanglement asymmetry shows that the restoration of the symmetry may occur earlier for those states that initially break it more, a quantum version of the yet unexplained Mpemba effect (the more a system is out of equilibrium, the faster it relaxes). This quantum Mpemba effect has been observed experimentally by measuring the entanglement asymmetry in an ion trap [2], and the microscopic mechanism and the conditions under which it occurs are now well understood for free and interacting integrable systems [3][4][5], although they remain elusive for non-integrable ones. In addition, the entanglement asymmetry has been applied to examine the dynamical restoration of a spontaneously broken Z2 symmetry [6] and the relaxation to a non-Abelian generalized Gibbs ensemble in the exotic case that the symmetry is not restored [7]. It has also been generalized to study the quench dynamics of kinks [8]. Beyond non-equilibrium physics, the entanglement asymmetry has been employed to understand the implications of quantum unitarity for broken symmetries during black hole evaporation [9].
A significant point in the characterization of the entanglement asymmetry is its asymptotic behavior with the size of the subsystem considered. As this observable is based on the entanglement entropy, one may wonder whether it inherits some of its properties. For example, the entanglement entropy follows an area law in the ground state of one-dimensional systems with a mass gap. In contrast, it grows logarithmically with the subsystem size when the mass gap vanishes; this logarithmic growth is proportional to the central charge of the conformal field theory (CFT) that describes the low-energy physics of the critical point [10-12]. Conversely, the entanglement asymmetry exhibits a fundamentally distinct behavior. It has been shown in Ref. [13] that, for matrix product states, the entanglement asymmetry for a generic compact Lie group grows at leading order logarithmically with the subsystem size, with a coefficient proportional to the dimension of the Lie group, while, for finite discrete groups, the entanglement asymmetry satisfies an area law, saturating to a value fixed by the cardinality of the group. Similar results have been obtained in the ground state of the XY spin chain when studying the particle-number U(1) symmetry that this model explicitly breaks [4] and the spin-flip Z_2 symmetry, spontaneously broken in the ferromagnetic phase [6, 14].
In this paper, we examine the implications of quantum criticality for the entanglement asymmetry, which remain largely unexplored, using CFT methods. Only Ref. [15] reports calculations of the entanglement asymmetry in certain particular excited states of the massless compact boson. To this end, we develop a general scheme to compute the entanglement asymmetry in (1+1)-dimensional quantum field theories in terms of the charged moments of the subsystem's reduced density matrix. Employing the path integral formulation, the charged moments can be identified with the partition functions of the theory on Riemann surfaces with defect lines inserted along their branch cuts. These defect lines are associated with the elements of the symmetry group under analysis [16, 17]. A symmetry is considered broken when the associated defects are not topological, and any continuous deformation of these defects leads to a change in the partition function. Therefore, within this framework, the entanglement asymmetry can be naturally interpreted as a measure of how much the defects fail to be topological. We apply this approach to determine the entanglement asymmetry in the ground state of the XY spin chain at the Ising critical line for the U(1) group of spin rotations around the transverse direction. After fermionizing it through a Jordan-Wigner transformation, the scaling limit of this model is described by the massless Majorana fermion theory, and the defect lines corresponding to this group are marginal. We then exploit conformal invariance to map the Riemann surfaces to a single cylinder with defect lines parallel to its axis. In this setup, the calculation of the partition functions for large subsystems boils down to computing the ground state energy of the massless Majorana fermion on a circle with equally spaced marginal point defects. The spectrum of this theory has been studied on the lattice in Refs. [18, 19]. Here we revisit this problem and systematically diagonalize its Hamiltonian for an
arbitrary number of equally spaced point defects of different strengths. The study of defects in the massless Majorana fermion and Ising CFTs has a long history, see e.g. [16, 20-31]. Partition functions on Riemann surfaces with (topological and non-topological) defect lines also arise in the analysis of the entanglement across inhomogeneities, interfaces, or junctions and after measurements [32-46]; in particular, those with topological defect lines appear in the symmetry resolution of entanglement measures, which has recently been investigated extensively.
The paper is organized as follows. In Sec. 2, we review the relation between symmetries and defects in (1+1)-dimensional quantum field theories, we introduce the entanglement asymmetry, and we show how to compute it from the partition function on a Riemann surface with defect lines. We also derive the asymptotic behavior of the entanglement asymmetry for a generic compact Lie group in the ground state of a one-dimensional critical system. In the rest of the sections, we focus on the critical XY spin chain and the associated CFT, the massless Majorana fermion theory. In Sec. 3, we introduce these systems and review the previously known results for the entanglement asymmetry. In Sec. 4, we calculate the partition function of the Majorana CFT on the Riemann surfaces that enter the calculation of the entanglement asymmetry. In particular, by conformal invariance, these partition functions are given by the ground state energy of a massless Majorana fermion with evenly spaced point defects. We carefully diagonalize its Hamiltonian for an arbitrary number of defects with different strengths. In Sec. 5, we apply these results to obtain the entanglement asymmetry of the critical XY spin chain, checking them against exact numerical computations on the lattice. Finally, in Sec. 6, we draw our conclusions and consider future prospects. We also include several appendices where we discuss in more detail some technical points of the main text.
Symmetries, topological defects, and entanglement asymmetry
In this section, we briefly review the identification between symmetries and topological defects.
Then we introduce the Rényi entanglement asymmetry as a quantifier of symmetry breaking and we interpret it in terms of defects. With simple scaling arguments, we derive some general results for the asymptotic behavior of the Rényi entanglement asymmetry in the ground state of a critical one-dimensional quantum system in the thermodynamic limit.

Figure 1: Each element g of a group G acts on the Hilbert space of an extended quantum system as a unitary operator U_{Σ_t,g} defined along a line Σ_t at a fixed time t. If G is a symmetry of the theory, then any continuous transformation of Σ_t, as the ones performed in the figure, leaves invariant the partition function with insertions of these operators. We indicate this by the symbol = between the three diagrams. When two operators U_{Σ_t,g} and U_{Σ_t,g'} overlap, as in the right diagram, they can be fused according to the composition rule U_{Σ_t,g} U_{Σ_t,g'} = U_{Σ_t,gg'}.
Symmetries and topological defects
Global symmetries in spatially extended quantum systems are realized through extended operators that form a unitary representation of the symmetry group. In fact, if we consider a generic (1+1)-dimensional quantum field theory whose spacetime is a flat surface M, then the action of an element g of the group G (either discrete or continuous) is implemented in its Hilbert space H by a unitary operator U_{Σ_t,g} that has support on a spatial line Σ_t ⊂ M at a fixed time t. A familiar instance is the case of a U(1) symmetry. The Noether theorem ensures the existence of a conserved current j^µ. Therefore, the associated charge at Σ_t is Q_{Σ_t} = ∫_{Σ_t} dx j^0(x) and the group is represented by the operators U_{Σ_t,α} = exp[iα Q_{Σ_t}], with α ∈ [0, 2π).
The extended operators U_{Σ_t,g} representing symmetries possess the crucial property of being topological. This means that continuous deformations of Σ_t do not affect any expectation value that contains the insertion of an operator U_{Σ_t,g}. For example, since a symmetry operator commutes with the Hamiltonian of the theory, it does not evolve in the Heisenberg picture and then U_{Σ_t,g} = U_{Σ_{t'},g}, as depicted in the first equality of Fig. 1. When the supports of two extended operators U_{Σ_t,g}, U_{Σ_t,g'} coincide, the operators fuse according to the standard composition rule U_{Σ_t,g} U_{Σ_t,g'} = U_{Σ_t,gg'}, as we illustrate in the second equality of Fig. 1.
The transformation of a field ϕ of the theory under the group G is described by a matrix R_g such that

ϕ → R_g ϕ.   (1)

Therefore, within the path integral formalism, the insertion of an operator U_{Σ_t,g} in an expectation value is equivalent to performing a cut along the line Σ_t and imposing on the fields the gluing condition

ϕ(x^+) = R_g ϕ(x^−),   (2)

where ϕ(x^±) denote the field ϕ(x) at each side of the cut, as we indicate in Fig. 2. The composition property U_{Σ_t,g} U_{Σ_t,g'} = U_{Σ_t,gg'} can then be understood as the fusion of two cuts with gluing conditions R_g and R_{g'} into a cut with gluing condition R_g R_{g'} = R_{gg'}. In Euclidean spacetime, U_{Σ,g} need not be defined along a line Σ_t orthogonal to the time direction, but can have support on any curve Σ on the surface M.

Figure 2: The insertion of an extended operator U_{Σ,g} associated with the element g of a group G and with support on the line Σ corresponds, in the path integral approach, to a defect line along Σ with the gluing condition (2) for the field ϕ(x) at each side of the defect.

Due to the previous considerations, the extended operators U_{Σ,g} are
commonly referred to as defects, and when they enforce symmetries, they are topological defects [17]. A more detailed introduction to the role of topological operators in quantum systems can be found in, e.g., the recent review [70].
The question of whether a system is symmetric under a certain group can thus be reformulated as asking whether the defects associated with the symmetry are topological. In this paper, we are interested in quantifying the extent to which a symmetry is broken or, in other words, in measuring how much the corresponding defects fail to be topological. This can be done with the entanglement asymmetry, which we now introduce.
Definition
Let us take an extended quantum system in a state described by the density matrix ρ. We consider a spatial bipartition Σ = A ∪ Ā in which A consists of a single connected region, such that the total Hilbert space H factorizes into H = H_A ⊗ H_Ā. We assume that the extended operators that represent the group G decompose accordingly as U_{Σ,g} = U_{A,g} ⊗ U_{Ā,g}. The state of subsystem A is given by the reduced density matrix ρ_A = tr_Ā ρ, obtained by tracing out the degrees of freedom in the region Ā. Under an element of the group G, it transforms as ρ_A → U_{A,g} ρ_A U_{A,g}^†. Therefore, the state ρ_A is symmetric if [ρ_A, U_{A,g}] = 0 for all g ∈ G.
To define the entanglement asymmetry, we introduce the symmetrization of ρ_A as the average over G of the transformed density matrix U_{A,g} ρ_A U_{A,g}^†; that is, if G is a compact Lie group,

ρ_{A,G} = (1/vol G) ∫_G dg U_{A,g} ρ_A U_{A,g}^†,   (3)

where dg is its Haar measure and vol G its volume. An analogous formula can be written for a finite discrete group G of cardinality |G| by replacing the Haar integral with a sum over its elements. To lighten the discussion, we focus on compact Lie groups and refer the reader to Refs. [6, 13, 14], where the entanglement asymmetry has been examined for discrete groups. The density matrix ρ_{A,G} is by construction symmetric under G and has trace one. Note that ρ_A is symmetric if and only if ρ_A = ρ_{A,G}. Then the entanglement asymmetry is the relative entropy between ρ_A and ρ_{A,G} [1],

ΔS_A = tr[ρ_A (log ρ_A − log ρ_{A,G})].   (4)

Given the form of ρ_{A,G}, and applying the cyclic property of the trace, ΔS_A can be rewritten as

ΔS_A = S(ρ_{A,G}) − S(ρ_A),   (5)

where S(ρ) is the von Neumann entropy of ρ, S(ρ) = −tr(ρ log ρ). The entanglement asymmetry satisfies two essential properties as a measure of symmetry breaking in the subsystem A: it is non-negative, ΔS_A ≥ 0, and it vanishes if and only if A is in a symmetric state, i.e. ρ_A = ρ_{A,G} [71, 72].

Figure 3: The two-sheet Riemann surface M_2 with a defect line along the branch cut of each sheet. The defects are associated respectively with the group elements g_1 and g_2. The quotient (12) of the partition functions on this surface with and without the line defects gives the normalized charged moment Z_2(g), defined in Eq. (9). The Dirac delta in Eq. (8) will set g_2 = g_1^{−1}.
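As a concrete illustration of these definitions, the symmetrized state of Eq. (3) and the entropy difference of Eq. (5) can be evaluated numerically for a single qubit and the U(1) group generated by a diagonal charge. This is a minimal sketch (our own toy example; the discretized Haar average and all names are assumptions, not taken from the paper):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy S(rho) = -tr(rho log rho)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def symmetrize(rho, charges, steps=360):
    """Discretized Haar average over U(1): average of U rho U^dag over
    U = exp(i alpha Q), Q = diag(charges), with integer charges."""
    acc = np.zeros_like(rho, dtype=complex)
    for a in np.linspace(0.0, 2 * np.pi, steps, endpoint=False):
        U = np.diag(np.exp(1j * a * np.asarray(charges, dtype=float)))
        acc += U @ rho @ U.conj().T
    return acc / steps

def asymmetry(rho, charges):
    """Entanglement asymmetry Delta S = S(rho_G) - S(rho) >= 0."""
    return entropy(symmetrize(rho, charges)) - entropy(rho)

# A state that breaks the symmetry: |+><+| with |+> = (|0> + |1>)/sqrt(2)
plus = np.full((2, 2), 0.5, dtype=complex)
# A symmetric (diagonal) state commutes with every exp(i alpha Q)
diag = np.diag([0.7, 0.3]).astype(complex)
```

For |+⟩⟨+| the group average dephases the state to the maximally mixed one, giving ΔS_A = log 2, while the diagonal state gives exactly zero, illustrating the faithfulness property quoted above.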
In general, the direct calculation of the entanglement asymmetry is complicated due to the presence of the logarithm in the von Neumann entropy. Alternatively, a much simpler indirect way of computing it is via the replica trick [10-12]. By replacing in Eq. (5) the von Neumann entropy with the Rényi entropy, S^(n)(ρ) = (1/(1−n)) log tr ρ^n, we introduce the Rényi entanglement asymmetry

ΔS_A^(n) = S^(n)(ρ_{A,G}) − S^(n)(ρ_A).   (6)

Observe that the entanglement asymmetry (5) is recovered in the limit n → 1, lim_{n→1} ΔS_A^(n) = ΔS_A. The advantage of the Rényi entanglement asymmetry is that, for integer n, it can be expressed in terms of charged partition functions. If we plug the definition of ρ_{A,G} into Eq. (6), we obtain

tr(ρ_{A,G}^n) = (1/(vol G)^n) ∫_{G^n} dg tr(ρ_A U_{A,g_1} ρ_A U_{A,g_2} ⋯ ρ_A U_{A,g_n}),   (7)

where G^n = G × ⋯ × G (n times) and g stands for the n-tuple g = (g_1, …, g_n) ∈ G^n. This integral can be rewritten as

tr(ρ_{A,G}^n) = tr(ρ_A^n) (1/(vol G)^{n−1}) ∫_{G^n} dg δ(g_1 ⋯ g_n) Z_n(g),   (8)

where Z_n(g) are the (normalized) charged moments of ρ_A,

Z_n(g) = tr(ρ_A U_{A,g_1} ρ_A U_{A,g_2} ⋯ ρ_A U_{A,g_n}) / tr(ρ_A^n).   (9)
Interpretation in terms of defects
In a (1+1)-dimensional quantum field theory, using the path integral representation of the reduced density matrix ρ_A, the neutral moments tr(ρ_A^n) can be identified with the partition function on an n-sheet Riemann surface M_n [11]. If we consider the ground state |0⟩ of the theory, i.e. ρ = |0⟩⟨0|, and a single interval of length ℓ as subsystem A, the surface M_n is constructed as follows. We take the spacetime M where the theory is defined, which is the complex plane when working in Euclidean time and in the thermodynamic limit (infinite spatial direction). To obtain M_n, we perform a cut on M along the interval A = [0, ℓ], we replicate this cut plane n times, and we sew the copies together along the cuts in a cyclical way, as we show in Fig. 3 for n = 2. Denoting by Z(M_n) the partition function on this surface, the neutral moments of ρ_A are given by

tr(ρ_A^n) = Z(M_n)/Z(M)^n.   (10)

Following the discussion in Sec. 2.1, the insertion of the operators U_{A,g_j} in this trace, as in Eq. (9), corresponds to putting a defect line along the branch cut [0, ℓ] of each sheet of M_n with a gluing condition (2), with g = g_j, as depicted in Fig. 3. If Z(M_n^g) stands for the partition function on the surface M_n with these n defect lines, then we have that

tr(ρ_A U_{A,g_1} ⋯ ρ_A U_{A,g_n}) = Z(M_n^g)/Z(M)^n.   (11)

Therefore, in the ground state, the normalized charged moments Z_n(g) introduced in Eq. (9) are the ratio of the partition functions on the surface M_n with and without the n defect lines inserted at the branch cut of each sheet,

Z_n(g) = Z(M_n^g)/Z(M_n).   (12)

If ρ_A is symmetric under G, then [ρ_A, U_{A,g}] = 0 for all g ∈ G. As we have previously seen, this implies that the defect lines associated with the insertions U_{A,g_j} are topological and they can be moved between the sheets of M_n under continuous transformations, leaving the partition function Z(M_n^g) invariant. In that case, it is possible to fuse them in the same sheet, which is equivalent to the equality tr(ρ_A U_{A,g_1} ⋯ ρ_A U_{A,g_n}) = tr(ρ_A^n U_{A,g_1⋯g_n}). Since the Dirac delta in (8) forces the product of all the group elements g_j to be the identity, the fusion yields U_{A,g_1⋯g_n} = 1. Consequently, Z_n(g) = 1 and, according to Eq. (8), the Rényi entanglement asymmetry vanishes. On the other hand, if ρ_A is not symmetric, [ρ_A, U_{A,g}] ≠ 0, then the defect lines associated with U_{A,g_j} are not topological. In that case, any continuous deformation of them does change the partition function Z(M_n^g) and, as a result, Z_n(g) ≠ 1. In this sense, the entanglement asymmetry quantifies how much the defect lines associated with a group are non-topological.
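The fusion argument can be checked directly in a finite-dimensional toy model: for a state commuting with the group action, the two defect insertions fuse to the identity once the constraint g_2 = g_1^{−1} is imposed, so the normalized charged moment equals 1. A minimal numerical sketch (our own example, not from the paper):

```python
import numpy as np

def U(alpha, charges):
    # U(1) representation: U(alpha) = exp(i alpha Q), Q = diag(charges)
    return np.diag(np.exp(1j * alpha * np.asarray(charges, dtype=float)))

def Z2(rho, a1, a2, charges):
    # Normalized charged moment Z_2(g) = tr(rho U_{g1} rho U_{g2}) / tr(rho^2)
    num = np.trace(rho @ U(a1, charges) @ rho @ U(a2, charges))
    return num / np.trace(rho @ rho)

q = [0, 1]
sym = np.diag([0.6, 0.4]).astype(complex)      # [rho, U] = 0: topological defects
broken = np.full((2, 2), 0.5, dtype=complex)   # |+><+| breaks the U(1) symmetry

a = 0.7
z_sym = Z2(sym, a, -a, q)        # the Dirac delta sets g2 = g1^{-1}
z_broken = Z2(broken, a, -a, q)
```

Here z_sym = 1 exactly, while for the symmetry-breaking state |+⟩⟨+| one finds z_broken = cos²(α/2) ≠ 1: the corresponding defects are not topological.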
From generic scaling arguments, we can determine the asymptotic behavior of the partition functions Z(M_n) and Z(M_n^g). In two dimensions, the leading-order contribution to the free energy −log Z is proportional to the area |M_n| of the surface M_n on which the partition function Z is defined. [Of course, strictly speaking, the area |M_n| is infinite, but it can be regularized, for instance by imposing periodic boundary conditions in both the spatial and imaginary-time directions for each sheet of M_n, far away from the interval.] Therefore, in the absence of defects,

− log Z(M_n) = f_bulk |M_n| + O(1),   (13)

where f_bulk is the bulk free energy density. In the presence of defects, we expect that each of them contributes an additional term proportional to the volume of the defect, which in this case is the length ℓ of the interval A. The free energy in that case is

− log Z(M_n^g) = f_bulk |M_n| + ℓ Σ_{j=1}^n t(g_j) + O(1),   (14)

and t(g_j) can be interpreted as the line tension of the defect associated with the insertion U_{A,g_j}. All these terms are cut-off dependent and, therefore, non-universal. Plugging Eqs. (13)-(14) into Eq. (12), one sees that the bulk contribution to the free energy cancels, and the charged moments Z_n(g) decay exponentially with the subsystem length ℓ as

Z_n(g) = e^{−T_n(g) ℓ + O(1)},  with  T_n(g) = Σ_{j=1}^n t(g_j).   (15)
If the theory is critical, the conical singularities at the branch points of the surface M_n give rise in Eqs. (13) and (14) to an extra universal (cut-off independent) term, which, as argued for instance in Ref. [73], behaves as

− log Z(M_n) ⊃ (c/6)(n − 1/n) log ℓ.   (16)

The presence of defect lines may in general modify the coefficient of this term, so

− log Z(M_n^g) ⊃ [(c/6)(n − 1/n) + β_n(g)] log ℓ,   (17)

and it does not cancel in the ratio (12) of partition functions that gives the normalized charged moments Z_n(g). Therefore, for a critical system, we expect

Z_n(g) ∝ e^{−T_n(g) ℓ} ℓ^{−β_n(g)},   (18)

where the coefficient β_n(g) is universal and can be computed in the infrared (IR) CFT that describes the critical system. It depends on the specific CFT and the nature of the defects corresponding to the group G under study, and we do not have a generic expression for it. Its computation has to be worked out case by case. In this paper, we calculate it in the massless Majorana fermion field theory for a U(1) group for which the defects are marginal.
Asymptotic behavior
Before delving into the study of the charged moments and entanglement asymmetry in a particular theory, it is insightful to explore the implications of the generic result of Eq. (18) for the asymptotic behavior of the entanglement asymmetry in the limit of large subsystem size ℓ.
When we plug Eq. (18) into Eq. (8), we have to perform an n-fold integral over the group G. Since the leading term in Eq. (18) decays exponentially with ℓ, the main contribution to this integral comes from the points h ∈ G^n where Z_n(h) = 1 (i.e. where both T_n(h) and β_n(h) vanish). These correspond to the elements of G that leave the reduced density matrix ρ_A invariant; they form a symmetry subgroup H of G, i.e.

H = {h ∈ G : U_{A,h} ρ_A U_{A,h}^† = ρ_A}.   (19)
Therefore, the strategy is to perform a saddle point approximation of the integral (8) around the points h ∈ H^n; see also Refs. [4, 5, 7, 13].
For simplicity, let us assume that H is a finite subgroup. In the integral (7), the numerator tr(ρ_A U_{A,h_1} ⋯ ρ_A U_{A,h_n}) takes the same value, tr(ρ_A^n), at every point h ∈ H^n. Consequently, all the saddle points h ∈ H^n contribute equally. Then, to calculate the integral (8) for ℓ ≫ 1, we can expand it around the identity point (Id, …, Id) ∈ G^n, where Id is the identity in G, and multiply the result by the total number of saddle points, which is given in terms of the cardinality |H| of H as |H|^{n−1}. We finally perform the integral by choosing some local coordinates on the group around the identity.
In a neighborhood U_Id ⊂ G of the identity, the group elements g can be written as g = e^{iX}, where X is an element of the Lie algebra g associated with G, of dimension d = dim G. Let {J_a}, a = 1, …, d, be generators of g. If we take the local coordinate chart x = (x_1, …, x_d) ∈ R^d → g(x) = e^{i Σ_a x_a J_a}, then, for an arbitrary function f(g) on G, we have

∫_{U_Id} dg f(g) = ∫ dx µ(x) f(g(x)),   (20)

where µ(x) dx is the Haar measure of G in the local coordinates x. Since we have to perform an n-fold integral over G, we denote by x the coordinates for G^n, that is x = (x_1, …, x_n) ∈ R^{dn}. Now we can express the exponents T_n(g) and β_n(g) of the charged moments (18) in coordinates and expand them around the identity, which corresponds to x = 0,

T_n(g(x)) ≃ (1/2) x^T H_{T_n} x,    β_n(g(x)) ≃ (1/2) x^T H_{β_n} x,   (21)

where H_{T_n} and H_{β_n} are dn × dn Hessian matrices, made of n × n blocks of dimension d × d. Therefore, in the local coordinate chart that we are considering, for large ℓ the n-fold integral (8) reads

tr(ρ_{A,G}^n)/tr(ρ_A^n) ≃ |H|^{n−1} (µ(0)/vol G)^{n−1} ∫_{R^{dn}} dx δ(Σ_{j=1}^n x_j) e^{−(ℓ/2) x^T H_{T_n} x − (log ℓ/2) x^T H_{β_n} x}.   (22)

Here the factor |H|^{n−1} counts the total number of saddle points. In coordinates, the Dirac delta δ(g_1 ⋯ g_n) over the group G is replaced by δ(Σ_{j=1}^n x_j)/µ(0). Notice that we have also expanded the measure µ(x) around x = 0 and restricted it to the zeroth-order term µ(0), since the next-order terms yield subleading corrections in ℓ.
Since T_n(g) is the sum of the contributions of each defect line according to Eq. (14), H_{T_n} is block diagonal, H_{T_n} = 1_n ⊗ H_t, where H_t is the d × d Hessian of t(g(x)). Due to the cyclic property of the trace, the coefficient β_n(g) is symmetric under cyclic permutations of the entries g_j of g. Thus H_{β_n} is a block-circulant matrix; that is, it has the block structure

H_{β_n} = circ(C_0, C_1, …, C_{n−1}),   (23)

with blocks C_j of size d × d. A block-circulant matrix can be diagonalized in blocks D_p, p = 0, …, n−1, with a Fourier transform of the blocks C_j,

D_p = Σ_{j=0}^{n−1} C_j e^{2πi jp/n}.   (24)

Therefore, if we apply the change of variables to the Fourier modes

ω_p = (1/√n) Σ_{j=1}^n e^{2πi pj/n} x_j,   (25)

then the integral (22) factorizes into Gaussian integrals over the modes ω_p. Integrating out the variable ω_0, which is fixed by the Dirac delta, we are left with Gaussian integrals over ω_1, …, ω_{n−1}, which we can easily perform if we assume that H_t ℓ + D_p log ℓ is a symmetric positive-definite matrix: each mode contributes a factor (2π)^{d/2} det(H_t ℓ + D_p log ℓ)^{−1/2}. We thus obtain

tr(ρ_{A,G}^n)/tr(ρ_A^n) ≃ |H|^{n−1} (µ(0)/vol G)^{n−1} (2π/ℓ)^{d(n−1)/2} (det H_t)^{−(n−1)/2} ∏_{p=1}^{n−1} det(1 + H_t^{−1} D_p log ℓ/ℓ)^{−1/2}.   (29)

This result is independent of the local coordinate chart that we consider to perform the integration. In fact, under a change of local coordinates x → y(x), the measure µ(x) transforms as µ(x) = det(∂y_a/∂x_b) µ'(y) and, because quadratic forms are (0, 2)-tensor fields, the determinant of the Hessian H_t transforms as det H_t(x) = det(∂y_a/∂x_b)^2 det H'_t(y). Therefore, the quotient µ(0)/√(det H_t) is coordinate independent. The same applies to the terms det(1 + H_t^{−1} D_p log ℓ/ℓ). Finally, applying (29) in Eq. (8) and using the identity log det M = tr log M for a matrix M, we find that the Rényi entanglement asymmetry for a compact Lie group G in the ground state of a critical one-dimensional quantum system behaves as

ΔS_A^(n) = (dim G/2) log ℓ + B_n + b_n (log ℓ)/ℓ + O(ℓ^{−1}),   (30)

where the O(1) constant B_n and the coefficient b_n are determined by |H|, vol G, µ(0), det H_t and the blocks D_p through Eq. (29). Eq. (30) is the first main result of this paper. We stress that the first two terms in (30), of order O(log ℓ) and O(1) respectively, have already been observed in the XY spin chain when considering the particle-number U(1) (a)symmetry [4], and more generically for matrix product states in Ref. [13].
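The block-Fourier diagonalization invoked here is easy to verify numerically: the spectrum of a block-circulant matrix built from d × d blocks C_j is the union of the spectra of the Fourier blocks D_p = Σ_j C_j e^{2πi jp/n}. A short sketch with random blocks (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 4, 2
C = [rng.standard_normal((d, d)) for _ in range(n)]

# Block-circulant matrix: block (j, k) equals C_{(k - j) mod n}
B = np.block([[C[(k - j) % n] for k in range(n)] for j in range(n)])

# Fourier blocks D_p = sum_j C_j exp(2 pi i j p / n)
D = [sum(C[j] * np.exp(2 * np.pi * 1j * j * p / n) for j in range(n))
     for p in range(n)]

spec_B = np.linalg.eigvals(B)
spec_D = np.concatenate([np.linalg.eigvals(Dp) for Dp in D])
```

Every eigenvalue of the dn × dn matrix B is (up to numerical error) an eigenvalue of one of the n small blocks D_p, which is what makes the Gaussian integral above factorize mode by mode.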
Crucially, what is new here is the last term in Eq. (30), of order O(log ℓ/ℓ). While the terms of order O(log ℓ) and O(1) are present in the ground state of critical and non-critical systems alike, the term of order O(log ℓ/ℓ) only appears when the system is at a critical point.
This log ℓ/ℓ term appears only when the exponent β_n(g) in Eq. (18) is non-zero. Although this exponent is universal, the coefficient b_n is non-universal since it also depends on the defect tension T_n(g) (via H_t), which is cut-off dependent. Semi-universal corrections of the form log ℓ/ℓ have been found in, e.g., the corner free energy in critical systems [74] and in the ground state full counting statistics of the critical XY spin chain [75].
We finally discuss the group structures that were not considered earlier. When both G and H are finite, it is straightforward to show that the logarithmic term vanishes, the O(1) term is just log(|G|/|H|), and there are no log ℓ/ℓ corrections (see also [13]). When both G and H are continuous, the leading log ℓ term has a prefactor equal to (dim G − dim H)/2, but the explicit expressions for the subleading terms are more cumbersome and not very illuminating.
The XY spin chain and the massless Majorana fermion field theory
In the rest of the paper, we focus on a particular gapless system: the XY spin chain at the Ising critical line. We consider its ground state and compute the charged moments and the Rényi entanglement asymmetry associated with the rotations of the spin around the z-axis.
The Hamiltonian of the XY spin chain is

H_XY = − Σ_j [ ((1+γ)/4) σ^x_j σ^x_{j+1} + ((1−γ)/4) σ^y_j σ^y_{j+1} + (h/2) σ^z_j ],   (33)

where σ^α_j are the Pauli matrices at site j. The parameter γ tunes the anisotropy between the couplings in the x and y components of the spin, and h is the strength of the transverse magnetic field. The XY spin chain is gapless along the lines γ = 0, |h| < 1 and γ ≠ 0, |h| = 1 in parameter space, and the scaling limits along those lines are respectively the massless Dirac and Majorana fermion field theories.
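The location of these gapless lines can be checked from the single-particle dispersion of the chain after Jordan-Wigner and Bogoliubov transformations, ε(k) = [(h − cos k)² + γ² sin² k]^{1/2}; we assume this standard form, up to a non-universal overall constant. A quick numerical sketch (our own check):

```python
import numpy as np

def xy_gap(h, gamma, num_k=200001):
    """Minimum over the Brillouin zone of the XY-chain dispersion
    eps(k) = sqrt((h - cos k)^2 + gamma^2 sin^2 k)."""
    k = np.linspace(-np.pi, np.pi, num_k)  # odd num_k => k = 0 is on the grid
    eps = np.sqrt((h - np.cos(k)) ** 2 + (gamma * np.sin(k)) ** 2)
    return float(eps.min())
```

The minimum vanishes at k = 0 (or k = π) when |h| = 1 for any γ, and on the line γ = 0, |h| < 1, while it stays finite elsewhere; e.g. xy_gap(0.5, 1.0) gives 0.5.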
For γ ≠ 0, the Hamiltonian of Eq. (33) is not invariant under the rotations U_α = e^{iαQ} around the z-axis, generated by the transverse magnetization

Q = (1/2) Σ_j σ^z_j,   (34)

except for α = π, which corresponds to the Z_2 spin-flip symmetry. The entanglement asymmetry associated with this U(1) symmetry has been thoroughly studied in Ref. [4] for the ground state of (33) outside the critical lines γ ≠ 0, |h| = 1, using exact methods on the lattice. In that case, the charged moments Z_n(α) decay exponentially for large subsystem size ℓ as in Eq. (15), where the coefficient T_n(α) = Σ_{j=1}^n t(α_j) is the sum of the string tensions t(α_j) of each defect, which here is given by the explicit expression of Eq. (35), derived in Ref. [4]. Note that the string tension t(α) is not real; as we will see in Sec. 4, this is related to the fact that the gluing conditions of the defects associated with this U(1) group make the theory non-Hermitian.
To obtain the Rényi entanglement asymmetry, we can apply the general result of Eq. (30). In this case, since G = U(1), we have dim G = 1, vol G = 2π, µ(0) = 1, and the symmetric subgroup is the Z_2 spin-flip symmetry, H = Z_2. Since dim G = 1, the block H_t is a scalar, given by the second derivative of the string tension (35) at the identity, H_t = t''(0). In Ref. [4], the resulting expression (37) for the Rényi entanglement asymmetry was derived for the XY chain away from the critical line |h| = 1, with t''(0) given by Eq. (38). Notice that t''(0) is continuous at |h| = 1, reflecting the fact that this result also applies along the critical line. Indeed, in this case, the string tension T_n(α) is still given by Eq. (35) and, following the same steps as in Ref. [4], one arrives at the same result.
Crucially for this paper, however, along the critical line γ ≠ 0, |h| = 1, we also expect the charged moments Z_n(α) to contain the algebraically decaying factor of Eq. (18), according to the general reasoning of Sec. 2.2.3. An analytical expression for the coefficient β_n(α) is so far unknown. In what follows, we obtain it by exploiting the conformal invariance of the underlying field theory.
As we have already mentioned, the scaling limit of the XY spin chain (33) along the critical lines γ ≠ 0, |h| = 1 is the massless Majorana fermion field theory, whose Hamiltonian is

H = (i/2) ∫ dx ( ψ̄ ∂_x ψ̄ − ψ ∂_x ψ ),   (39)

where the Majorana fields ψ(x) and ψ̄(x) satisfy the algebra

{ψ(x), ψ(y)} = {ψ̄(x), ψ̄(y)} = δ(x − y)   (40)

and {ψ(x), ψ̄(y)} = 0. The U(1) charge operator of Eq. (34) corresponds in this field theory to the fermion bilinear given in Eq. (42). The details of the derivation of the Hamiltonian (39) and of Q in the continuum limit of Eq. (33) are reported in Appendix A.
The transformations generated by the charge (42) in a subsystem A act on the fields ψ(x), ψ̄(x) as the rotation written in Eq. (43), with the associated 2 × 2 matrix defined in Eq. (45). The group action consists of a rotation that mixes ψ and ψ̄. In general, this is not a symmetry of the theory, unless α = π, for which ψ → −ψ and ψ̄ → −ψ̄. For α purely imaginary, the defect can be realized in the classical 2d Ising model by rescaling the couplings on all the bonds that intersect the defect line. A dictionary between the two realizations is given, e.g., in [36].
Crucially for our analysis, the field ψ(x)ψ̄(x) has scaling dimension 1; therefore, the line defect implemented by U_{A,α} corresponds to a marginal perturbation of the CFT action along the line. This is very important for the calculations reported in Section 4, as it introduces a non-trivial dependence of the CFT partition function on the defect strength α. Indeed, if the perturbation were instead irrelevant, then the effects of the defect would be renormalized to zero in the IR limit. If the perturbation were relevant, then the defect would flow to some fixed point in the IR, corresponding to some boundary condition along the line, and the CFT partition function would be independent of the precise value of α. For instance, such a situation would occur if we looked at the asymmetry with respect to rotations around the x-axis, as opposed to the z-axis, corresponding to replacing σ^z_j with σ^x_j in Eq. (34). In the CFT, this would correspond to a perturbation by the relevant operator σ(x) with scaling dimension 1/8. The defect line would flow to a fixed boundary condition in the IR, and this would completely change the analysis; see in particular Ref. [76] for more details on that situation.
Calculation of the scaling dimension associated with n defects in the Majorana CFT
In Sec. 2.2, we have seen that the charged moments Z_n(α) can be cast as the ratio Z(M_n^α)/Z(M_n) between the partition function Z(M_n^α) of the model on the n-sheet Riemann surface M_n with n defect lines of strengths α = (α_1, …, α_n) along its branch cuts and the partition function Z(M_n) without them. As we discussed, in critical systems this ratio contains a universal term Z_CFT(M_n^α)/Z_CFT(M_n), fully determined by the CFT that describes the low-energy physics. In this section, we study it in the massless Majorana fermion theory (39) for the marginal defect lines (43).
When there are no defects, it is well-known [10-12] that, for a generic CFT,

Z_CFT(M_n) ∝ ℓ^{−(c/6)(n − 1/n)},   (46)

where c is the central charge of the CFT, which for the massless Majorana fermion is c = 1/2.
In the massless Majorana fermion theory, when we insert the n marginal defect lines along each branch cut of the surface M_n, the result (46) changes as

Z_CFT(M_n^α) ∝ ℓ^{−(c/6)(n − 1/n) − (2/n) Δ_n(α)},   (47)

as we will show below. The contribution of the n marginal defects is encoded in the exponent Δ_n(α).
Then the ratio of the partition functions on the surface M_n with and without defects is

Z_CFT(M_n^α)/Z_CFT(M_n) ∝ ℓ^{−(2/n) Δ_n(α)},   (48)

and, comparing with Eq. (18), β_n(α) = (2/n) Δ_n(α). The rest of the section is devoted to deriving Eq. (47) and computing explicitly the coefficient Δ_n(α).
Conformal mapping to the cylinder with n defect lines
To determine the partition function Z_CFT(M_n^α) with the n marginal defect lines, we perform the conformal transformation

w = −(i/n) log[ z/(ℓ − z) ].   (49)

If at the branch points z = 0 and z = ℓ of the Riemann surface M_n we remove a disk of radius ϵ as a UV cut-off, then Eq. (49) maps M_n to a cylinder with circumference 2π and height W = (2/n) log(ℓ/ϵ), which we denote as C; see Fig. 4. We choose as coordinates on the cylinder w = x + iτ, with x ∼ x + 2π and τ ∈ [−(1/n) log(ℓ/ϵ), (1/n) log(ℓ/ϵ)]. Under Eq. (49), the n branch cuts [0, ℓ] of M_n are mapped to the equally spaced lines x_j = 2πj/n, j = 1, …, n (x_n = 2π is identified with x = 0) on the cylinder, as we illustrate in Fig. 4. Thus, on the cylinder C, the n marginal defects are inserted along these lines. We assume the Majorana fields ψ, ψ̄ to have trivial monodromy on M_n along the cycle that connects all the replicas. Therefore, after the map (49), these fields satisfy anti-periodic boundary conditions on the cylinder, since they have half-integer spin [77]. The next step is to carefully determine the gluing condition satisfied by the Majorana fields ψ and ψ̄ across each defect after the conformal map (49) to the cylinder. In the previous section, we found that, on the Riemann surface M_n, the gluing condition across a defect with strength α_j is given by Eq. (44), i.e.

(ψ(x^+), ψ̄(x^+))^T = R_{α_j} (ψ(x^−), ψ̄(x^−))^T,   (50)
where the 2 × 2 matrix R_{α_j} is defined in Eq. (45). Crucially, this gluing condition changes under the conformal transformation (49). Indeed, since the Majorana fields ψ and ψ̄ are primaries with conformal dimension 1/2, they transform as

ψ(z) = (dw/dz)^{1/2} ψ(w),    ψ̄(z̄) = (dw̄/dz̄)^{1/2} ψ̄(w̄).   (51)

Combining this with Eq. (50), and noting that a point slightly above the defect on M_n (i.e. at z = x + i0^+) is mapped to a point slightly to the left of the defect on the cylinder (i.e. at w = x_j + iτ + 0^−), we find the condition (52) that ψ and ψ̄ must satisfy across the defect at the line x = x_j, where the transformed gluing matrix is defined in Eq. (53). Observe that, in the first equality of (52), we can take out a factor (dw/dz)^{−1/2} from the first matrix and a factor (dw/dz)^{1/2} from the third one. Taking into account that (dw̄/dz̄)/(dw/dz) = i for z = x + i0^+, we find the second equality. In summary, using the conformal transformation (49), the partition function Z_CFT(M_n^α) of the massless Majorana fermion with n marginal defect lines at the branch cuts of the surface M_n is equal to the partition function Z_CFT(C^α) of the theory on the cylinder C with n equally spaced defect lines along its longitudinal direction, described by the gluing conditions (52). If we impose conformal boundary conditions |a⟩ and |b⟩ at the ends of the cylinder C, the partition function Z_CFT(C^α) can be written as

Z_CFT(C^α) = ⟨b| e^{−W H} |a⟩,   (54)

where H is the Hamiltonian of the free Majorana fermion (39) defined on a circle of length 2π, with the fields ψ and ψ̄ satisfying the gluing conditions (53) at the points x_j = 2πj/n, j = 1, …, n, with strengths α_1, α_2, …, α_n respectively. Alternatively, as we show in detail in Appendix B, these conditions can be explicitly implemented in the Hamiltonian (39) by including in it n point defects proportional to ψ(x_j)ψ̄(x_j), as in Eq. (55), where the parameters µ_j are related to the strengths of the defects by µ_j/2 = i arctan(α_j/2); see Appendix B for a derivation.
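The geometry of the conformal map to the cylinder can be checked numerically. Since the display of Eq. (49) is not reproduced in this copy, we assume the standard uniformizing map w = −(i/n) log[z/(ℓ − z)], which is consistent with the stated cylinder coordinates; the sketch below verifies that the cut-off disks of radius ϵ land at τ = ±(1/n) log(ℓ/ϵ) and that the n replicated branch cuts map to lines spaced by 2π/n:

```python
import numpy as np

def to_cylinder(z, ell, n, sheet=0):
    # Assumed uniformizing map (hypothetical; the paper's Eq. (49) display
    # was lost): w = -(i/n) * [Log(z/(ell - z)) + 2*pi*i*sheet], w = x + i*tau
    return -1j / n * (np.log(z / (ell - z)) + 2j * np.pi * sheet)

ell, n, eps = 1.0, 3, 1e-4
# Disks of radius eps around the two branch points map to the two ends of
# the cylinder, at tau = +/- (1/n) log(ell/eps):
tau_top = to_cylinder(eps, ell, n).imag
tau_bot = to_cylinder(ell - eps, ell, n).imag
# The n copies of the branch cut map to equally spaced longitudinal lines:
xs = [to_cylinder(0.37, ell, n, sheet=s).real for s in range(n)]
```

The total cylinder height tau_top − tau_bot reproduces W = (2/n) log(ℓ/ϵ), and the defect lines come out separated by 2π/n, as stated in the text.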
For W ≫ 2π, i.e. for large subsystem length ℓ, the dominant term in the partition function (54) is given by the ground state energy E(α) of the Hamiltonian with defects (55). The ground state energy should satisfy the usual CFT formula, in which ∆̃_n(α) takes into account the contribution of the defects and, consequently, vanishes, ∆̃_n(0) = 0, in their absence. It may be interpreted as the scaling dimension of an n-defect insertion operator. Combining the two previous equations, and taking into account that W = (2/n) log(ℓ/ϵ), we arrive at Eq. (47). Therefore, since ∆̃_n(α) = E(α) − E(0), the problem of computing the scaling dimension ∆̃_n(α) boils down to determining the ground state energy of the Hamiltonian (55) with n point defects. We will devote the rest of this section to calculating it. However, before proceeding, it is important to note that the gluing condition (53) on the cylinder presents an issue: if α ∈ R, it does not respect the self-adjointness of the Majorana fields ψ(w) and ψ̄(w̄). The same problem arises in the Hamiltonian with defects (55), which is not Hermitian for α ∈ R. To calculate ∆̃_n(α), it is important that the Hamiltonian is Hermitian, so that its spectrum is real and the energy of its ground state is well-defined. In order to cure this problem, we can analytically continue the defect strength α → −iλ with λ ∈ R. This turns the 2 × 2 gluing matrix (53) into the matrix (58). Since all its entries are real, it is now compatible with the self-adjointness of the Majorana fields. This analytic continuation also makes the Hamiltonian with defects (55) Hermitian. In the following, we carry out the calculation of the ground state energy assuming that the gluing matrix is (58) with λ ∈ R. We will eventually take λ → iα in the final result, which we check against exact numerical calculations in the XY spin chain.
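As a small consistency check of this continuation, one can combine the relation µ_j/2 = i arctan(α_j/2) quoted above with α_j = −iλ_j and the identity arctan(−ix) = −i artanh(x); up to overall sign conventions not fixed by this excerpt, this gives

```latex
\frac{\mu_j}{2} \;=\; i\arctan\!\left(\frac{\alpha_j}{2}\right)
\;\;\xrightarrow{\;\alpha_j\,=\,-i\lambda_j\;}\;\;
\frac{\mu_j}{2} \;=\; i\arctan\!\left(-\frac{i\lambda_j}{2}\right)
\;=\; \operatorname{artanh}\!\left(\frac{\lambda_j}{2}\right)\;\in\;\mathbb{R}
\qquad \text{for } \lambda_j\in(-2,2).
```

Hence, after the continuation, the localized defect couplings µ_j are real, consistent with the Hermiticity of the Hamiltonian with defects (55) claimed here.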
Ground state energy for a single defect (n = 1)
We start by solving the case of a single marginal defect. We take the spatial coordinate x defined on the interval x ∈ [−π, π], with the points x = −π and π identified and, for simplicity, we put the defect at x = 0. We impose the following boundary conditions for the fields ψ(x), ψ̄(x) at x = 0 and x = π: the first one is the gluing condition for the defect, while the second one is the anti-periodic boundary condition. With these boundary conditions imposed on the fields, the Hamiltonian is
Diagonalization of the Hamiltonian
The goal now is to diagonalize the Hamiltonian (60). To do this, we look for pairs of functions (u(x), v(x)) that satisfy the same gluing and anti-periodic boundary conditions as Ψ(x), and are eigenstates of the differential operator (1/i)D. These are piecewise plane waves: if they satisfy the boundary conditions, then such wavefunctions are automatically eigenfunctions of (1/i)D. The conditions (61) impose constraints on the amplitudes, and the resulting linear system of equations admits a non-zero solution if and only if the quantization condition (64) holds. Let us introduce the polynomial P_λ(z) of degree 2; Eq. (64) is then equivalent to the polynomial equation P_λ(z) = 0 for the variable z = e^{i2πk}. From the explicit form of R_λ, we find the explicit form of P_λ(z), and the full set S_λ of solutions k of Eq. (64) follows. For each solution k ∈ S_λ, the pair (u_k, v_k) can be used to construct a Bogoliubov mode η_k for the Hamiltonian (60) by taking the scalar product with the two-component field (ψ, ψ̄); it automatically satisfies [H, η_k] = k η_k. Then, using the orthonormality of the set of functions (u_k(x), v_k(x)), we obtain a diagonal form of H, where the sum in k runs over all the solutions in Eq. (69) and the modes satisfy the anticommutation relations {η_k^†, η_q} = δ_{k,q}. Notice that, taking the complex conjugate of the eigenvalue equation for (u_k, v_k), one finds that (u_k^*, v_k^*) is an eigenfunction with eigenvalue −k; thus we can set u_{−k} = u_k^* and v_{−k} = v_k^*. This implies that η_{−k} = η_k^†, and Eq. (71) can be rewritten using η_{−k} = η_k^†. Alternatively, we can express it as a sum restricted to the set of positive solutions k, as in Eq. (74).
Ground state energy
From Eq. (74), it is clear that the ground state of the single-defect Hamiltonian (60) is the state annihilated by all the modes η_{−k} for k ∈ S_λ^+. The resulting infinite sums in the ground state energy can be evaluated by zeta-regularization. Taking into account Eq. (76), where ζ(s, a) = Σ_{m=0}^∞ (m + a)^{−s} is the Hurwitz zeta function, the ground state energy can be written in terms of ζ(−1, ·). Using a standard identity for the Hurwitz zeta function, we arrive at a closed expression. Identifying this expression with the standard formula for the ground state energy in a CFT, with c = 1/2 for the massless Majorana fermion, we find that the scaling dimension associated with the insertion of a single marginal defect of strength λ is given by Eq. (80).
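The zeta-regularization step can be illustrated numerically. The sketch below (illustrative only; it uses a generic linear mode sum, not the paper's exact spectrum) checks in pure Python that the Abel-regulated divergent sum Σ_{m≥0}(m + a) equals 1/ε² plus the finite part ζ(−1, a) = −B₂(a)/2, with B₂(a) = a² − a + 1/6 the second Bernoulli polynomial:

```python
import math

def B2(a):
    # Second Bernoulli polynomial, B_2(a) = a^2 - a + 1/6
    return a * a - a + 1.0 / 6.0

def regulated_sum(a, eps, mmax=10000):
    # Abel-regulated version of the divergent sum  sum_{m>=0} (m + a):
    # S(eps) = sum (m+a) exp(-eps (m+a)) = 1/eps^2 - B2(a)/2 + O(eps)
    return sum((m + a) * math.exp(-eps * (m + a)) for m in range(mmax))

a, eps = 0.3, 0.01
finite_part = regulated_sum(a, eps) - 1.0 / eps**2
zeta_reg = -B2(a) / 2.0   # = zeta(-1, a), the zeta-regularized value
assert abs(finite_part - zeta_reg) < 1e-3
print(round(zeta_reg, 6))  # 0.021667
```

The regulator exposes the same finite part that the Hurwitz zeta function assigns; the 1/ε² divergence is the non-universal piece absorbed by the cut-off.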
Connection with previous works
The scaling dimension associated with a single defect was computed in Ref. [19] applying lattice methods in the quantum Ising chain, and in Ref. [23] using a boundary CFT approach. In the latter, the Ising CFT with a defect is folded along the defect, obtaining a Z_2 orbifold of the compact boson in which the defect is encoded in the boundary condition. The relation between such a bosonic boundary condition and our gluing parameter λ can be found in Ref. [36].
In the case n = 1, the charged moments (9) specialize to Z_1(α) = Tr(ρ_A e^{iαQ_A}). This is the full counting statistics, i.e. the cumulant generating function, of the charge Q_A in the subsystem A. In our setup, it corresponds to the expectation value of a single defect line on the single-replica surface M. In the ground state of the critical XY spin chain, this quantity was calculated in Refs. [78,79] employing lattice methods, see also [75,80,81], obtaining Z_1(α) = e^{−t(α)ℓ} ℓ^{−2∆̃_1(α)}, with t(α) given by Eq. (35) and the exponent ∆̃_1(α) = ∆_1(−iα) that we have found in Eq. (80) using CFT.
Ground state energy for n equally-spaced defects
We now extend the calculation of the previous section to the case of multiple defects. In this section we take the spatial coordinate x in the interval [0, 2π], and we put the defects at positions x_j = 2πj/n with j = 1, . . ., n. We also define x_0 = 0. The Hamiltonian is (81), with the gluing conditions (82) corresponding to n equally-spaced defects of strengths λ_1, λ_2, . . ., λ_n, where the matrix R_λ was defined in Eq. (58), and the anti-periodic boundary condition Ψ(0) = −Ψ(2π).
Diagonalization of the Hamiltonian
To diagonalize that Hamiltonian, we proceed as in the n = 1 case. We look for pairs of functions (u_k(x), v_k(x)) that satisfy the same gluing conditions as Ψ(x) and are eigenstates of (1/i)D, in the form of piecewise plane waves. The gluing conditions (82) imply relations between consecutive amplitudes, while the anti-periodicity condition constrains the last amplitudes in terms of the first ones. This system of equations admits a non-zero solution if and only if k satisfies the quantization condition (86). It is convenient to define the polynomial P_λ(z) of degree 2n, given in Eq. (87). This polynomial is palindromic, i.e. it satisfies P_λ(z) = z^{2n} P_λ(1/z), and it has real coefficients. Let us call z_j, j = 1, . . ., 2n, the roots of this polynomial. When λ ∈ Rⁿ, the roots lie on the unit circle, |z_j| = 1, which corresponds to having real solutions k in Eq. (86). Therefore, in this case, we can write z_j = e^{iθ_j} with θ_j ∈ [0, 2π). Later, when we analytically continue our result, the relation between z_j and θ_j will simply be θ_j = −i Log z_j, without the property θ_j ∈ R.
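The palindromic property P_λ(z) = z^{2n} P_λ(1/z) already constrains the roots for any λ: they come in pairs (z, 1/z). A quick numerical illustration with a generic real palindromic polynomial (hypothetical coefficients, not the actual P_λ of Eq. (87)):

```python
import numpy as np

# A generic real palindromic polynomial (hypothetical example, NOT the
# paper's P_lambda): the coefficients read the same in both directions,
# which is equivalent to z^4 P(1/z) = P(z).
coeffs = np.array([1.0, 3.0, 5.0, 3.0, 1.0])  # descending powers, degree 4
assert np.allclose(coeffs, coeffs[::-1])

roots = np.roots(coeffs)
# Palindromicity implies the roots come in pairs (z, 1/z):
for z in roots:
    assert np.min(np.abs(roots - 1.0 / z)) < 1e-8
```

Note that the stronger statement in the text, that for λ ∈ Rⁿ the roots lie on the unit circle, is specific to the polynomial (87) and does not follow from palindromicity alone.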
The polynomial (87) can be rewritten in terms of its roots, and each root determines a family of solutions to the quantization condition (86) for k via z_j = e^{i2πk/n}. The set of all such solutions is then denoted by S_λ. Each k ∈ S_λ defines a Bogoliubov mode η_k, which automatically satisfies [H, η_k] = k η_k along with (η_k)^† = η_{−k} and the canonical anticommutation relations {η_k^†, η_q} = δ_{k,q}. The n-defect Hamiltonian (81) is then diagonal in terms of them, where the sum runs over all the solutions k of Eq. (86). Alternatively, one can write it as a sum restricted to the set of positive solutions, S_λ^+ = {k ∈ S_λ | k > 0}, as in Eq. (92). This expression is particularly convenient for computing the ground state energy.
Ground state energy
According to Eq. (92), the ground state of the n-defect Hamiltonian (81) corresponds to the configuration with all the positive modes k occupied. As in the n = 1 case above, the resulting divergent series for its energy can be evaluated by zeta-regularization, using Eq. (76). If we then apply the identity for the Hurwitz zeta function, we obtain an expression that should be identified with the usual CFT formula for the ground state energy, E = ∆_n(λ) − c/12, with c = 1/2. Therefore, we find that for n equally-spaced defects the scaling dimension ∆_n(λ) is given by Eq. (96). This is the second main result of this paper, which we will use to derive the Rényi entanglement asymmetry of the critical XY spin chain in the next section, where we also report the explicit expression of ∆_n(λ) for n = 2 and 3. As a first check of Eq. (96), note that, when λ_1 = λ_2 = · · · = λ_n = 0, we must obtain ∆_n(0) = 0. In fact, in that case, the polynomial (87) is P_λ(z) = (zⁿ + 1)² and its roots are z_j = e^{i2π(j−1/2)/n}, j = 1, . . ., n, all with multiplicity 2. Therefore, we have θ_j = θ_{j+n} = 2π(j − 1/2)/n for 1 ≤ j ≤ n. Inserting these roots in Eq. (96) and performing the sum, we find ∆_n(0) = 0, as it should be.
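The λ = 0 root structure quoted above is easy to reproduce numerically. A short sketch (using numpy; the angles are those stated in the text):

```python
import numpy as np

n = 5
base = np.array([1.0] + [0.0] * (n - 1) + [1.0])  # z^n + 1, descending powers
p0 = np.convolve(base, base)                      # (z^n + 1)^2, degree 2n
roots = np.roots(p0)

# Expected roots: z_j = exp(i*2*pi*(j - 1/2)/n), each with multiplicity 2.
expected = np.exp(1j * 2 * np.pi * (np.arange(1, n + 1) - 0.5) / n)
for z in expected:
    assert np.sum(np.abs(roots - z) < 1e-5) == 2  # doubly degenerate
assert np.allclose(np.abs(roots), 1.0)            # all on the unit circle
```

The tolerance is loose because double roots are resolved only up to roughly the square root of machine precision by the companion-matrix method behind `np.roots`.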
Summary
For the convenience of the reader, let us briefly summarize the main result of this section, which is the second important result of this paper. It gives the exact scaling dimension ∆_n(λ) associated with the insertion of n equally-spaced marginal defects, with strengths λ_1, . . ., λ_n, in the massless Majorana fermion on a circle. The result is given by Eq. (96), which can also be rewritten in the form (97), where Log(·) is the principal value of the logarithm, whose imaginary part takes values in (−π, π] and whose branch cut is taken along the negative real axis, and the z_j's (j = 1, . . ., 2n) are the 2n roots of the degree-2n polynomial (98). We have derived this result for defect strengths λ_j ∈ R, but it can be analytically continued to λ_j ∈ C.
In particular, in what follows, we will take λ_j → iα_j to derive the Rényi entanglement asymmetry in the critical XY spin chain.
Rényi entanglement asymmetry in the critical XY spin chain
In this section, we derive the asymptotic behavior of the Rényi entanglement asymmetry in the ground state of the critical XY spin chain using the results obtained above. At the critical lines γ ≠ 0, |h| = 1, the charged moments Z_n(α) behave as in Eq. (18) for large subsystem length ℓ.
While the string tension T_n(α) is given by Eq. (35), we have found in Sec. 4.3 that the scaling dimension ∆̃_n(α) can be obtained, upon the analytic continuation λ = iα, from Eq. (97), which further requires determining the roots of the polynomial (98). Unfortunately, we are not able to find a general expression for these roots. Here we first consider the cases n = 2 and 3, and we check our analytic prediction for Z_n(α) against exact numerical results in the ground state of the XY spin chain. We then derive the asymptotic behavior of the Rényi entanglement asymmetry for any integer index n by applying the saddle point approximation discussed in Sec. 2.2.3 and, by analytically continuing it, the replica limit n → 1.
n = 2 charged moments
In the case of two defects located at the points indicated in the left panel of Fig. 5, the polynomial of Eq. (87) takes an explicit form. It is a bit cumbersome to write the roots explicitly, but using them in Eq. (97) we arrive at the formula for the scaling dimension associated with the insertion of two defects. To keep the formulas compact, here we write the roots only in the special case λ_2 = −λ_1 = λ, which is the case that we use below in our analysis of the asymmetry. In that case the four roots, and their arguments θ = −i log z, can be written in closed form. Plugging them into Eq. (97), we obtain ∆_2(λ, −λ) and, taking the analytic continuation λ = iα, the exponent ∆̃_2(α, −α) of Eq. (106). Note that, in Eq. (106), ∆̃_2(α, −α) is only well-defined in the interval α ∈ (−π/2, π/2), since the domain of definition of arctanh(x) is x ∈ (−1, 1). On the other hand, the ground state of the critical XY spin chain is invariant under the subgroup Z_2 ⊂ U(1) of spin flips, which implies that the charged moments Z_2(α) are periodic, Z_2(α + π) = Z_2(α). Therefore, Eq. (106) must be extended outside the interval α ∈ (−π/2, π/2) so that this periodicity is satisfied, as in Eq. (107). In the left panel of Fig. 6, we numerically check this result. As we explain in Appendix C, the charged moments Z_n(α) can be calculated exactly in the ground state of the critical XY spin chain with Eq. (154). Using this expression together with Eq. (35), we compute log(Z_2(α)e^{T_2(α)ℓ}) with α = (α, −α) for fixed α and ℓ = 50, 60, . . ., 100, and we fit the curve −∆̃_2(α) log ℓ + const. to this set of points. In the plot on the left side of Fig. 6, the symbols correspond to the values of ∆̃_2(α) obtained in the fit for different angles α and couplings (h = 1, γ), while the solid curve is the prediction of Eq. (107). We obtain a very good agreement between them.
The divergence of ∆̃_2(α) at α = (±π/2, ∓π/2) does not mean that the charged moment itself diverges, but rather that it has a different scaling in ℓ. We numerically observe that in this case the scaling is log Z_2(α) = −T(α)ℓ + O((log ℓ)²). In general, we observe this anomalous scaling with a (log ℓ)² term in the charged moment Z_n(α) for every n when at least one α_j is equal to π/2. Since these points form a measure-zero set in the integral for the asymmetry, the analysis performed in Sec. 2.2.3 is unchanged.
n = 3 charged moments
For three defects at the positions of the middle panel of Fig. 5 on a circle, the polynomial (87) can be written explicitly.

Figure 6: Scaling dimension ∆̃_n(α) for two (left panel) and three (right panel) defects, which appears in the asymptotic behavior of the charged moments Z_n(α). For n = 2, we take α_2 = −α_1 and we vary α_1. For n = 3, we set α_1 + α_2 + α_3 = 0 and change α_1, with α_2 = 1.9. The symbols have been obtained numerically, as detailed in the main text, for the ground state of the XY spin chain along the critical line γ > 0 and h = 1. The curves are the CFT prediction (97), which for n = 2 simplifies to (107).

To compute the coefficient ∆̃_3(α) that enters in the asymptotic behavior of the charged moment Z_3(α), we have to impose λ_1 + λ_2 + λ_3 = 0 due to the Dirac delta in Eq. (8). In that case, S_λ = 1 − C_λ and the polynomial has two equal roots z_1 = z_2 = −1. The other four roots are given in Eq. (110). Plugging them into Eq. (97) and performing the analytic continuation λ = iα, we obtain the analytic expression for ∆̃_3(α). We numerically check it in the right panel of Fig. 6, as we have done for the case n = 2. We can calculate the exact value of the charged moment Z_3(α) in the critical XY spin chain employing Eq. (154) in the appendix. Combining it with Eq. (35), we compute log(Z_3(α)e^{ℓT_3(α)}) for a given α = (α_1, α_2, −α_1 − α_2) and ℓ = 50, 60, . . ., 100. With the resulting set of points, we fit the function −(2/3)∆̃_3(α) log ℓ + const. In the plot on the right side of Fig. 6, the symbols represent the coefficient ∆̃_3(α) obtained from the fit as a function of α for different couplings (h = 1, γ), and the curve is the CFT prediction of Eq. (97) using the roots (110). The agreement is excellent.
Asymptotic behavior of the entanglement asymmetry
We now compute the asymptotic behavior of the entanglement asymmetry for large subsystem size ℓ by applying the general result (30). As we have seen, the ground state of the critical XY spin chain retains only the Z_2 spin-flip symmetry. The string tension T_n(α) is given by Eq. (35), H_t = t′′(0) and, according to Eq. (38), t′′(0) = γ/(2(1 + γ)) at the critical lines |h| = 1. Since dim G = 1, the matrices D_p defined in Eq. (24), which enter in the calculation of the coefficient b_n of the log ℓ/ℓ term, are scalars and correspond to the eigenvalues ν_p of the Hessian matrix of the scaling dimension ∆_n(λ), such that D_p = −2ν_p/n (recall that in our case β_n(α) = (2/n)∆_n(iα)). Therefore, Eq. (30) reads in this case with the coefficient b_n given by Eq. (114).
The Hessian of ∆ n (λ)
The only missing ingredient is the set of eigenvalues ν_p of the Hessian (111) of the scaling dimension ∆̃_n(α). To calculate the latter, it is convenient to rewrite Eq. (97) as the contour integral (115) using the residue theorem. The polynomial P_λ(z) is defined in Eq. (98). The contour C encircles all the roots of P_λ(z), as we depict in Fig. 7, leaving the branch cut of Log(z) outside of the region that it delimits. The advantage of this approach is that we can easily calculate the second derivatives of ∆_n(λ) at λ = 0 by expanding the polynomial P_λ(z) quadratically around this point. If we rewrite (98) in a suitable form, it is easy to perform the expansion around P_0(z) = (zⁿ + 1)². Plugging it into the contour integral (115) and integrating by parts, we find that ∆_n(0) = 0, as expected, and that the components of the Hessian of ∆_n(λ) can be identified with contour integrals of the same type. Given that 0 ≤ |b − a| ≤ n − 1, the numerator of each integrand is, up to the Log(z) factor, a polynomial. Since the cut of the logarithm lies outside the region enclosed by C, the only singularities that contribute to the integral are the zeros of zⁿ + 1 in the denominator, ζ_j = e^{i(2π/n)(j−1/2)}, j = 1, . . ., n. Applying the residue theorem, the residues can be evaluated explicitly. After summing them in Eq. (122), we eventually find that the Hessian of ∆_n(α) is a circulant matrix, as a consequence of the symmetry under the cyclic exchange of the replicas.
Application to the asymmetry
According to Eq. (114), the coefficient of the log ℓ/ℓ term in the asymmetry is given by the eigenvalues ν_p of the Hessian H_{∆_n}. In our case, since it is a circulant matrix, the eigenvalues are given by the Fourier transform of its entries. Combining Eqs. (114) and (124), and carefully performing the sums, we find Eq. (125), where we have taken into account that t′′(0) = γ/(2(γ + 1)). This result can be analytically continued to non-integer values of n by using the integral representation of the cosecant function. Applying it in Eq. (125) for n even, we find Eq. (127). Thus we arrive at our final result for the entanglement asymmetry at criticality in the XY spin chain. We stress once again that what is remarkable here is the log ℓ/ℓ term, which only appears in critical systems. We also stress that the 'semi-universality' of b_n (in the sense of Ref. [75]) is manifest here, because b_n depends on the parameter γ of the XY Hamiltonian. A truly universal quantity, such as the scaling dimension ∆_n(λ), would depend only on the CFT data and not on the details of the underlying microscopic model, so it would not depend on γ.
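The statement that the eigenvalues of a circulant matrix are the Fourier transform of its entries is easy to verify directly. The sketch below uses a generic real first row (an arbitrary example, not the actual Hessian entries, which depend on the model) and checks the eigen-relation C v = λ v for each Fourier mode:

```python
import numpy as np

n = 6
rng = np.random.default_rng(0)
c = rng.normal(size=n)

# Circulant matrix: row k is the k-fold cyclic shift of c,
# i.e. C[k, m] = c[(m - k) % n].
C = np.array([np.roll(c, k) for k in range(n)])

lam = np.fft.fft(c)  # eigenvalues = DFT of the defining entries
for idx in range(n):
    v = np.exp(-2 * np.pi * 1j * idx * np.arange(n) / n)  # Fourier mode
    assert np.allclose(C @ v, lam[idx] * v)
```

This is exactly the diagonalization used above: the cyclic replica symmetry forces the circulant structure, and the Fourier modes provide the eigenvectors.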
Conclusions
In this paper, we have analyzed the entanglement asymmetry in one-dimensional critical extended quantum systems using CFT methods. This observable measures how much a symmetry is broken in a part of the system. Applying the replica trick, it can be obtained from the charged moments of the subsystem's reduced density matrix. We have seen that, in the ground state of a 1+1 dimensional quantum field theory, using the correspondence between the unitary operators that represent the symmetry group in the Hilbert space and defect lines in the path integral approach, the charged moments can be identified with a quotient of the partition functions of the theory on a Riemann surface with and without defect lines inserted along each branch cut. When the state respects the symmetry, the defects are topological and any deformation leaves the partition function invariant, yielding a zero asymmetry. In this formulation, the entanglement asymmetry can be interpreted as a measure of how much the defects fail to be topological. Utilizing well-known scaling arguments for the partition function in two dimensions, we have deduced the asymptotic behavior of the charged moments that provide the entanglement asymmetry. While for non-critical systems the moments decay exponentially with the subsystem size, see Refs. [3,4,13], in the critical case we have found that they contain an extra algebraic factor. The coefficient of the exponentially decaying term can be interpreted as the line tension of the defects and is non-universal; that is, it depends on the specific lattice realization of the field theory. The exponent of the algebraic factor is universal and, therefore, fully determined by the CFT that describes the critical point; it depends on the properties of the defects associated with the symmetry group. From this result, we have derived the asymptotic behavior of the ground state entanglement asymmetry for a generic compact Lie group. Both for non-critical and critical systems, it grows at
leading order logarithmically with the subsystem size ℓ, with a coefficient proportional to the dimension of the Lie group. Criticality yields a log ℓ/ℓ correction, which is semi-universal, as its coefficient depends not only on the universal exponent of the charged moments but also on the defect tension.
In the rest of the paper, we have specialized to the ground state of the XY spin chain, which explicitly breaks the U(1) symmetry of spin rotations around the transverse axis. The charged moments and the entanglement asymmetry of this model have been investigated outside the critical lines in Ref. [4] employing lattice methods. Here we have considered the critical lines, described in the scaling limit by the massless Majorana fermion theory after fermionizing the model with a Jordan-Wigner transformation. In this case, the defect lines correspond to a marginal deformation of this CFT. Exploiting conformal invariance, the universal exponent that appears in the charged moments can be identified with the ground state energy of the massless Majorana fermion theory on a circle with equally-spaced marginal point defects of different strengths. To obtain it, we have carefully diagonalized its Hamiltonian for an arbitrary number of defects. Combining this result with those found in Ref. [4] for the non-universal exponential term, we have obtained an analytic expression for the entanglement asymmetry.
A crucial point in our problem is that the defects we are considering are marginal, which makes the dependence of the CFT partition function on them non-trivial. As we have already emphasized, the partition function hinges on the specific CFT and symmetry group under study. Therefore, it would be desirable to consider other models and symmetries; for example, the SU(2) group of spin rotations in the critical XXZ spin chain, whose continuum limit is the massless compact boson. The correspondence between global symmetries and (topological) defect lines that we exploit here can be enlarged to encompass higher-form symmetries [17], i.e. symmetries generated by extended operators supported not only on lines but also on higher-dimensional manifolds, and non-invertible symmetries [82,83], which lack an inverse element. It would be interesting to explore whether the notion of entanglement asymmetry can be extended to these generalized symmetries.

Finally, we perform the continuum limit. We call the continuum variable x ∈ R and introduce a lattice spacing s. The continuum fields satisfy the algebra {ψ(x), ψ(y)} = δ(x − y), {ψ̄(x), ψ̄(y)} = δ(x − y), {ψ(x), ψ̄(y)} = 0. The Hamiltonian becomes that of the free Majorana fermion, where J′ is the continuum version of J, given by J′ = Js in the limit s → 0 and J → ∞. Deriving the equations of motion for ψ and ψ̄, J′ can be recognized as the sound velocity, which we set to 1. Finally, the charge operator, discarding the constant term in Eq. (139) that acts trivially on the Hilbert space, becomes the continuum charge operator.
B Defects in the Hamiltonian formalism
In this Appendix, we consider a massless Majorana fermion on a line, with a defect implemented as a localized mass term. We show that this formulation is equivalent to the one given in the main text, where the defect is only encoded in the gluing conditions, and we provide the explicit relation between the defect strength µ and the gluing parameter λ. We find that if the defect term in the Hamiltonian is Hermitian, then λ has to be real. This is a further justification of the analytic continuation α → −iλ performed in the main text.
To relate this to gluing conditions at the origin, we can look for eigenmodes of that Hamiltonian built from the piecewise plane waves (147): u_k(x) = A_0 e^{ikx} for x < 0 and A_1 e^{ikx} for x > 0, and v_k(x) = B_0 e^{−ikx} for x < 0 and B_1 e^{−ikx} for x > 0, for some constants A_0, A_1, B_0, B_1. This Ansatz gives a commutator with the Hamiltonian containing two extra boundary terms; we see that η_k is a Bogoliubov mode with energy k if these last two terms vanish. This gives a constraint on the amplitudes. Thus, we recover the gluing condition (52) with the matrix (58) obtained after the analytic continuation of the gluing parameter, provided that µ and λ are related as stated in the main text.
C Numerical calculation of the charged moments
In this Appendix, we report the formulae that we employ to compute numerically the charged moments (9) for the U(1) group of spin rotations around the z axis in the ground state of the XY spin chain (33). As we show in Appendix A, this model maps into a quadratic fermionic chain after the Jordan-Wigner transformation (131). Therefore, its ground state satisfies Wick's theorem. This implies that the reduced density matrix ρ_A of a single interval A of length ℓ is Gaussian and is fully determined by the 2ℓ × 2ℓ two-point fermionic correlation matrix Γ [85], with j, j′ = 1, . . ., ℓ. For the ground state of the XY spin chain, its entries are expressed in terms of the 2 × 2 matrix G(k), where cos ξ_k and sin ξ_k are given in Eq. (36).
After the Jordan-Wigner transformation, the transverse magnetization (34) that generates the U(1) symmetry is also quadratic and, consequently, Gaussian. Therefore, the charged moments Z_n(α) are the trace of a product of Gaussian operators. Using the well-known properties of this kind of operators, the charged moments can be calculated in terms of the two-point correlation matrix Γ as in Eq. (154), where W_j = (I + Γ)(I − Γ)^{−1} e^{iα_{j,j+1} n_A} and n_A is a diagonal matrix with (n_A)_{2j,2j} = 1, (n_A)_{2j−1,2j−1} = −1, j = 1, . . ., ℓ. The detailed derivation of this expression can be found in Ref. [7]. We use it to obtain the exact numerical values of the charged moments in the plots of Fig. 6.
Figure 2 :
Figure 2: Graphical representation of Eq. (2). The insertion of an extended operator U_{Σ,g}, associated with the element g of a group G and with support on the line Σ, corresponds, in the path integral approach, to a defect line along Σ with the gluing condition (2) for the field ϕ(x) at each side of the defect.
Figure 3 :
Figure 3: Riemann surface M_n for n = 2 (two sheets) with line defects (in blue) inserted along the branch cut of each sheet. The defects are associated respectively with the group elements g_1 and g_2. The quotient (12) of the partition functions on this surface with and without the line defects gives the normalized charged moment Z_2(g), defined in Eq. (9). The Dirac delta in Eq. (8) will set g_2 = g_1^{−1}.
Figure 4 :
Figure 4: On the left, we represent the n-sheet Riemann surface M_n with n marginal defect lines inserted along the branch cut [0, ℓ] of each replica sheet, which arises in the calculation of the ground state charged moments Z_n(α). At the branch points 0 and ℓ, two disks of radius ϵ have been removed as a UV cut-off. Under the conformal transformation (49), M_n is mapped into the cylinder C in the middle, of circumference 2π and height (2/n) log(ℓ/ϵ). The defect lines in M_n are mapped into n evenly spaced vertical defects at the points x_j = 2πj/n, j = 1, . . ., n. The CFT partition functions on these two surfaces with the marginal defects are equal. On the right, a top view of the cylinder C with the defects.
Figure 5 :
Figure 5: Disposition of the point defects on the circle in the calculation of the charged moments Z_n(α) for n = 2, 3 and 4. If the circle has length 2π, then the defect of strength λ_j is located at the point x_j = 2πj/n.
Figure 7 :
Figure 7: Schematic representation of the contour integral that gives the scaling dimension ∆_n(λ) for the case n = 2. The zig-zag line is the branch cut [0, ∞) of the function Log(z). The filled black dots are the roots z_j of the polynomial P_λ(z) and the white ones represent the poles of the integrand in Eq. (115), after expanding P_λ(z) quadratically in λ.
It turns out that this expression reproduces the exact values of the coefficient b_n for n odd as well, so Eqs. (125) and (127) are equivalent expressions for all integer n. Eqs. (125)-(127) are the third main result of this paper: we have arrived at the exact expression for the coefficient b_n of the log ℓ/ℓ term in the Rényi entanglement asymmetry of the XY spin chain at criticality. Finally, taking the replica limit n → 1 in Eq. (127), we find the coefficient for the (von Neumann) entanglement asymmetry, lim_{n→1} b_n.
Iterative solvers for Biot model under small and large deformation
We consider L-scheme and Newton based solvers for the Biot model under small or large deformation. The mechanical deformation follows the Saint Venant-Kirchhoff constitutive law, and the fluid compressibility is assumed to be nonlinear. A Lagrangian frame of reference is used to keep track of the deformation. We perform an implicit discretization in time (backward Euler) and propose two linearization schemes for solving the nonlinear problems appearing within each time step: Newton's method and the L-scheme. The linearizations are used monolithically or in combination with a splitting algorithm. The resulting schemes can be applied for any spatial discretization. The convergence of all schemes is shown analytically for cases under small deformation. Illustrative numerical examples are presented to confirm the applicability of the schemes, in particular for large deformation.
Introduction
The coupling of flow and mechanics in a porous medium, typically referred to as poromechanics, plays a crucial role in many socially relevant applications. These include geothermal energy extraction, energy storage in the subsurface, CO 2 sequestration, and understanding of biological tissues. The increased role played by computing in the development and optimisation of (industrial) technologies for these applications implies the need for improved mathematical models in poromechanics and robust numerical solvers for them.
The most common mathematical model for coupled flow and mechanics in porous media is the linear, quasi-stationary Biot model [8,9,10,52]. The model consists of two coupled partial differential equations, representing balance of forces for the mechanics and conservation of mass and momentum for (single-phase) flow in porous media.
In terms of modelling, Biot's model has been extended to unsaturated flow [14,37], multiphase flow [27,28,34,36,48], thermo-poro-elasticity [19], and reactive transport in porous media [33,49], where nonlinearities arise in the flow model, specifically in the diffusion term, the time derivative term and/or in Biot's coupling term. The mechanics model can also be extended to elasto-plasticity [3,56], fracture propagation [35] and hyperelasticity [20,21], where the nonlinearities appear in the constitutive law of the material, in the compatibility condition and/or in the conservation of momentum equation. Furthermore, elastodynamics, or non-stationary Biot, i.e. the Biot-Allard model [38], includes a convolution in the coupling term of both the mechanics and flow equations. In this paper, we explore a general case that allows large deformations. The mechanical deformation follows the Saint Venant-Kirchhoff constitutive law and the fluid compressibility in the fluid equation is assumed to be nonlinear. This model formulation is needed to later consider extensions of Biot's model to plasticity, more general hyperelastic materials, and elastodynamics.
Finding closed-form solutions for coupled problems is very difficult, and commonly based on various simplifications. We therefore resort to numerical approximations. In general, there are two approaches to solve such problems: the fully coupled and the weakly coupled scheme. Fully coupled schemes for fluid potential and mechanical deformation are generally stable, have excellent convergence properties, and ensure that the numerical solution is consistent with the underlying continuous differential equations [29,55]. Despite these obvious advantages, monolithic solvers for the fully coupled problem are more difficult to implement and lead to linear systems that are harder to solve, particularly in the context of existing legacy codes for the separate physics. In the weakly coupled approach, while marching in time, we time-lag the flow problem (or the mechanics), thereby fully decoupling the two problems. Due to the complexities associated with the fully coupled scheme, the industry standard remains to use weakly coupled or iteratively coupled approaches [18,42,51,59]. An iteratively coupled approach takes somewhat of a middle path; at each time step, it decouples the flow and mechanics, but iterates until convergence is achieved. Weakly coupled schemes, wherein there are no iterations within a time step, have in particular been questioned in previous works [17,22,42,45]; they have been shown to lack robustness and even convergence if not properly designed. In order to ensure the robustness and accuracy of the resulting computations, it is therefore essential to understand the efficiency, stability, and convergence of iterative coupling schemes, in particular in the presence of nonlinearities.
In this work, we present monolithic and splitting approaches for solving this nonlinear system, that is, with nonlinear compressibility and the Saint Venant-Kirchhoff constitutive law for the stress-strain relation. Moreover, we rigorously study the convergence of our schemes, including the Newton-based ones, under the assumption of small deformations. As the splitting approach, we use the undrained split method, see [31,39]. We use linear conformal Galerkin elements for the discretization of the mechanics equation and mixed finite elements for the flow equation [7,23,30,43,58]; precisely, the lowest-order Raviart-Thomas elements are used [16]. We expect, however, that the solution strategy discussed herein will be applicable to other combinations of spatial discretizations, such as those discussed in [40,50] and the references therein. Backward Euler is used for the temporal discretization.
To summarise, the new contributions of this paper are:
• We propose Newton and L-scheme based monolithic and splitting schemes for solving the Biot model under small or large deformations.
• The convergence analysis of all schemes is shown rigorously under the assumption of small deformations.
• We provide a benchmark for the convergence of splitting algorithms for a general nonlinear Biot model that includes large deformations.
We mention some relevant works in this direction. For the convergence analysis of the undrained split method applied to the linear Biot model, we refer to [5,6,12,24,25,39]. For a discussion on the stabilization/tuning parameter used in the undrained split approach, we refer to [12,15]; a theoretical investigation of the optimal choice of this parameter is performed in [53]. The linearization is based on either Newton's method, the L-scheme [37,44,48], or a combination of them [14,37]. For monolithic and splitting schemes based solely on the L-scheme, we refer to [11]. Multirate time discretizations and higher-order space-time Galerkin methods have also been proposed for the linear Biot model in [1] and [6], respectively.
The paper is structured as follows. In the next section, we present the mathematical model. In Section 3, we propose four iterative schemes. Section 4 shows the analysis of iterative schemes under the assumption of small deformations. Numerical results are presented in Section 5 followed by the conclusion in Section 6.
Governing equations
We consider a fluid flow problem in a poroelastic bounded reference domain Ω ⊂ R d , d ∈ {2, 3} under large deformation. A Lagrangian frame of reference is used to keep track of the invertible transformation x := {x(X, t) = X + u(X, t) : X ∈ Ω → x ∈ Ω t }, where Ω t is the deformed domain at time t and u represents the deformation field. The gradient of the transformation and its determinant are given by F = ∇ x(X, t) and J = det(F). All differentials are with respect to the undeformed coordinates X, unless otherwise stated.
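As a numerical illustration of these kinematic quantities, one can compute F, J, and the Green strain directly; the affine deformation field below is hypothetical and chosen purely for demonstration:

```python
import numpy as np

# Hypothetical affine deformation u(X) = A @ X, so grad u = A (constant in X).
A = np.array([[0.1, 0.0],
              [0.0, 0.0]])

I = np.eye(2)
F = I + A                      # deformation gradient F = I + grad u
J = np.linalg.det(F)           # volume change J = det F  (~1.1 here)
E = 0.5 * (F.T @ F - I)        # Green strain E = 1/2 (grad u + grad u^T + grad u^T grad u)
eps = 0.5 * (A + A.T)          # infinitesimal strain (small-deformation limit)

# E differs from eps only by the quadratic term 1/2 (grad u)^T grad u,
# which is what the small-deformation analysis in Section 4 neglects.
print(J, E[0, 0], eps[0, 0])
```

For this 10% stretch, E₁₁ = 0.105 versus ε₁₁ = 0.1, showing the size of the geometric nonlinearity that the small-deformation assumption drops.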
We will now write the conservation of momentum and mass equations in Ω. The conservation of momentum represents the balance between the first Piola-Kirchhoff poroelastic stress Π in Ω and the forces acting on Ω t , and is given by
−∇ · Π = ρ b g,
where ρ b = J ρ̃ b is the bulk density in Ω, ρ̃ b is the bulk density in Ω t , and g is gravity.
We exploit the relation Π = FΣ, since the constitutive laws are developed for the second Piola-Kirchhoff poroelastic stress Σ. This stress tensor is composed of the effective mechanical stress Σ eff and the pore pressure p through
Σ = Σ eff − p J F −1 F −⊤ ,
where the factor J F −1 F −⊤ ensures that the pressure p exerts an isotropic stress in Ω t . We assume an isotropic poroelastic material with constant shear modulus µ and a nonlinear function c(·) of the volumetric strain [11,54]. The effective stress is given by the Saint Venant-Kirchhoff constitutive law
Σ eff = 2µE + c(tr(E)) I,
where the Green strain tensor E is defined by
E = 1/2 (∇u + (∇u) ⊤ + (∇u) ⊤ ∇u).
The conservation of fluid mass is given by
Γ̇ + ∇ · q = S f ,
where Γ = Jρ f φ is the fluid mass of a slightly compressible fluid, φ is the porosity, ρ f the fluid density, and S f the source term in Ω. The time derivative of the fluid content, Γ̇ = Γ̇(u, p), is considered to be a function of the pressure and of the pore-volume change due to the deformation field. We consider Darcy's law
q = −K (∇p − ρ f Υ),
where the flux variable q is the first Piola transform of the corresponding flux variable in Ω t , K = J F −1 k F −⊤ is the corresponding transformation of the mobility tensor k in Ω t , and Υ = F ⊤ g. Finally, the general nonlinear Biot model considered in this paper, referred to as Eqs. (4) below, consists of the momentum balance, the fluid mass balance, and Darcy's law above, together with the constitutive relations. To complete the model, we consider Dirichlet boundary conditions (BC) and initial conditions given by (u 0 , p 0 ) such that Γ(u 0 , p 0 ) = Γ 0 and Π(u 0 , p 0 ) = Π 0 at time t = 0. The functions Γ 0 and Π 0 are supposed to be given (and to be sufficiently regular). In practice, the initial data u 0 and p 0 are not independent and can be obtained by solving the flow equation for p 0 and then solving the mechanics equation for u 0 .
Iterative schemes
In this section, we present several monolithic and splitting iterative schemes for solving Eqs. (4). First, we propose the Newton method, which is well known for its quadratic convergence. Second, we combine the Newton method with a stabilized splitting scheme based on the undrained split method. Finally, as the third and fourth schemes, we propose monolithic and splitting L-schemes. The iterative schemes will be written using an incremental formulation; in this regard, we introduce naturally defined residuals for the nonlinear Eqs. (4).
A monolithic Newton solver
The Newton method is usually the first choice among linearization methods due to its quadratic convergence. However, the convergence is only local, and relatively small time steps are required to ensure quadratic convergence [47]. The method starts from the initial solution (u 0 , q 0 , p 0 ), solves the linearized system for the increments (δu i , δq i , δp i ), and finally updates the variables.
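The structure of a monolithic Newton iteration can be sketched as follows; the small nonlinear system, its residual, and its Jacobian are invented stand-ins for the coupled Biot residuals, not the paper's discretized system:

```python
import numpy as np

def residual(z):
    # Invented smooth nonlinear residual standing in for the coupled system.
    x, y = z
    return np.array([x + 0.1 * x**3 + 0.5 * y - 1.0,
                     0.5 * x + y + 0.1 * y**3 - 1.0])

def jacobian(z):
    x, y = z
    return np.array([[1.0 + 0.3 * x**2, 0.5],
                     [0.5, 1.0 + 0.3 * y**2]])

z = np.zeros(2)          # initial guess (in practice, the previous time step)
res_norms = []
for i in range(20):
    F = residual(z)
    res_norms.append(np.linalg.norm(F))
    if res_norms[-1] < 1e-12:
        break
    delta = np.linalg.solve(jacobian(z), -F)  # solve J(z) delta = -F(z)
    z = z + delta                             # update the variables

# res_norms shrinks roughly quadratically: each residual is on the order
# of the square of the previous one, until machine precision is reached.
print(res_norms)
```

Only a handful of iterations are needed here; the price is assembling and solving the full coupled Jacobian at every iteration, which is exactly what the splitting schemes below avoid.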
A splitting Newton solver
The splitting Newton method combines a splitting method with the Newton linearization. We introduce a stabilization parameter L s ≥ 0 to stabilize the mechanics equation; the precise condition on L s ensuring convergence is given in Theorem 2. The method consists of two steps, starting from the initial condition (u 0 , q 0 , p 0 ). Step 1: solve for (δq i , δp i ) and update the variables. Step 2: solve for δu i and update the variable. The stability of the scheme is controlled by L s , as shown in [47].
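The qualitative difference between monolithic and splitting Newton iterations (quadratic versus linear convergence) can be seen on a small invented example, where each step applies a Newton update to one equation while freezing the other unknown at its latest value; this is a sketch of the idea, not the paper's scheme:

```python
# Invented coupled scalar equations:
#   f(x, y) = x + 0.1*x**3 + 0.5*y - 1 = 0   ("flow")
#   g(x, y) = 0.5*x + y + 0.1*y**3 - 1 = 0   ("mechanics")
f = lambda x, y: x + 0.1 * x**3 + 0.5 * y - 1.0
g = lambda x, y: 0.5 * x + y + 0.1 * y**3 - 1.0

x, y = 0.0, 0.0
errs = []
for i in range(200):
    # Step 1: Newton update of x with y frozen (partial derivative wrt x only)
    x = x - f(x, y) / (1.0 + 0.3 * x**2)
    # Step 2: Newton update of y with the new x frozen
    y = y - g(x, y) / (1.0 + 0.3 * y**2)
    err = max(abs(f(x, y)), abs(g(x, y)))
    errs.append(err)
    if err < 1e-12:
        break

# The residual decays linearly (a roughly constant factor per sweep),
# in contrast to the quadratic decay of a monolithic Newton method.
print(len(errs), errs[-1])
```

The decoupled sweeps converge, but at a fixed geometric rate set by the coupling strength, which is why such schemes only achieve linear convergence and may need stabilization when the coupling is strong.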
A monolithic L-scheme
The L-scheme can be interpreted either as a stabilized Picard method or as a quasi-Newton method. This scheme is robust but only linearly convergent. Moreover, it can be applied to non-smooth but monotonically increasing nonlinearities; for the case of Hölder-continuous (not Lipschitz) nonlinearities, we refer to [13]. As it is a fixed-point scheme, it can be sped up using Anderson acceleration [2,15]. To summarize, the main advantages of the L-scheme are: • It does not involve the computation of derivatives.
• The arising linear systems are well-conditioned.
• It can be applied to non-smooth nonlinearities.
• It is easy to understand and implement.
A monolithic L-scheme requires three constant tensors L u , L p , L q ∈ R d×d and two positive scalar constants L p and L u as linearization parameters. A practical choice of the linearization parameters will be discussed in the numerical section; we refer to [11,22] for a discussion of the best choice of L p and L u . The method starts from the given initial solution (u 0 , q 0 , p 0 ), solves the linearized system for the increments, and then updates the variables.
A splitting L-scheme
The splitting L-scheme requires fewer linearization terms: a tensor L u ∈ R d×d , a constant L p ≥ 0, and a positive stabilisation term L s . This makes it suitable for quick implementation, since there is no need to calculate any Jacobian. Given the initial solution (u 0 , q 0 , p 0 ), the method is split into two steps. Step 1: solve for (δq i , δp i ) and update the variables. Step 2: solve for δu i and then update the variables.
The Biot model under small deformations
The convergence analysis of the proposed iterative schemes cannot be addressed with standard techniques [11,14,15,37,39], because the nonlinearities are non-monotone. Nevertheless, a rigorous analysis can be performed in the case of small deformations. Accordingly, we assume the porous medium to be under small deformation and prove the convergence of the iterative schemes proposed in the previous section. Under small deformation, the difference between Ω t and Ω can be neglected. The gradient of the transformation is approximated by F ≈ I and its determinant by J ≈ 1. Additionally, the Green strain tensor E can be approximated by the infinitesimal strain tensor, E ≈ ε = 1/2 (∇u + (∇u) ⊤ ). The poroelastic stress tensor can then be expressed as
σ(u, p) = 2µ ε(u) + c(∇ · u) I − α p I,
where α is the Biot constant. The mobility tensor is considered isotropic, K(u, p) = kI, but the results of the convergence analysis can be extended without difficulty to the more general anisotropic case. Additionally, the time derivative of the volumetric deformation is approximated by J̇ ≈ ∇ · u̇. In this regard, the fluid mass can be expressed as
Γ(u, p) = b(p) + α ∇ · u,
where the relative density b(·) is a nonlinear function of the pressure p. The variational formulation of the Biot model under small deformation, problem (14), follows by testing these equations with suitable functions, with the initial condition (b(p 0 ) + α∇ · u 0 , w) = 0, ∀w ∈ L 2 (Ω).
In the above, we have used standard notation. We denote by L 2 (Ω) the space of square-integrable functions and by H 1 (Ω) the Sobolev space of functions in L 2 (Ω) whose weak first derivatives are also in L 2 (Ω). Furthermore, H 1 0 (Ω) is the space of functions in H 1 (Ω) vanishing on ∂Ω, and H(div; Ω) is the space of vector-valued functions having all components and the divergence in L 2 (Ω). As usual, we denote by (·, ·) the inner product in L 2 (Ω) and by ||·|| its associated norm.
Next, we make structural assumptions (A1) and (A2) on the nonlinearities b(·) and c(·); in particular, c(·) is assumed differentiable with c′(·) Lipschitz continuous (see the proof of Theorem 1). For the discretization of problem (14), we use conformal Galerkin finite elements for the displacement variable and mixed finite elements for the flow [23,43]. More precisely, we use linear elements for the displacement and lowest-order Raviart-Thomas elements [16] for the flow. Backward Euler is used for the temporal discretization.
Let Ω = ∪ K∈T h K be a regular decomposition of Ω into d-simplices, and denote by h the mesh size. The discrete spaces are built from P 0 and P 1 , the spaces of piecewise constant functions and of piecewise linear polynomials, respectively. For N ∈ N, we discretize the time interval uniformly and define the time step τ = T/N and t n = nτ . We use the index n for the primary variables u n , q n and p n at the corresponding time step t n . The fully discrete weak problem then reads: for n ≥ 1 and given the solution at t n−1 , find (u n h , q n h , p n h ) satisfying (16). Following the notation previously introduced, we denote by n the time level, whereas i refers to the iteration number of the Newton method; we denote accordingly the approximate solutions of the linearized problem (16). These will be used subsequently in the convergence analysis of the monolithic Newton method and its alternate version. For the monolithic and splitting L-schemes, the convergence analysis can be found in [11].
Convergence analysis of the monolithic Newton method
In this section, we analyse the monolithic Newton method introduced in Section 3 for solving the simplified nonlinear Biot model given in (16).
As previously stated, we perform the analysis in the case of small deformations. We present a variational formulation of the scheme and demonstrate its quadratic convergence in a rigorous manner. The Newton scheme reads as in (17), where the initial approximation (u n,0 h , q n,0 h , p n,0 h ) is taken as the solution at the previous time step, that is, (u n−1 h , q n−1 h , p n−1 h ).
In order to prove the convergence of the considered Newton method, the following lemmas will be used. Lemma 1. Let {x n } n≥0 be a sequence of positive real numbers satisfying
x n+1 ≤ a x n 2 + b x n ,
where a, b ≥ 0. If a x 0 + b < 1 holds, then the sequence {x n } n≥0 converges to zero.
Proof. The result can be shown by induction, see page 52 in [46] for more details.
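The mechanism behind Lemma 1 can be checked numerically. The recurrence below uses the standard form x_{n+1} = a x_n² + b x_n with a x_0 + b < 1 (our reading of the elided statement); under this condition each term satisfies x_{n+1} = x_n (a x_n + b) < x_n, and once x_n is small the decay is geometric with factor b:

```python
# Numerically check Lemma 1: x_{n+1} = a*x_n**2 + b*x_n with a*x_0 + b < 1.
# (The form of the recurrence is our reconstruction of the elided statement.)
a, b, x = 2.0, 0.3, 0.3
assert a * x + b < 1.0          # the convergence condition: 2*0.3 + 0.3 = 0.9

seq = [x]
for n in range(60):
    x = a * x**2 + b * x
    seq.append(x)

# The sequence is strictly decreasing and converges to zero: once x_n is small,
# the quadratic term is negligible and x_{n+1} ~ b*x_n, a geometric decay.
print(seq[0], seq[10], seq[-1])
```

In the convergence proofs, x_n plays the role of an error norm and a, b collect the constants in the quadratic and linear error terms, so the condition a x_0 + b < 1 is exactly the smallness condition on the initial error and on τ.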
Lemma 2. If f : R → R is differentiable and f ′ is Lipschitz continuous with constant L, then there holds
|f(x) − f(y) − f ′ (y)(x − y)| ≤ (L/2) |x − y| 2 for all x, y ∈ R.
Proof. See page 350 in [32], for example.
Next, the following result provides the quadratic convergence of the Newton method (17) for τ sufficiently small. Proof. By subtracting equations (16) from (17), taking e n,i u , e n,i q and e n,i p as test functions, and rearranging some terms to the right-hand side, we obtain an error equation in which the nonlinear terms have been rewritten; an analogous expression holds for the term with b′(·). From (A1), c(·) is differentiable with c′(·) Lipschitz continuous, so from Lemma 2 we have the estimate (22), where L c denotes the Lipschitz constant of c′(·). Then, by using Young's inequality, (a, b) ≤ ||a|| 2 /(2γ) + γ||b|| 2 /2 for γ > 0, and by choosing x = ∇ · u n h and y = ∇ · u n,i−1 h in (22), from (19) we obtain a bound valid for any γ > 0. Next, by using the inverse inequality for discrete spaces, ||·|| L 4 (Ω) ≤ Ch −d/4 ||·|| ([41], p. 111), the latter bound is refined. Finally, by using (A2) and choosing γ = α c , we obtain the inequality (25). In a similar way, we obtain (26) from (21). Adding (25), (26), and (20) multiplied by τ, then using ∇ · e n,0 u ≤ Cτ and e n,0 p ≤ Cτ (which can be proven) together with Lemma 1, the quadratic convergence of Newton's method is ensured for τ sufficiently small.
Convergence analysis of the alternate splitting Newton scheme
In this section we present the splitting Newton scheme for solving the nonlinear Biot model given in (16). We present the solver in a variational form and demonstrate its linear convergence.
Proof. The proof is similar to that of Theorem 1. Nevertheless, for the sake of completeness, we give it in Appendix A.
Numerical examples
In this section, we present numerical experiments that illustrate the performance of the proposed iterative schemes. We study two test problems: a 2D academic problem with a manufactured analytical solution, and a 3D large-deformation case on a unit cube. All numerical experiments were implemented using the open-source finite element library deal.II [4]. For all numerical experiments, a backward Euler scheme has been used for the time discretization. We consider continuous linear Galerkin finite elements for u, and lowest-order Raviart-Thomas and discontinuous Galerkin finite elements for q and p, respectively; however, any stable discretization could be considered instead. In all cases, the same stopping criterion is used for all the schemes.
Test problem 1: an academic example for Biot's model under small deformation. We solve the nonlinear Biot problem under small deformation in the unit square Ω = (0, 1) 2 until the final time T = 1. This test case was proposed in [11] to study the performance of the monolithic and splitting L-schemes; here we extend the study to the Newton method and the alternate Newton method described in Section 4.
Here, we introduce a manufactured right-hand side such that the problem admits an analytical solution with homogeneous boundary values for p and u.
For infinitesimal deformations and rotations, there is no distinction between the reference and deformed domains. In this regard, we solve problem (16) using the iterative schemes proposed in Section 4. The mesh size and the time step are set to h = τ = 0.1. For this case, all initial conditions are zero. The linearization parameters L p and L u are set equal to the Lipschitz constants L b and L c of the nonlinearities b(·) and c(·) [11].
In order to study the performance of the considered schemes, we propose four coefficient functions for b(·) and two for c(·), and define four test cases as given in Table 1. Figure 1 shows the performance of the numerical methods at the last time step, T = 1. The monolithic Newton method shows quadratic convergence in all cases, whereas the alternate Newton and L-scheme methods show linear convergence, as predicted in Section 4. Figure 2 shows the performance of the considered schemes for different time steps. The Newton method converges better for smaller time steps, while the L-scheme does so for larger time steps; all this is in agreement with Theorems 1 and 2. The performance of the considered schemes is independent of the mesh discretization.
Table 1: The coefficient functions b(·), c(·) for test problem 1.
Test problem 2: a unit cube under large deformation. We now solve a large-deformation problem on the unit cube Ω = (0, 1) 3 . A Lagrangian frame of reference is necessary to keep track of the deformed domain Ω t at time t. We study the performance of the iterative schemes presented in Section 3 for solving Eqs. (4). The material is supposed to be isotropic, with constant shear modulus µ and nonlinearity c(·). We consider a Lagrangian fluid mass m f = ρ f Jφ of a slightly compressible fluid, where φ is the porosity. Under this assumption, the time derivative of the fluid content reads Γ̇(u, p) = c p J(u) φ ṗ + c α J̇(u), where c p is the compressibility and Biot's coefficient c α = J ∂φ/∂J + φ ≈ 1 for simplicity. We compare the iterative schemes on a torsion case on the unit cube: on the top face, we apply the rotation tensor R(θ) with a time-dependent angle θ(t) = (π/4) t, which gives a rotation of π/4 at T = 1. We set homogeneous initial conditions for (q 0 , p 0 ) and ∇u 0 = R(θ) − I. In the alternate Newton method, the stabilization parameter is set to L s = 1.
In the L-scheme method, the linearisation tensor parameters are set as follows: L u = ∂ u Π (∇u 0 , p 0 ), L p = ∂ p Π (∇u 0 , p 0 ), L q = ∂ p K (∇u 0 ), L p = ∂ p Γ (∇u 0 , p 0 ) and L u = ∂ u Γ (∇u 0 , p 0 ). The mesh size and the time step are set to h = τ = 2 −3 . We denote by the top face of the unit cube the region z = 1 and by the bottom face z = 0; the lateral faces are x = 0, x = 1, y = 0 and y = 1. The boundary conditions are listed in Table 2, and the displacement and pressure fields are shown in Figure 3. We compare the performance of the schemes proposed in Section 3 and observe that the numerical convergence is in accordance with the theory developed in Section 4, even though the analysis was done for small deformations. Newton's method has quadratic convergence for the smaller time steps and linear convergence for the larger time steps. In contrast, the monolithic L-scheme has the same rate of convergence regardless of the size of the time
step (see Figure 4). All splitting schemes converge faster when the stabilization term is used (we use L s = 1.0).
Figure 4: Iterative error at each iteration step for each iterative scheme: Newton's method, splitting Newton (L s = 0 and L s = 1), monolithic L-scheme, and splitting L-scheme (L s = 0 and L s = 1).
Conclusions
We considered Biot's model under small and large deformations. Different nonlinear solvers based on the L-scheme, Newton's method, and the undrained splitting method were presented. The only quadratically convergent scheme is the monolithic Newton method. The splitting Newton method additionally requires a stabilization parameter; otherwise, even linear convergence cannot be guaranteed. The analysis of the schemes and illustrative numerical experiments were presented.
We tested the performance of the schemes on two test problems: a unit square under small deformation and a unit cube under large deformation. To summarise, we make the following remarks: • Monolithic and splitting L-schemes are robust with respect to the choice of the linearization parameters, the mesh size, and the time step size.
• The stabilization parameter L s has a strong influence on the speed of the convergence of the splitting Newton scheme.
• The splitting L-scheme can be used both as a robust solver and as a preconditioner (as established in [26,57]) to improve the performance of the monolithic Newton method and the L-scheme.
A Convergence proof of the alternate Newton method
The following result provides the linear convergence of the alternate Newton method in (29)-(30) for τ sufficiently small.
Silk Nanoparticle Manufacture in Semi-Batch Format
Silk nanoparticles have demonstrated utility across a range of biomedical applications, especially as drug delivery vehicles. Their fabrication by bottom-up methods such as nanoprecipitation, rather than top-down manufacture, can improve critical nanoparticle quality attributes. Here, we establish a simple semi-batch method using drop-by-drop nanoprecipitation at the lab scale that reduces special-cause variation and improves mixing efficiency. The stirring rate was an important parameter affecting nanoparticle size and yield (400 < 200 < 0 rpm), while the initial dropping height (5.5 vs 7.5 cm) directly affected nanoparticle yield. Varying the nanoparticle standing time in the mother liquor between 0 and 24 h did not significantly affect nanoparticle physicochemical properties, indicating that steric and charge stabilizations result in high-energy barriers for nanoparticle growth. Manufacture across all tested formulations achieved nanoparticles between 104 and 134 nm in size with high β-sheet content, spherical morphology, and stability in aqueous media for over 1 month at 4 °C. This semi-automated drop-by-drop, semi-batch silk desolvation offers an accessible, higher-throughput platform for standardization of parameters that are difficult to control using manual methodologies.
Supporting Information: exemplary smoothed second-derivative FTIR spectra and peak fitting in the amide I region for nanoparticles, films, and powders (PDF).
INTRODUCTION
The mulberry silk produced by the Bombyx mori silkworm is one of the most extensively studied silks, with ancient and far-reaching applications ranging from domestic to medical textiles. 1,2 The ability to regenerate silk fibroin protein from the silk cocoon has led to an advent of new material formats with adjustable physical properties; most notable among these formats are porous scaffolds, 3 hydrogels, 4 films, 5 and particles. 6 Silk fibroin offers several exploitable characteristics, including broad biocompatibility and biodegradability, 5,7 low immunogenicity, 5 and the presence of reactive amino acids amenable to chemical modification. 8 This amenability makes reverse-engineered silk a promising precursor for clinical applications, 1,4 as evidenced by the granting in 2019 of the first FDA approval for a regenerated silk hydrogel for human vocal fold reinforcement (Silk Voice, Sofregen Medical Inc., Medford, MA). 9 B. mori silk fibroin is a structural protein composed of a light (≈26 kDa) 1 and a heavy chain (≈391 kDa), 1 which are linked by a disulfide bond. 1 The heavy chain has a block copolymer sequence of short hydrophilic amorphous regions interspersed with long hydrophobic (GAGAGX) n and (GAGAGY) n residues. 1 These hydrophobic motifs, which are capable of β-sheet self-assembly and constitute over 50% of the primary structure, impart high mechanical strength to the fiber. 5 Silk is a natural biopolymer with metastable tertiary structures; therefore, the structure of silk-based materials can be tuned to their desired function by modifying their crystallinity 1,3,4,10 and hierarchical composition. 11 This structural versatility, coupled with the amphiphilic nature of silk, also permits silk to undergo a variety of favorable intermolecular interactions with lipophilic and hydrophilic therapeutic payloads 3,12 by in situ 12 or post-synthetic loading.
12−14 These interactions can also stabilize synthetic drugs 3,12 and biological molecules 3,12 by surface adsorption or encapsulation, thereby sterically shielding a drug cargo from biological clearance. The drug release behavior can be designed according to a tissue-specific stimulus to improve efficacy and reduce off-target effects while preserving drug structure and activity. 3,12,15 Silk nanoparticles are especially suited for drug targeting of solid tumors as these nanoparticles exhibit increased drug release at low pH, 6,8 which is a signature of tumor environments. In addition, silk nanoparticles have shown desirable critical quality attributes, including in vitro endocytosis-mediated uptake, 14 lysosomotropic drug release, 16 and proteolysis, 16,17 which indicate their value as anticancer nanomedicines.
Preparation of silk particles of submicron size (25−180 nm) can be achieved by six major bottom-up methods (reviewed previously 18 ): capillary microdot printing, 19 desolvation, 8,13,14,20 supercritical CO 2 processing, 21 electrospraying, 22 emulsification, 23 and ionic liquid dissolution. 24 Among these methods, desolvation provides one of the most accessible and least energy-intensive lab-scale routes and is commonly used for the manufacture of protein nanoparticles. 25 Desolvation of silk is a nanoprecipitation process whereby an aqueous silk solution is mixed with a water-miscible organic solvent in which the heavy-chain hydrophilic blocks have low solubility (e.g. isopropanol and acetone). This process has no requirement for method-specific, expensive apparatus 18 and produces silk nanoparticles with cores enriched in β-sheet structures without the need for further chemical cross-linking steps.
Currently, optimized lab-scale desolvation methodology uses a semi-batch format consisting of a manual drop-by-drop addition of 3−5% w/v silk into at least a 200% v/v excess of the organic antisolvent. 8,13,14 In comparison to batch processes, in which an empty reactor is charged with all species simultaneously, semi-batch desolvation is defined by the feed of the solute into a vessel precharged with an antisolvent, or vice versa. 26 Semi-batch nanoprecipitation can be scaled up from the bench, 26 with the process further aided by computational simulations. 27,28 However, when compared to pilot-scale operations, the manual method suffers from special-cause variations in flow rate, droplet size, and dropping height. Additionally, although particle size and polydispersity are controlled by rapid mixing, 26,27 which is facilitated by agitation, 26,28 stirring is not a common practice in manual silk desolvation procedures.
Designing procedures, which reduce processing times and batch-to-batch variability, will aid the progress of pharmaceutical products from the bench to the market. 4,12 The aim of the current study was to establish a simple, semi-automated, and higher-throughput drop-by-drop technique for semi-batch silk nanoprecipitation. We investigated the impact of several process parameters, including stirring rate and standing time, on the physicochemical properties (e.g. particle size, polydispersity, zeta potential, stability, secondary structure, morphology, and yield) of the resulting silk nanoparticles.
MATERIALS AND METHODS
Unless otherwise stated, studies were conducted at 18−22°C. All reagents and solvents were acquired from Acros Organics or Sigma-Aldrich at >98% purity, unless otherwise stated, and utilized without additional purification.
2.1. Regeneration of B. mori Silk. Silk fibroin was extracted from B. mori cocoons, as described elsewhere. 13 Briefly, B. mori cocoons were cut into approximately 5 × 5 mm 2 sections and boiled, with manual stirring, in 0.02 M aqueous Na 2 CO 3 (2 L) at 98−105°C for 1 h. Degummed silk fibers were rinsed in ultrapure H 2 O (1 L) three times for 0.33 h each. The silk was then dried for at least 24 h at room temperature.
Dry silk fibers were dissolved in a 9.3 M aqueous LiBr solution at 60°C for 4 h to give a 25% w/v silk solution. The silk solution was dialyzed (molecular weight cutoff 3500 g mol −1 , Slide-A-Lyzer, Thermo Scientific, Rockford, IL) against ultrapure H 2 O (1 L) for 48 h and then purified by centrifugation over four cycles, each for 0.33 h at 3000g and 5°C (Jouan BR4i centrifuge equipped with an S40 swing rotor). Silk concentrations were determined gravimetrically over 24 h at 60°C and then adjusted to 3% w/v with ultrapure H 2 O.
2.2. General Drop-By-Drop Manufacture of Silk Nanoparticles in Semi-Batch Format. Silk nanoparticles were manufactured at room temperature using a syringe pump (Harvard Apparatus 22, Holliston, MA) equipped with a BD PLASTIPACK syringe and blunt needle (23G × 0.25″) (Figure 1). The inclination of the syringe pump was 0−0.1°. The isopropanol antisolvent was added to a short-neck round-bottom flask (to give a final 5:1 v/v ratio of isopropanol:silk). A 3% w/v silk solution was then added drop by drop at a rate of 1 mL min −1 (≈27 drops min −1 , ≈37 μL per drop). The resulting suspensions were incubated at room temperature for the designated time, transferred to polypropylene ultracentrifugation tubes, made up to 43 mL with ultrapure H 2 O, and centrifuged at 48,400g for 2 h at 4 °C (Beckmann Coulter Avanti J-E equipped with a JA-20 rotor). The supernatant was aspirated, and the pellet was resuspended in ultrapure H 2 O (20 mL) and sonicated twice for 30 s at 30% amplitude with a Sonoplus HD 2070 sonicator (ultrasonic homogenizer, Bandelin, Berlin, Germany). An additional volume of ultrapure H 2 O (23 mL) was added, and the centrifugation, washing, and resuspension steps were repeated twice more. The final pellet was collected and resuspended in 2−3 mL of water. This final silk nanoparticle suspension was stored at 4 °C until use. Unless stated otherwise, each experiment was repeated in triplicate using three different aqueous silk precursor stock solutions.
Figure 1: The controlled process parameters: (1) loading of a bubble-free 3% w/v aqueous silk solution into a syringe equipped with a blunt needle, (2) the relative positions of the needle and round-bottom flask, (3) the flow-rate control of the silk solution at 1 mL min −1 , (4) the stirring rate during addition, and (5) the nanoparticle standing time in the mother liquor following completion of silk addition.
Calculations for needle residence time and shear rate are based on the literature dynamic viscosity (27 mPa s) of regenerated 3% aqueous silk 29 and the density (1.02 g mL −1 ) calculated herein for the 3% w/v aqueous silk solution, and assume Newtonian flow. 29 The Reynolds number was estimated as 2 using the internal diameter of the needle 30 (0.33 mm), indicating laminar flow. An upper limit of the residence time was estimated using the mean linear velocity (194 mm s −1 ) and the needle length. 31 The maximum shear rate was taken as the wall shear rate, and for simplicity, the shear rate calculations used the geometry of a straight cylinder. Calculations for the 3 and 10 mL syringes used in the study were undertaken similarly, using the internal diameters stated by the manufacturer.
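These estimates can be reproduced from the stated quantities (flow rate 1 mL min⁻¹, needle bore 0.33 mm, viscosity 27 mPa s, density 1.02 g mL⁻¹), assuming laminar Poiseuille flow in a straight cylinder as the text describes:

```python
import math

# Stated process values
Q = 1e-6 / 60          # flow rate: 1 mL/min in m^3/s
d = 0.33e-3            # needle internal diameter, m
mu = 27e-3             # dynamic viscosity, Pa s
rho = 1.02e3           # density, kg/m^3

area = math.pi * (d / 2) ** 2
v = Q / area                      # mean linear velocity, m/s (~0.19 m/s = ~190 mm/s)
Re = rho * v * d / mu             # Reynolds number (~2, well within laminar regime)
wall_shear = 8 * v / d            # wall shear rate for Poiseuille pipe flow, 1/s

print(v * 1000, Re, wall_shear)
```

The computed Reynolds number of about 2 matches the value stated in the text, which is also why the mean linear velocity must be on the order of 190 mm s⁻¹ rather than ~2 mm s⁻¹.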
2.2.1. Reproducibility of Semi-Automated Silk Nanoparticle Manufacture. Silk nanoparticles were manufactured in a 10 mL flask at a 6 mL total volume ( Figure 1). Silk was added from a height of 5.5 cm from the bottom of the isopropanol meniscus. The mother liquor suspension was then incubated for 2 h before purification. This procedure was repeated a further 15 times using five silk precursor solutions.
2.2.2. Effect of Stirring Rate on Manufacture and Silk Nanoparticle Properties. Silk nanoparticles were manufactured in a 10 mL flask at a 6 mL total volume ( Figure 1). Silk was added from a height of 7.5 cm from the bottom of the isopropanol meniscus, and stirring was accomplished with an egg-shaped stir bar (15 × 6 mm) at 200 and 400 rpm. The mother liquor suspension was then incubated for 2 h before purification. This procedure was repeated in triplicate using three silk precursor solutions.
2.2.3. Effect of Standing Time on Manufacture and Silk Nanoparticle Properties. Silk nanoparticles were manufactured in a 50 mL flask at a 36 mL total volume. Silk was added from a height of 7.5 cm from the bottom of the isopropanol meniscus with stirring at 400 rpm with an egg-shaped stir bar (15 × 6 mm). An aliquot (6 mL) was taken immediately following complete addition of the silk precursor, and stirring was stopped. Further aliquots (6 mL) were taken at 2.7, 5.5, 8.5, 11.5, and 24 h following stirring for 0.02 h at 400 rpm to ensure suspension homogeneity. This procedure was repeated in triplicate using three silk precursor solutions.
2.3. Yield of Silk Nanoparticles. The nanoparticle concentrations were determined by recording the total mass of the suspension in a preweighed centrifuge tube. A known mass of each suspension was then frozen at −80°C for 5 h in preweighed microcentrifuge tubes, followed by freeze-drying (Christ Epsilon 1−4, Martin Christ Gefriertrocknungsanlagen GmbH, Osterode, Germany) for 24 h at −10°C and 0.14 mbar. The dry mass was recorded, and the yield was calculated using eq 1:

yield (%) = [particle concentration (% w/w) × suspension mass (mg)] / [10 × silk concentration (% w/v) × volume (mL)]   (1)

This process was repeated in duplicate and the average yield reported; the freeze-dried samples were stored in a vacuum desiccator until use.
2.4. Silk Nanoparticle Physicochemical Characterization and Stability in Water. The size (Z-average of the hydrodynamic diameter), polydispersity, and zeta potential of silk nanoparticles were determined as described elsewhere. 32 Briefly, silk nanoparticles were analyzed in ultrapure H2O at 25°C by dynamic light scattering (DLS) (Zetasizer Nano-ZS, Malvern Instruments, Worcestershire, U.K.). Unless otherwise stated, the samples were vortexed for 20 s and sonicated twice at 30% amplitude for 30 s prior to measurement. Refractive indices of 1.33 and 1.60 for H2O and protein, respectively, were used for particle size measurement. All analyses were conducted in triplicate.
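The gravimetric yield calculation can be expressed as a small helper function. This is a sketch of eq 1 with the percent concentrations kept in their stated units (the factor of 10 converts % w/v × mL into mg of silk input); the numbers in the example are illustrative, not measured values from the study.

```python
# Sketch of the gravimetric yield calculation (eq 1).

def silk_np_yield(particle_conc_pct_ww, suspension_mass_mg,
                  silk_conc_pct_wv, volume_ml):
    """Yield (%) of dry nanoparticle mass relative to silk mass input."""
    particle_mass_mg = particle_conc_pct_ww / 100 * suspension_mass_mg
    silk_mass_mg = silk_conc_pct_wv * 10 * volume_ml  # % w/v = g per 100 mL
    return 100 * particle_mass_mg / silk_mass_mg

# Example: 0.5% w/w suspension, 6000 mg recovered, from 3% w/v silk at 6 mL
print(f"{silk_np_yield(0.5, 6000, 3.0, 6.0):.1f}%")  # -> 16.7%
```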
The particle size and zeta potential of silk nanoparticles generated in the stirring studies were determined on days 0, 10, 18, 24, 28, 35, and 42 by DLS. The particle size and zeta potential of silk nanoparticles generated in standing time studies were determined at days 0, 42, and 63 by DLS. The silk nanoparticles from all studies were stored at 4°C. At t > 0 days, the silk nanoparticles were vortexed for 20 s before size and zeta potential analysis.
2.5. Secondary Structure Measurements of Silk Nanoparticles. Air-dried silk films and freeze-dried silk were used as silk I structure references, while autoclaved silk films and silk films treated with 70% v/v ethanol/ultrapure H2O were used as positive controls for silk II structure. Silk films, powders, and nanoparticles were analyzed by Fourier transform infrared spectroscopy (FTIR) on an ATR-equipped TENSOR II FTIR spectrometer (Bruker Optik GmbH, Ettlingen, Germany). Each nanoparticle and freeze-dried silk sample was flash-frozen at −80°C for at least 5 h and then lyophilized for 24 h. Each FTIR measurement was run for 128 scans at a 4 cm−1 resolution in absorption mode over the wavenumber range of 400−4000 cm−1 and corrected for atmospheric absorption using Opus (Bruker Optik GmbH, Ettlingen, Germany). The amide I regions of the FTIR spectra were analyzed in OriginLab 19b (Northampton, Massachusetts), as described elsewhere. 33 The second derivative of the background-corrected absorption spectrum was obtained and smoothed twice using a seven-point Savitzky−Golay function with a polynomial order of 2. A nonzero linear baseline was interpolated between 2 and 3 of the highest points between 1600 and 1710 cm−1. Peak positions in the amide I region were then identified using the second derivative and peaks fitted using nonlinear least squares with a series of Gaussian curves (Figure S1). Band positions, widths, and heights were allowed to vary, and peak area was allowed to take any value below or equal to 0. The deconvoluted spectra were area-normalized, and the secondary structure content was calculated with reference to literature band assignments 34,35 using the relative areas of each band.
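The second-derivative step of this amide I analysis can be sketched with SciPy. The example below applies the double seven-point Savitzky−Golay smoothing (polynomial order 2) described above and locates band positions as second-derivative minima; the spectrum is synthetic (two Gaussian bands at assumed positions), since real data would come from the FTIR export.

```python
# Sketch of the second-derivative amide I analysis: derivative, double
# Savitzky-Golay smoothing (7-point window, order 2), and band-position finding.
# The spectrum is synthetic; band centers 1625 and 1650 cm^-1 are assumptions.

import numpy as np
from scipy.signal import savgol_filter, find_peaks

wn = np.linspace(1600, 1710, 551)  # wavenumber axis, cm^-1
# Synthetic amide I band: beta-sheet (~1625 cm^-1) plus helix/coil (~1650 cm^-1)
spectrum = (0.8 * np.exp(-((wn - 1625) / 10) ** 2)
            + 0.4 * np.exp(-((wn - 1650) / 12) ** 2))

d2 = np.gradient(np.gradient(spectrum, wn), wn)    # second derivative
d2 = savgol_filter(savgol_filter(d2, 7, 2), 7, 2)  # smooth twice

minima, _ = find_peaks(-d2)  # band positions appear as minima of d2
print("candidate band positions (cm^-1):", np.round(wn[minima], 1))
```

Second-derivative minima sharpen overlapping bands, which is why the method resolves the two synthetic components even though they overlap heavily in the raw spectrum.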
The correlation coefficient (R) was calculated according to previous analyses. 36 The air-dried silk film of an aqueous silk precursor batch was used as the reference for all silk films, freeze-dried silk, and silk nanoparticle samples. The second-derivative curves of the absorption spectra were smoothed twice with a five-point Savitzky−Golay function and a polynomial order of 2 and then compared over the spectral range of 1600−1700 cm−1 using eq 2:

R = Σ(x_i · y_i) / √(Σ x_i² · Σ y_i²)   (2)

where x_i and y_i are the derivative values of the air-dried silk film and the sample of interest at frequency i, respectively.
2.6. Thermal Analysis of Silk Nanoparticles. A known volume and mass of each silk sample and freeze-dried silk control was frozen at −80°C for 5 h, followed by freeze-drying for 24 h at −10°C and 0.14 mbar. First-cycle differential scanning calorimetry and thermogravimetric analysis were carried out on the dried samples (1.95−4.89 mg) in aluminum pans from 20 to 350°C at a scanning rate of 10°C min−1 under a nitrogen flow of 50 mL min−1 (STA 449 Jupiter, Netzsch-Gerätebau GmbH, Germany). Thermograms were analyzed using OriginLab 19b (Northampton, Massachusetts). The desorption enthalpy was normalized to a corrected mass during volatilization, as described previously. 37
2.7. Scanning Electron Microscopy (SEM) of Silk Nanoparticles. Aqueous silk nanoparticle suspensions were adjusted to a concentration of 1 mg mL−1. An aliquot (20 μL) of each sample was then pipetted onto a silicon wafer and lyophilized for 24 h at −10°C and 0.14 mbar. The specimens were sputter-coated with gold using a low-vacuum sputter coater (Agar Scientific Ltd., Essex, U.K.) and analyzed with the secondary electron detector of an FE-SEM SU6600 instrument (Hitachi High Technologies, Krefeld, Germany) at 5 kV and 40k magnification.
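The spectral correlation coefficient of eq 2 reduces to a one-line helper. In this sketch, x and y stand in for second-derivative values over 1600−1700 cm−1; the arrays are illustrative, not measured spectra.

```python
# Spectral correlation coefficient (eq 2) between two second-derivative spectra.

import numpy as np

def spectral_correlation(x, y):
    """R = sum(x_i * y_i) / sqrt(sum(x_i^2) * sum(y_i^2))"""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))

ref = np.array([0.10, -0.42, 0.25, -0.08, 0.31])  # illustrative derivative values
print(spectral_correlation(ref, ref))        # identical spectra -> 1.0
print(spectral_correlation(ref, 0.5 * ref))  # R is insensitive to uniform scaling
```

Because R is normalized by both spectra's magnitudes, it measures band-shape similarity rather than absolute intensity, which is why it can disagree with band-area deconvolution when baselines are offset (as noted later in the Results).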
The images were processed using ImageJ v1.52n (National Institutes of Health, Bethesda, MD) and Adobe Illustrator (Adobe, San Jose, CA).
2.8. Statistical Analysis. Testing for equal variance was undertaken on multiple groups using Bartlett's method. Sample pairs were analyzed using Welch's independent t-test. Multiple groups were evaluated by one-way analysis of variance (ANOVA), followed by Tukey's multiple comparison post hoc test, or by the Brown−Forsythe and Welch ANOVA tests, followed by the Dunnett T3 multiple comparison post hoc test. Silk nanoparticle stability was evaluated by ANOVA followed by Dunnett's post hoc test to compare the t = 0 day control and t > 0 day samples. All data were assumed to have normal distributions. Asterisks denote statistical significance determined using post hoc tests as follows: *p < 0.05, **p < 0.01, ***p < 0.001, and ****p < 0.0001. Unless otherwise specified, all data are presented as mean values ± SD, and the number of experimental repeats (n) is noted in each figure legend.
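This statistical workflow can be reproduced with SciPy equivalents: Bartlett's test for equal variances, Welch's t-test for a sample pair, and one-way ANOVA across multiple groups. The group data below are illustrative particle sizes (nm), not measured values; the Tukey and Dunnett post hoc tests were run in the original analysis software (recent SciPy versions also provide post hoc helpers).

```python
# Sketch of the statistical workflow using SciPy; data are illustrative only.

from scipy import stats

a = [134, 131, 136, 133]  # condition A, particle size in nm (illustrative)
b = [115, 113, 117, 114]  # condition B
c = [124, 126, 122, 125]  # condition C

print(stats.bartlett(a, b, c))                 # equal-variance check
print(stats.ttest_ind(a, b, equal_var=False))  # Welch's independent t-test
print(stats.f_oneway(a, b, c))                 # one-way ANOVA across groups
```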
3. RESULTS
3.1. Silk Nanoparticle Characterization. The DLS and mass measurement values indicated an influence of the stirring rate on the physicochemical properties and yield of silk nanoparticles (Figures 1 and 2). When the silk solution was added from a height of 7.5 cm, an increase in the stirring rate from 0 to 400 rpm significantly decreased the silk nanoparticle size (ANOVA, p < 0.05) from 134 to 114 nm (Figure 2c). Varying the stirring rate between 0 and 400 rpm had no significant impact on the polydispersity or negative surface charge, which ranged from 0.12 to 0.14 and −30 to −33 mV, respectively (Figure 2d,e). However, increasing the stirring rate significantly decreased the yield from 23 to 9% (ANOVA, p < 0.01) (Figure 2f).
The effect of droplet velocity on nanoparticle formation in the absence of stirring was determined by varying the height from which the silk solution was dropped (henceforth, the initial addition height). A decrease in the initial addition height from 7.5 cm (ν_droplet ≈ 1.21 m s−1) to 5.5 cm (ν_droplet ≈ 1.03 m s−1) significantly decreased the yield of nanoparticles (t-test, p < 0.01) from 23 to 18%. By contrast, the physicochemical properties were not affected by decreasing the initial addition height to 5.5 cm: the nanoparticles had an average size, polydispersity, and zeta potential of 131 nm, 0.11, and −30 mV, respectively.
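The droplet velocities quoted above are consistent with simple free fall from the stated heights, v = √(2gh), neglecting air resistance over a few centimeters:

```python
# Free-fall estimate of droplet impact velocity from the addition height.

import math

g = 9.81  # gravitational acceleration, m s^-2

for h_cm in (5.5, 7.5):
    v = math.sqrt(2 * g * h_cm / 100)  # height converted from cm to m
    print(f"h = {h_cm} cm -> v = {v:.2f} m/s")
```

For h = 7.5 cm this gives ≈1.21 m/s and for h = 5.5 cm ≈1.04 m/s, matching the ν_droplet values in the text to within rounding.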
The growth of nanoparticles in the mother liquor was also investigated by varying the nanoparticle standing time before purification (Figure 1). Over a 24 h interval, the standing time had no significant effect on nanoparticle physicochemical properties or yield. Overall, the silk nanoparticle size ranged from 104 to 116 nm, with a polydispersity ranging from 0.11 to 0.14. The negative surface charge ranged from −30 to −35 mV, and the yield varied between 9 and 15% w/w of silk (Figure 2).
3.2. Secondary Structure Measurement. The impact of the process conditions on silk nanoparticle secondary structure was determined by attenuated total reflectance-FTIR (ATR-FTIR) analysis and deconvolution of the characteristic protein amide I band (1600−1710 cm−1) (Figure S1). Silk nanoparticle secondary structure did not vary significantly with changes in the initial addition height, stirring rate, or formulation standing time. For the stirring and standing time studies, the high nanoparticle β-sheet content (54−57%) correlated with the 55% β-sheet composition measured for autoclaved and ethanol-treated silk films, which served as positive controls for silk II structure (Figure 3). Additionally, the α-helix and random coil content (18−21%) of silk nanoparticles was comparable to that of autoclaved (20%) and ethanol-treated films (19%). Autoclaving provides thermal energy to break labile bonds in the silk film, with the uptake of water acting to plasticize the material. This directly contrasts with nanoprecipitation, where water is removed from the silk hydration shell. The silk nanoparticle structure from both studies showed a significantly higher percentage of β-sheets (ANOVA, p < 0.0001) and less α-helix and random coil content (ANOVA, p < 0.0001) compared to the negative silk II structure controls (air-dried silk film and freeze-dried silk powder, with 17−25% β-sheet and 47−56% α-helix and random coil content) (Figure 3c).
The spectral correlation coefficient method of comparing second-derivative ATR-FTIR spectra in the amide I region (1600−1700 cm−1) was also used to measure formulation-induced structural changes in silk nanoparticles versus those in an air-dried silk film. Silk nanoparticle correlation coefficients ranged from 0.27 to 0.31 and showed no significant variation with initial addition height, stirring rate, or formulation standing time (Figure 3c). The stirring and standing time studies revealed a discrepancy between the correlation coefficients of silk nanoparticles and those of the autoclaved films (0.10) (ANOVA, p < 0.0001) and the ethanol-treated silk films (0.18) (ANOVA, p < 0.05). This disagreement with the band deconvolution findings could reflect offsets in the second-derivative baselines. Regardless, band deconvolution consistently identified the nanoprecipitation-associated β-sheet enrichment.
3.3. Thermal Analysis. The simultaneous thermal analysis first-cycle results of silk nanoparticles manufactured at stirring rates between 0 and 400 rpm and silk II negative controls are shown in Figure 4 and Table 1. Thermogravimetric analysis (TGA) was used to confirm differences in the water content and thermal stability of silk nanoparticles caused by formulation (Figure 4). Thermograms of all silk nanoparticles and controls showed three regions with two weight-loss steps: the loss of adsorbed and strongly bound water between 20 and 140°C, followed by silk decomposition above 170°C. The increase in mass at low temperatures in TGA measurements was due to buoyancy effects resulting from variations in air density with heating. 38 No significant differences in water content were observed with increased stirring rate, with nanoparticles containing 12−14% w/w water across all formulations.
Nanoparticles displayed a significantly delayed (ANOVA, p < 0.05) onset decomposition temperature, ranging between 273.2 and 277.3°C, compared to the freeze-dried powder negative silk II control (261.4°C). This higher stability to thermal degradation suggests that the nanoparticle structure comprises a higher crystalline fraction than amorphous, freeze-dried silk. Nevertheless, there was no significant difference between the decomposition temperatures of silk nanoparticles manufactured at different stirring rates, which ranged between 298.5 and 304.0°C.
Differential scanning calorimetry (DSC) measurements confirmed that the formulation stirring rate did not affect the primary or secondary structure of silk nanoparticles (Figure 4). The desorption enthalpy ranged between −207.8 and −282.7 J g−1, and the temperature of desorption ranged from 36.1 to 43.6°C, with no significant variation with stirring rate. The water desorption-associated and final glass transitions at 59.3 and 201.5°C, respectively, were not identifiable for all nanoparticle samples. The glass transition at 201.5°C was also shifted to a higher temperature and was less steep when compared to that of the silk I structure (184.5°C). This indicates that the molecular mobility of silk molecules was reduced upon their incorporation into the nanoparticle structure. The crystallization exotherm (random coil to β-sheet transition), present for the negative controls at 241.0°C, was absent from the nanoparticle curves. No significant difference was noted between the decomposition temperatures (ranging between 282.9 and 289.5°C) of silk nanoparticles manufactured at different stirring rates.

(Figure caption: Silk films treated with 70% ethanol and autoclaving to obtain high β-sheet content were used as positive controls for silk II structure, with an untreated silk film and freeze-dried silk powder serving as negative controls. The correlation coefficients (R) of silk nanoparticle, film, and powder second-derivative amide I spectra were calculated using the silk II negative control film as reference, n = 3, ± SD. The correlation coefficients, total β-sheet, and α-helix and random coil contents were evaluated by one-way analysis of variance (ANOVA), followed by Tukey's multiple comparison post hoc test. The intermolecular β-sheet, native β-sheet, β-turn, and antiparallel amyloid β-sheet contents were evaluated using the Brown−Forsythe and Welch ANOVA tests, followed by the Dunnett T3 multiple comparison post hoc test.)

ACS Biomaterials Science & Engineering | pubs.acs.org/journal/abseba | Article

3.4. Silk Nanoparticle Aqueous Stability. The particle size, polydispersity, and zeta potential stability of nanoparticles manufactured with stirring rates between 0 and 400 rpm were determined for up to 42 days. Nanoparticles manufactured across all stirring rates showed size stability and constant polydispersity in water for up to 42 days (Figure 5). By contrast, the zeta potential of nanoparticles manufactured without stirring varied significantly across 42 days. The particle size, polydispersity, and zeta potential stability of nanoparticles manufactured with standing times between 0 and 24 h were also determined for up to 63 days. All formulations showed size and polydispersity stability in water for up to 63 days (Figure 5). The negative surface charges of silk nanoparticles fluctuated for some formulations over this period, and nanoparticle morphology was assessed by SEM (Figure 6). Silk nanoparticles manufactured with stirring rates of 200 and 400 rpm at the 6 mL scale had spherical shapes and narrow size distributions at day 24. Nanoparticles manufactured with standing times of 0 and 24 h at the 36 mL scale showed generally spherical morphologies and uniform size distributions when imaged at day 55. Overall, nanoparticles showed a coarse surface topography.
DISCUSSION
Silk particles have attracted increased attention for drug delivery applications because their manufacture can be tailored for a desired size (from nano- to microscale), crystallinity, and surface chemistry. 6,18,39 The nanoparticles produced by desolvation are well suited to anticancer applications due to their submicron size, 8,14,16,20,32 which would allow extravasation through leaky tumor vasculature, 40 followed by endocytosis and lysosomal trafficking in malignant cells. 16 However, nanoparticle manufacture and drug loading have not always translated from small-scale bench procedures to those following current good manufacturing practices (i.e., 21 Code of Federal Regulations Parts 210−212) in the manufacturing sector. 41,42 This has prompted the implementation of continuous techniques, 20,23,32 which can offer greater ease of scale-up. 41 One powerful approach is microfluidic-assisted nanoprecipitation, which uses laminar flow focusing to achieve micromixing conditions and solvent shifting by diffusion. 20,32,41 Production can be scaled up by microfluidic-chip parallelization or by increasing channel diameters, although working at large total volumes can cause problems due to low production rates and the scaling limitations imposed by complex mixer designs. Consequently, improving the reproducibility of lab-scale methodology for semi-batch manufacture of silk nanoparticles remains an area of much interest.
Understanding the parameters that impact silk nanoprecipitation will aid the optimization of silk nanoparticle physicochemical properties for in vivo performance as nanomedicines. The consequences of varying silk stock reverse-engineering processes 20 and antisolvent species, 4,32 and their relative ratios, on the outcome of nanoprecipitation in semi-batch 43 and continuous formats 32 have already been reported. For example, 1 h degumming times for silk cocoons resulted in greater molecular weight polydispersity of silk stocks and in favorable nanoparticle size, polydispersity, and zeta potential when compared to shorter degumming times. 20 Several research groups have also investigated the effect of the antisolvent species used for nanoprecipitation on the resulting nanoparticle properties, 32,44 as the antisolvent molecular and macroscopic properties contribute to the mixing conditions. Increasing the volume ratio of antisolvent to silk solution imposes high supersaturation conditions in both semi-batch 43,44 and microfluidic formats, 32 resulting in faster nucleation and smaller nanoparticle sizes. Hence, in the current study, the optimized formulation variables 13,20,32 for preparing silk nanoparticles were used to investigate several little-understood preparation parameters, namely, the initial addition height, stirring rate, and standing time.
A semi-automated, drop-by-drop procedure for silk desolvation in semi-batch format was designed to replace the manual addition of silk to organic antisolvent using a syringe or pipette. It is probable that homogeneous silk nucleation occurred by antisolvent-induced desolvation (Figure 3a). We do not consider seeded crystallization, as the 3% w/v silk concentration used lay below the ≈10% w/w critical micelle concentration 29 of regenerated silk fibroin. Additionally, the shear rates generated in the needle were low; combined with the short residence time of 33 ms, these shear rates would not be expected to provide sufficient work (≈10 5 Pa) 45 for shear-induced nucleation of the silk molecules. Silk nanoparticles were reproducibly and reliably manufactured at a 6 mL scale when key processing parameters were set at levels for optimal nanoparticle properties, as determined in previous work. 13,20,32 As unreliable nanoparticle manufacture has been implicated in the reduced efficacy of the generic Doxil formulation, LipoDox, 41,46 one might speculate that increasing the reproducibility of silk nanoparticle manufacture will ultimately lead to better in vitro and in vivo therapeutic profiling. For example, across 16 repeats using three aqueous silk batches over 3 days, silk nanoparticles produced at a 5.5 cm initial addition height and without stirring had an average size of 131 ± 7 nm and a low polydispersity of 0.11 ± 0.02. The polydispersity was similar to previously reported values obtained using the same silk concentration and silk-to-antisolvent volumetric ratio in manual and microfluidic-assisted methodology. 8,13,20,32 Although the nanoparticle sizes were larger than the literature values (≈100−115 nm), 8,13,20,32 they lie within the optimal size range (100−200 nm) for drug delivery vehicles. 47 The nanoparticles were obtained in an average yield of 18 ± 3%, comparing favorably with previous reports in manual and microfluidic formats. 13,32 The zeta potential of −30 ± 2 mV was higher than previously reported values obtained with manual silk addition. 8,20 This phenomenon was also observed for microfluidic-assisted manufacture 20,32 and probably reflects different silk molecule packing arrangements resulting from the varying fluid dynamics of the different flask geometries between studies.
Silk nanoparticles were highly crystalline, featuring a high β-sheet content of 56 ± 1% and a spectral correlation coefficient of 0.27 ± 0.03 over the amide I region, in agreement with previous studies. 20,32 Increasing the initial addition height to 7.5 cm resulted in a higher droplet impact velocity and kinetic energy, thereby causing larger disturbances in the antisolvent and facilitating mass transfer and solvent shifting. Surprisingly, this change did not significantly affect the nanoparticle physicochemical properties, although it significantly increased the yield.
The inverse relationship between nanoparticle size and formulation stirring rate has been observed for globular protein nanoparticles 48 and polymer emulsions. 49 As a nanoprecipitation process, aqueous silk desolvation is fundamentally a diffusion-limited solvent shift in which water molecules from the silk hydration shell are replaced with isopropanol molecules. Therefore, mixing efficiency is an important factor that dictates nanoprecipitation outcomes, and magnetic stirring increases control of the macro- to micromixing rates. 28 In lab-scale semi-batch manufacture, the silk nanoparticle size and yield showed an inverse dependence on the stirring rate across 0−400 rpm (Figure 2). While stirring has been used for desolvation of regenerated silk obtained from Antheraea mylitta, 50 we believe this is the first report highlighting the importance of the stirring rate on the outcome of silk fibroin desolvation in a semi-batch system. When manufactured at a 400 rpm stirring rate and 6 mL scale, the nanoparticle size compared well with literature values for manual semi-batch manufacture, which is typically conducted at 40−50 mL scales without stirring. 8,20 It is likely that reducing the mixing time by active stirring will increase the reproducibility of nanoparticle physicochemical properties during scale-up, although experimental proof is needed.
Silk nanoprecipitation occurs during mixing with an antisolvent in which the solubility of at least one type of hydrophilic block is low, resulting in particle nucleation upon supersaturation (i.e., when the silk concentration exceeds the equilibrium solubility). 25,51 Nucleation follows a minimum Gibbs free-energy self-assembly process via protein−protein association until a critical nucleus size is reached. 51 This is then followed by particle growth and protein conformational changes for induced fit (Figure 3a). 51 In most cases, the general mechanism of protein−protein association comprises at least three steps. 51 The initial steps are diffusion-limited and occur following complete solvent−antisolvent mixing. First, a random collision of proteins produces a nonspecific encounter complex, which minimizes repulsive long-range electrostatics. 51 This short-lived complex can then go on to form nuclei with favorable intermolecular interactions, although no change in secondary structure occurs. This process is enthalpy-driven: the establishment of new short-range attractive forces between silk molecules offsets the entropic loss upon incorporation into the nuclei.
The consequent reduction in protein concentration reduces the rate of further nucleation and, for nuclei exceeding the critical size, leads to growth by thermodynamically controlled stepwise or aggregative mechanisms. 27,52 The final step involves a structural change between the favorable growth intermediates and the nanoparticle-bound state to maximize attractive intra- and intermolecular interactions. In the case of silk fibroin, this involves conversion of random coil and α-helix content to β-sheet structure. 1,32 Across all semi-batch formulations, as in previous microfluidic-assisted work, 20,32 the mixing efficiency correlated with nanoparticle size, while the secondary structure content and thermal stability of silk nanoparticles remained consistent. Hence, nanoprecipitation can be assumed to occur via diffusion-controlled association in the regime where k_(R→β) ≫ k_(−1) (Figure 3a), 51 so that β-sheet formation occurs at a faster rate than silk molecule diffusion. Applying this assumption to silk fibroin desolvation, the high turbulence created by increasing the stirring rate would increase the meso- and micromixing rates, thereby reducing the total mixing time. 27,53 Consequently, nucleation rates will increase, causing a fast reduction in supersaturation and arresting further nucleation, resulting in a growth phase with greater homogeneity. The reduction in local silk concentration at lower mixing times also disfavors surface-controlled growth of nuclei and lowers the chance of successful diffusion-limited collisions of silk molecules with nuclei 54 prior to structural rearrangement.
Alternatively, as the stirring rate and mixing efficiency increased, solvent shifting from hydrated silk pockets was likely improved prior to β-sheet enrichment. This would result in tighter packing of the internal nanoparticle architecture. However, variation in the stirring rate caused no significant difference in water content or desorption enthalpy by simultaneous thermal analysis. Furthermore, as water absorption occurs predominantly in the amorphous regions, 55 variations in secondary structure would otherwise be expected. Nevertheless, the β-sheet crystalline content did not vary significantly between nanoparticle formulations, as measured by amide I band deconvolution. 32 The thermal analysis results (Figure 4) reinforced the high nanoparticle crystallinity determined by FTIR. 35,56 Simultaneous thermal analysis also showed no significant differences between the thermal stabilities of nanoparticles manufactured at different stirring rates. Macromolecule thermal stability depends on molecular weight and length, 56 so this finding indicates that the silk molecules incorporated into nanoparticles from all formulations had similar weight and length distributions, again reinforcing previous work. 20 We speculate that the reduction in nanoparticle yield with stirring is due to several factors, including silk film formation on the flask walls and insufficient g-force for the sedimentation of smaller nanoparticles during centrifugation. Stirring at 400 rpm resulted in the poorest reproducibility of size and yield, which may indicate that hydrodynamic confounders caused by slight differences in feed point position were introduced between experiments. Surprisingly, no significant differences in polydispersity were observed with stirring, suggesting that, in the static system, the size distribution was controlled by diffusion-limited or polynuclear surface-controlled growth. 54 The moderate polydispersity of all formulations arose, in part, from the time dependence of the antisolvent composition, the local regions of high supersaturation at the droplet−antisolvent interface, and the nonuniform nucleation inherent in the semi-batch process. Although the particle size was affected, the zeta potential remained constant as the stirring rate was varied, suggesting that packing geometries were affected by active stirring even though the secondary structure content remained consistent.
Particle growth is a thermodynamically driven process and primarily occurs via three diffusion-limited mechanisms: stepwise growth 52 proceeding through molecular adsorption until the equilibrium silk saturation concentration is reached; Ostwald ripening, 57 whereby the dissolution of small particles results in the growth of larger particles; and aggregation proceeding according to Smoluchowski kinetics. 27 The rate of silk nanoparticle growth was also investigated over 24 h by varying the standing time in the mother liquor before purification. No significant differences in nanoparticle physicochemical properties, yield (Figure 2), or secondary structure content ( Figure 3c) were observed for standing times between 0 and 24 h.
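Of the three growth mechanisms above, the aggregative route has the simplest closed form. Under Smoluchowski kinetics for irreversible binary aggregation, the particle number concentration obeys dN/dt = −k_s N², which integrates to N(t) = N₀ / (1 + k_s N₀ t). A minimal sketch follows; the rate constant and initial concentration are arbitrary illustrative values, chosen only to show the shape of the decay.

```python
# Smoluchowski aggregation kinetics: dN/dt = -k_s * N^2
# => N(t) = N0 / (1 + k_s * N0 * t). N0 and k_s are illustrative assumptions.

N0 = 1.0e18    # initial number concentration, m^-3 (illustrative)
k_s = 1.0e-18  # aggregation rate constant, m^3 s^-1 (illustrative)

def n_smoluchowski(t):
    """Number concentration (m^-3) after time t (s) of irreversible aggregation."""
    return N0 / (1 + k_s * N0 * t)

# The characteristic half-time is 1 / (k_s * N0): here, 1 s.
for t in (0.0, 1.0, 10.0):
    print(f"t = {t:4.1f} s: N = {n_smoluchowski(t):.2e} m^-3")
```

The hyperbolic (rather than exponential) decay is the signature of a second-order collision process: growth slows as particles are consumed, consistent with the observation that standing time had no measurable effect once the early fast phase was over.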
The isoelectric point of crystalline silk fibroin lies between pH 4 and 5. 58 Therefore, silk nanoparticles and silk molecules in the mother liquor carry a net negative surface charge. This results in repulsive long-range electrostatic interactions between nanoparticles, providing a high energy barrier to aggregation and agglomeration and conferring colloidal stability. These repulsive interactions exist between nanoparticles, precursor silk molecules, and newly formed silk nuclei. Hence, once silk nanoparticles reach a key size, they no longer act as templates for stepwise growth, as the repulsive energy barrier and entropic loss are no longer offset by the establishment of favorable short-range bonds and the reduction in surface energy. The conversion of amorphous content to β-sheet structure upon nanoprecipitation can also be considered irreversible at room temperature in the absence of chaotropic agents. 20 The tightly bound crystalline architecture, the poor solubility of the silk hydrophilic blocks in the mother liquor, and the low polydispersity of the nanoparticle size consequently make nanoparticle growth via Ostwald ripening unfavorable.
This means that screening of operating conditions for nanoparticle manufacture at room temperature can be conducted with maximum time efficiency, increasing throughput.
For example, at 6 mL scale and 0 h standing time, the nanoparticle production rate was estimated as 0.41 g/h using semi-automated silk dispensing and 0.12 g/h using manual silk addition, assuming 23% nanoparticle yields. 20 The former value assumed the use of one syringe pump equipped with two syringes, while both processes require a hands-on operator setup time of 1 min. Based on this, the time taken to prepare nanoparticle batches for a clinically relevant in vivo study with five rats was calculated. Assuming a nanoparticle blood concentration 7 of 250 μg mL−1 and a rat blood volume of 25.6 mL, 6.4 mg of silk nanoparticles would be required per rat. The time taken to obtain the total required mass of 32 mg is 0.08 and 0.26 h using the semi-automated and manual setups, respectively. However, the total production rate is lowered by the 6 h purification process, assuming the use of one eight-place centrifuge. Nevertheless, the total production rate can be increased by syringe pump platform and centrifuge parallelization.
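The dosing arithmetic above can be checked in a few lines, using the concentration, blood volume, and production rates quoted in the text:

```python
# Nanoparticle mass for a five-rat study and dispensing times at the two
# estimated production rates (values from the text).

dose_conc_mg_ml = 0.250  # target blood concentration, mg/mL (250 ug/mL)
blood_vol_ml = 25.6      # rat blood volume, mL
n_rats = 5

mass_per_rat_mg = dose_conc_mg_ml * blood_vol_ml  # 6.4 mg per rat
total_mass_mg = mass_per_rat_mg * n_rats          # 32 mg in total
print(f"total mass required: {total_mass_mg:.0f} mg")

for label, rate_g_per_h in (("semi-automated", 0.41), ("manual", 0.12)):
    hours = total_mass_mg / (rate_g_per_h * 1000)  # g/h -> mg/h
    print(f"{label}: {hours:.2f} h of dispensing")
```

The semi-automated figure reproduces the 0.08 h quoted in the text; the manual figure comes out at ≈0.27 h, agreeing with the quoted 0.26 h to within rounding.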
The characterization of the effect of aging on nanoparticle physicochemical properties is important for maximizing shelf life and preventing undesired complications. For this reason, we also examined the long-term stability of silk nanoparticles from all formulations in aqueous conditions for over 1 month at 4°C to assess storage capabilities (Figure 5). Similar to previous studies, 13,20,32 the zeta potential of nanoparticles from all formulations on the day of manufacture was lower than −25 mV, indicating sufficient electrostatic repulsion between particles for moderate aqueous stability. Indeed, silk nanoparticles manufactured across all stirring rates and standing times showed size stability over the entire study period. Some fluctuations in polydispersity and zeta potential occurred for nanoparticles produced from some formulations and, while these changes were significant, they did not follow a trend indicative of time-dependent flocculation, coagulation, or dissolution. 32 This observation was reinforced by morphological assessments conducted over the time course by SEM, which showed spherical granules without apparent agglomeration or adhesion (Figure 6). The sizes of the freeze-dried particles imaged by SEM were smaller than the Z-average size measured by DLS, probably due to the removal of the solvation sphere and bound water during freeze-drying. 50
CONCLUSIONS
The use of a semi-automated liquid dispensing setup provided consistent, standardized, and higher-throughput manufacture of silk nanoparticles via drop-by-drop desolvation in a semibatch format. Operational parameters investigated for their effect on nanoparticle formation indicated that decreasing the initial addition height from 7.5 to 5.5 cm reduced the nanoparticle yield. The stirring rate was also a key process parameter that affected silk nanoparticle size, yield, and experiment reproducibility, as stirring at 400 rpm provided the smallest nanoparticle size and the lowest yield of silk nanoparticles. Nanoparticles from all formulations displayed spherical morphologies and showed stability of size and polydispersity for over 1 month when stored as aqueous suspensions at 4°C. The standing time of silk nanoparticles in the mother liquor was also not a key process parameter.
The GBM Tumor Microenvironment as a Modulator of Therapy Response: ADAM8 Causes Tumor Infiltration of TAMs through HB-EGF/EGFR-Mediated CCL2 Expression and Overcomes TMZ Chemosensitization in Glioblastoma
Simple Summary Resistance to standard therapies imposes a huge challenge on the treatment of glioblastoma multiforme (GBM) and is often considered a cell-intrinsic property of either GBM cells or, more significantly, of GBM stem-like cells. Tumor-associated macrophages and microglia (TAMs) make up the majority of the immune population in the tumor microenvironment of GBM and potentially participate in modulating therapy responses. However, little is known about the mechanisms underlying the effect of TAMs on temozolomide (TMZ)-induced chemoresistance. Members of the metzincin superfamily such as Matrix Metalloproteases (MMPs) and A Disintegrin and Metalloprotease (ADAM) proteases are important participants in intercellular communication in the tumor microenvironment. Herein, we reveal a novel concept of an intra-tumoral, ADAM8-mediated malignant positive feedback loop constituted by the intimate interaction of TAMs and GBM cells under TMZ treatment. These findings provide a convincing example, and further support the notion, that the tumor microenvironment, in addition to GBM cells and GBM stem-like cells, should be considered an essential modulator of therapy in GBM. In conclusion, our study provides a rational basis for TAM-sparing ADAM8 targeting in GBM to optimize standard chemotherapy. Abstract Standard chemotherapy of glioblastoma multiforme (GBM) using temozolomide (TMZ) frequently fails due to acquired chemoresistance. Tumor-associated macrophages and microglia (TAMs), the major immune cell population in the tumor microenvironment, are potential modulators of the TMZ response. However, little is known about how TAMs participate in TMZ-induced chemoresistance. Members of the metzincin superfamily such as Matrix Metalloproteases (MMPs) and A Disintegrin and Metalloprotease (ADAM) proteases are important mediators of cellular communication in the tumor microenvironment.
A qPCR screening was performed to identify potential targets among the ADAM and MMP family members in GBM cells. In co-culture with macrophages, ADAM8 was the only signature gene up-regulated in GBM cells by macrophages under TMZ treatment. The relationship between ADAM8 expression and TAM infiltration in GBM was determined in a patient cohort by qPCR, IF, and IHC staining and by TCGA data analysis. Moreover, RNA-seq was carried out to identify potential targets regulated by ADAM8. CCL2 expression levels were determined by qPCR, Western blot, IF, and ELISA. Utilizing qPCR, IF, and IHC staining, we observed a positive relationship between ADAM8 expression and TAM infiltration levels in GBM patient tissues. Furthermore, ADAM8 induced TAM recruitment in vitro and in vivo. Mechanistically, we revealed that ADAM8 activated HB-EGF/EGFR signaling and subsequently up-regulated production of CCL2 in GBM cells under TMZ treatment, promoting TAM recruitment, which further induced ADAM8 expression in GBM cells to mediate TMZ chemoresistance. Thus, we revealed an ADAM8-dependent positive feedback loop between TAMs and GBM cells under TMZ treatment that involves CCL2 and EGFR signaling to cause TMZ resistance in GBM.
Introduction
Glioblastoma multiforme (GBM) is an extremely malignant central nervous system tumor with an annual incidence rate of 3-5/100,000 and a dismal prognosis of 14.6 months survival, accounting for about 50% of all gliomas [1,2]. Multimodal treatment incorporating surgical resection, radiotherapy, and chemotherapy is the standard regimen for GBM patients [3]. Temozolomide (TMZ), the first-line alkylating agent for GBM chemotherapy [4], efficiently penetrates the Blood-Brain Barrier (BBB) and causes cytotoxicity by inducing DNA double-strand breaks. Previous studies have shown that many genes related to DNA damage repair, such as alkylpurine-DNA-N-glycosidase, play an important role in drug resistance and may underlie TMZ resistance in glioblastoma [5][6][7]. However, TMZ efficacy has not improved for GBM patients over the past 10 years, indicating that the underlying mechanisms of TMZ resistance remain to be further explored.
Both intrinsic characteristics of cancer cells and extrinsic interactions within the sophisticated tumor microenvironment (TME) contribute to treatment resistance and tumor aggression [8]. Increasing evidence indicates that chronic inflammation in the TME is closely related to cancer initiation, promotion, and progression. Macrophages are the dominant orchestrators of tumor-promoting inflammatory signals, and a high density of infiltrating tumor-associated macrophages (TAMs) is associated with high-grade tumors and a dismal prognosis [9]. In GBM, TAMs (resident microglia and bone marrow-derived macrophages) constitute the majority of inflammatory cells, accounting for up to 40% of the bulk tumor. In general, TAMs present an immunosuppressive, tumor-supportive phenotype and play significant roles in tumor proliferation, migration, and invasion [10]. Mitchem et al. found that TAMs promoted chemotherapy resistance and a regional immunosuppressive response in pancreatic cancer [11]. Wang et al. demonstrated that TAMs locally aggregated in gliomas can promote tumor growth [12]. Hu et al. confirmed that TAMs can promote the invasion and expansion of malignant gliomas [13]. Zhou et al. also found that TAMs recruited by glioblastoma stem cells promoted malignant growth [14]. However, little is known about the role of TAMs in promoting tumor chemoresistance to alkylating agents in GBM.
A disintegrin and metalloproteinase 8 (ADAM8) is a transmembrane protein consisting of 856 amino acids, involved in many physiological functions such as cell adhesion, cell fusion, signal transduction, and proteolysis [15]. ADAM8 expression in the central nervous system (CNS) is very low under physiological conditions. However, in the case of CNS inflammation, e.g., that caused by increased expression of tumor necrosis factor alpha (TNF-α), the expression levels of ADAM8 in astrocytes and microglia are significantly increased [16], and overexpressed ADAM8 can promote local matrix remodeling through proteolysis or cleavage of other substrates [17,18]. In addition, the markedly increased ADAM8 in a variety of malignant tumors has also attracted attention to its role in the malignant behavior of tumors [15]. Wildeboer et al. [19] showed increased invasion of ADAM8-expressing GBM cells. He et al. [20] showed that ADAM8 expression was significantly related to tumor progression and patient prognosis. Li et al. demonstrated that ADAM8 affected angiogenesis in GBM [21]. Considering ADAM8 as a protein that plays an important role in both CNS inflammation and tumor pathology, we reasoned that ADAM8 may mediate the malignant biology of tumor cells through enhancing or maintaining local inflammatory responses in GBM. Studies have shown that some members of the ADAM family (ADAM10, ADAM12, and ADAM17) participate in the ectodomain shedding of EGFR ligands and mediate EGFR signaling pathways in certain circumstances [22]. However, it is not clear whether ADAM8 is involved in EGFR signaling pathways in GBM under TMZ treatment. EGFR signaling has been reported to induce CCL2-mediated macrophage recruitment [23]. In our previous study, we validated that TMZ induced ADAM8 overexpression in GBM cells [24]. Therefore, we hypothesize that ADAM8 may induce TAM recruitment through EGFR signaling-mediated CCL2 expression in GBM under TMZ treatment to induce chemoresistance.
In this study, we investigated the role of ADAM8 in recruiting TAMs to mediate chemoresistance and put forward a potential ADAM8 positive feedback loop between TAMs and GBM cells involved in chemoresistance, providing a theoretical basis for ADAM8-targeting treatment of GBM patients in the future.
Patient Specimens
Tumor tissues of patients who underwent surgical resection of GBM were collected after patients provided informed consent at Tongji Hospital of Huazhong University of Science and Technology. Fresh tissues were immediately snap-frozen in liquid nitrogen and preserved at −80 °C until RNA isolation or embedded in paraffin for immunohistochemistry and immunofluorescence staining. The Human Ethics Committee of Tongji Hospital of Huazhong University of Science and Technology approved this study, and all studies were in accordance with the ethical standards of the 2008 Helsinki Declaration.
Cell Lines, Cell Culture, and Construction of Stable Cell Lines
Established human glioblastoma cell lines U87MG and U251MG and THP-1 monocytes were purchased from the American Type Culture Collection and provided by the Sino-German Neuro-Oncology Molecular Laboratory. GBM primary cells (G1 and G2) were prepared from human GBM specimens collected directly after surgery as described [24]. To generate glioblastoma cells (U87MG and G1) with an ADAM8 knockdown, small hairpin RNAs (shRNAs) against human ADAM8 (target sequences: CGTGGACAAGCTATATCAGAA and GCATGACAACGTACAGCTCAT) were synthesized and cloned into the PLKO.1-puro vector (Invitrogen, Chongqing, China). ADAM8 knockdown and control plasmids were each co-transfected into HEK293T cells with pMD2.G and psPAX2 plasmids using Lipofectamine 3000 (Invitrogen, Waltham, MA, USA) following the manufacturer's instructions. After 72 h, the medium supernatant was collected, and the virus suspension was obtained after 0.45 µm filtration. Glioblastoma cells were seeded into 6-well plates and cultured with the virus suspension. After lentivirus transduction for 3 days, puromycin was added to select for transformants. After 14 days of selection, stable ADAM8 knockdown GBM cell lines (U87MG and G1) were obtained. ADAM8 expression in single-cell clones was analyzed by qRT-PCR and western blotting. GBM cell lines were cultured in DMEM high glucose (4.5 g/L) supplemented with 1% L-glutamine (200 mM), 1% penicillin/streptomycin, and 10% heat-inactivated fetal bovine serum (all purchased from Gibco, NY, USA). THP-1 cells were maintained in RPMI 1640 medium containing 10% Fetal Bovine Serum (FBS) at 37 °C in a humidified atmosphere with 5% CO₂. THP-1 cells were primed with PMA (Sigma, Saint Louis, MO, USA; 100 ng/mL) for 48 h to generate unpolarized macrophages (M0) as the macrophage model.
Quantitative Real-Time PCR (qRT-PCR)
Total RNA was isolated from GBM tissues and cell lines using TRIzol RNA isolation reagent (Invitrogen) and reverse transcribed to cDNA with a cDNA Synthesis kit (Yeasen Biotech Co., Shanghai, China). qRT-PCR was used to detect gene expression with Hieff ® qPCR SYBR Green Master Mix (Low Rox Plus) (Yeasen Biotech Co., China). The sequences of primers are shown in Supplementary Table S2.
ELISA
Human CCL2 levels in cell culture supernatants were determined by ELISA kits for human CCL2 (Elabscience, Houston, TX, USA) according to the manufacturer's instructions.
Cell Proliferation Assay
Co-cultures of GBM cells with macrophages were performed in the presence of TMZ. In brief, THP-1 derived macrophages were seeded into a 6-well transwell chamber with a 0.4 µm pore size polycarbonate membrane (Corning, NY, USA) at a density of 2 × 10⁵ cells, and GBM cells were seeded in the lower chamber at a density of 2 × 10⁵ cells in the presence of TMZ. After 3 days, GBM cells were fixed with 4% paraformaldehyde, and Crystal Violet Staining Solution (Servicebio, Wuhan, China) was used to visualize the cells. We randomly selected five visual fields per group under the microscope, and the number of GBM cells was measured by ImageJ software.
Cell Migration and Invasion Assays
Co-cultures of GBM cells with THP-1 derived macrophages were performed in transwell assays. For the macrophage migration assay, THP-1-derived macrophages were seeded into a 24-well transwell chamber with an 8 µm pore size polycarbonate membrane (Corning, NY, USA) at a density of 5 × 10⁴ cells in 200 µL serum-free medium; TMZ- (500 µmol/L, 5 days) or DMSO-treated U87MG_scramble, U87MG_shA8, G1_scramble, and G1_shA8 cells were seeded into the lower chamber at a density of 5 × 10⁴ cells in 400 µL 10% FBS medium. For the GBM cell migration assay, GBM cells were seeded into the 24-well transwell chamber at a density of 2 × 10⁴ cells, and THP-1-derived macrophages were added to the lower chamber at a density of 5 × 10⁴ cells. After 24 h for GBM cells and 48 h for THP-1 derived macrophages, 4% paraformaldehyde was used to fix the migrated cells, and Crystal Violet Staining Solution (Servicebio) was used to visualize them. For the GBM cell invasion assay, 3 × 10⁴ cells were maintained in a Matrigel (BD Bioscience, San Jose, CA, USA)-coated chamber for 24 h. Three independent experiments were carried out and analyzed with ImageJ software.
For immunofluorescence staining, samples were blocked with a PBS solution containing 1% BSA plus 0.3% Triton X-100 for 2 h at room temperature and then incubated with the indicated primary antibody overnight at 4 °C, followed by the fluorescence-conjugated secondary antibody (1:200) at room temperature for 2 h. After being counterstained with DAPI for 5 min, sections were mounted on glass and subjected to microscopy. The extent of positive cells, defined as the ratio of positive cells relative to the total cells in five randomly selected viewing fields, was measured by ImageJ. The images were acquired with a fluorescence microscope (Olympus, Tokyo, Japan; CKX53).
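The quantification rule described above (the "extent of positive cells" as a ratio over five viewing fields) can be sketched as follows. The per-field counts are hypothetical, and pooling counts across fields (rather than averaging per-field ratios) is an assumption about how the ratio was taken:

```python
# Extent of positive cells: positive / total, pooled over five viewing fields.
def positive_cell_ratio(fields):
    """fields: list of (positive_count, total_count) tuples, one per field."""
    positive = sum(p for p, _ in fields)
    total = sum(t for _, t in fields)
    return positive / total

# Hypothetical counts for five randomly selected viewing fields.
fields = [(42, 118), (35, 97), (51, 130), (28, 88), (44, 112)]
print(f"positive-cell fraction: {positive_cell_ratio(fields):.3f}")
```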
Xenograft Studies in Nude Mice
Human U87MG_scramble and U87MG_shA8 cells (5 × 10⁶ in 100 µL PBS) were inoculated subcutaneously into the right armpit of BALB/c nude mice (6-week-old, male). After 7 days, the tumor-bearing mice were randomized into a U87MG_scramble + drug vehicle (PBS and dimethyl sulfoxide [DMSO]) group, a U87MG_scramble + TMZ group, a U87MG_shA8 + drug vehicle group, and a U87MG_shA8 + TMZ group. Mice in the TMZ groups received 5 mg/kg TMZ on a 5-days-on/2-days-off regimen (two cycles in total, intra-tumoral injection), and mice in the drug vehicle groups received equivalent volumes of drug vehicle. About 28 days after the first treatment, all mice were euthanized, and the tumor masses were carefully removed, measured, and processed for IHC and IF staining. For orthotopic inoculation, five thousand human U87MG_scramble and U87MG_shA8 cells were transplanted into the right frontal lobe of mice. After 7 days, the tumor-bearing mice received 5 mg/kg TMZ on a 5-days-on/2-days-off regimen (two cycles in total, intraperitoneal injection). Overall survival was compared between the U87MG_scramble and U87MG_shA8 groups.
Western Blot
Total protein was extracted with RIPA buffer, and 20-50 µg samples were loaded after measuring their concentration using a BCA kit and separated by 6%, 8%, or 12% sodium dodecyl sulfate-polyacrylamide gel electrophoresis. The separated proteins were transferred onto polyvinylidene fluoride membranes, blocked with 5% fat-free milk for 2 h at room temperature, and incubated in primary antibodies against ADAM8
RNA-Seq Data Analysis
U87MG_shA8 and U87MG_scramble cells were harvested in TRIzol for RNA extraction and sequencing by BGI (Beijing Genomics Institute, Shenzhen). Briefly, SOAPnuke (v1.5.2, BGI, Shenzhen, China) was used to filter the sequencing data; clean reads were then stored in FASTQ format and mapped to the reference genome using HISAT2 (v2.0.4, Johns Hopkins University, Baltimore, MD, USA). Afterwards, fusion genes and differential splicing genes (DSGs) were analyzed with Ericscript (v0.5.5) and rMATS (v3.2.5, Sourceforge, San Diego, CA, USA), respectively. The clean reads were aligned by Bowtie2 (v2.2.5, Johns Hopkins University, Baltimore, MD, USA) to a known and novel transcript database built by BGI, which includes coding transcripts; RSEM (v1.2.12) was then applied to calculate gene expression levels. Differential expression was analyzed using DESeq2 (v1.4.5, UNC, Chapel Hill, NC, USA) with a Q value ≤ 0.05. To deduce the phenotype change, GO and KEGG enrichment analyses were performed with Phyper on the basis of the hypergeometric test. Bonferroni correction was used to adjust the significance levels of terms and pathways by Q value with a rigorous threshold (Q value ≤ 0.05).
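The GO/KEGG step above rests on the hypergeometric test with Bonferroni correction. A minimal sketch of that calculation for a single term, with hypothetical gene counts (this stands in for the Phyper tool named in the text, not a reproduction of it):

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k): probability that at least k of n sampled genes fall in a
    GO term, given N annotated genes of which K belong to the term."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical counts: 20,000 annotated genes, 150 in the term,
# 200 differentially expressed genes, 12 of them in the term.
p_value = hypergeom_sf(12, 20000, 150, 200)

# Bonferroni correction over the number of terms tested (threshold Q <= 0.05).
n_terms = 5000
q_value = min(1.0, p_value * n_terms)
print(f"p = {p_value:.3g}, Bonferroni Q = {q_value:.3g}")
```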
Statistical Analysis
An unpaired two-tailed Student's t-test and Pearson's χ²-test were used to analyze differences between experimental groups. The Kaplan-Meier method and log-rank test were used to estimate survival probabilities. Data were considered not significant at p > 0.05 and significant at * p < 0.05, ** p < 0.01, *** p < 0.001, and **** p < 0.0001. Calculations were performed using GraphPad Prism statistical analysis software (v6.0, GraphPad Software Inc., CA, USA).
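The significance convention above can be captured in a small helper (a sketch; the thresholds are exactly those stated in the text):

```python
# Map a p-value to the star notation used throughout the figures.
def significance_label(p):
    if p < 0.0001:
        return "****"
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return "ns"  # not significant (p > 0.05)

print(significance_label(0.03))  # a p-value of 0.03 earns one star
```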
ADAM8 Expression Induced by TMZ and Macrophage Co-Culture
Initially, we performed a qPCR screen to analyze expression levels of MMP and ADAM genes in GBM cell lines (U87MG and U251MG) and primary cells (G1 and G2) under TMZ treatment (500 ng/mL) and co-culture with THP-1 derived macrophages for 3 days (Figure S1). Among all MMPs and ADAMs detected, we found that ADAM8 was significantly upregulated in GBM cells by TMZ, particularly under conditions of co-culture (Figure 1A-D). Western blot analyses were carried out to further validate TMZ- and macrophage-induced ADAM8 overexpression in GBM cells (Figure 1E-H; the uncropped Western blots are shown in Figure S5). (Figure 1 legend: protein levels of ADAM8 in U87MG, U251MG, G1, and G2 cells under co-culture and TMZ treatment were detected by western blot and quantified by ImageJ; mRNA and protein fold changes are shown as mean ± SD. TMZ, temozolomide; ADAM8, A disintegrin and metalloproteinase 8. * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001.)
Positive Correlation of ADAM8 Expression and Macrophage Infiltration in GBM Tissue
To investigate the relationship between ADAM8 expression and macrophage infiltration levels, bioinformatic correlation analyses were performed using the public dataset GEPIA (Gene Expression Profiling Interactive Analysis). We selected GBM samples from TCGA projects on GEPIA and found a positive correlation between ADAM8 gene expression and the expression of TAM signatures including Iba-1 (AIF1, R = 0.38, p = 4.5 × 10⁻⁷), CD11b (ITGAM, R = 0.63, p = 0), CD163 (R = 0.56, p = 4.9 × 10⁻¹⁵), and CD206 (MRC1, R = 0.64, p = 0) (Figure 2A), indicating that ADAM8 expression is associated with TAMs and may play a role in attracting TAMs into GBM. Using our GBM patient cohort (n = 18), the positive relationship between mRNA levels of ADAM8 and Iba-1 (Figure 2A) was validated by immunostaining and qPCR (n = 18, R² = 0.5316, p = 0.0006) (Figure 2B,C). Moreover, immunohistochemistry staining of GBM tissues showed that the high-ADAM8 expression group tended to have a higher density of infiltrated TAMs than the low-ADAM8 expression group (Figure 2D,E).
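The cohort correlation above (R² between ADAM8 and Iba-1 mRNA levels) is a squared Pearson coefficient; a sketch of that computation with hypothetical expression values (not the patient data):

```python
# Pearson correlation coefficient from scratch; R^2 is its square.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical relative mRNA levels for six patients.
adam8 = [1.2, 2.0, 0.8, 3.1, 2.6, 1.9]
iba1 = [0.9, 1.8, 1.0, 2.7, 2.2, 1.5]
r = pearson_r(adam8, iba1)
print(f"r = {r:.3f}, R^2 = {r * r:.3f}")
```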
ADAM8 Induces Macrophage Recruitment In Vitro and In Vivo
(Figure 2 legend: (D) Immunohistochemistry staining of ADAM8, Iba-1, and CD206 in GBM tissues in two groups (low ADAM8 and high ADAM8) defined by the median ADAM8 expression. (E) Bar graph of the statistical results for ADAM8, Iba-1, and CD206 staining, measured by ImageJ and analyzed by an unpaired two-tailed Student's t-test. Scale bar = 100 µm. **** p < 0.0001.)
To investigate the role of ADAM8 in TAM recruitment in vitro, we constructed ADAM8 knockdown GBM cells (U87MG_shA8 and G1_shA8) and scramble controls (U87MG_scramble and G1_scramble) and co-cultured these cells with THP-1 derived macrophages generated using a standard protocol to obtain M0 macrophages. THP-1 derived macrophages were seeded into 24-well transwell chambers at a density of 5 × 10⁴ cells in 200 µL serum-free medium. TMZ- (500 µmol/L, 5 days) or DMSO-treated U87MG_shA8, G1_shA8, U87MG_scramble, and G1_scramble cells were seeded into the lower chamber at a density of 1 × 10⁴ cells in 400 µL 10% FBS medium for 48 h. For TMZ-treated GBM cells, significant increases in the number of migrated macrophages were observed. Compared to scramble controls, both ADAM8 knockdown GBM cell lines markedly decreased the number of migrated macrophages (Figure 3A-D). To evaluate ADAM8-dependent macrophage recruitment in vivo, U87MG_shA8 and U87MG_scramble cells were inoculated subcutaneously into the right flank of BALB/c nude mice (6-week-old, male). After 7 days, the tumor-bearing mice were randomized into a U87MG_scramble + drug vehicle (PBS and dimethyl sulfoxide [DMSO]) group, a U87MG_scramble + TMZ group, a U87MG_shA8 + drug vehicle group, and a U87MG_shA8 + TMZ group. Mice in the TMZ groups received 5 mg/kg TMZ daily on a 5-days-on/2-days-off regimen (two cycles in total, intra-tumoral injection), and mice in the drug vehicle groups received the equivalent drug vehicle. Twenty-eight days after the first treatment, the tumors were collected and processed. IHC staining of the resulting tumors showed significantly higher expression of ADAM8, Iba-1, and CD206 in the U87MG_scramble + TMZ group compared to U87MG_scramble + drug vehicle. Accordingly, ADAM8 knockdown tumors showed markedly decreased expression of ADAM8, Iba-1, and CD206 (Figure 3E,F), indicating that ADAM8 promotes TAM recruitment in the presence of TMZ.
Moreover, we showed that macrophages promoted GBM cell proliferation, migration, and invasion in vitro in the presence of TMZ, indicating that macrophages induced GBM chemoresistance in vitro (Figure S2).
ADAM8 Regulates the HB-EGF/EGFR Signaling Pathway
To further analyze the underlying mechanisms of ADAM8-mediated TAM recruitment, RNA sequencing was performed on U87MG_shA8 and U87MG_scramble cells. We identified 2533 up-regulated and 2855 down-regulated genes in U87MG cells (Figure S3). We then subjected the top 200 down-regulated genes to GO enrichment analysis to search for potentially involved cell signaling pathways. The top twenty GO enrichments are listed in Figure 4A. We observed that, among seven genes (MAPK1, MAP2K1, E2F1, CCND1, HBEGF, MYC, and VEGFA) in the most significant GO terms, HB-EGF was the most significantly affected signature after ADAM8 knockdown (Figure 4B). To further corroborate this finding, qPCR analyses were carried out in GBM cells (U87MG and G1) and revealed that ADAM8 knockdown markedly reduced the expression of HB-EGF (Figure 4C). It has been reported that HB-EGF binds to EGFR, thereby activating downstream EGFR signaling cascades including PI3K/AKT, MAPK/ERK, and JAK/STAT [26]. Consequently, we determined the protein expression of HB-EGF, p-EGFR, p-AKT, and p-ERK in GBM cells by western blot and observed that ADAM8 knockdown reduced the expression of HB-EGF, p-EGFR, p-AKT, and p-ERK (Figure 4D,E). Moreover, TMZ treatment augmented ADAM8, HB-EGF, p-EGFR, p-AKT, and p-ERK expression (Figure 4D,E). Immunofluorescence staining of xenograft tissue sections showed significantly higher expression of HB-EGF (Figure 4F), p-EGFR (Figure 4G), and p-ERK (Figure S4A) in the U87MG_scramble + TMZ group compared to the U87MG_scramble + drug vehicle group. ADAM8 knockdown significantly decreased the expression of HB-EGF (Figure 4F), p-EGFR (Figure 4G), and p-ERK (Figure S4A), as seen in the U87MG_shA8 + drug vehicle and U87MG_shA8 + TMZ groups. Concomitantly, immunofluorescence of human GBM tissues showed a stronger staining density of HB-EGF in specimens from high-ADAM8 expression patients compared to low-ADAM8 expression patients (Figure S4B).
Taken together, our above data suggest that ADAM8 regulates the HB-EGF/EGFR signaling pathway by affecting expression levels of HB-EGF.
(Figure 3 legend: bar graphs show the statistical results of immunohistochemistry staining of ADAM8, Iba-1, and CD206, respectively. All experiments were repeated at least three times, and data were analyzed using a Student's t-test. Scale bar = 100 µm. shA8, ADAM8 knockdown. ** p < 0.01, *** p < 0.001, **** p < 0.0001.)
ADAM8 Induces HB-EGF/EGFR-Mediated CCL2 Expression
Since the EGFR signaling pathway has been reported to induce CCL2 expression to recruit macrophages in glioblastoma [23], we proposed that ADAM8 could induce TAM recruitment by regulating CCL2 expression via HB-EGF/EGFR. Accordingly, qPCR and Western blot showed that ADAM8 knockdown reduced intracellular CCL2 expression in GBM cells at both the transcriptional and translational levels (Figure 5A). Given that CCL2 is a secreted protein, we applied ELISA to detect secreted CCL2 in the supernatant of ADAM8 knockdown and TMZ-treated GBM cells. ELISA assays showed that ADAM8 knockdown significantly decreased CCL2 secretion, and TMZ induced the expression of CCL2 in GBM cells (Figure 5B). Immunofluorescence staining of xenograft tissue sections showed markedly higher CCL2 expression in the U87MG_scramble + TMZ group compared to the U87MG_scramble + drug vehicle group. ADAM8 knockdown markedly reduced CCL2 expression (Figure 5C). Moreover, immunofluorescence of human GBM tissues showed a higher staining density of CCL2 in the high-ADAM8 expression patient group compared to the low-ADAM8 expression patient group (Figure S4C). To further validate that ADAM8 induced CCL2 expression through the HB-EGF/EGFR signaling pathway, we used Erlotinib as an EGFR signaling pathway inhibitor. Western blot showed that Erlotinib significantly inhibited the expression and phosphorylation levels of EGFR downstream signaling proteins and subsequently reduced CCL2 expression in GBM cells (Figure 5D,E). The ELISA assay showed that Erlotinib markedly reduced CCL2 secretion under TMZ treatment (Figure 5F). Furthermore, an in vitro migration assay showed that Erlotinib significantly decreased the number of migrated macrophages (Figure 5G). Therefore, these findings indicated that ADAM8 induces CCL2 expression to recruit TAMs through the HB-EGF/EGFR signaling pathway.
To validate our observations in vivo, U87MG_scramble and U87MG_shA8 cells were inoculated subcutaneously into the right armpit of BALB/c nude mice (6-week-old, male). Seven days later, the tumor-bearing mice received 5 mg/kg TMZ daily on a 5-days-on, 2-days-off regimen (two cycles in total, intra-tumoral injection). After 28 days, the resulting tumors were removed and measured. The changes in tumor volume showed that ADAM8 knockdown significantly reduced tumor growth under TMZ treatment (Figure 5H and Figure S4D). For survival analysis, U87MG_scramble and U87MG_shA8 cells were orthotopically inoculated into the right frontal lobe of mice. Seven days later, the tumor-bearing mice received 5 mg/kg TMZ daily on the same regimen (two cycles in total, intraperitoneal injection). Consistent with the observed effects on tumor growth in vivo, ADAM8 knockdown markedly prolonged the overall survival of tumor-bearing mice (Figure 5I).
Discussion
Acquired chemoresistance limits TMZ efficacy in GBM patients. Previous studies have identified ADAM8 as a modulator of chemoresistance in GBM cells. It is commonly recognized that TAMs inhabit GBM tumors with an immunosuppressive, pro-tumor phenotype and play pivotal roles in GBM progression [27][28][29][30]. A qPCR screen of a spectrum of ADAM and MMP genes in GBM cells revealed that ADAM8 stands out as a gene whose expression is induced by TMZ treatment and further enhanced by co-culture with macrophages under TMZ treatment. Consequently, ADAM8 could be a major player in the communication of tumor cells with the tumor microenvironment, in particular in conjunction with TAMs. As an inflammatory mediator, ADAM8 regulates diverse pathological processes in CNS inflammation and tumor biology [15,17] through its proteolytic and non-proteolytic functions, attributable to the different structural domains present in the full-length protein. In the current study, we demonstrated the role of ADAM8 in recruiting TAMs to mediate chemoresistance in GBM and put forward a potential ADAM8 positive feedback loop involved in the interaction between GBM cells and TAMs under chemotherapy.
ADAM8 overexpression induced by anti-inflammatory macrophages mediates the invasion of pancreatic adenocarcinoma tumor cells [31]. In GBM cells, ADAM8 modulates angiogenesis, thereby affecting tumor progression [21,32]. Our previous study showed that TMZ-induced ADAM8 overexpression can mediate TMZ chemoresistance in GBM cells [24]. Hence, we hypothesized that TMZ-induced ADAM8 overexpression in GBM cells subsequently modulates the recruitment of TAMs, which in turn further enhances TMZ chemoresistance by inducing ADAM8 upregulation in GBM cells in a "malignant positive feedback loop".
An increasing number of studies have investigated how GBM cells recruit TAMs and maintain an immunosuppressive TME [23,[33][34][35]. Various chemokines released from GBM cells attract TAMs directly, whereas some signaling molecules overexpressed in GBM cells induce TAM recruitment indirectly by regulating the expression of chemokines. In our study, we demonstrated a positive relationship between ADAM8 mRNA expression and Iba-1 staining in human GBM tissues. Moreover, GEPIA data analysis also demonstrated a significant correlation, indicating that ADAM8 may participate in TAM recruitment. Chemotherapeutic agents can induce TAM infiltration and orient TAMs toward tumor-supporting or anti-tumor directions, depending on the type and application scheme of the chemotherapeutic agents and the type of tumor [36]. Our results corroborate that TMZ augments macrophage migration in vitro and M2-like macrophage recruitment in vivo in an ADAM8-dependent manner. This is supported by data showing that the knockdown of ADAM8 reduced macrophage migration in vitro and M2-like macrophage recruitment in vivo, indicating a mechanistic role for ADAM8 in TMZ-induced TAM recruitment in GBM.
The next experimental step was to elucidate the mechanism of ADAM8-induced TAM recruitment. Proteolytic cleavage and non-proteolytic intracellular signal transduction are the two major functions of ADAM8. ADAM8 interacts with integrins through its disintegrin (DIS) domain, thereby activating integrin signaling pathways such as focal adhesion kinase (FAK), extracellular signal-regulated kinase (ERK1/2), and protein kinase B (AKT/PKB) signaling, further contributing to cancer progression via the induction of angiogenesis, metastasis, and chemoresistance [24,37]. Through RNA-seq data analysis, we found that HB-EGF was significantly down-regulated in ADAM8-knockdown U87MG cells. qPCR and Western blotting were carried out to validate the transcriptional data in U87MG and G1 cells. Moreover, immunofluorescence of human GBM tissues also showed a positive correlation, with a stronger staining density of HB-EGF in high-ADAM8 patients than in low-ADAM8 patients. The activation of EGFR mediates a variety of intracellular downstream signals, contributing to tumor aggressiveness and resistance to first-line chemotherapies [38][39][40]. Although studies have reported that ADAM family proteases can mediate ectodomain shedding of HB-EGF to activate EGFR signaling pathways [22,41], it was still unknown whether ADAM8 is implicated in EGFR activation in GBM under TMZ treatment; in this case, ADAM8 acts by increasing the total amount of HB-EGF, so that shedding of HB-EGF is not its major function (data not shown). Here, we validated that ADAM8 activates HB-EGF/EGFR signaling pathways in GBM cells as a consequence of TMZ treatment, and ADAM8 knockdown markedly reduced the phosphorylation of EGFR and subsequently the activation of EGFR downstream signals (AKT and ERK signaling).
The regulation of CCL2 expression by ADAM8 through HB-EGF/EGFR signaling is another major finding of our study. CCL2 is a member of the CC chemokine family that acts as a classical chemokine regulating the chemoattraction of macrophages, monocytes, and other inflammatory cells [23,42,43]. Tumor cells of various types have been reported to release CCL2, resulting in the recruitment of macrophages and thereby supporting tumor progression. For instance, Qian et al. demonstrated that CCL2 recruited inflammatory monocytes to facilitate breast-tumor metastasis [44]. Wei et al. reported that the production of CCL2 promoted macrophage recruitment and subsequently colorectal cancer metastasis [45]. Similarly, in several experimental glioblastoma models, tumor cells released CCL2 to attract macrophages [46], and blockade of CCL2/CCR2 prolonged mouse survival in GBM models [47,48]. CCL2-expressing glioma cells induced a 10-fold increase in Ox42-positive cell density in rat models, while tumors overexpressing CCL2 grew more than three-fold, leading to reduced survival in rats [46]. Moreover, Felsenstein et al. showed that TAMs expressed CCR2 to various extents in human GBM specimens and syngeneic glioma models; glioma inoculation in a Ccr2-deficient strain revealed a 30% intratumoral reduction of TAMs [42]. As An et al. demonstrated that EGFR cooperates with EGFRvIII to induce CCL2-mediated TAM recruitment in GBM [23], we investigated CCL2 expression in glioblastoma cells under TMZ treatment. Both in vitro and in vivo, we validated that CCL2 was upregulated in TMZ-treated GBM cells and that ADAM8 knockdown reduced CCL2 expression in GBM cells under TMZ treatment. Furthermore, to demonstrate that CCL2 expression is dependent on EGFR signaling, Erlotinib was used as an EGFR signaling pathway inhibitor, which significantly reduced CCL2 expression in GBM cells, as shown by Western blot and ELISA assays.
Our in vitro migration assays showed that Erlotinib significantly decreased the numbers of migrated macrophages. These findings identify an axis in which TMZ induces ADAM8 and leads to downstream signaling that causes the enhanced secretion of CCL2 to recruit TAMs into GBM under TMZ treatment via the HB-EGF/EGFR signaling pathway.
In general, we provide a convincing example that TAMs play pivotal roles in the chemoresistance of GBM and further support the view that the tumor microenvironment should be considered an essential modulator of therapy in GBM. Nevertheless, our subcutaneous models in immunocompromised mice have limitations: (i) they do not reflect the original tumor location, and (ii) the immunocompromised background may heighten dependence on innate immunity.
Conclusions
Taken together, we revealed a novel ADAM8-mediated malignant positive feedback loop between TAMs and GBM cells under TMZ treatment. ADAM8 upregulates HB-EGF/EGFR signaling-mediated CCL2 expression in GBM cells under TMZ treatment, subsequently inducing TAM recruitment, which further stimulates ADAM8 upregulation in GBM cells to induce TMZ chemoresistance. These findings support the notion that the tumor microenvironment, in addition to GBM cells and GBM stem-like cells, should be considered an essential modulator of therapy in GBM. Our study provides a theoretical basis for TAM-sparing, ADAM8-targeting therapy in GBM to optimize standard chemotherapy.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki. The animal study protocol was approved by the Institutional Review Board of Tongji Hospital of Huazhong University of Science and Technology (protocol code TJH-202206015), and the human study was approved by Tongji Hospital of Huazhong University of Science and Technology (protocol code TJ-IRB20211166).
Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Inter-row information recognition of maize in the middle and late stages via LiDAR supplementary vision
In the middle and late stages of maize growth, light is limited and non-maize obstacles are present. When a plant protection robot uses traditional visual navigation to obtain navigation information, some information is missed. Therefore, this paper proposes a method that uses LiDAR (laser imaging, detection and ranging) point cloud data to supplement machine vision data for recognizing inter-row information in the middle and late stages of maize. First, we improved the YOLOv5 (You Only Look Once, version 5) algorithm based on the characteristics of the actual maize inter-row environment in the middle and late stages by introducing MobileNetv2 and ECANet. Compared with YOLOv5, the frame rate of the improved YOLOv5 (Im-YOLOv5) increased by 17.91% and the weight size decreased by 55.56%, while the average accuracy was reduced by only 0.35%, improving the detection performance and shortening the model inference time. Second, we identified obstacles (such as stones and clods) between the rows using the LiDAR point cloud data to obtain auxiliary navigation information. Third, the auxiliary navigation information was used to supplement the visual information, which not only improved the recognition accuracy of inter-row navigation information in the middle and late stages of maize but also provided a basis for the stable and efficient operation of inter-row plant protection robots in these stages. Experimental results from a data acquisition robot equipped with a camera and a LiDAR sensor demonstrate the efficacy and strong performance of the proposed method.
Introduction
Maize is one of the five most productive cereals in the world, alongside rice, wheat, soybean, and barley (Patricio and Rieder, 2018), and is an important source of food and feed. In recent years, with the rapid increase in maize consumption, an efficient and intelligent maize production process has been required to increase productivity (Tang et al., 2018; Yang et al., 2022a). Inter-row navigation is key to realizing the intelligence of maize planting. Pest control in the middle and late stages of maize determines the crop yield and quality. A small autonomous navigation plant protection robot is a good solution for plant protection in the middle and late stages of maize development (Li et al., 2019). However, in these stages, the high plant height (Chen et al., 2018), insufficient light, and numerous non-maize obstacles lead to a typical high-occlusion environment (Hiremath et al., 2014; Yang et al., 2022b). Commonly used navigation systems such as GPS (Global Positioning System) and BDS (BeiDou Navigation Satellite System) show poor signal quality in a high-occlusion environment (Gai et al., 2021); therefore, accurately obtaining navigation information between rows in the middle and late stages of maize has become the key issue in realizing the autonomous navigation of plant protection robots. At present, machine vision is the mainstream navigation method for obtaining inter-row navigation information in a high-occlusion environment (Radcliffe et al., 2018); that is, an RGB (red, green, and blue) camera acquires images of the maize stems, identifies them through a trained model, and obtains position information so as to plan the navigation path. A convolutional neural network was used to train a robot to recognize the characteristics of maize stalks at the early growth stage, implemented on an inter-row information collection robot based on machine vision (Gu et al., 2020). Tang et al.
reported the application and research progress of harvesting robots and vision technology in fruit picking. Machine vision technology was applied for the multi-target recognition of bananas and the automatic positioning of the inflorescence axis cutting point (Wu et al., 2021); in addition, an improved YOLOv4 (You Only Look Once, version 4) micro-model and binocular stereo vision technology were applied for fruit detection and location (Wang et al., 2022; Tang et al., 2023). Zhang et al. proposed an inter-row information recognition algorithm for an intelligent agricultural robot based on binocular vision, in which the effective inter-row navigation information was extracted by fusing the edge contour and height information of the crop rows in the image. By setting a region of interest, Yang et al. used machine vision to accurately identify the crop lines between rows in the early growth stage of maize and extracted the navigation path of the plant protection robot in real time (Yang et al., 2022a). However, the inter-row environment in the middle and late stages of maize is a typical high-occlusion environment, with greater plant height and dense branches and leaves that seriously block the light (Liu et al., 2016; Xie et al., 2019). When the ambient light intensity is weak, information loss occurs when machine vision is used to obtain inter-row navigation information (Chen et al., 2011). Moreover, machine vision usually takes a single feature of maize as the basis for information acquisition; recognizing multiple features at the same time would greatly reduce the recognition speed and the real-time performance of agricultural robots. Taking non-maize obstacles (such as soil, bricks, and branches) in the middle and late stages of maize into consideration, it is therefore quite difficult to obtain all the inter-row information using only a single feature.
Since LiDAR (laser imaging, detection and ranging) can obtain accurate point cloud data of objects according to the echo detection principle (Reiser et al., 2018; Wang et al., 2018; Jafari Malekabadi et al., 2019) and is less affected by light (Wang et al., 2022a; Wang et al., 2022b), it can supplement the missing information that arises when machine vision is used alone (Jeong et al., 2018; Aguiar et al., 2021). In order to solve the issue of information loss when a vision sensor is used to obtain information, a method using LiDAR to supplement vision was proposed (Bae et al., 2021), which pooled the strength of each sensor and made up for the shortcomings of using a single sensor. Through the complementary process between vision and LiDAR (Morales et al., 2021; Mutz et al., 2021), the performance of adaptive cruise control was significantly improved; a complementary method combining vision and LiDAR was likewise developed to further improve the accuracy of unmanned aerial vehicle (UAV) navigation. Liu et al. proposed a new structure of LiDAR-supplemented vision in an end-to-end semantic segmentation network, which can effectively improve the performance of automatic driving (Liu et al., 2020). The above methods had good application effects in the field of autonomous driving (Yang et al., 2021; Zhang et al., 2021). Based on the above research, we believe that LiDAR-supplemented vision is an interesting and effective method for obtaining inter-row information in the middle and late stages of maize development.
Therefore, this paper proposes a method of using LiDAR point cloud data to supplement machine vision data for obtaining inter-row information in the middle and late stages of maize. We took the locations of the maize plants as the main navigation information and proposed an improved YOLOv5 (Im-YOLOv5) algorithm (Jubayer et al., 2021) to identify the maize plants and obtain the main navigation information. At the same time, we took the locations of stones, clods, and other obstacles, obtained through LiDAR, as auxiliary navigation information. Through the mutual supplementation of vision and LiDAR, the accuracy of inter-row navigation information acquisition in the middle and late stages of maize can be improved. The proposed method provides a new and effective way of obtaining navigation information between rows in the middle and late stages of maize under conditions of equal-height occlusion.
The contributions of this article are summarized as follows: 1. A method of inter-row information recognition with a LiDAR-supplemented camera is proposed. 2. An Im-YOLOv5 model with efficient channel attention (ECA) and a lightweight backbone network is established. 3. Auxiliary navigation information acquisition using LiDAR reduces the loss of information. 4. The proposed method was tested and analyzed using a data acquisition robot.
2 Methods and materials
Composition of the test platform
The experimental platform and data acquisition system are shown in Figure 1. A personal computer (PC) was used as the upper computer to collect the LiDAR and camera signals. The LiDAR model was a VLP-16, with a scanning distance of 100 m, a horizontal scanning angle of 270°, and a vertical scanning angle of ±15°. The camera model was an NPX-GS650, with a resolution of 640 × 480 and a frame rate of 790 fps.
Commercialization feasibility analysis
The data acquisition platform used in the test cost 490 RMB. Plant protection operations can be carried out by installing a pesticide applicator at a later stage, with the pesticide applicator costing about 100 RMB. The camera sensor cost about 100 RMB, and the LiDAR sensor about 5,000 RMB. Consequently, the cost of the VLP-16 LiDAR is a key issue affecting the commercialization of this recognition system. Therefore, our recognition system was applied to small autonomous navigation plant protection robots. The relatively low cost of small plant protection robots, even with this relatively high-precision recognition system applied, gives them a price advantage over UAVs.
Joint calibration of camera and LiDAR
In this paper, a monocular camera and a VLP-16 LiDAR were used as the information fusion sensors. When the monocular camera and the LiDAR detect the same target, although the range and angle information is the same, the detection results of the two sensors belong to different coordinate systems (Chen et al., 2021a). Therefore, in order to effectively realize the supplementation of the camera information by the LiDAR, the coordinate systems must be unified; that is, the detection results of the two sensors should be expressed in the same coordinate system, and the relative pose between the sensors should be calibrated so as to realize the data matching and correspondence between them.
It should be noted that the main task of the monocular camera calibration was to solve for its extrinsic parameter matrix and intrinsic parameters. In this paper, the chessboard calibration method was used, with the chessboard size being 400 mm × 550 mm and the grid size being 50 mm × 50 mm. We randomly took 21 chessboard pictures from different positions. The camera calibration error was less than 0.35 pixels and the overall mean error was 0.19 pixels, which, according to Xu et al. (2021), meets the calibration accuracy requirement, so the calibration result has practical value. The internal parameters of the camera were as follows: focal length f = 25 mm; radial distortion parameters k1 = 0.012 and k2 = 0.009; tangential distortion parameters p1 = −0.0838 and p2 = 0.1514; image center (u0, v0) = (972, 1296); and normalized focal lengths fx = f/dx = 1,350.3 and fy = f/dy = 2,700.8. On the basis of the camera calibration, we carried out the joint calibration of the camera and LiDAR. The calibration principle is shown in Figure 2A. By matching the corner information of the chessboard pictures taken by the camera to the corner information of the chessboard point cloud data obtained by LiDAR, a rigid transformation matrix from the point cloud data to the image can be obtained. During calibration, the camera and LiDAR were fixed on the data acquisition robot platform developed by our research group. After the joint calibration, the relative positions of the camera and LiDAR were saved and fixed. The calibration error is shown in Figure 2B. As indicated in Aguiar et al. (2021), the calibration error met the calibration accuracy, and the calibration result showed practical value.
Through joint calibration, the rigid transformation matrix projecting the point cloud onto the image is obtained from Equations (1) and (2).
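As a concrete illustration of this projection, a minimal pinhole-model sketch is given below. The intrinsic values reuse those reported in the calibration above (fx = 1,350.3, fy = 2,700.8, image center (972, 1296)); the rotation R and translation t are identity/zero placeholders, since the actual LiDAR-to-camera extrinsics come from the joint chessboard calibration, and lens distortion is omitted for brevity.

```python
import numpy as np

# Intrinsic matrix built from the calibrated values reported above.
# R and t are placeholders -- real extrinsics come from joint calibration.
K = np.array([[1350.3,    0.0,  972.0],
              [   0.0, 2700.8, 1296.0],
              [   0.0,    0.0,    1.0]])
R = np.eye(3)
t = np.zeros(3)

def project_to_image(points_lidar):
    """Project an Nx3 array of LiDAR points into (u, v) pixel coordinates."""
    p_cam = points_lidar @ R.T + t      # rigid transform into the camera frame
    p_cam = p_cam[p_cam[:, 2] > 0]      # discard points behind the camera
    uvw = p_cam @ K.T                   # apply the intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide

# A point 1 m ahead on the optical axis lands on the image center:
# project_to_image(np.array([[0.0, 0.0, 1.0]])) -> [[972., 1296.]]
```

With the pose fixed after calibration, obstacle positions detected in the point cloud can be matched to pixel locations in the camera image this way.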
Navigation information acquisition based on LiDAR supplement vision
As mentioned in Section 1, machine vision usually takes a single feature of the plant as the basis of recognition. In this paper, the maize stem about 10 cm above the ground surface was taken as the machine vision recognition feature. It should be noted that taking the maize stem as the identification feature causes a lack of information on other non-maize obstacles (such as stones and clods). To solve the issue of missing information when using machine vision to acquire navigation information, this paper proposes a method of inter-row navigation information acquisition in the middle and late stages of maize based on LiDAR-supplemented vision. The detailed principle is shown in Figure 3. The machine vision datasets were trained using the Im-YOLOv5 algorithm to identify the maize stems and thereby obtain the main navigation information. The point cloud data of the inter-row environment in the middle and late stages of maize were obtained using LiDAR to gather auxiliary navigation information. Because the method proposed in this paper obtains inter-row information through a LiDAR-assisted camera, spatial data fusion was used: after establishing the precise coordinate conversion relationships among the LiDAR coordinate system, the three-dimensional world coordinate system, the camera coordinate system, the image coordinate system, and the pixel coordinate system, the spatial position information of the obstacles in the point cloud data can be matched to the visual image.
FIGURE 2
Camera-LiDAR (laser imaging, detection and ranging) joint calibration process. (A) Principle of joint calibration. (B) Joint calibration error. By matching the corner information of the chessboard picture taken by the camera to the corner information of the chessboard point cloud data obtained by LiDAR, the rigid transformation matrix from the point cloud data to the image can be obtained.
Main navigation information acquisition with the improved YOLOv5
YOLO models have a real-time detection speed, but require a powerful GPU (graphic processing unit) and a large amount of memory when training, limiting their use on most computers. The large size of the model after training can also increase the hardware requirements on mobile devices. Ideally, a detection model would meet the requirements of detection accuracy and real-time detection speed of maize stems, without high hardware requirements. The YOLOv5 model is a lightweight version of YOLO, has fewer layers and faster detection speed, can be used on portable devices, and requires fewer GPU resources for training (Tang et al., 2023). Therefore, the goal of this work was to build on the YOLOv5 model and apply the improved model for the detection of maize stems. The main idea for improving YOLOv5 was to lighten its backbone network through MobileNetv2 and introduce the ECANet attention mechanism to improve the recognition accuracy and robustness of the model.
Lightweight Backbone network
This paper used MobileNetv2 (Zhou et al., 2020) to replace the backbone network of YOLOv5 for extracting effective features from maize stem images. In order to enhance the adaptability of the network to the task of recognizing maize stem features and to fully extract those features, a progressive classifier was designed in this paper to enhance the network's ability to recognize maize stems. The original MobileNetV2 network was primarily designed to handle more than 1,000 target classes on the ImageNet dataset, while this paper only targeted maize stems. Therefore, in order to better extract the characteristics of maize stems and improve the network's recognition of them, the classifier of the network was redesigned to include two convolution layers, one global pooling layer, and one output layer (a convolution layer).
The main task of the classifier was to efficiently convert the extracted maize stem features into specific classification results. As shown in Figure 4, two convolution kernels of different scales were selected to replace the single convolution kernel in the original classifier in order to perform the compression and conversion of the feature map. The size of the first convolution kernel was 1 × 1; it was mainly responsible for compressing the number of channels of the feature map. In order to avoid losing a large number of useful features through an excessive compression ratio, the second convolution was used mainly for the spatial size compression of the feature map, avoiding fluctuations in the subsequent global pooling over a large feature map. Comparing the Im-YOLOv5 network based on MobileNetv2 with the original YOLOv5 network, the model parameters decreased from 64,040,001 to 39,062,013, a reduction of 39%.
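The shape flow of such a redesigned classifier head can be sketched in plain NumPy. All channel counts and spatial sizes below are illustrative assumptions (the paper does not list exact dimensions), and the second convolution is stood in for by 2×2 average pooling to show the spatial compression step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative input: a MobileNetV2-style final feature map (assumed sizes).
x = rng.standard_normal((6, 6, 1280))

# First conv (1x1): channel-number compression, realized per pixel as a
# matrix product over the channel axis, followed by ReLU.
w1 = rng.standard_normal((1280, 256)) * 0.01
x = np.maximum(x @ w1, 0.0)                        # (6, 6, 256)

# Second conv: spatial size compression, sketched here as 2x2 average
# pooling so the later global pooling acts on a smaller map.
x = x.reshape(3, 2, 3, 2, 256).mean(axis=(1, 3))   # (3, 3, 256)

# Global average pooling collapses the remaining spatial dimensions.
x = x.mean(axis=(0, 1))                            # (256,)

# Output layer (a 1x1 conv on a 1x1 map is a linear layer): one logit
# for the single maize-stem class.
w2 = rng.standard_normal((256, 1)) * 0.01
logits = x @ w2                                    # shape (1,)
```

The point of the two-stage compression is visible in the shapes: channels shrink first (1280 → 256), then the spatial map shrinks (6×6 → 3×3) before pooling, so neither step discards too much at once.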
At the same time, Im-YOLOv5 used CIOU_Loss [complete intersection over union (IOU) loss] to replace GIOU_Loss (generalized IOU loss) as the bounding-box loss function and used the binary cross-entropy with logits loss function to calculate the loss of the class probability and the target score, defined as follows.
FIGURE 3
Principle of navigation information acquisition based on LiDAR (laser imaging, detection, and ranging) supplement camera. The machine vision datasets were trained using the improved YOLOv5 (Im-YOLOv5) algorithm to identify the stem of the maize and then obtain the main navigation information, while LiDAR was used to obtain auxiliary navigation information.
In Equations (3) and (4), A and B are the prediction box and the ground-truth box, respectively; IOU is the intersection-over-union ratio of the two boxes; and C is the minimum enclosing rectangle of the prediction box and the target box. However, Equations (3) and (4) consider only the overlap between the prediction box and the target box and therefore cannot fully describe the box regression problem. When the prediction box lies inside the target box and prediction boxes of the same size are compared, GIOU degenerates into IOU and cannot distinguish the relative positions of the prediction boxes within the target box, resulting in false and missed detections. Equation (5) gives the CIOU, CIOU = IOU − ρ²(b, b^gt)/c² − αv, where α = v/((1 − IOU) + v) is an equilibrium parameter that does not participate in the gradient calculation; v = (4/π²)(arctan(w^gt/h^gt) − arctan(w/h))² measures the consistency of the aspect ratio; b is the prediction box; b^gt is the ground-truth box; ρ is the Euclidean distance between the box centers; and c is the diagonal length of the minimum enclosing box. Equation (5) shows that the CIOU comprehensively considers the overlap area, the center-point distance, and the aspect ratio of the target and prediction boxes, overcoming the shortcoming of the GIOU loss and making the box regression more stable, with faster convergence and higher accuracy.
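Under the definitions above, a minimal NumPy computation of the CIOU term for axis-aligned (x1, y1, x2, y2) boxes might look as follows; a small epsilon guards the α denominator in the degenerate case where IOU = 1 and v = 0.

```python
import numpy as np

def ciou(box_p, box_g, eps=1e-9):
    """CIOU = IOU - rho^2/c^2 - alpha*v for two (x1, y1, x2, y2) boxes."""
    # Intersection-over-union
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area(box_p) + area(box_g) - inter + eps)
    # Squared center distance rho^2 and enclosing-box diagonal c^2
    cx_p, cy_p = (box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2
    cx_g, cy_g = (box_g[0] + box_g[2]) / 2, (box_g[1] + box_g[3]) / 2
    rho2 = (cx_p - cx_g) ** 2 + (cy_p - cy_g) ** 2
    ex1, ey1 = min(box_p[0], box_g[0]), min(box_p[1], box_g[1])
    ex2, ey2 = max(box_p[2], box_g[2]), max(box_p[3], box_g[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps
    # Aspect-ratio consistency term v and its weight alpha
    v = (4 / np.pi ** 2) * (np.arctan((box_g[2] - box_g[0]) / (box_g[3] - box_g[1]))
                            - np.arctan((box_p[2] - box_p[0]) / (box_p[3] - box_p[1]))) ** 2
    alpha = v / ((1 - iou) + v + eps)
    return iou - rho2 / c2 - alpha * v
```

For identical boxes the value is 1; a shifted prediction is penalized through the center-distance term even when the overlap stays the same, which is exactly the behavior GIOU lacks in the fully-enclosed case.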
Introducing the attention mechanism
In order to improve the recognition accuracy and robustness of the algorithm when a large number of maize stems occlude one another, efficient channel attention (ECA) was introduced (Xue et al., 2022). It should be noted that, although introducing ECANet into convolutional neural networks has shown good performance improvements, ECANet only considers the local dependence between the current channel of the feature map and several adjacent channels, inevitably losing the global dependence between the current channel and other long-distance channels. On the basis of ECANet, we added a new branch (shown in the dashed box in Figure 5) in which the channels undergo channel-level global average pooling and are then shuffled. This branch randomly rearranges the channel order of the feature map after global average pooling, so a long-distance channel before shuffling may become an adjacent channel. After obtaining the local dependencies between the current channel of the new feature map and its k new adjacent channels, weighting the two branches yields more interaction information between channels.
In this paper, suppose that the feature vector of the input feature after convolution is x ∈ R^(W×H×C), where W, H, and C respectively denote the width, height, and channel size of the feature vector. Global average pooling over the channel dimension can then be expressed accordingly. In ECANet, the feature vectors input by the two branches can be expressed as follows: y_s represents the vector obtained after global average pooling along the branch with channel shuffling; y_g represents the vector obtained after global average pooling along the plain branch; and S is the channel-shuffling operation. Given the feature vector without dimension reduction, y ∈ R^C, the inter-channel weight calculation using the channel attention module can be expressed as ω = σ(W_k y), where σ(x) = 1/(1 + e^(-x)) is the sigmoid activation function and W_k is the parameter matrix for calculating channel attention in ECANet.
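A rough NumPy sketch of the two-branch weighting (an illustration, not the trained network: the averaging kernel stands in for the learned 1-D convolution W_k, and the function name and fixed shuffle seed are assumptions):

```python
import numpy as np

def eca_with_shuffle(x, k=3, seed=0):
    """Channel weights from a local branch plus a channel-shuffled branch."""
    C, H, W = x.shape
    y_g = x.mean(axis=(1, 2))                  # channel-wise global average pooling
    rng = np.random.default_rng(seed)
    perm = rng.permutation(C)                  # channel-shuffling operation S
    kernel = np.ones(k) / k                    # stand-in for the learned conv W_k
    a_g = np.convolve(y_g, kernel, mode="same")  # local cross-channel interaction
    a_s = np.empty(C)
    # shuffled branch: convolve in shuffled order, then un-shuffle back
    a_s[perm] = np.convolve(y_g[perm], kernel, mode="same")
    w = 1.0 / (1.0 + np.exp(-(a_g + a_s) / 2.0))  # sigmoid of the averaged branches
    return x * w[:, None, None]                # re-weight the feature map channels
```

Because the weights pass through a sigmoid, each channel of the output is a scaled (0-1) copy of the input channel.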
We took MobileNetv2 (Zhou et al., 2020) as the backbone model, combined YOLOv5 with the SeNet and ECANet modules (Hassanin et al., 2022), and carried out maize stem recognition experiments. The test results are shown in Table 1. ECANet showed better performance than SeNet at lower model complexity, indicating that ECANet can improve the performance of YOLOv5 at less computational cost.
In this work, the ECANet attention mechanism was first placed in the enhanced feature extraction network, added on the three effective feature layers extracted from the backbone network. To address information attenuation, the aliasing effect of cross-scale fusion, and the inherent defects of channel reduction in the feature pyramid network (FPN) of YOLOv5, we also added the ECANet attention mechanism to the sampling results of the FPN in order to reduce information loss and optimize the integrated features at each layer. By introducing the ECANet attention mechanism, Im-YOLOv5 can better fit the relevant feature information between the target channels, ignore and suppress useless information, and make the model focus more on the specific category of maize stems, improving its detection performance. The specific structure of the Im-YOLOv5 algorithm is shown in Figure 6.
Auxiliary navigation information acquisition by LiDAR
Because of the obvious color and structural characteristics of maize stems, we trained the Im-YOLOv5 model to detect only maize stems when the main navigation information was obtained through machine vision. However, the actual non-maize obstacles were mainly soil clods and stones, whose color and shape are relatively close to those of the ground, which would greatly increase the difficulty of Im-YOLOv5 model training. At the same time, recognizing multiple features simultaneously by machine vision would also reduce the recognition speed to a certain extent. Under these conditions, it is necessary to obtain point cloud information using LiDAR to supplement machine vision.
Determination of the effective point cloud range
Since the camera and LiDAR were fixed on the data acquisition robot platform, when the robot travels between rows during data acquisition it is necessary to determine the effective data range of the LiDAR point cloud according to the shooting angle range of the camera, as shown in Figure 7A.
Note that, in Figure 7A, q_e is the camera shooting angle range, q_i is the scanning angle of the LiDAR, and d is the width of the robot. The range of the effective point cloud data collected by the LiDAR is therefore the overlapping sector area, where r is the radius of the sector subtending the angle q_e and is defined as:
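The defining formula for r is not reproduced in the extracted text; as an illustration of the sector test itself (function name and point format are assumptions), LiDAR returns can be clipped to the overlapping region like this:

```python
import math

def in_camera_sector(points, fov_deg, r_max):
    """Keep (x, y) LiDAR returns whose range and heading fall inside the
    sector of radius r_max that overlaps the camera field of view."""
    half = math.radians(fov_deg) / 2.0
    kept = []
    for x, y in points:
        r = math.hypot(x, y)                # range of the return
        theta = math.atan2(y, x)            # heading from the forward axis
        if r <= r_max and abs(theta) <= half:
            kept.append((x, y))
    return kept
```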
Coordinate conversion of the auxiliary navigation information
Through the joint calibration of the camera and LiDAR described above, the camera extrinsic parameters (R, T), the camera intrinsic parameters, and the rigid transformation (R_lidar, T_lidar) between the camera and the LiDAR were obtained.
Figure 7. Camera-LiDAR (laser imaging, detection, and ranging) joint calibration process. (A) Effective data range: q_e is the camera shooting angle range, q_i is the scanning angle of LiDAR, and the overlapping area is the effective point cloud range. (B) Coordinate transformation: O_w-X_wY_wZ_w is the LiDAR coordinate system, O_c-X_cY_cZ_c is the camera coordinate system, o-xy is the image coordinate system, and O_uv-uv is the pixel coordinate system. (C) Distortion error: dr and dt are the radial and tangential distortions of the camera, respectively.

In order to supplement the main navigation information with the auxiliary navigation information, it is essential to establish a conversion model between the sensors. Through the established transformation model, the points in the world coordinate system scanned by LiDAR were projected into the pixel coordinate system of the camera according to the pinhole camera model, realizing the supplementation of the visual information by the point cloud data, as shown in Figure 7B. Note that, in Figure 7B, P is a point on the real object, p is the imaging point of P in the image, (x, y) are the coordinates of p in the image coordinate system, (u, v) are the coordinates of p in the pixel coordinate system, and f is the focal length of the camera, where f = ||o - O_c|| (in millimeters). The correspondence between a point P(X_w, Y_w, Z_w) in the real world obtained by LiDAR and the corresponding point p(u, v) in the camera pixel coordinate system can be expressed through Equation (10). According to the principle of LiDAR scanning, the point cloud data obtained by LiDAR are in polar-coordinate form. Therefore, the distance and angle information of the point cloud data under polar coordinates was converted into three-dimensional point coordinates in the LiDAR body coordinate system.
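A minimal sketch of this pinhole projection (assuming the LiDAR-to-camera rigid transform (R_lidar, T_lidar) and the intrinsic matrix K are known; the function name is illustrative):

```python
import numpy as np

def lidar_to_pixel(P_w, R_lidar, T_lidar, K):
    """Project a LiDAR point into camera pixel coordinates (pinhole model)."""
    P_c = R_lidar @ np.asarray(P_w, float) + T_lidar  # LiDAR frame -> camera frame
    uvw = K @ P_c                                     # homogeneous image coordinates
    return uvw[0] / uvw[2], uvw[1] / uvw[2]           # normalize -> pixel (u, v)
```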
The conversion formula was as follows: x = r·cos α·cos θ, y = r·cos α·sin θ, z = r·sin α, where r is the distance between the scanning point and the LiDAR; α is the elevation angle of the scanning line at the scanning point, namely, the angle in the vertical direction; and θ is the heading angle in the horizontal direction.
In order to eliminate the camera imaging distortion error, caused by the larger deflection of light away from the lens center and by the lens not being completely parallel to the image plane, as shown in Figure 7C, we corrected the distortion of Equation (11) with the radial and tangential distortion correction formulas (Chen et al., 2021b), where k1 and k2 are the radial correction parameters; p1 and p2 are the tangential correction parameters; u′ and v′ are the radially corrected pixel coordinates; and u″ and v″ are the tangentially corrected pixel coordinates.
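The paper's exact correction equations are not reproduced in the extracted text; the standard Brown model with the same parameter names (k1, k2 radial; p1, p2 tangential) can be sketched as follows, assuming normalized image coordinates:

```python
def correct_distortion(x, y, k1, k2, p1, p2):
    """Standard radial + tangential (Brown) distortion correction; a sketch
    consistent with the parameters named in the text, not the paper's Eqs."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xr, yr = x * radial, y * radial                       # radial correction
    xt = xr + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)  # tangential correction
    yt = yr + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xt, yt
```

With all coefficients zero, the correction reduces to the identity, as expected.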
The correspondence between a point in the world coordinate system obtained by LiDAR and the camera pixel coordinate system is established through Equations (10)-(14). According to the established coordinate transformation model, the LiDAR point cloud data can be converted into the image space, so that the LiDAR data supplement the machine vision information.
Feature recognition of point cloud based on PointNet
Because of the irregular format of a point cloud, it is difficult to extract its features, but the proposal of the PointNet model (Jing et al., 2021) solved this problem. In this paper, the features of the non-maize obstacles in the middle and late stages of maize were extracted through PointNet, and their location information was taken as the output. Note that we also performed the following preprocessing before using the PointNet model for training. The principle is shown in Figure 8.
Ground segmentation
In order to obtain auxiliary navigation information from the LiDAR point cloud data, the ground point cloud must be segmented first. In this work, the RANSAC (random sample consensus) algorithm was adopted to segment the collected point cloud data. A unique plane can be determined by randomly selecting three non-collinear sample points (x_a, x_b, x_c) from the point cloud.
Here, n_i is the normal vector of the plane model and d_i is its intercept, the candidate plane being n_i · x + d_i = 0. The distance from any sample point x_i in the point cloud to the plane model is then r_i = |n_i · x_i + d_i| / ||n_i||. Let the distance threshold be T: when r_i < T, the sample point x_i is an inlier; otherwise, it is an outlier. Let N be the number of inliers. Note that Equations (15)-(19) describe a single calculation pass, but N is not necessarily maximal at this point; hence, iterative calculation is needed. Let the number of iterations be k_c. When N reaches its maximum value, N_max, in the iterative process, the plane model corresponding to n_best and d_best is the best-fitting ground.
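The iterative loop described above can be sketched as follows (a minimal illustration; the threshold, iteration count, and function name are assumptions):

```python
import numpy as np

def ransac_plane(points, t=0.05, iters=200, seed=0):
    """Fit the ground plane n·x + d = 0 by RANSAC over random 3-point samples."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    best_n, best_d, best_count = None, None, -1
    for _ in range(iters):
        a, b, c = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(b - a, c - a)              # normal of the candidate plane
        if np.linalg.norm(n) < 1e-12:
            continue                            # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ a
        r = np.abs(pts @ n + d)                 # point-to-plane distances r_i
        count = int((r < t).sum())              # inliers within threshold T
        if count > best_count:                  # keep the best-supported plane
            best_n, best_d, best_count = n, d, count
    return best_n, best_d, best_count
```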
Removing noise points caused by maize leaves
LiDAR was mainly used to identify obstacles other than maize leaves. In order to reduce the difficulty of model training, the point cloud data of maize leaves were deleted. This step relies on analyzing the z-coordinate distribution of the point cloud. In general, the height of obstacles such as soil clods and stones is less than 10 cm. Therefore, before training the model, we deleted the points with a z-coordinate greater than 10 cm within the q_e range.
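That height filter is a one-line operation on the cloud (a sketch; the tuple point format and the 0.10 m threshold follow the text):

```python
def remove_leaf_points(cloud, z_max=0.10):
    """Drop returns higher than 10 cm (maize leaves); keep low obstacles
    such as soil clods and stones, whose height is below the threshold."""
    return [p for p in cloud if p[2] <= z_max]
```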
Experiments and discussions
The focus of this paper was navigation information acquisition. Navigation information can be used for path planning to guide the robot to drive autonomously and can also be used as the basis for the adjustment of the driving state of the robot, such as reducing the driving speed when detecting rocks or large clods. We provided the results of the information acquisition experiment.
Main navigation information acquisition experiment
We verified the recognition performance of Im-YOLOv5 for the main navigation information from two aspects: model training and detection results. To facilitate comparison, we also provide the test results of YOLOv5 and Faster-RCNN (faster region-based convolutional network). The datasets used in the experiment were collected by the Anhui Intelligent Agricultural Machinery Equipment Engineering Laboratory. It should be noted that, in order for each model to perform best on the datasets, we adjusted the parameters of each model separately to select appropriate hyperparameters. The initial hyperparameter settings of each algorithm are shown in Table 2. We divided the dataset, which contained 3,000 images, into training, test, and validation sets in an 8:1:1 ratio.
The model training and validation loss rate curves are shown in Figure 9. The loss rate stabilizes as the number of iterations increases, finally converging to a fixed value, which indicates that the model has reached its optimal effect. The debugged model showed good fitting and generalization ability on the maize stem datasets. Note that, because Im-YOLOv5 uses the improved loss function, its initial loss value was about 0.38, the lowest among the three models, and its convergence was the fastest.
The P (precision), R (recall), F1 (harmonic mean of precision and recall), FPS (frame rate), and mAP (mean average precision) values for Im-YOLOv5, YOLOv5, and Faster-RCNN are shown in Table 3. Im-YOLOv5 had the highest precision, followed by YOLOv5, while the precision of Faster-RCNN was low. With the lightweight backbone network, the FPS of Im-YOLOv5 was the highest and the model weight was greatly reduced. While meeting real-time requirements, its single-image detection speed was also the fastest and its detection performance the best. Compared with YOLOv5, the FPS of Im-YOLOv5 increased by 17.91% and the model size was reduced by 55.56%, while the mAP decreased by only 0.35%, improving detection performance and shortening model inference time. From the datasets, we selected a number of inter-row images of maize in the middle and late stages for testing, as shown in Figure 10. For the same image, Im-YOLOv5 was able to identify most maize stems, even those partially occluded. The detection confidence of Im-YOLOv5 and YOLOv5 was high, while that of Faster-RCNN was relatively low.
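For reference, the metrics in Table 3 are computed from true positives, false positives, and false negatives as follows (definitions only; the counts in the test are illustrative, not the paper's numbers):

```python
def detection_metrics(tp, fp, fn):
    """Precision P, recall R, and F1 (the harmonic mean of P and R)."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)
    return p, r, f1
```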
Auxiliary navigation information supplements the main navigation information experiment
In the experiments, the practical feasibility of the proposed inter-row navigation information acquisition method, based on LiDAR point cloud data supplementing machine vision in the middle and late stages of maize, was verified. Considering the coronavirus outbreak at the time, conducting large-scale field experiments was difficult; therefore, an artificial maize plant model was used to set up a simulation test environment for verifying the feasibility of the designed method. Figure 11A shows the test environment using the maize plant model. Investigation of maize planting in Anhui Province revealed that the row spacing for maize plants is about 50-80 cm and that plant spacing is about 20-40 cm. Therefore, the row spacing in the maize plant model was set to 65 cm and the plant spacing to 25 cm. A number of non-maize obstacles were also placed in the experiments. For data acquisition in this work, the data acquisition robot was developed by the Anhui Intelligent Agricultural Machinery and Equipment Engineering Laboratory at Anhui Agricultural University.
During the experiments, the required main navigation information was the position information of the maize plants, while the required auxiliary navigation information was the position information of the non-maize obstacles. We set up six maize plant models and three non-maize obstacles and randomly set the locations of the obstacles. Subsequently, we conducted 10 information acquisition experiments at distances of 1,000, 2,000, and 3,000 mm from the data acquisition robot to the front row of the maize plant model. The test results are shown in Figures 11B, C.
Discussions
Generally, visual navigation between rows in the middle and late stages of maize extracts the maize features and then fits the navigation path. If the camera alone is used to obtain information based on maize features in the recognition stage, information on the non-maize obstacles between rows in the middle and late stages of maize is missed, as shown in Figures 11B, C. With the introduction of the Im-YOLOv5 stem recognition algorithm and sufficient training, maize stem recognition has become highly accurate; however, the non-maize obstacle recognition rate of Im-YOLOv5 alone was almost zero, which is critical for the operational safety of plant protection robots in the middle and late stages of maize. When LiDAR is used to obtain auxiliary navigation information to supplement the main navigation information obtained by machine vision, the issue of missing information can be properly solved, and the safety of the planned navigation path is greatly improved. However, due to the limited accuracy of the 16-line LiDAR and the error of the camera-LiDAR joint calibration, the recognition effect was not very satisfactory when an obstacle was far away or too small. With increasing distance between the data acquisition robot and the maize plants, the number of detected maize plant models remained stable, meaning that identification of the main navigation information was also stable. However, the number of recognized non-maize obstacles showed a downward trend, indicating that the recognition accuracy of the auxiliary navigation information decreased. In view of these issues, we will use 32-line or 64-line LiDAR, both with higher accuracy, in future experiments.
Conclusion
In order to solve the problem of missing information when using machine vision for inter-row navigation in the middle and late stages of maize, this paper has proposed a method that uses LiDAR point cloud data to supplement machine vision in order to obtain more accurate inter-row information. By training the machine vision datasets with the Im-YOLOv5 model, the main navigation information was obtained by identifying maize plants between the rows in the middle and late stages. As a supplement to the main navigation information acquired by machine vision, LiDAR was used to identify non-crop obstacles as auxiliary navigation information. Not only was the accuracy of information recognition improved, but technical support for planning a safe navigation path was also provided. Experimental results from the data acquisition robot equipped with a camera and a LiDAR sensor have demonstrated the validity and good inter-row navigation recognition performance of the proposed method for the middle and late stages of maize. However, even as LiDAR accuracy improves, cost remains the key problem restricting the commercialization of this recognition system. We therefore hope that our recognition system can be applied in small autonomous navigation plant protection robots: given the relatively low cost of small plant protection robots, even with this relatively high-precision recognition system they retain a price advantage over UAVs. The navigation information can be used for path planning to guide robots to drive autonomously and can also serve as the basis for adjusting the driving state of the robot, such as reducing the driving speed when rocks or large clods are detected. In subsequent research, we will therefore focus on path planning between maize rows and on control of the driving state of the robot.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
GEOPHYSICAL ANALYSIS OF LANDSCAPE POLYSTRUCTURES
The objective identification of landscape cover units is very important for sustainable environmental management planning. The article proposes an algorithmic method for describing the formation of landscape structures, which is based on classical landscape analysis and applies the parameters of geophysical fields. The main driving forces of all structure-forming processes are the gradients of the gravitational and insolation fields, the parameters of which were calculated using digital elevation models and GIS technologies. A minimum number of principal parameters is selected for the typological and functional classification of landscapes. The number and importance of the parameters were identified on the basis of numerical experiments. Landscape classifications elaborated on the basis of standard numerical methods thus acquire a fundamental geophysical meaning. In this case, the concept of polystructural landscape organization is logical: by selecting different structure-forming processes and physical parameters, different classifications of landscapes can be elaborated. The models of geosystem functioning are closely related to their structure through boundary conditions and relations between parameters. All models of processes and structures are verified by field experimental data obtained under diverse environmental conditions.
INTRODUCTION
The identification of multi-scale polystructural geosystems and the boundaries between them is among the principal problems of landscape research. The fundamentals of non-equilibrium thermodynamics show the principles on which classifications of natural-territorial complexes (NTC) should be based. In accordance with the Onsager bilinear equation, classifications should account for: 1) the system-forming flows; 2) the force fields and their gradients; and 3) the phenomenological coefficients of generalized conductivity. The fields of gravity and insolation are the most common for any geosystem. Selection (classification, integration) of geographical objects by the parameters describing the geophysical fields and their gradients leads to the identification of geosystems according to the flows of matter and energy. This is a functional approach to the identification and investigation of geosystems, developed in the works of Armand (1988), Reteum (1975), and others. In such a case, the boundaries of geosystems are determined by the magnitude and sign of the flow divergence. For example, if we consider the behavior of elementary water volumes in a geopotential field, we obtain a hierarchy of catchment geosystems (river basins) which corresponds to the formalized Horton-Strahler-Tokunaga schemes. A drastic change in the phenomenological coefficients is the basis for the classification of NTC according to the principle of homogeneity (the typological approach, in accordance with N. A. Solntsev's theory (Solntsev 1948)). Considering the spatial distribution of plants and animals in the geopotential and other physical fields (insolation, chemical, thermodynamic, etc.) allows obtaining the hierarchy of ecosystems (biogeocenotic systems) and their spatial distribution. Such approaches to the classification of geosystems are mutually complementary and should not be contrasted. For example, V.N.
Solntsev (1997, 2006) considers three mechanisms of landscape structuring (geostationary, geocirculatory, and biocirculatory), which can operate individually or simultaneously. The objective identification of landscape cover units is essential for the planning of sustainable nature management. The development of territorial planning assumes the need to use different methods for the selection of spatial units, depending on the objectives of environmental management. For example, maps showing the structure of biocentricity are necessary for embedding nature reserves; those representing positional-dynamic and morphological structures are essential for industrial facilities (Pozachenyuk 2006); for agroforestry purposes the catenary differentiation should be considered, and different types of units, such as urochishche and mestnost, as well as catchments, should be used (Rulev 2008). In landscape-agroecological planning, preference is given to the genetic-morphological approach, the principal units being the urochishche and groups of urochishches (Orlova 2014). An example of the use of landscape planning is water protection zoning (Landscape Planning… 2002). Complicated environmental situations and a variety of conflicts between land and water users are characteristic of water protection zones. On the other hand, the most complex landscape-hydrological systems (LHS) are present in the areas adjacent to water bodies. The combination of landscape and hydrological indicators at the basin level is used to refine the calculation of hydrological parameters or to assess the distribution of LHS (Antipov and Fedorov 2000). In landscape hydrology, an LHS is conceived as a set of natural-territorial complexes (NTC) similar in runoff formation conditions.
An NTC only indirectly characterizes a catchment area with similar capacitive features; therefore, experimental observations in each small river basin are necessary for an accurate estimate of the LHS area. In most cases, however, LHS are formed from a set of NTC. The calculation accuracy (ceteris paribus) could be improved by increasing the number of taxonomic units. Moreover, the landscape typological approach is the most appropriate for large-scale work (Tkachev and Bulatov 2002).
Modern landscape ecology is based on the patch mosaic paradigm, in which landscapes are conceptualized and analyzed as mosaics of discrete patches (Forman 2006; Turner et al. 2005). The strength of the patch mosaic model lies in its conceptual simplicity and appeal to human intuition. In addition, the patch mosaic model is consistent with well-developed and widely understood quantitative statistical techniques designed for discrete data (e.g., analysis of variance). Developing this rather limited approach to environmental considerations, McGarigal and Cushman (2005) introduced the «landscape gradient» model as a general conceptual model of landscape structure based on continuous spatial heterogeneity. Based on the continuous characteristics of the earth's surface obtained from a DEM (slope, topographic wetness index, topographic position index, normalized difference vegetation index NDVI, etc.), the «landscape gradient» model is constructed using the statistical characteristics of the patch mosaic (patch density, largest patch index, edge density, mean patch area, area-weighted mean patch area, coefficient of variation in patch area, mean patch shape index, etc.). Statistical characteristics of patches are called landscape metrics (McGarigal et al. 2009). However, these are metrics of just the sizes and shapes of patches in mosaics, and not of complex geosystems such as landscapes of any dimension and hierarchy. The structure (pattern) is understood, first of all, as a combination of interacting spatial elements with their area, configuration, orientation, neighborhood, connectedness or fragmentation (Turner and Gardner 2015), i.e., close to the concept of the "landscape pattern" (Viktorov 1986). The structure is interpreted as an indicator (on the one hand) and a condition (on the other hand) of radial and lateral processes.
This interpretation turned out to be especially productive for regions highly transformed by anthropogenic activity, where the zonal landscape has been preserved only in the form of a few "islands". From the point of view of modeling the landscape structure on the basis of structure-forming processes, however, these methodological approaches are not entirely correct. Empirical data and composite indices with a fuzzy (intuitive) physical meaning cannot be used directly as parameters in the equations of mathematical physics. As a result, a gap arises between landscape-ecological and physical-mathematical models, and empirical and semi-empirical parameters need to be introduced into rigorous descriptions of the transfer of matter and energy to overcome it. For example, the SAGA GIS software (Olaya 2004) is supplemented by the TOPMODEL hydrological module (Beven 2012), which describes the migration of moisture based on the Darcy equation with a significant number of empirical parameters that are difficult to determine, so parameterization, approximation, and similar fitting methods are required.
Planning decisions based on the landscape-ecological analysis could become more reasonable if the following problems are solved (Landscape Planning… 2002):
• Identification of quantitative indicators describing both the structure and the functioning and development of a landscape. We need objective indicators which are relatively easy to calculate.
• Development of classifications of landscapes according to their sustainability, vulnerability, suitability and capacity for particular types of environmental management.
• Identification of quantitative characteristics of the landscape self-organization, or at least qualitative description of the regional developments of this process.
• Search for relationships between the spatial structures of natural and socio-economic systems.
• Determination of minimum natural ranges within which ecological stabilization of cultural landscapes could be implemented.
• Development of regional norms or recommendations for planning spatial relationships between the main landscape elements (by area and configuration).
These tasks are currently relevant. In our work, possible ways of implementing some of these tasks are given.
The aim of the work is to justify the choice of the least number of objective parameters characterizing the landscape structure, which is interpreted in the classical definition of the Moscow State University School of Landscape Sciences. In fact, this is a synergistic task of determining the main parameters of structure-forming processes. The method proposed in the article allows application of numerical modeling to describe the landscape polystructure by the parameters of major continuous geophysical fields.
COMMON PRINCIPLES OF THE LANDSCAPE STRUCTURE MODELS
The elaboration of any physical-mathematical model begins with basic axioms and postulates. The principal point is the identification of the elementary material objects (particles, points) forming the system and the assignment of independent variables and functions of the system's states. Further, it is necessary to adopt a number of binding postulates so that particular physical laws can be applied. It is essential that the physical laws and their parameters applied be relevant to the description of the structure-forming landscape processes (Sysuev 2014).
To describe geosystems, it is first of all necessary to substantiate the potentials of the main geophysical fields that determine the structure-forming processes, and then to formalize the description of elementary geosystems and hierarchical invariants of geosystems. The quantitative values of spatially distributed physical parameters of the state of landscapes could be obtained: 1) from digital elevation models (DEM) -morphometric parameters describing the gradients of the gravity and insolation fields; 2) from digital remote sensing data -parameters of the Earth's surface cover; 3) from field and laboratory measurements, and 4) during special experiments.
The space of geographical coordinates is provided by the construction of a digital elevation model (DEM). Pixels of the 3D DEM are elementary material points (similar to the material points of theoretical mechanics), from which the NTC structure is synthesized using formalized procedures. DEMs are constructed to achieve the maximum resolution of a particular hierarchical level of geosystems. For example, if a regular-grid DEM is constructed from the contours of a detailed topographic map (1:10 000), the pixel size could be 10x10 m. However, the pixel size also depends on the resolution of the aerial photo or satellite image. Thus, the resolution of Landsat images (30x30 m) allows us to identify NTC of only the urochishche level.
Differentiation of geographical space could be realized using various mathematical methods (cluster analysis, neural networks, etc.). However, we need numerical parameters of the state of the elementary material objects (pixels) to distinguish uniform areas. Theoretical description of the geostructure, i.e., the stationary (for a certain time interval) state of a dynamic geosystem, begins with identifying the morphometric parameters (MP) which describe the force fields that determine the main structure-forming processes. Morphometric formalization of the Earth's surface in the gravitational field was systematized in (Shary 1995). A logically meaningful association of morphometric parameters includes three groups characterizing: 1) the distribution of solar energy: the dose of direct solar radiation (daily, annual), aspect and illumination of slopes; 2) the distribution and accumulation of water under the influence of gravity: the specific catchment area and the specific dispersive area, the depth of B-depressions and the height of B-hills, and the slope gradient; 3) the mechanisms of matter redistribution under the influence of gravity: horizontal, vertical and mean curvature, slope gradient, and height (Sysuev 2014). It should be noted that a minimum number of simple (non-composite) state parameters is selected that independently describe the gradients of the geophysical fields generating the main structure-forming processes. Thus, the selection (classification) of geosystems is carried out not according to relief elements, elementary locations, patches, or other spatial units, but directly by the parameters of geophysical fields and structure-forming processes.
The physical meaning of the morphometric parameters is quite clear. For example, the dose of radiation characterizes the potential input of direct insolation. Slope aspect and gradient are the components of the geopotential vector gradient. Horizontal curvature is responsible for the divergence of flow lines. Vertical curvature is the derivative of the steepness factor and characterizes the slope convexity/concavity. These parameters are directly included in the equations of mathematical physics. Specific catchment area shows the area from which suspended and dissolved substances could be transported to a surface element. It is a substance balance parameter (included in conservation laws), and a component of a number of related indices (water flow capacity index, erosion index, etc.).
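These definitions translate directly into finite-difference formulas on the DEM grid. The short NumPy sketch below (an illustration, not the GIS ECO/SAGA implementations; the inclined-plane DEM and the aspect convention are assumptions) estimates slope gradient and aspect by central differences:

```python
import numpy as np

def slope_aspect(dem, cell):
    """Slope gradient (degrees) and aspect (degrees, one common convention)
    for interior pixels of a regular-grid DEM, by central differences."""
    dzdx = (dem[1:-1, 2:] - dem[1:-1, :-2]) / (2.0 * cell)  # east-west derivative
    dzdy = (dem[2:, 1:-1] - dem[:-2, 1:-1]) / (2.0 * cell)  # north-south derivative
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    aspect = np.degrees(np.arctan2(-dzdx, dzdy)) % 360.0    # downslope direction
    return slope, aspect

# Toy DEM: a plane rising 1 m per 10 m cell toward the east (a 10% grade).
_, x = np.mgrid[0:5, 0:5]
dem = x * 1.0
s, a = slope_aspect(dem, cell=10.0)
print(round(float(s[0, 0]), 2))   # slope of a 10% grade: 5.71 degrees
```

Horizontal and vertical curvature are built from the second derivatives in the same way, which is why these quantities enter the equations of mathematical physics directly.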
The state of the Earth's surface (vegetation cover, snow cover, soil cover, etc.) is detected from the digital data of space spectral image and from the related indices (e.g., the normalized difference vegetation index, snow index, humidity index -NDVI, NDSI, NDWI, etc.). The most important sources of data are field studies, which also allow verifying the interpretation of the state of the covers. In addition to the traditional complex methods of landscape science, it is necessary to use automated complexes for recording geophysical parameters of the lowest atmospheric layers, natural waters and soils. Methods of applied geophysics, based on measuring the spatial distribution of gravitational, electromagnetic, and other parameters, are very promising as well.
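The indices listed here (NDVI, NDSI, NDWI) all share the same normalized-difference form; a minimal sketch with assumed toy reflectances:

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Generic normalized-difference index, e.g. NDVI = (NIR - Red)/(NIR + Red)."""
    band_a = np.asarray(band_a, dtype=float)
    band_b = np.asarray(band_b, dtype=float)
    return (band_a - band_b) / (band_a + band_b)

# Assumed toy reflectances for three pixels: dense vegetation, bare soil, water.
nir = np.array([0.50, 0.30, 0.01])
red = np.array([0.08, 0.25, 0.04])
ndvi = normalized_difference(nir, red)
print(ndvi.round(2))   # vegetation strongly positive, soil near zero, water negative
```

The same function applied to green/SWIR band pairs gives the NDSI and NDWI variants mentioned above.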
The parameters for describing the structure are chosen in accordance with the classical approaches of landscape studies. All formal algorithms for selecting the smallest and higher-order units of the relief surface based on the parameters of the gravity and insolation fields acquire a fundamental geophysical meaning. In this case, the concept of polystructural landscape organization is entirely logical: by selecting different structure-forming processes and physical parameters, different classifications of landscapes could be elaborated. Let us demonstrate the approach through particular cases below.
Typological model of the landscape structure
The typological approach allows obtaining a hierarchy of classical NTCs (facies - urochishche - mestnost - landscape) in accordance with N.A. Solntsev's theory (Solntsev 1948). The parameters are chosen in accordance with the widely known definitions. For example, "An elementary NTC -facies ... is confined to one element of mesorelief; this territory is homogeneous in terms of its three principal characteristics: the lithological composition of the rocks, the slope aspect and gradient. In this case, the total solar radiation and atmospheric precipitation coming to the surface are the same within any part of it. Therefore, one microclimate and one water regime are formed, ... one biogeocenosis, one soil unit and a uniform complex of soil mesofauna" (Dyakonov and Puzachenko 2004). As follows from the definition, elementary NTCs could be selected by the parameters of solar radiation and water distribution over the surface; more precisely, by the distribution of the gradients of the insolation and gravity fields. Thus, the classical definition already requires the description of NTC differentiation using field theory and the morphometry of the Earth's surface. The classification results essentially depend on the weight values and the number of parameters. By changing the latter, it is possible to optimize the classification of landform elements according to a known landscape structure. On the other hand, changing the set of parameters and their numerical values makes it possible to model landscape structure changes under the influence of climate change, neotectonic events, etc. Such modeling requires a rigorous landscape approach that allows identifying the main factors of differentiation and excluding derivative or dependent variables. Automatically obtained classes of landscape cover require identification and verification of their physical content.
The investigated territory of the Valdai National Park is located in the central part of the Valdai Upland, which belongs to the end moraine belt of the last Valdai (Würm) Ice Age in the northwestern East European Plain. The loamy moraine deposits with residual carbonates reach a thickness of 25 m in the ridges of the Crestets end moraine belt and overlie the glacio-fluvial sands of preceding stages. Locally, the moraine is covered by kame silty-sandy loam sediments. The fluvioglacial plains are covered with peat bog sediments, the massifs of which are separated by sandy eskers. Peatland systems are connected by streams and the Loninka and the Chernushka rivers. Such a variety of landforms and sediments causes the high degree of biological and landscape diversity within the study area. The digital elevation model (DEM) was constructed from a detailed topographic map at a scale of 1:10 000 using the regular grid method, with a 28 × 28 m pixel size georeferenced to the Landsat-7 ETM+ image. This pixel size makes it possible to reliably distinguish NTCs of the locality (mestnost) rank in the study area, whose dimensions are about 10x10 km. Based on the values of the height and size of pixels (vertical and horizontal steps), the main geomorphometric parameters were calculated, i.e. slope, dose of direct solar radiation, aspect and illumination of slopes, specific catchment area and dispersive area, B-depression depth, B-hill height, and horizontal and vertical curvature. GIS ECO (P. Shary), GIS FractDim (G. Aleschenko, Yu. Puzachenko), GIS DiGem (O. Conrad), and GIS SAGA (V. Olaya) were used for the calculations. The best result in building a smoothed DEM and the morphometric parameters of the studied territory was obtained with GIS ECO.
Next, a matrix (database) was built: the rows correspond to the relief surface elements (DEM pixels), and the columns to the parameters (MP) describing the state of an element (height, geomorphometric parameters, as well as the digital brightness values of the Landsat-7 ETM+ channels and the NDVI). The parameters describing the same surface element have different physical meanings and are not comparable in dimensions and magnitude. They are therefore normalized and reduced to a standard form. The resulting matrix is ready for any algebraic operations.
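The normalization step can be sketched as a column-wise standardization (zero mean, unit variance) of the pixel-by-parameter matrix; the numbers below are assumed toy values:

```python
import numpy as np

# Rows are DEM pixels, columns are state parameters of mixed physical units
# (assumed toy values: elevation in m, slope in degrees, a spectral band in DN).
X = np.array([[210.0, 2.1, 118.0],
              [215.0, 5.4,  96.0],
              [208.0, 1.2, 140.0],
              [221.0, 8.0,  87.0]])

# Standardize each column to zero mean and unit variance so that parameters
# of different dimensions become comparable in the Euclidean metric.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
print(Z.mean(axis=0).round(12), Z.std(axis=0).round(12))
```

After this step a Euclidean distance between two row vectors no longer depends on the choice of physical units.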
The row vectors of the data matrix characterize the multitude of DEM pixels. Geometrically, the closer two such vectors are in the parameter space, the less the parameter values differ between the two objects. This suggests that the closer two vectors are in the parameter space, the more "similar" and less distinguishable the corresponding objects are in many of their other properties, not only those included in the data matrix. Therefore, if a geometrically well-isolated "group" of mutually close vectors can be distinguished within the set of all object vectors, then a class of objects with similar internal properties could be identified. The Euclidean distance between the corresponding object vectors was used as the measure of proximity of two objects. Once the proximity measure function is selected and calculated, the matrix of connections between objects is constructed, and the task of automatic classification of objects is reduced to the problem of diagonalizing this connection matrix. Automatic classification can be understood as the geometric task of distinguishing "dense" concentrations of points in a certain space. Such a geometric approach allows elaborating methods for the automated classification of spatially distributed objects, such as elements of the relief surface in the DEM, remote sensing data, etc. The number of objects in a geographic data matrix is very large and can reach hundreds of thousands. Efficient recurrence algorithms can be applied in such cases, in which the calculations at each step are performed with only one successive object (or with one row of the corresponding connection matrix). In our work, the FractDim software was used to classify the relief according to the MP matrix.
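The one-object-per-step recurrence idea can be illustrated with a sequential (online) k-means sketch; this shows the principle only and is not the FractDim algorithm, and the two point "clouds" are assumed toy data:

```python
import numpy as np

def online_kmeans(rows, centres):
    """Sequential (one-object-per-step) classification in the spirit of the
    recurrence algorithms described above: each object vector updates the
    running mean of its nearest class and need not be kept in memory."""
    centres = centres.astype(float).copy()
    counts = np.ones(len(centres))
    labels = np.empty(len(rows), dtype=int)
    for i, x in enumerate(rows):
        j = int(np.argmin(((centres - x) ** 2).sum(axis=1)))  # Euclidean metric
        counts[j] += 1
        centres[j] += (x - centres[j]) / counts[j]            # recurrent mean update
        labels[i] = j
    return labels, centres

rng = np.random.default_rng(0)
cloud_a = rng.normal([0.0, 0.0], 0.1, size=(200, 2))  # one dense "group" of vectors
cloud_b = rng.normal([5.0, 5.0], 0.1, size=(200, 2))  # another dense "group"
rows = np.vstack([cloud_a, cloud_b])
labels, centres = online_kmeans(rows, centres=rows[[0, 200]])  # one seed per group
```

Each of the two dense concentrations of points ends up in its own class, with the class centre converging to the running mean of its members.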
The classes of vegetation cover automatically decoded from the satellite spectral image were verified on the basis of field data (Akbari et al. 2006). Some results are shown in Fig. 1. The investigated transect (integrating a series of analogous transects) was about 5 km long; leveling was performed at 5 m intervals; integrated descriptions were made at sample plots 20 m apart. Along with complex descriptions, a complete forest inventory was carried out at 239 plots of 20x20 m area. The forest inventory included strip enumeration of trees with measurement of the standard parameters of each tree and of the sample plot (composition, stratum, height, diameter, age, crown attachment height, stocking, canopy density, underwood, advance growth, grass-shrub cover, type of soil, type of station, etc.). The map of automatically decoded classes (Fig. 2, A), compiled from a priori geophysical data (annual dose of direct solar radiation, slope gradient, numerical data of the Landsat 7 spectral channels, and the NDVI), was refined on the basis of field data geo-referenced to the satellite image. The map of vegetation cover (Fig. 2, B) shows classes according to the field-based verification. Interpolation of continuous forest inventory data over the studied territory was performed using discriminant methods.
Correlation of the compiled stand map and the calculated types of site conditions with the 1:10 000 forest compartment map of this territory showed that the simulation results are significantly more detailed than the standard forest inventory data.
The scheme of the geosystem structure (NTCs at the urochishche level) is shown in Fig. 3. We used successive dichotomous grouping of landform elements (DEM pixels) based on the parameters of geophysical fields and the state of the landscape cover. Independent morphometric parameters were chosen based on preliminary analysis (digital modelling): the annual dose of direct solar radiation, elevation, slope gradient, horizontal and vertical curvature, specific catchment area, as well as numerical data of the Landsat 7 spectral channels and the NDVI index.
GEOGRAPHY, ENVIRONMENT, SUSTAINABILITY 2020/01
Fig. 2. Identification of the physical content of landscape cover classes in the Valdai National Park according to a priori information (A) and the vegetation cover identified by continuous forest inventory along the transect (B). For the legend, see Table 1. The dots show the transect location.
Classification results depend significantly on the number of parameters and their weight values. By changing them, it is possible to optimize the classification of relief elements according to the known (assumed) landscape structure. The selection of parameters was carried out in accordance with the classical landscape science definitions (Sysuev 2003, 2014). For example, if the heat (energy) supply of the territory is modeled with the same weight coefficients for all insolation parameters, the obtained groups do not satisfy the landscape structure revealed in the field studies. The problem is that at the first and subsequent levels of the dichotomous classification, relief surface classes are distinguished primarily by slope aspect, which is not valid for a significant part of the territory, namely the swampy landscape of the fluvioglacial outwash plain. On the other hand, the cooling properties of the swampy landscape are not taken into account, and even vast massifs of upland bogs with lakes are not identified. To obtain a more correct classification, the weight coefficient of the slope parameter was increased, which improved the classification of the relief surface. At each stage, the classification of relief elements with the selected values of the weight coefficients was verified by discriminant analysis. At all levels, according to the values of the F-criterion, the distinguished classes differ statistically reliably. Thus, the leading role of waterlogging of the territory, obvious to landscape scholars, could be numerically expressed through the weight coefficient of slope, i.e. the parameter of the gravity field gradient. Similarly, the significance of other geomorphometric parameters was substantiated. The numerical experiments also objectively revealed the required level of numerical classification. The 5-level dichotomy adopted at the beginning, with identification of 32 classes, turned out to be excessive. The size of urochishche (simple and complex) corresponding to the main mesostructures of the relief of the studied territory is most adequately displayed at the 4th level of classification. Moreover, as can be seen in Fig. 3, a good number of identified classes are automatically combined into larger groups according to close-valued colors of the palette.
The need for an objective justification of the significance of the state parameters and of the level of the numerical classification dichotomy underscores the crucial role of the landscape approach, which makes it possible to single out the role of individual factors (structure-forming processes) of NTC differentiation in specific geographical conditions.
Finally, the distinguished classes were characterized using the parameters obtained during the field study of experimental landscape transects. The characteristics of the grass-shrub and soil cover and the lithological structure were extrapolated by discriminant methods.
In addition, an independent team of researchers created a landscape map based on the classical method. The comparison showed that the landscape map resulting from the numerical experiments reproduces with sufficient accuracy the boundaries of the NTCs independently obtained by the classical field methods (Sysuev and Solntsev 2006).
Revealing spatial structures in such a way is a process of synthesis, since the material points (pixels) are integrated into elementary natural-territorial complexes (at a given hierarchical level) according to the selected parameters of main geophysical fields and the state of covers.
Functional model of geosystem structure
The functioning of low-order geosystems is largely determined by water flows. Hence, the classification is aimed at the construction of a hierarchy of catchment geosystems according to the morphometric parameters describing the redistribution of water in the gravity field. These parameters, namely slope gradient, specific catchment area, and horizontal and vertical curvature, determine the boundaries of the divergence and convergence zones of streamlines. The hierarchy of catchment geosystems is determined in accordance with the Horton-Strahler-Tokunaga scheme (Tarboton et al. 1991; Dodds and Rothman 1999). The automated algorithm for identifying drainage channels on the basis of the GIS raster layers involves three main steps. First, the digital map of the above-mentioned parameters is used to select cells with values exceeding a predetermined threshold; these are considered potential source points. At the second step, channels are drawn from the given sources, and sources that receive transit flow from higher elevations are removed. At the third step, channels shorter than a certain minimum length are cut off (Fig. 4).
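The three steps can be sketched on a toy flow network, where `downstream[i]` encodes the drainage direction of cell `i` and `area[i]` its specific catchment area (all names and values are illustrative, not the SAGA/GIS raster implementation):

```python
def extract_channels(downstream, area, area_threshold, min_length):
    """Sketch of the three-step channel extraction described above.
    downstream[i] is the index of the cell that cell i drains into
    (-1 at an outlet); area[i] is the specific catchment area of cell i."""
    n = len(area)
    # Step 1: candidate source cells exceed the catchment-area threshold.
    sources = {i for i in range(n) if area[i] >= area_threshold}
    # Step 2: drop sources lying on the flow path of another source
    # (they carry transit flow from higher up and are not true channel heads).
    transit = set()
    for s in sources:
        j = downstream[s]
        while j != -1:
            transit.add(j)
            j = downstream[j]
    heads = sources - transit
    # Step 3: trace each channel to the outlet and cut off short ones.
    channels = []
    for s in sorted(heads):
        path = [s]
        while downstream[path[-1]] != -1:
            path.append(downstream[path[-1]])
        if len(path) >= min_length:
            channels.append(path)
    return channels

# A toy drainage line 0 -> 1 -> 2 -> 3 -> 4 (outlet), plus a small spur 5 -> 3.
downstream = [1, 2, 3, 4, -1, 3]
area       = [3.0, 5.0, 9.0, 14.0, 20.0, 2.5]
print(extract_channels(downstream, area, area_threshold=3.0, min_length=3))
```

Raising `area_threshold` or `min_length` prunes the network, which is exactly the adjustment described in the next paragraph.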
The process can be easily adjusted by changing the limit values of the catchment area and the minimum length of the drainage channels. The resulting array of morphometric characteristics with geographic reference to watersheds of various orders is a characteristic of landscape-hydrological systems (LGS), or catchment geosystems (Sysuev 2014).
Application of the typological approach to obtain landscape characteristics of catchment geosystems could be demonstrated by the example of the Upper Mezha River basin (Central Forest Reserve, Tver Region). Southern taiga spruce forests with broad-leaved species, shrubs and nemoral herbs dominate this southwestern part of the Valdai Upland (the East European Plain). The territory is a combination of flat ridges and inter-ridge depressions, a weakly dissected plain composed of glacial and fluvioglacial deposits. Modern drainage streams are mostly temporary, with poorly developed alluvial relief and poorly pronounced valley forms. They occupy the drained inter-ridge depressions, which are ancient valleys of the glacial meltwater runoff. The inter-ridge depressions lacking active runoff are occupied by upland and transitional bogs. The upper reaches of the Mezha River are located in the study area. Within the valley, the river has a winding channel and a wide floodplain (50-100 m). The methods for the DEM construction and calculation of parameters are similar to those described above. The algorithm for constructing a drainage network in SAGA numbers the selected segments of watercourses according to the Shreve ordering. To obtain data on the orders of catchments according to the Strahler classification, the channels of the first, second, and subsequent orders were sequentially cut off, until maps of watercourses broken down by order were obtained (Fig. 5 A). An analysis of the use of critical values of the specific catchment area in different landscapes showed that as the age of the relief increases (in the series: secondary glacial plain → periglacial plain of the marginal zone of the Valdai Ice Age → end moraine zone of the Valdai Ice Age), the average area of watercourse formation noticeably decreases.
The geosystem order is the same as the order of its watercourse (Fig. 5B). A special category, zero-order geosystems, was introduced for lower-rank complexes that do not have a pronounced drainage watercourse. About 400 such geosystems have been identified within the territory under study. The maximum, fourth-order geosystem is the catchment of the Mezha River in the vicinity of the Fedorovskoye village (Table 2).
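The Strahler ordering that defines these geosystem orders can be sketched recursively on a toy channel tree (illustrative code, not the SAGA procedure):

```python
def strahler(children, node):
    """Strahler order of a drainage tree: a head channel has order 1; where
    two tributaries of equal order k meet, the order becomes k + 1."""
    kids = children.get(node, [])
    if not kids:
        return 1
    orders = sorted((strahler(children, c) for c in kids), reverse=True)
    if len(orders) > 1 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]

# Toy network: four first-order heads merging pairwise into a third-order outlet.
children = {"outlet": ["b1", "b2"], "b1": ["h1", "h2"], "b2": ["h3", "h4"]}
print(strahler(children, "outlet"))   # -> 3
```

Sequentially cutting off the lowest-order channels, as described above, corresponds to pruning the leaves of this tree one level at a time.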
The dependence of the average catchment area Y on the catchment order X is described by the equation Y = b0*(X+1)^b1, where b0 = 0.419665, b1 = 2.526742, and the model reliability is R = 0.99977. The areas of zero-order geosystems follow an approximately lognormal distribution.
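The reported regression can be checked numerically: the power law is linear in log-log coordinates, so an ordinary least-squares fit to values generated by the model recovers its coefficients (a consistency sketch, not the original fitting procedure):

```python
import numpy as np

b0, b1 = 0.419665, 2.526742
orders = np.arange(5)                    # X = 0 .. 4 (zero- to fourth-order)
areas = b0 * (orders + 1.0) ** b1        # Y from the reported model

# log Y = log b0 + b1 * log(X + 1): a straight line in log-log space.
slope, intercept = np.polyfit(np.log(orders + 1.0), np.log(areas), 1)
print(round(float(slope), 4), round(float(np.exp(intercept)), 4))
```

With real (noisy) mean-area data the same log-log fit would be the natural way to estimate b0 and b1.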
The qualitative and quantitative characteristics of runoff for any watercourse depend on the physical and geographical features of its catchment. To identify these dependencies, a map of the landscape structure of catchment geosystems within the river basin was compiled (Fig. 6).
The methodology for compiling the map of the landscape structure of geosystems is as follows. Maps of the relief structure and vegetation cover were created for the study area by means of classification analysis of the digital relief model and Landsat 7 satellite imagery. The relief structure map was created using "with training" classification according to the digital relief model. Using the K-means method, eleven classes were distinguished, which reflect the differentiation of the territory into flat sections and slopes with convex (spurs) and concave (hollows) sections. A map of the structure of the vegetation cover was created through the interpretation of the Landsat 7 satellite imagery. Eleven classes were also identified, and two of them were then combined to reflect anthropogenically modified territories (villages, roads, fields and hayfields, deposits, etc.). At the next stage, the number of pixels corresponding to a particular class of vegetation and topography was calculated for each geosystem. As a result, the areas occupied by the main classes of relief and vegetation within each geosystem were estimated. These data were summarized in a single table. Then, the percentage of the area occupied by each class was calculated for each geosystem. To simplify the map compilation, only classes occupying more than 10% of a geosystem's total area were retained for further consideration. A generalized matrix was then obtained, and the map of the landscape structure of catchment geosystems of various orders was compiled (Fig. 6). Thus, the landscape structure is described within physically determined watershed boundaries.
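The per-geosystem area bookkeeping described above can be sketched as follows (toy class names and pixel counts are assumed; the 10% cutoff follows the text):

```python
from collections import Counter

def landscape_structure(pixel_classes, pixel_geosystem, cutoff=10.0):
    """For each geosystem, the percentage of its area in each landscape-cover
    class, keeping only classes that exceed the cutoff (10% in the text)."""
    by_geo = {}
    for cls, geo in zip(pixel_classes, pixel_geosystem):
        by_geo.setdefault(geo, Counter())[cls] += 1
    table = {}
    for geo, counts in by_geo.items():
        total = sum(counts.values())
        table[geo] = {cls: round(100.0 * n / total, 1)
                      for cls, n in counts.items()
                      if 100.0 * n / total > cutoff}
    return table

# Toy data: 10 pixels of one geosystem "G1" with assumed vegetation classes.
classes    = ["spruce", "spruce", "bog", "spruce", "bog",
              "spruce", "spruce", "road", "spruce", "spruce"]
geosystems = ["G1"] * 10
print(landscape_structure(classes, geosystems))
# -> {'G1': {'spruce': 70.0, 'bog': 20.0}}  (the 10% 'road' class is dropped)
```

The resulting table is the "generalized matrix" from which the map of the landscape structure of catchment geosystems is compiled.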
The technique simplifies the application of landscape characteristics and differs from the methods used for identifying landscape-hydrological systems (LGS). According to Antipov and Fedorov (2000), the area of an LGS varies from year to year, from season to season, and from day to day. Therefore, the selection of NTCs that have a similar state in relation to runoff in a particular time period (for example, floods) is not simple.
In mathematical modeling of surface runoff, there is a clearer concept of an "active runoff area" that changes during the process (Troendle 1985). We suggested calculating the active runoff area from the elevation excess of the territory above the mouths of the watercourses of a particular order for a certain period of time (Fig. 7).
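A minimal sketch of this definition: count the DEM pixels whose elevation excess above the mouth is below a limit and convert the count to hectares (toy elevations are assumed; the 28 m cell size follows the DEM described earlier):

```python
import numpy as np

def active_runoff_area(dem, mouth_elevation, excess_limit, cell=28.0):
    """Area (ha) of cells whose elevation excess above the watercourse mouth
    is below the limit: a sketch of the 'active runoff area' definition."""
    excess = dem - mouth_elevation
    active = (excess >= 0.0) & (excess < excess_limit)
    return active.sum() * cell * cell / 1e4   # pixel count -> hectares

# Assumed toy elevations (m); the mouth lies at 170.0 m.
dem = np.array([[170.2, 170.4, 170.9],
                [170.1, 170.6, 171.8],
                [170.0, 171.2, 172.5]])
area_ha = active_runoff_area(dem, mouth_elevation=170.0, excess_limit=1.0)
print(area_ha)   # 6 active cells * (28 m)^2 = 0.4704 ha
```

Varying `excess_limit` over time reproduces the shrinking of the active area as a flood recedes, which is the behavior discussed in the next paragraph.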
The relationship between the functioning and the structure of geosystems could be analyzed through the gradual decline in flow hydrographs (Fig. 7, Table 3). During spring floods, and after heavy prolonged rains, practically all first-order catchments function, i.e. a characteristic surface runoff along the hollow-like depressions is observed. These depressions are usually not well pronounced in the relief, and their depth could be only dozens of centimeters. However, they are well marked by moist grasses and humus-gley soils. Thus, at the beginning of June (the final stage of the 2001 spring flood), the active runoff area for the first-order geosystems with an excess above the watercourse mouths of <1.0 m was 0.2 to 1.7 ha. As a rule, such areas accounted for about 60-70% of the total area of a geosystem (Fig. 7). For some geosystems, particularly those with a flat surface, the active areas could not be estimated. During the low-water period, associated with the low drainage from soil and ground, the flow continued only in the largest streams and the Mezha River itself.
Analysis of the catchment parameters and hydrological measurements showed a close relationship between the structure and functioning of geosystems. This provides an opportunity to calculate the water flow discharge based exclusively on the geosystem structure and precipitation data.
Hydrological functioning and water protection zoning of geosystems
The calculation of surface runoff from a priori topographic DEM data can be performed in various GIS packages supporting hydrological procedures. For example, in order to calculate the water flow in SAGA (Olaya 2004), a sufficiently large number of individual catchment parameters, such as the Chezy-Manning coefficient of surface roughness ("Manning's n", MN) and the coefficient of soil influence on the intensity of surface runoff ("Curve number", CN), are required in addition to general parameters (elevation, slope gradient, specific catchment area, etc.). The values of these parameters must be assigned to each pixel of the model, which is objectively possible only with reliance upon information on the landscape structure. In Fig. 8, the numerical values of MN, taken from the standard Chow tables (Chow 1959), are depicted in accordance with the typological structure of the landscape (see Fig. 3). These data are the basis for setting the spatially distributed parameters of the hydrological model aimed at calculating the runoff rates within the Loninka River basin.
Table 3. Average water discharges and mean values of some hydrochemical indicators for the sections of different order catchments in the Mezha River basin (the Tver region)
The average precipitation intensity in the numerical experiments for the calculation of runoff was assumed to be 0.0, 0.66, 10.0, or 100.0 mm/hour. In addition, the Channel Site Slope (CSS) parameter, runoff characteristics (slope surface, Mixed Flow Threshold, MFT, and Channel Definition Threshold, CDT) and some other parameters were varied in the calculations. Numerical modeling has shown that even tabular values of MN, CN, CSS, MFT and CDT, not adapted to taiga wetlands, reveal significant features in the distribution of surface water runoff in various geosystems (Fig. 9). Extremely low runoff values (<0.01 m/s) were observed over most of the basin. Higher rates are characteristic only of the channels of streams and rivers (0.025-0.2 m/s), and the runoff increases up to 2 m/s within certain sections of the Loninka River. The pattern of runoff rate distribution is quite realistic, since the catchment of the Loninka River is a flat, swamped, hummocky sandy plain cut by rare channels with water flow.
The results of runoff simulation using various parameters of Average Rain Intensity (ARI) and Channel Site Slope (CSS) revealed some regularities. In all cases, the increase in ARI caused higher runoff, for example, at the source point located near the drainage pipe under the railway embankment (the source of the Loninka River). This site is highly modified by human activities, and, consequently, it has low MN values (0.025) contributing to the surface runoff, and high CN (98) impeding water infiltration into the soil. Thus, the changing intensity of precipitation successively leads to the change in surface runoff rates. That is, in this section of the river the sensitivity of runoff to the precipitation intensity is great, although the flow rates are not very high. Most other observation sites are located in natural forest and mire landscapes. These sites are characterized by high MN values (0.5-0.9) and low CN (30-40). High values of MN prevent surface runoff, while low CN favor active infiltration. As such, the runoff rates decrease substantially at these sites and weakly respond to changes in the precipitation intensity. Thus, the different location of the observation sites in terms of landscape structure results in significant differences in runoff characteristics. Low regular runoff rates at zero precipitation intensity confirm the high capacity of flat over-moisturized catchment to accumulate water and regulate runoff in geosystems.
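The role of the roughness coefficient can be made concrete with the Chezy-Manning formula v = R^(2/3) * S^(1/2) / n (SI units). The hydraulic radius and slope below are assumed toy values; only the two MN values (0.025 for the disturbed site, 0.9 for forest and mire sites) follow the text:

```python
def manning_velocity(n, hydraulic_radius, slope):
    """Mean flow velocity v = R**(2/3) * S**(1/2) / n (SI units)."""
    return hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5 / n

R, S = 0.3, 0.001     # assumed hydraulic radius (m) and slope (m/m)
v_disturbed = manning_velocity(0.025, R, S)  # low roughness near the railway
v_mire = manning_velocity(0.9, R, S)         # high roughness, forest/mire sites
print(round(v_disturbed, 3), round(v_mire, 3))
```

For identical geometry the velocity ratio is simply the inverse ratio of the roughness values (0.9/0.025 = 36), which is why the natural sites respond so weakly to precipitation intensity.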
Verification of the model calculations was carried out in the field. Experimental measurements of flow rates and discharge of the Loninka River at the gauging stations suggest the following. In all cases, the predicted rates differ from the measured ones, but the calculated values were closer to the real ones than expected (Fig. 10). The closest results were obtained for the precipitation intensity of 10 mm/h, although lower-intensity precipitation is more probable.
More accurate simulation results could be obtained by adjusting the values of the model coefficients (tabular values for non-waterlogged rivers were used in the calculations). A more detailed DEM could also be useful: it turned out that, with a channel width of 1.0-1.5 m, the 30 m pixel size does not allow delimiting the valleys, nor the microrelief, which is very important for runoff from the flat plains. On the other hand, errors in the measurement of flow rates are quite possible for flat, boggy, meandering channels, often blocked by forest debris and beaver dams. Nevertheless, in the absence of other information, the values obtained by GIS modeling could become a basis for predicting runoff in areas where direct measurements are labor-consuming or otherwise impossible.
Let us demonstrate the possibility of water protection zoning of geosystems based on modeling the structure of catchment basins and the runoff from their areas using a priori data. An important environmental characteristic of the processes in a catchment basin is the delay time of water flowing to the river or to control stations. The isochrones of flow delay time were calculated using the SAGA GIS (Fig. 11, A). However, this method of calculation is difficult to use for predicting the time of arrival of pollutants from side streams, which is important for taking measures to localize pollution before it reaches the river channel. We developed a modified cascade algorithm (Sysuev et al. 2011) to calculate the running time to the first-order channel for each first-order catchment, combine the first-order catchments into second-order catchments, then calculate the running time for each second-order catchment before merging all second-order catchments together, and so forth (Fig. 11, B).
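The cascade idea, i.e. local running times accumulated along the catchment hierarchy, can be sketched as follows (the toy hierarchy and the times in hours are assumed; this is an illustration of the principle, not the published algorithm):

```python
def cascade_delay(parent, local_time):
    """Delay from each catchment to the main outlet: its own running time plus
    the running times of every higher-order catchment it drains through."""
    total = {}
    def resolve(c):
        if c not in total:
            t = local_time[c]
            if parent[c] is not None:
                t += resolve(parent[c])
            total[c] = t
        return total[c]
    for c in parent:
        resolve(c)
    return total

# Assumed toy hierarchy: two first-order catchments feed a second-order one,
# which drains into the outlet catchment; running times in hours.
parent = {"f1": "s1", "f2": "s1", "s1": "outlet", "outlet": None}
local_time = {"f1": 2.0, "f2": 3.5, "s1": 4.0, "outlet": 1.5}
delays = cascade_delay(parent, local_time)
print(delays["f1"], delays["f2"])   # 7.5 9.0
```

Because each catchment stores its own delay to the outlet, the arrival time of a pollutant entering any side stream can be read off directly, which is the practical advantage over plain isochrone maps.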
CONCLUSION
The formation of landscape structures is described within traditional empirical concepts using the geomorphometric parameters of geophysical fields, i.e. gravity and insolation. The concept of landscape polystructure becomes physically defined: by choosing the main structure-forming processes and their principal parameters, different classifications of landscapes could be elaborated. Formal mathematical algorithms for selecting surface relief units acquire a fundamental geophysical meaning when combined with the state parameters. Implementation of the typological approach makes it possible to obtain a hierarchy of natural-territorial complexes (facies - urochishche - mestnost - landscape); implementation of the approach based on the hydrological functioning of the landscape results in a hierarchy of catchment geosystems; implementation of the classification approach to the parameters and normalized coefficients of remote sensing data produces the structure of the vegetation cover.
Geomorphometric values describing the gradients of the gravity field (height; slope; horizontal, vertical and mean curvature; specific catchment area and specific dispersive area; B-depression depth) and the insolation field (dose of direct solar radiation; aspect and illumination of slopes) are considered to be the parameters of the physical state of the individual units of the relief surface, i.e. the DEM pixels, which form the geosystems. The state parameters were preferred due to their simple form and direct description of the physical fields. For example, slope is the modulus of the geopotential gradient; horizontal (planar) curvature is the divergence of streamlines; vertical curvature is the derivative of the steepness factor along the streamline; the dose of direct solar radiation is the relative amount of incoming energy, etc. The state parameters also enter independently into the description of structure-forming processes. Digital remote sensing data are also physical parameters of the state of individual units of the relief surface and geosystems.
Parameters of the typological model of the landscape structure are selected in accordance with classical definitions and preliminary numerical experiments. The need for a professionally correct and justified selection of the physical state parameters reflecting the principal structure-forming processes, as well as of their weights, underscores the crucial role of the landscape approach.
The functional model of the landscape structure is based on morphometric parameters describing the redistribution of water over the surface in the gravitational field (slope gradient, specific catchment area, horizontal and vertical curvature). Such classification makes it possible to identify the contours of catchment geosystems of various orders in accordance with the Horton-Strahler-Tokunaga scheme.
The structural parameters obtained from the typological description of landscapes allow simulating the hydrological functioning of catchment geosystems with satisfactory accuracy for particular types of water protection zoning.
An improved multi-ridge fitting method for ring-diagram helioseismic analysis
Context: There is a wide discrepancy in current estimates of the strength of convection flows in the solar interior obtained using different helioseismic methods applied to observations from SDO/HMI. The cause for these disparities is not known.
Aims: As one step in the effort to resolve this discrepancy, we aim to characterize the multi-ridge fitting code for ring-diagram helioseismic analysis that is used to obtain flow estimates from local power spectra of solar oscillations.
Methods: We updated the multi-ridge fitting code developed by Greer et al. (2014) to solve several problems we identified through our inspection of the code. In particular, we changed the merit function to account for the smoothing of the power spectra, the model for the power spectrum, and the noise estimates. We used Monte Carlo simulations to generate synthetic data and to characterize the noise and bias of the updated code by fitting these synthetic data.
Results: The bias in the output fit parameters, apart from the parameter describing the amplitude of the p-mode resonances in the power spectrum, is below what can be measured from the Monte Carlo simulations. The amplitude parameters are underestimated; this is a consequence of choosing to fit the logarithm of the averaged power. We defer fixing this problem as it is well understood and not significant for measuring flows in the solar interior. The scatter in the fit parameters from the Monte Carlo simulations is well modeled by the formal error estimates from the code.
Conclusions: We document and demonstrate a reliable multi-ridge fitting method for ring-diagram analysis. The differences between the updated fitting results and the original results are less than one order of magnitude, and therefore we suspect that the changes will not eliminate the aforementioned orders-of-magnitude discrepancy in the amplitude of convective flows in the solar interior.
Introduction
Understanding solar interior dynamics is crucial to understanding the mechanisms of the solar dynamo. As one example, convection may play an important role in the formation of magnetic flux tubes, as well as in their rise through the convection zone and their tilts at the solar surface (e.g., Brun & Browning 2017). Helioseismology, which uses observation of oscillations on the solar surface, is an important probe of interior dynamics.
Currently there is a major discrepancy between the time-distance (Hanasoge et al. 2012) and ring-diagram (Greer et al. 2015) estimates of the strength of solar subsurface convection at large spatial scales. The measurements from time-distance helioseismology suggest flows orders of magnitude weaker than those seen in convection simulations (e.g., in the anelastic spherical harmonic (ASH) convection simulations by Miesch et al. 2008), while the measurements from ring diagrams are closer to the expectations from simulations.
Time-distance helioseismology (Duvall et al. 1993;Kosovichev & Duvall 1997) is based on measuring and interpreting the travel times of acoustic and surface-gravity wave packets. These travel times are measured from the temporal cross-covariances between the Doppler observations at pairs of points on the solar surface (see Gizon & Birch 2005, for a review). Ring-diagram analysis (Hill 1988;Antia & Basu 2007) measures the Doppler shift of acoustic and surface-gravity oscillation modes in the local power spectra and uses these Doppler shifts to infer local flows in the solar interior. To compute the local power spectra, the solar surface is divided into a number of spatial tiles. Each tile is tracked at a rate close to the solar rotation rate. The power spectra of the solar oscillations (Doppler observations) are computed for each tile. In the three-dimensional power spectra, roughly concentric rings with high power are present at each frequency and they correspond to the modes of different radial orders. Flow and wave-speed anomalies in the Sun shift and distort the rings and, hence, one can obtain information about the solar interior from the ring parameters. Flow maps from Doppler observations by the Helioseismic and Magnetic Imager (HMI; Schou et al. 2012) onboard the Solar Dynamics Observatory (SDO; Pesnell et al. 2012) have been automatically computed by the SDO/HMI ring-diagram pipeline (Bogart et al. 2011a,b) on a daily basis since the SDO launch in 2010.
The HMI ring pipeline codes separately fit each single ridge (single radial order n) in the power in slices at constant horizontal wavenumber or slices in temporal frequency. Greer et al. (2014) developed an alternative approach based on simultaneously fitting multiple ridges (multiple radial orders) at each horizontal wavenumber. Greer et al. (2015) introduced another innovation: they chose a much denser tile layout than other ring-diagram analyses: 16-degree tiles with a separation of 0.25 degree instead of the 7.5-degree spacing (at the equator; the spacing increases at higher latitude to maintain 50% overlap between neighboring tiles) used for 15-degree tiles in the HMI ring pipeline. As described in detail in Appendix A.2, the ATLAS code from Greer et al. (2014, 2015) provides seven parameters for each ridge and five parameters for the background power at each wavenumber k. Greer et al. (2015) applied a three-dimensional flow inversion to these fit results to estimate the three-dimensional flow field in the solar interior. Both the dense packing of tiles and the three-dimensional inversions are unique to ATLAS; however, in this paper we focus only on the fitting component of the code.
As one step toward understanding the causes of the above-mentioned disagreement between the helioseismic measurements of subsurface convection, here we focus on the ring-diagram analysis described by Greer et al. (2015). We revisit the analysis code (Greer et al. 2014;Greer 2015) that was used in that work and identify several issues through a step-by-step examination of the code. In response to these issues, we have developed an updated method, and we characterize the updated code by applying it to synthetic data generated from Monte-Carlo simulations.
Description of the updated code
In this section, we describe our updated ATLAS code. Each of the updates is a response to a problem that we found in our inspection of the original code. Throughout, we refer to Appendix A for the details of the original code.
After computing the power spectrum for a particular tile, the processing steps are: 1) remap the power spectrum from Cartesian (k_x, k_y) to polar (k, θ) coordinates, 2) rebin the power in azimuth θ; the number of grid points in θ is reduced from n_pix = 256 to n_pix = 64, 3) fit the logarithm of a Lorentzian model to the logarithm of the smoothed power by least-squares minimization at each k using the Levenberg-Marquardt (Marquardt 1963) technique; the model function has 7n_r + 4 parameters at each horizontal wavenumber k, where n_r is the number of ridges at the particular value of k, and 4) estimate the covariance matrix of the errors of the fitted parameters by computing the inverse of the Hessian matrix of the cost function. In the following subsections we describe the changes that we introduced to each of these steps.
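As an illustration of step 1, the remapping from Cartesian to polar coordinates can be sketched with bilinear interpolation (a minimal numpy/scipy sketch; the grid sizes and the radially symmetric toy spectrum are illustrative and not the pipeline's actual implementation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Toy Cartesian power spectrum slice on a (kx, ky) grid; the value at
# each pixel depends only on the distance from the grid center.
n = 128
kx = ky = np.arange(n) - n // 2
power = 1.0 + np.hypot(*np.meshgrid(kx, ky, indexing="ij"))

# Sample the spectrum on a polar ring of 256 azimuth points (the number
# used in the text before rebinning) at a fixed wavenumber k_pix.
n_theta = 256
k_pix = 21.0
theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
coords = np.array([k_pix * np.cos(theta) + n // 2,
                   k_pix * np.sin(theta) + n // 2])
ring = map_coordinates(power, coords, order=1)  # bilinear interpolation

# On this radially symmetric toy spectrum the ring is nearly constant.
assert np.allclose(ring, 1.0 + k_pix, atol=0.2)
```

In the real pipeline the full (k, θ, ν) cube is remapped, but each fixed-k ring is obtained exactly as above.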
Re-binning in azimuth
The computed local power spectra O(k_x, k_y, ν) and the interpolated spectra in polar coordinates are non-negative by construction. Following the original code, the interpolated spectra have 256 pixels at each k. At kR_⊙ ≡ ℓ ∼ 500 (where R_⊙ is the solar radius), which corresponds to k_pix ≡ k/h_k = 21 (where h_k = 3.37 × 10^-2 Mm^-1 is the grid spacing in k) and which we use in most of the Monte-Carlo test calculations shown later, this is about twice the number of grid points in θ at full resolution, n_pix ≈ 2πk/h_k. For the sake of computational efficiency, it is desirable to reduce the number of points in azimuth. In the updated code, we use a running box-car smoothing of four-pixel width followed by subsampling by a factor of four to reduce the number of grid points in θ. This procedure ensures that the resulting smoothed power spectrum O_s(k, θ, ν) is positive. We expect that, as flows produce θ variations that are dominantly at azimuthal wavenumber one, it should be possible to retain only very low resolution in θ; Sect. 4.1 discusses this in more detail. We retain 64 points in θ for the examples shown in this paper.
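The box-car smoothing and subsampling step can be sketched as follows (a minimal sketch on a single 1-D azimuth slice; the four-pixel width and the 256-to-64 reduction are from the text, the toy spectrum is illustrative):

```python
import numpy as np

def rebin_azimuth(p, width=4):
    """Running box-car average of the given width along the periodic
    azimuth axis, followed by subsampling by the same factor.
    Non-negative input stays non-negative, unlike a Fourier low-pass."""
    padded = np.concatenate([p, p[:width - 1]])   # periodic wrap
    kernel = np.ones(width) / width
    smoothed = np.convolve(padded, kernel, mode="valid")
    return smoothed[::width]

theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
rng = np.random.default_rng(0)
spectrum = 1.0 + 0.1 * np.cos(theta) + 0.01 * rng.random(256)
rebinned = rebin_azimuth(spectrum)
assert rebinned.size == 64 and rebinned.min() > 0.0
```

Because the subsampling stride equals the window width, every original pixel enters exactly one output bin, so the azimuthal mean of the spectrum is preserved exactly.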
The original code used a low-pass Fourier filter to smooth the power spectrum in the θ direction. This procedure produces occasional points where the smoothed power is negative.
Least-squares fitting of the logarithm of power
In the updated code, we use least-squares fitting to fit the logarithm of the model power to the logarithm of the remapped and smoothed power. Following Greer et al. (2014), the fitting is carried out independently at each horizontal wavenumber k. As the amount of smoothing is increased, the probability distribution function (PDF) of the power, as well as that of its logarithm, approaches a normal distribution (see Appendix B.4 for more details), and it is therefore appropriate to use a least-squares fit. The logarithm of the smoothed power has the convenient property that its variance (σ_N in Eq. (B.4)) depends only on the details of the remapping and smoothing and does not depend on θ or ν (see Appendix B.2 for details).
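The level-independence of the scatter of the logarithm can be checked numerically (a sketch; the averaging length of 16 bins is arbitrary and stands in for the remapping and smoothing):

```python
import numpy as np

rng = np.random.default_rng(1)

def log_power_std(limit, n_avg, n_trial=200_000):
    """Std of ln(power) after averaging n_avg bins whose power follows
    a scaled chi-squared distribution with two degrees of freedom
    (i.e., an exponential distribution) around a limit value."""
    p = limit * rng.exponential(size=(n_trial, n_avg)).mean(axis=1)
    return np.log(p).std()

s_low = log_power_std(1.0, 16)
s_high = log_power_std(100.0, 16)
# the scatter of ln(power) does not depend on the spectrum level
assert abs(s_low - s_high) < 0.01
```

Multiplying the power by a constant only shifts ln(power) additively, which is why the standard deviation of the logarithm is independent of the spectrum level.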
In our approach, the cost function at a single k is

X(q) = Σ_j [ln O_s(θ_j, ν_j) − ln P(θ_j, ν_j; q)]^2,    (1)

where O_s(θ, ν) is the observed spectrum at some fixed k after smoothing in θ and P(θ, ν; q) is the model of the spectrum with model parameters q. The summation is taken over all bins j within the fitting range. We note that as the error estimates for ln O_s (σ_N in Eq. (B.4)) are all the same, we set them all to one in writing the cost function. For the sake of readability, throughout the remainder of this paper we do not introduce or carry notation to denote the value of k; the fitting problems at each k are treated as independent and we do not compare fit parameters for different values of k. This is an approximation, as the interpolation from (k_x, k_y) space to (k, θ) space does imply error correlations between the fit parameters at different values of k.
In the updated code, we use the Levenberg-Marquardt technique to solve the minimization problem. In particular, we use mpfit.c (Markwardt 2009), one of the codes in the MINPACK-1 least-squares fitting library (Moré 1978;Moré & Wright 1993). The covariance matrix of the errors associated with the fitted parameters is estimated at the last step of the Levenberg-Marquardt procedure. As a practical note, the error estimates obtained by this method must be scaled, as we have assumed σ = 1 in Eq. (1); the correct value is given by Eq. (B.4). In general, calling the code with an incorrect estimate of σ could cause poor performance of the fitting algorithm. In the current case σ is not far from one (σ ∼ 0.5, see Sect. 3.1) and we do not expect that this is a significant issue here. Implementing the σ estimation in the code is a task for the future.
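A minimal sketch of the least-squares fit of the logarithm of a Lorentzian model, using SciPy's Levenberg-Marquardt driver in place of mpfit.c (the single-ridge model, the frequency grid, and all parameter values here are illustrative; the paper's full model, Eq. (2), has 7n_r + 4 parameters per wavenumber):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
nu = np.linspace(2.0, 4.0, 400)  # frequency grid in mHz (illustrative)

def model(q, nu):
    # one Lorentzian ridge plus a flat background (illustrative)
    nu0, amp, gamma, bg = q
    return amp * gamma**2 / ((nu - nu0)**2 + gamma**2) + bg

q_true = np.array([3.0, 50.0, 0.05, 1.0])
limit = model(q_true, nu)
# stand-in for the smoothed power: near-Gaussian multiplicative noise
obs = limit * np.exp(0.05 * rng.standard_normal(nu.size))

def residual(q):
    # least squares on the logarithm of the power, as in the updated code
    return np.log(obs) - np.log(model(q, nu))

fit = least_squares(residual, x0=[2.95, 40.0, 0.06, 0.9], method="lm")

# error estimates from the inverse approximate Hessian, scaled a
# posteriori by the residual variance (the cost assumes sigma = 1)
sigma2 = np.sum(fit.fun**2) / (fit.fun.size - fit.x.size)
cov = sigma2 * np.linalg.inv(fit.jac.T @ fit.jac)
assert abs(fit.x[0] - q_true[0]) < 0.05
```

The a posteriori scaling of `cov` mirrors the scaling of the code's error estimates described above.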
The original ATLAS code used a maximum-likelihood method based on the assumption that the power spectrum in a single bin in (k x , k y , ν) space follows a chi-squared distribution with two degrees of freedom; see Appendix B.1 for the original likelihood function. This method does not account for the impact of smoothing on the PDF of the power spectrum.
The fitting parameters for the peaks (n = 0, . . . n_r − 1, where n_r is the number of ridges) are: q_0,n = ν_n, the frequency of the n-th peak; q_1,n = A_n, the amplitude; q_2,n = Γ_n, the width; (q_3,n, q_4,n) = (u_x,n, u_y,n), the horizontal velocity; and q_5,n = f_c,n and q_6,n = f_s,n, parameters to handle the anisotropy. The background is modeled at each k by Eq. (4), where F is again given by Eq. (3) and q_0,BG = B_0 is the amplitude, q_1,BG = b is the power-law index, and q_2,BG = f_c,bg and q_3,BG = f_s,bg are the parameters to handle the anisotropy. The total number of parameters is 7n_r + 4 at each k. For reference, Tables C.1, C.2, and C.3 give the physical meaning of each of the fitting parameters. We altered the model function Eq. (2) from the original Eq. (A.1) to make the parameterization more stable. The following subsections describe the motivation for these changes.
Parameters of the anisotropy terms
We replaced F ′ defined by Eq. (A.2) with F defined by Eq. (3) and used the same form for the anisotropy terms for the background, although the exact form for the background function is subject to other alterations discussed in Sect. 2.3.2.
Our alteration does not change the space of functions that can be fit with F(θ), but the new form does not suffer from the issue of an indeterminate phase for nearly isotropic power spectra. In the original parameterization, the amplitudes of the anisotropic part of the model function (q′_5,n and q′_3,BG) are usually much smaller than one. As a consequence, the phase (q′_6,n or q′_4,BG) matters little in the fitting and is therefore unstable. In the particular case of isotropic power spectra, with q′_5,n = 0 for all n and q′_3,BG = 0, the phases q′_6,n and q′_4,BG are indeterminate.
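The equivalence of the two parameterizations can be seen with a first-azimuthal-order example (a sketch; the exact forms of F and F′ in Eqs. (3) and (A.2) are not reproduced here):

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)

# original-style parameterization: amplitude a and phase phi
# (the phase becomes indeterminate as a -> 0)
a, phi = 0.03, 1.2
F_prime = 1.0 + a * np.cos(theta - phi)

# updated parameterization: independent cosine and sine amplitudes,
# both of which simply go to zero for an isotropic spectrum
f_c, f_s = a * np.cos(phi), a * np.sin(phi)
F = 1.0 + f_c * np.cos(theta) + f_s * np.sin(theta)

# same space of functions, but no indeterminate phase parameter
assert np.allclose(F, F_prime)
```

For an isotropic spectrum both f_c and f_s vanish smoothly, whereas in the amplitude-phase form the phase carries no information in that limit.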
Parameters of the background model
We also changed the background parameterization for the sake of stability. The background model (Eq. (A.3)) originally contained five parameters. In this section we explain our motivation for reducing this to the four parameters shown in Eq. (4). The background model in Eq. (A.3) is based on the model of Harvey (1985),

P(ν) = 4 σ_rms^2 τ / (1 + (2πντ)^2),

where τ is the characteristic timescale of the velocity field in question and σ_rms is the rms velocity. In this case the index in the original background model (Eq. (A.3)) would be q′_2,BG = 2. Appourchaux et al. (2002) suggested a generalization of the Harvey (1985) model, where q′_2,BG can be in the range of 2 to 6. In the ATLAS fittings, however, we typically obtain an index q′_2,BG ∼ 1 for HMI observations. Also, the roll-off frequency obtained from the fitting of HMI observations, q′_1,BG, is quite low (∼ 1 µHz) and below the frequency resolution of 28.8-hour power spectra (9.7 µHz), which is the typical observation length for 15-degree tiles in the HMI pipeline and 16-degree tiles in ATLAS.
This suggests that the background is related to neither the supergranulation (τ ≈ 10^5 sec, or ν ≈ 10 µHz) nor the granulation time scales (τ ≈ 4 × 10^2 sec, ν ≈ 2.5 mHz), based on Table 1 of Harvey (1985). As these background parameters are not consistent with the original physical model, we might need to reconsider the background model. At this moment, however, this is a task for the future, and we retain this model in the altered form mentioned below.
In the case of ν ≫ q′_1,BG, the original background model, Eq. (A.3), can be simplified: we therefore redefine the background model B(θ, ν) with four parameters q_i,BG (i = 0, 1, 2, 3) and with the altered anisotropy terms in the form of Eq. (4). As the new background model (q_i,BG) has only four parameters instead of the original five, the index i has different meanings in the two models.
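The simplification can be sketched as follows (assuming the Harvey-type roll-off form quoted above for Eq. (A.3); the exact notation of the original equation is not reproduced here):

```latex
% For \nu \gg \nu_{\rm bg}, the roll-off reduces to a power law, so the
% roll-off frequency is absorbed into the overall amplitude:
\frac{q'_{0,\mathrm{BG}}}{1 + (\nu/\nu_{\mathrm{bg}})^{b}}
  \;\longrightarrow\;
  q'_{0,\mathrm{BG}} \, (\nu/\nu_{\mathrm{bg}})^{-b}
  = B_0 \, \nu^{-b},
\qquad
B_0 \equiv q'_{0,\mathrm{BG}} \, \nu_{\mathrm{bg}}^{\,b},
```

which leaves four parameters: the amplitude B_0, the index b, and the two anisotropy amplitudes.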
Performance of the updated code
Monte-Carlo simulations are a powerful tool for testing fitting methods. In this section, we use this approach to characterize the performance of the updated fitting code. In Sect. 3.1 we compare the scatter of the fitting results of Monte-Carlo simulations with the noise estimated by the updated code. In Sect. 3.2 we measure the bias of the flow estimates for some simple cases.
Error estimates
We use the approach of Gizon & Birch (2004) to generate realizations of the wavefield. The assumption of this approach is that the real and imaginary parts of the wavefield at each point (k_x, k_y, ν) are independent Gaussian random variables. In more detail, the method is as follows: 1) create a Lorentzian model (Eq. (2)) for the limit spectrum using a set of input parameters, 2) pick two standard normally distributed random numbers at each grid point (k_x, k_y, ν) and take the sum of their squares divided by two, which follows a chi-square distribution with two degrees of freedom, and 3) multiply the limit spectrum by these random numbers to obtain one realization of the power spectrum. After each realization of the power spectrum is generated in (k_x, k_y, ν) space, it is then remapped and rebinned in the same manner as the local power spectra computed from the observations. To obtain the parameters for the input model power spectrum in the first step above, we used the average over Carrington rotation 2211 of the power spectra for the disc-center tile from the SDO/HMI pipeline (these average power spectra were obtained using the Data Record Management System (DRMS) specification hmi.rdvavgpspec_fd15[2211][0][0]). We then used the updated code to fit these average power spectra. The parameters resulting from these fits at ℓ = 328, 492, and 984 (k_pix = 14, 21, and 42, respectively) are shown in Tables C.1, C.2, and C.3, respectively, in Appendix C. We later use these parameters to generate Monte-Carlo synthetic data. Figure 1 shows a few slices of the input limit spectrum, namely, Eq. (2) evaluated for the parameters q listed in the tables in Appendix C. The original observed power spectrum is also shown.
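Steps 2 and 3 of the realization procedure can be sketched as follows (a minimal numpy sketch; the grid shape and the flat limit spectrum are illustrative):

```python
import numpy as np

def realize(limit_spectrum, rng):
    """One realization of the power spectrum: multiply the limit spectrum
    by chi-squared(2)/2 distributed random numbers, following the
    approach of Gizon & Birch (2004) described in the text."""
    n1 = rng.standard_normal(limit_spectrum.shape)
    n2 = rng.standard_normal(limit_spectrum.shape)
    return limit_spectrum * (n1**2 + n2**2) / 2.0

rng = np.random.default_rng(3)
limit = np.full((64, 64, 128), 3.0)   # illustrative flat limit spectrum
sample = realize(limit, rng)

# chi-squared(2)/2 has unit mean, so averaging recovers the limit spectrum
assert abs(sample.mean() / limit.mean() - 1.0) < 0.01
```

Each realization would then be remapped and rebinned exactly as the observed spectra before fitting.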
Figure 1 shows that the model power spectrum is reasonable compared to the observables, although the details of the peak shapes and the background slopes are not always reproduced by the model and there is still room for improvement with regard to the model function.
We created 500 realizations of the power spectrum using the above procedure. The fitting code occasionally produces outliers, and for these computations we removed them. Specifically, we removed outliers iteratively: in each iteration we computed the standard deviation of the samples between the tenth and 90th percentiles and discarded points more than five times this standard deviation away from the mean. We repeated this procedure until no further points were removed. After this outlier removal, the number of valid samples is 489 (ℓ = 328, k_pix = 14), 500 (ℓ = 492, k_pix = 21), and 496 (ℓ = 984, k_pix = 42) out of 500 samples. Figure 2 shows the average and scatter of the output peak parameters associated with the n = 0, . . . 5 ridges for ℓ = 492 (k_pix = 21) from fitting 500 Monte-Carlo realizations of the power spectrum. Table 1 shows the average and scatter of the output background parameters from the same set of fitting results. From Fig. 2 and Table 1, we see that most of the averages of the parameters estimated by the fitting are within the expected scatter in the mean (σ_scat/√N_sample, where N_sample is the number of Monte-Carlo realizations, hence N_sample = 500 here). The amplitude is an exception; it is always smaller than the input. This is the result of taking the logarithm of the smoothed power; see Appendix D for details. Also, from Fig. 2 and Table 1, we see that the error estimated by the updated code, σ_code, is consistent with the scatter of the samples, σ_scat. This shows that the error estimates produced by the updated code are reasonable.
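The iterative outlier rejection described above can be sketched as follows (a minimal sketch; the five-sigma threshold and the 10th-90th percentile window are from the text, the toy data are illustrative):

```python
import numpy as np

def remove_outliers(samples):
    """Iteratively discard points more than five standard deviations from
    the mean, where the standard deviation is computed from the samples
    between the 10th and 90th percentiles; repeat until nothing changes."""
    x = np.asarray(samples, dtype=float)
    while True:
        lo, hi = np.percentile(x, [10, 90])
        core_std = x[(x >= lo) & (x <= hi)].std()
        keep = np.abs(x - x.mean()) <= 5.0 * core_std
        if keep.all():
            return x
        x = x[keep]

rng = np.random.default_rng(4)
samples = np.concatenate([rng.standard_normal(500), [50.0, -80.0]])
cleaned = remove_outliers(samples)
assert cleaned.size >= 490 and np.abs(cleaned).max() < 10.0
```

Using the inter-percentile standard deviation makes the threshold itself robust to the outliers being removed.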
We carried out analogous Monte-Carlo simulations for wavenumbers twice as large and two-thirds as large. Although the number of peaks to be fitted is not the same in these cases, as the number of prominent peaks is smaller at larger k, the behavior of the fitting results and their error estimates is similar to that in the case shown in Fig. 2 and Table 1.
Bias of the flow estimates and correlations between the fitting parameters
To measure the bias of the flow estimates and the correlations between the fitting parameters, we construct models with a simple flow: the models are isotropic except for a prescribed flow u_x for the n = 3 ridge.
The parameters for the model are identical to those from Sect. 3.1, except for the parameters related to the azimuthal angle θ; namely, the peak parameters q_3,n, q_4,n, q_5,n, and q_6,n (for all n) and the background parameters q_2,BG and q_3,BG are all set to zero. The flow u_x,n (q_3,n) is non-zero for a single n, n = 3. The limit spectrum is constructed with Eq. (2) and these input parameters, and the realizations of the power spectrum are generated in the same way as in Sect. 3.1.
In this subsection we compare results from the updated code with results from the original code and also from a modified version of the original code. While the original code fits a Lorentzian model to the square root of the power, the modified original code fits a Lorentzian model to the power itself. Although the original and modified original codes assume the five-parameter background model (Eq. (A.3)), here we created the limit spectrum using the four-parameter background model (Eq. (4)), which we use in our updated code. As described in Sect. 2.3.2, under the current conditions our updated four-parameter background model approximates the original five-parameter background model well, and the original and modified original codes are applicable, although we need to keep in mind that the meanings of the background parameters in the results from the different codes differ. Comparable tests using input parameters obtained by the original code give similar correlation-coefficient maps. Figure 3 shows the fitting results for u_x,n=3 at ℓ = 492 (k_pix = 21) in the form of normalized histograms of 500 Monte-Carlo simulations. The input models have q_3,n = u_x,n = 0, 80, . . . 400 m s^-1 for the n = 3 ridge. Before plotting, we removed outliers from the output fit parameters as described in Sect. 3.1. After the outlier removal procedure, the numbers of valid samples for the three methods are 394-466 depending on the value of u_x,n=3 (79-93%, modified original), 436-488 (87-98%, original), and 499-500 (100%, updated). The updated code produces almost no outliers, while there are some outliers in the fitting results from the modified original and original codes. The number of outliers depends on u_x,n, although there is no clear trend with u_x,n; for example, we cannot say that a stronger flow produces more outliers. We did not further investigate the details of the outliers in the fitting results from the original and modified original codes.
Figure 3 shows that the original fitting code underestimates the input flow by about 3%. This trend is consistent with what was reported by Greer et al. (2014). The modified original and updated codes produce less-biased flow estimates. Figure 3 also compares the errors estimated by the codes with the scatter in the Monte Carlo simulation. The errors from the updated code are consistent with the scatter of the Monte-Carlo results, while the original code overestimates the errors. Figure 4 shows the correlation coefficients between the fitting parameters of the 500 Monte Carlo samples. For this computation, outliers were removed as described in Sect. 3.1. The original and modified original codes both produce stronger correlations in some of the output parameters than the updated code. In particular, for the original code, the strongest correlations are between: the amplitude A_n (q′_1,n) and the width Γ_n′ (q′_2,n′) for peaks with |n − n′| ≤ 1; the roll-off frequency ν_bg (q′_1,BG) and the index b (q′_2,BG) of the background; and the index b (q′_2,BG) of the background and A_n (q′_1,n) or Γ_n (q′_2,n) of higher-n peaks. The modified original code and the updated code did not show such strong correlations, except between the width and amplitude of each peak, and between the background index and some weaker peaks (smaller and larger n). In terms of correlations between the parameters, the updated code shows an overall improvement in comparison with the original, but the correlation coefficients between u_x and u_y on the same peak or on neighboring peaks are ∼ 0.1 at most for any u_x in the fitting results of all three codes shown in Fig. 4.
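The correlation maps of Fig. 4 reduce to a sample correlation matrix over the Monte-Carlo fit results (a toy sketch with synthetic "fit results"; the built-in amplitude-width anticorrelation mimics the pattern discussed above and is an assumption of this sketch, not the measured value):

```python
import numpy as np

rng = np.random.default_rng(5)
n_sample = 500  # one row per Monte-Carlo realization

# toy stand-ins for three fit parameters of one ridge
amp = rng.standard_normal(n_sample)
width = -0.6 * amp + 0.8 * rng.standard_normal(n_sample)  # anticorrelated
ux = rng.standard_normal(n_sample)                        # independent

samples = np.column_stack([amp, width, ux])
corr = np.corrcoef(samples, rowvar=False)   # parameter-parameter matrix

assert corr[0, 1] < -0.3       # the built-in correlation is recovered
assert abs(corr[0, 2]) < 0.2   # independent parameters stay near zero
```

With 500 samples, correlation coefficients of truly independent parameters scatter at the ∼0.05 level, which sets the noise floor of such maps.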
Summary of the Monte-Carlo test
The Monte-Carlo tests show that the updated code is able to reasonably recover the parameters, apart from the amplitude, that are used to generate the input power spectra. Other than the amplitudes, we were not able to measure a statistically meaningful bias in the fit results using 500 Monte-Carlo simulations. The underestimation of the amplitude is the result of taking the logarithm of the rebinned power; Appendix D discusses this issue in detail. Since current ring-diagram flow inversions are carried out using the mode shifts, (q_3,n, q_4,n) = (u_x,n, u_y,n), and the other fitting parameters, including the mode amplitudes, are not used, we believe the underestimated amplitudes will not substantially impact further analysis.
Further rebinning in azimuth
We explored the idea of rebinning the data further in θ. If we rebin further, the PDF of the resulting power spectrum is closer to Gaussian. Additional rebinning has the further benefit of reducing the underestimation of the amplitude. Another benefit is that, if we can reduce the number of pixels involved in the fit without a significant decrease in the fitting quality, the calculation cost is reduced. Figure 5 shows the fitting results corresponding to an additional rebinning by a factor of four. Specifically, we rebin the 64-pixel azimuth grid into 16 pixels. In this case, the numbers of valid samples for the three codes are 351-429 (70-86%, modified original), 406-487 (81-97%, original), and 500 (100%, updated). While the updated code had no outliers, the numbers of outliers in the results from the original and modified original codes increased compared to the case without further rebinning (see Sect. 3.2). This also confirms the stability of the updated code.
This figure shows that our updated code is better than the original for this further rebinned case as well, in terms of a more reasonable error estimate and a smaller bias. Moreover, using our updated code, we can rebin down to 16 pixels at this k (k_pix = 21) without a significant increase in the noise, the parameter underestimation, or the number of outliers in comparison with the original 64-pixel case. In the further rebinned case, the correlations between parameters show a trend that is essentially similar to that of Fig. 4. The only exception is in the fitting results obtained by the modified original code; they show less correlation in the further rebinned case, even for u_x,n=3 = 400 m s^-1.
In the original code, there is no scaling factor related to the bin number, and the error estimate from the code needs to be scaled properly. Between the 64-pixel and 16-pixel cases, the scatter does not change significantly, and the same holds for the properly scaled error estimates. We note again that a sufficient number of pixels is needed for a good fit; in this case 8 or 4 bins were too few, because the functional form of the model has terms that vary not only as cos θ and sin θ but also as cos 2θ and sin 2θ.
Future work
The modifications described in this work are mainly corrections of problems in the original code. The exception is the change of algorithm from a maximum-likelihood method based on the chi-square distribution to one based on the normal distribution, namely the least-squares method, and fitting not the square root of the power but the logarithm of the power. There are several potential improvements to the analysis. While implementing such further alterations is beyond the scope of this paper, we briefly discuss some potential future improvements.
One of the open issues is the remapping and rebinning. Currently the power is remapped from Cartesian to polar coordinates and then smoothed in the azimuthal direction. However, it is possible to do the analysis in the original Cartesian system, as is done in the HMI pipeline fitsc module (Bogart et al. 2011a), which fits in slices at constant temporal frequency. The fitting approach shown here would only need to be slightly modified to be carried out in a region with √(k_x^2 + k_y^2) near k. We expect that the main issue would be extending the model to allow for small variations in k. As most parameters are presumably smooth in k, we speculate that fitting a small range in k rather than a single k would help the stability of the fits.
In the current code, we approximate the PDF of the remapped and smoothed power spectrum by a normal distribution. How best to rebin, however, including the question discussed in Sect. 4.1, is still open. In the method shown here, we do not rebin the model function but use the model function calculated on the same grid as the rebinned data. This is a good approximation only in the limit where the rebinning does not significantly change the shape of the limit spectrum, and we expect it to cause a bias in the case of extreme rebinning. Figures 3 and 5 show that the bias in the fitting results of the updated code is less than about 5 m s^-1 in the range |u_x| ≤ 400 m s^-1, even in the further rebinned case (Fig. 5).
The choice of the model function (currently Eq. (2)) is also an open issue. As shown in Fig. 1, the current model function does not reproduce the detailed structure of the observed power spectrum. For example, it does not take into account the asymmetry of the ridge shape in frequency. Currently, we use only the power spectrum of the tile at disc center, which is the simplest case; to investigate deep convection, we cannot avoid using tiles at various locations on the disc. In that case, in order to construct a model function, we need to take into account further effects, such as the center-to-limb variations (Zhao et al. 2012), the line-of-sight effect on the shape of the power, and the effect of the Postel projection. The effects of differential rotation on the eigenfunctions also remain to be accounted for in the future.
Conclusions
We identified several problems in the multi-ridge fitting code ATLAS (Greer et al. 2014) and we updated the code in response. We confirmed that flow-estimate biases and error overestimates exist in the fitting results by the original code. The biases that we found are insufficient to resolve the discrepancy presented in Hanasoge et al. (2016).
The updated code is based on a consistent model and an appropriate likelihood function. Monte Carlo tests show that the fitting results and error estimates from the updated code are reasonable, confirming the improvement of the fitting.
The work shown here is limited to the fitting part of the ring-diagram analysis codes. The next step in the ring-diagram analysis is the flow inversion using the mode-shift parameters (q_3,n and q_4,n in Eq. (2)). Unlike the HMI ring-diagram pipeline, in which the inversion is done at each tile, Greer et al. (2015) used a 3-D inversion using multiple tiles. This unique step should be the focus of examination in future works.

Fig. 1. Slices through the input limit spectrum (black) and the observational spectrum averaged over one Carrington rotation (gray), which was used to obtain the input parameters, at θ = 0 and a few different values of k. Vertical dashed lines indicate the fitting ranges; the models are plotted only within these ranges. The lower limits for all k are fixed at 0.4 mHz, while the upper limits depend on the initial guess for each k and are the highest peak frequency plus the width of that peak.

Fig. 2. In the middle part of each panel, the deviation of the mean fitting results from the input, δq_i,n = q_i,n^output − q_i,n^input, is shown by circles, and the expected scatter of the mean, computed from the square root of the variance σ_scat of the 500 Monte Carlo samples divided by the square root of the sample number N_sample (here 500), is shown as error bars. The dashed horizontal lines are at δq = 0. In the lower part of each panel, the scatter of the 500 samples, σ_scat, and the scaled error estimated by the updated code, σ_code, are depicted by the thick gray lines and the thin lines with short horizontal bars at the edges. Errors estimated by the updated code are scaled by σ(α = 1.5, k_pix, n_pix) = 0.569, as described in Appendix B.2. The error bars in the middle panel of δq_1,n are tiny at this scale; the underestimation is relatively large. However, the errors of δq_1,n are σ_scat/√N_sample and are therefore available from σ_scat in the lower panel with N_sample = 500; they are ∼ 0.02 at most.

Fig. 3. The short vertical marks on the horizontal lines indicate the errors estimated by the codes (thin lines with short horizontal bars) and the standard deviations of the 500 fitting results (thick lines), in the same color as the histograms, centered at the means (horizontal lines) for the three codes. We note that fitting results with overly large deviations are omitted; see text for details.

Table 1. Background parameters in the input model for the Monte Carlo simulation and the fitting results at ℓ = 492 (k_pix = 21). 500 realizations were used. Figure 2 shows the corresponding fitting results for the peak parameters. The standard deviations over the 500 realizations and the scaled errors provided by the updated code are consistent.

Fig. 4. Correlation coefficients between the fitting parameters of the 500 Monte Carlo samples for the isotropic model and for the isotropic model except with u_x = 400 m s^-1 for the n = 3 peak (on the next page) at ℓ = 492 (k_pix = 21). Each box indicates the parameters for the n-th peak (n = 0, 1, . . . 5), q_i,n (i = 0, . . . 6, from left to right and from bottom to top in each box), and the background parameters (BG). As defined in Sect. 2, q_0,n = ν_n is the frequency of the n-th peak, q_1,n = A_n is the amplitude, q_2,n = Γ_n is the width, (q_3,n, q_4,n) = (u_x,n, u_y,n) is the horizontal velocity, and q_5,n = f_c,n and q_6,n = f_s,n are parameters to handle anisotropy. The background parameters are q_0,BG = B_0, the amplitude; q_1,BG = b, the power-law index; and q_2,BG = f_c,bg and q_3,BG = f_s,bg, the parameters to handle the anisotropy of the background. The color scale is shown by the color bar on the right side. We note that the last columns and rows (i = 4) of BG in the top panels are empty because the number of background parameters in the updated code's model is four, instead of five.
In the original ATLAS code (Greer et al. 2014, 2015), the power at each k is modeled as the sum of Lorentzians with seven parameters (q′_{i,n}, i = 0, ..., 6) for each ridge n = 0, ..., n_r − 1, where n_r is the number of ridges, plus a background model with five parameters (q′_{i,BG}, i = 0, ..., 4) at each k. The total number of fit parameters at each k is therefore 7n_r + 5.
Appendix A.3: Problems in the original ATLAS code
First, the Lorentzian model for the power in the original code was fit to the square root of the observed power. This is inconsistent with what is stated in Greer et al. (2014, 2015), and it also introduces an inconsistency between the model and the observable. We therefore made them consistent in the updated code (see Sect. 2.2).
Second, the cost function minimized in the original code was not a good approximation. The cost function to be minimized in the maximum likelihood method based on the chi-square distribution with two degrees of freedom as the PDF is S(q) = Σ_i [ ln(P_i(q)/O_i) + O_i/P_i(q) ], where O_i is the observed spectrum and P_i is the model, as given in Sect. A.2. The sum is taken over all grid points (θ_i, ν_i) in the fitting range. Instead, the original code minimizes the sum of the squares of these per-point terms (Eq. (A.5)), using the mpfit.c code. This is conceptually inconsistent with the description in Greer (2015), although the likelihood function itself is not explicitly stated there. While it can be shown that minimizing the exact cost function (Eq. (B.3)) and the wrong one (Eq. (A.5)) give identical results in the limit of linear perturbations, there is no reasonable computational or physical reason to take the extra square in the calculations. Moreover, it is not useful for the error estimates. Therefore, we decided to correct this issue; Sect. 2.2 describes our changes. Third, in the original ATLAS code, error estimates are obtained by an independent calculation of the Fisher information using the final fitting results and using the chi-square distribution with two degrees of freedom as the likelihood function. It is not consistent to compute the error estimates with a likelihood function different from the one used in the fitting itself. In Sect. 2.2, we also describe a consistent approach to the computation of error estimates in our updated code.
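As an illustration of why the un-squared cost function is the natural one, the following toy numpy sketch (not part of the ATLAS code) minimizes the chi-square(2 dof) negative log-likelihood for a flat limit spectrum; the maximum likelihood amplitude must equal the sample mean of the observed power, and a grid search recovers exactly that:

```python
import numpy as np

def nll_chi2(observed, model):
    """Negative log-likelihood (up to an additive constant) for power
    values that are chi-square distributed with two degrees of freedom
    around the limit-spectrum model P_i: sum of ln P_i + O_i/P_i."""
    return np.sum(np.log(model) + observed / model)

# Toy data: exponentially distributed power (chi-square with 2 dof, up
# to scale) around a flat limit spectrum of amplitude 3.
rng = np.random.default_rng(0)
observed = rng.exponential(scale=3.0, size=10_000)

# Grid search over flat-model amplitudes; the NLL is minimized when the
# model amplitude equals the sample mean of the observed power.
amps = np.linspace(1.0, 6.0, 501)
costs = [nll_chi2(observed, np.full_like(observed, a)) for a in amps]
best = amps[int(np.argmin(costs))]
```

Squaring the per-point terms before summing, as the original code did, changes the weighting of the data points and invalidates the error estimates derived from the likelihood, which is the inconsistency described above.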
Fourth, several parameters are unstable. Therefore, we have changed the parameterizations as we discussed in Sects. 2.3.1 and 2.3.2.
Appendix B: Probability distribution function (PDF) and maximum likelihood method based on the PDF
Appendix B.1: PDF of the raw power spectrum
Woodard (1984) demonstrated that the PDF of a single observed power spectrum divided by the expectation value of the spectrum is the chi-square distribution with two degrees of freedom. On this basis, Duvall & Harvey (1986) and Anderson et al. (1990) introduced the probability density at a given grid point in the spectrum, p(O_i; q) = (1/P_i(q)) exp(−O_i/P_i(q)), where O_i is the observed spectrum, P_i is the model for the limit spectrum, and q denotes the model parameters. The joint probability density for the experimental outcome at independent horizontal wavevectors and frequencies is given by L = Π_i p(O_i; q). This is the likelihood function, and the model parameters that maximize L are the targets of the maximum likelihood method. In practice, −ln L is minimized so that standard minimization procedures can be used. To make the computation simpler, Σ_i ln O_i is subtracted from −ln L; the minimization of −ln L − Σ_i ln O_i in terms of q is identical to that of −ln L, because O_i is not a function of the model parameters q. In conclusion, S(q) = Σ_i [ ln(P_i(q)/O_i) + O_i/P_i(q) ] is minimized.
Appendix B.2: PDF of the logarithm of averaged power
The PDF of a well-averaged power spectrum is also a chi-square distribution, but with many more than the two degrees of freedom of the original, and given the central limit theorem it can be approximated by a normal distribution. To carry out least-squares fitting based on the normal distribution function and to estimate the errors of the fitting results, we need the variance of that normal distribution. We therefore take advantage of the fact that the logarithm of the averaged spectrum obeys a normal distribution function with a constant variance. In Appendix B.3, we show that if we have a spectrum obtained by averaging over N spectra, each distributed with the same expectation value M, the logarithm of the averaged spectrum, y, obeys the normal distribution function N(ln M, σ_N²): p(y) = (1/(√(2π) σ_N)) exp(−(y − ln M)²/(2σ_N²)), (B.4) where σ_N = 1/√N; therefore, σ_N is constant over y. We note that the function f(y) is linearized around the mean to derive Eq. (B.4); see Appendix B.3 for details.
In the present case, we fit one k at a time. To do this, the spectra are, at each frequency, linearly interpolated in k x and k y to a circle with radius k pix , where k pix is k in units of bins, and smoothed to n pix azimuths. On this basis, the question is if we can still use a least-squares fit with a diagonal covariance and a single σ N and, if so, which value of σ N (or equivalently N) in Eq. (B.4) should be used. In particular, it needs to be considered that more than 2πk pix are used for the interpolation and that the averaged values will be correlated.
In the limit of averaging over the entire circle, namely n pix = 1, and in the case with k pix ≫ 1, it might be reasonable to assume that N = 2πk pix α, with α accounting for the fact that the averaging is essentially over an annulus around k pix . Indeed, a Monte-Carlo test shows that α ≈ 1.5 gives a very good estimate of the error on the average, which will be shown in Fig. B.1 later in this subsection.
For n_pix ≫ 1, the variances, as well as the off-diagonal elements of the covariance matrix of the interpolated data points, will in general depend on azimuth. Assuming that the fitted function may be linearized in the fitted parameters, a linear fit implies that the fitted parameters are given by a linear combination of the observed values. Assuming that the functions are smooth (as in the present case, where they are low-order harmonic functions), the coefficients in the linear combination are also smooth. From this it follows that if the variations in the properties of the covariance matrix average out on the scale of the variations in the fitted functions, then the same scaling factor may be used, which a Monte-Carlo test again confirms.
Fig. B.1. The standard deviation of the logarithm of chi-square distributed white noise rebinned on an annulus with a radius of k_pix pixels, σ_scat, plotted against the square root of the pixel number on the annulus after rebinning, √n_pix, at three values of k_pix shown with different symbols in different colors. In the case of n_pix ≪ 2πk_pix, σ_scat = σ_α (α = 1.5), shown with the dotted lines, is a valid approximation, while in the limit of n_pix ≳ 2πk_pix, σ_scat deviates from σ_α and approaches a constant of 2/3.
To validate the arguments above, a simple Monte-Carlo test was carried out to measure the scatter (standard deviation, σ_scat) of the logarithm of averaged white noise. White-noise fields with the chi-square distribution with two degrees of freedom in a three-dimensional Cartesian Fourier space (k_x, k_y, ν) = (384, 384, 1152) [pix] are remapped onto several annuli with specific radii k_pix in the same way as the data in the updated analysis code. At first there are 256 pixels on each annulus after remapping; we then rebin them into n_pix = 4, 8, 16, 32, 64, and 128 pixels. The standard deviation of the logarithm of the rebinned white noise, σ_scat, is computed and plotted against the square root of the number of data points after rebinning, for annuli with k_pix = 14, 21, and 42, in Fig. B.1. As mentioned above, in the case of n_pix ≪ 2πk_pix, σ_scat ≃ σ_α (α = 1.5), where σ_α(α) ≡ √(n_pix/(2πk_pix α)). In the case of n_pix ≳ 2πk_pix, the deviation of σ_scat from σ_α is not negligible, and σ_scat approaches a constant 2/3; this constant can be derived from the bi-linear interpolation of chi-square distributed random variables in two dimensions. We also note that we have not included the apodization effect in the data used to create the noise field in this test calculation. Without apodization, σ_scat might be overestimated; in any case, this does not affect the parameter estimates but only the error estimates.
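Under the assumption n_pix ≪ 2πk_pix, the scaling σ_α(α) = √(n_pix/(2πk_pix α)) reproduces the factor 0.569 quoted in the figure caption above for ℓ = 492 (k_pix = 21) with the default n_pix = 64 azimuthal points; a minimal arithmetic check:

```python
import numpy as np

def sigma_alpha(k_pix, n_pix, alpha=1.5):
    """Approximate standard deviation of the log of power rebinned to
    n_pix azimuthal points on an annulus of radius k_pix pixels,
    assuming an effective sample count N = 2*pi*k_pix*alpha / n_pix
    per rebinned point (valid for n_pix << 2*pi*k_pix)."""
    return np.sqrt(n_pix / (2 * np.pi * k_pix * alpha))

# Scaling used for the l = 492 (k_pix = 21) errors with 64 points:
print(round(sigma_alpha(21, 64), 3))  # → 0.569
```

Stronger rebinning (smaller n_pix) gives a smaller σ_α, i.e. a better-averaged, less noisy spectrum, consistent with Fig. B.1.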
In our updated code, σ_N is set to 1 (see the cost function, Eq. (1)), as mentioned in Sect. 2.2, and after the fitting is done the error provided by the code is scaled by σ_α. Although calling the minimization code with an incorrect error estimate can lead to worse convergence, the difference in this case should be negligible. Implementing σ_α in the code is a task for the future.
Appendix D: Logarithm of average and average of logarithm
Figure 2 shows that the amplitudes (q_{1,n} for all n and q_{0,b} in Eq. (2)) measured by the updated code are underestimated. This is the result of taking the logarithm of the rebinned power, and the effect can be reproduced in the following simple test.
To see why this is the case, we define x_i (i = 0, 1, ..., n − 1) as a series of random variables whose distribution function is the chi-square distribution with two degrees of freedom. The expectation value and the variance of x are ⟨x⟩ = 2 and σ_x² = 2², respectively, and y_j (j = 0, 1, ..., n/a − 1) is x_i smoothed over each a-pixel range: y_j = (1/a) Σ_{i=ja}^{(j+1)a−1} x_i. In this case ⟨y⟩ = ⟨x⟩ = 2 and ln⟨y⟩ = ln 2, but ⟨ln y⟩ ≤ ln⟨y⟩. The more the array is rebinned (the larger a becomes), the smaller the scatter of y becomes, and ⟨ln y⟩ approaches ln⟨y⟩. This trend is alleviated if the spectra are more strongly averaged.
Figure caption: Left panels show (P − ⟨P⟩)/⟨P⟩ and right panels show ln P − ln⟨P⟩, where ⟨·⟩ indicates the expectation value (model). The input model of the power is isotropized (q_{i,n} = 0 for 3 ≤ i ≤ 6 for all n, and q_{i,b} = 0 for i = 2, 3) but is otherwise the same as the model with the parameters given in Table C.2. For this plot we use 50 realizations for the top two rows and 500 realizations for the rebinned data (lower three rows). The black dashed lines on the lower four panels are Gaussian functions centered at zero whose widths are the standard deviations of the samples (shown on the panels). This validates our approximation of the distribution function of the logarithm of the power by a Gaussian function. The thick red dashed curves on the left panels are chi-square distribution functions of various degrees of freedom; the degrees of freedom (dof) for each distribution are determined from the standard deviation of each set of normalized power. See the text for details.
Figure D.1 shows a simple test calculation to illustrate how the average of the logarithm is reduced from the logarithm of the average; the ratio of ⟨ln y⟩ to ln⟨y⟩ is plotted against the pixel-number ratio of the rebinned data to the original. In this simple test calculation, we make 500 realizations of sets of x_i (i = 0, 1, ..., n − 1, where n = 256), calculate y with a = 1, 2, 4, 8, ..., 256, and compute how much smaller the expectation value of the logarithm of y is than the logarithm of the expectation value. The linear regression of the data points is also shown in the plot. For comparison, the amplitude fitting results shown in Fig. D.2 are replotted with squares. This explains how the logarithm of the amplitude is reduced. In the case of no rebinning (X = n_rebin/n_original = 1) or only 2-pixel rebinning (X = 1/2), the fitting results deviate significantly from this simple test calculation, but this comes as no surprise because in these cases the assumption of a well-averaged power as data is not appropriate. Figure D.2 shows the dependence of the amplitude measurements on the amount of rebinning. The input model is identical to the one used in the calculation in Sect. 3.1. The default rebinning is from 256 pixels to 64 pixels, and in this case the output amplitude is about 88% of the input amplitude; with further rebinning to 16 pixels, it increases up to 96%.
Table caption: Input parameter set at ℓ = 328 (k_pix = 14) obtained by fitting the average power spectrum over one Carrington rotation to the model with the updated code. These parameters are used to construct the limit spectrum for the Monte-Carlo simulations in Sect. 3. See text for more details.
Fig. D.2. Dependence of the peak amplitude parameters at ℓ = 492 (k_pix = 21) measured by the updated code on the rebinning strength. The left panel shows the input (black dotted lines with squares) and measurements with three different rebinnings, with error bars (scatter of 500 realizations): no rebinning (256 pixels on the azimuthal grid, blue), original rebinning (64 pixels, green), and extra rebinning (16 pixels, red). The right panel shows the ratio of the amplitude to the input; the numbers are the grid number and the average ratio of the six peaks. The input parameters are identical to those in Fig. 2.
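The reduction of ⟨ln y⟩ relative to ln⟨y⟩ described in this appendix can be reproduced with a small Monte-Carlo sketch (a toy calculation, not the analysis code; the realization count is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
n, realizations = 256, 2000
# chi-square with two degrees of freedom: expectation <x> = 2
x = rng.chisquare(df=2, size=(realizations, n))

# ln<y> - <ln y> for increasingly strong rebinning (larger a)
gap = {}
for a in (1, 4, 16, 64):
    # rebin each realization into n/a points by averaging a-pixel blocks
    y = x.reshape(realizations, n // a, a).mean(axis=2)
    gap[a] = float(np.log(y.mean()) - np.log(y).mean())
```

The gap is always positive (Jensen's inequality: the logarithm is concave) and shrinks as the rebinned values become better averaged, which is exactly the trend plotted in Fig. D.1 and the origin of the amplitude underestimation in Fig. D.2.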
Sentiment analysis of pets using deep learning technologies in artificial intelligence of things system
This research paper proposes sentiment analysis of pets using deep learning technologies in an artificial intelligence of things (AIoT) system. Mask R-CNN is used to detect image objects and generate contour mask maps, pose analysis algorithms are used to obtain object posture information, and object sound signals are converted into spectrograms that are treated as image features, so that deep learning image recognition can extract the object's emotion information. The fusion of object posture and emotional characteristics is used as the basis for pet emotion identification and analysis, and detected specific pet behaviour states are actively reported to the owner for processing. Prior work on the implementation of a smart pet surveillance system used the pet's tail and mouth as image features, combined with sound features, to analyse the pet's emotions; traditional speech recognition instead uses mel-frequency cepstral coefficients (MFCC) for feature extraction, coupled with a Gaussian mixture model-hidden Markov model (GMM-HMM) for voice recognition. This research paper proposes a new method of pet sentiment analysis, and experimental results show that, compared with these previous approaches, our method improves the accuracy rate by 70%.
Introduction
Pet sentiment analysis can be used to determine whether a pet is suffering from anxiety or other mental illnesses. As the number of pets increases, the demand for pet sentiment analysis will increase. Pet sentiment analysis can also be used to obtain more subtle information that pets hide along with the emotion; for example, when a pet is in a vigilant mood, the hidden subtle message is its response to strangers or strange objects. Traditional sentiment analysis uses voice recognition to analyse emotions, extracting features of the input audio with mel-frequency cepstral coefficients (MFCC). The main MFCC feature-extraction process applies the following eight steps to the input source. The first step is pre-emphasis, used to highlight the high-frequency formants. The second step is frame blocking, which combines x sampling points into a sound frame, where x is usually 256 or 512. The third step is the Hamming window, which multiplies each sound frame by a Hamming window to increase the continuity between the left and right ends of the frame. The fourth step is the fast Fourier transform, the fifth step is triangular band-pass filters, and the sixth step is the discrete cosine transform; the seventh step is log energy, and the eighth step is the delta cepstrum. After the features are obtained through these eight steps, a Gaussian mixture model-hidden Markov model (GMM-HMM) is finally used for speech recognition analysis. Because the MFCC is modelled on the human ear, which can accurately distinguish human speech, it simulates the operation of the human ear in an artificial way; the main sensitive frequency range of the human ear is 200-5000 Hz, so mel-cepstral coefficients are not well suited to processing sounds other than human speech. In traditional sentiment analysis, the target objects of the analysis are human beings.
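The eight MFCC steps listed above can be sketched in compact form as follows. This is a minimal illustrative numpy implementation with assumed parameter values (frame length 512, 26 mel filters, 13 cepstral coefficients), not the code used in the cited systems, and it applies the log energy before the DCT, as is conventional:

```python
import numpy as np

def mfcc(audio, sr=16000, frame_len=512, hop=256, n_mels=26, n_ceps=13):
    # 1. Pre-emphasis highlights the high-frequency formants
    sig = np.append(audio[0], audio[1:] - 0.97 * audio[:-1])
    # 2. Frame blocking: group frame_len sampling points per sound frame
    n_frames = 1 + (len(sig) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = sig[idx]
    # 3. Hamming window increases continuity at the frame edges
    frames = frames * np.hamming(frame_len)
    # 4. Fast Fourier transform -> power spectrum
    power = np.abs(np.fft.rfft(frames, frame_len)) ** 2 / frame_len
    # 5. Triangular band-pass filters spaced on the mel scale
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    imel = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = imel(np.linspace(0, mel(sr / 2), n_mels + 2))
    bins = np.floor((frame_len + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_mels, frame_len // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)
        fbank[m - 1, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)
    # 7. Log energy of each mel filter output
    feat = np.log(power @ fbank.T + 1e-10)
    # 6. Discrete cosine transform -> cepstral coefficients
    k = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * k + 1) / (2 * n_mels))
    ceps = feat @ dct.T
    # 8. Delta cepstrum: frame-to-frame differences as dynamic features
    delta = np.vstack([np.zeros(n_ceps), np.diff(ceps, axis=0)])
    return np.hstack([ceps, delta])

# Usage on a synthetic one-second tone standing in for real audio
sr = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
features = mfcc(tone, sr)  # shape: (n_frames, 2 * n_ceps)
```

In the traditional pipeline these feature vectors would then be fed to a GMM-HMM recogniser; the limitation discussed above is that the mel scale is tuned to human hearing rather than to animal vocalisations.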
The characteristic media used in sentiment analysis can be divided into two main categories: the first involves image characteristics (Bartlett et al. 2005; Cohn 2007; Khattak et al. 2021; Sebe et al. 2007; Ittichaichareon et al. 2012; Hasan et al. 2004; Gales and Young 2007; Xie et al. 2019; Rabiner 1989; Schuller et al. 2003), while the second involves sound characteristics (Muda et al. 2010; Chen and Bilmes 2007; Benba et al. 2016; Dave 2013; Brigham 1988; Ahmed et al. 1974; Deng et al. 2020, 2021; Kalarani and Brunda 2019; Lee and Narayanan 2005). In the first category, most image features are based on the human face as the target area of emotion analysis. However, it is impractical to rely solely on either image or sound characteristics for emotion analysis.
Subsequent methods of sentiment analysis were developed based on a combination of the characteristics of images and sounds (Zeng et al. 2006). In this research work, pets are the object of sentiment analysis. This is a novel method in which we use sentiment analysis to obtain subtle information about the hidden emotions of pets. For example, when a pet is wary, there is hidden subtle information based on its reaction to strangers or strange objects. The method used in this research paper is based on the framework of human emotion analysis, which is adapted for use with pets. We analyse the emotions of pets by combining image and sound features. Since pets do not have facial muscles that have developed in the same way as humans, the posture of the pet will be used as an indicator for sentiment analysis based on image features. In terms of sound features, a sentiment analysis is carried out using a spectrogram in the same way as in image analysis, and this is used as an analysis index. Overall, the aim of our method is to simulate the behaviour of humans, who judge the emotions of a pet based on visual cues. In prior related work, a Smart Pet Surveillance System Implementation (SPSSI) framework has been proposed for sentiment analysis in pets (Tsai et al. 2020). This framework combines image and sound characteristics to analyse the emotions of pets. In terms of image features, the pet's mouth and tail are used as analysis indicators. However, this previously developed approach is not able to accurately analyse emotions when the image does not contain clear features relating to the mouth and tail. Hence, in this research paper, we use the pet's pose as an indicator for sentiment analysis. The posture of the animal can be obtained at the same time as an image is detected. The approach proposed in this paper can obtain more accurate results for pet sentiment analysis than previous related schemes that require a clear image of the pet's mouth and tail.
This research paper proposes sentiment analysis of pets using deep learning technologies in an artificial intelligence of things (AIoT) system: Mask R-CNN is used to detect and recognise object labels and generate corresponding contour masks, from which posture features are obtained, and object sound signals are converted into spectrograms for recognition and analysis to obtain emotion features, in order to realise pet emotion analysis through a non-contact smart AIoT system. Section 2 explains the pet sentiment analysis system architecture, system process and algorithms of the smart AIoT system. Section 3 explains the experimental environment settings and the performance analysis of pet sentiment analysis. Section 4 presents the conclusions and recommends future work.
2 Sentiment analysis of pets in artificial intelligence of things system
System overview
The overview of the pet sentiment analysis system of the smart AIoT system is shown in Fig. 1. A smart web camera captures the pet's video and audio information; pet posture analysis is performed on the continuous images, and pet sentiment analysis on the sound. Pet emotion analysis and recognition is then performed based on the posture and emotion information obtained by the above-mentioned deep learning image recognition. When a specific emotional state of the pet is detected, the owner is notified in real time through communication software for processing.
System architecture
The structure of the pet sentiment analysis system of the smart IoT system is shown in Fig. 2 and consists of three parts: the hardware layer, the software layer and the application layer. The hardware layer mainly uses network cameras for video and audio capture and computing core platforms for data analysis and calculation. The software layer mainly uses the Tensorflow framework as the machine learning development environment, OpenCV-Python for image display and storage, PyEmail for email processing, PyAudio for sound file processing, and MoviePy for sound file extraction and storage. The application layer provides deep learning emotion recognition, deep learning gesture recognition and pet-specific state notification functions. The pet emotion analysis process of the smart IoT system is shown in Fig. 3 and includes three parts: the user side, the hardware side, and the software side. On the user side, the pet is the target whose emotional state is analysed, and the owner's smart handheld device is the carrier that receives notifications of the pet state analysis. The hardware side consists of a smart webcam and a computing core platform: the smart webcam captures pet video and audio files, and the pet video and audio information is input to the computing core platform for analysis, identification and notification. The software side comprises the environment and package tools of the computing core platform. Pet audio files are extracted and stored through MoviePy and preprocessed for sound analysis through PyAudio; the Tensorflow deep learning image recognition framework is used for emotion analysis and recognition, and Mask R-CNN is then used for mask generation together with OpenCV-Python for pose analysis and recognition. These emotion and posture analysis results are used to determine the specific behaviour state of the pet.
The specific behaviour state result is stored in the database and the owner is notified by the email package PyEmail for subsequent processing. The pet sentiment analysis network architecture of the smart Internet of Things system is shown in Fig. 4, including data preprocessing, the Faster R-CNN neural network (Ren et al. 2017), Mask R-CNN neural network, and specific behaviour state analysis. The data preprocessing part performs image framing for the images recorded by the webcam, as well as having the function of extracting sound files and generating spectrograms. After dividing the image into frames, the Mask R-CNN neural network is performed to generate the contour mask map and the pose analysis algorithm is used to obtain the pose analysis result. The spectrogram uses Faster R-CNN neural network image recognition to obtain the sentiment analysis results. According to the above-mentioned posture and emotion analysis results, the pet's specific behaviour state is determined.
System functions
The main functional flow of the system is shown in Fig. 5. A web camera captures the video and audio of the pet, and the core computing platform processes the framed images and sound files to analyse the pet's posture and emotional information. When the pet's emotional analysis indicates a specific state such as alert, the owner is notified for processing. The system randomly samples the framed images of the pet for Mask R-CNN object detection, obtains the contour mask map, and then uses the posture analysis algorithm to obtain the pet's posture information. The system converts sound files into spectrograms and uses Faster R-CNN for emotion recognition to obtain the pet's emotion information. After the system has successfully obtained the pet's posture and emotion information, it makes a specific-state association judgment and notifies the owner for subsequent processing.
Mask R-CNN contour mask
The system is based on Mask R-CNN object detection to identify pets and generate contour masks. The sample set of contour masks in Fig. 6 includes posture categories such as pets standing, sitting, and lying. The system sets the label category as background and pet. Two types are used for deep learning recognition model training to generate weight files for contour mask recognition. Figure 7 shows spectrograms for the pet's barking in different moods. The left picture is the angry spectrogram, the middle picture is the sad spectrogram, and the right picture is the normal barking spectrogram. The system is based on the Faster R-CNN network architecture to recognise the spectrogram of the pet's emotional bark, and uses the deep learning recognition model to train and generate the weight file for emotion recognition.
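A spectrogram of the kind used here as a CNN input feature can be generated with a short-time Fourier transform. The sketch below is numpy-only and uses a synthetic decaying tone as a stand-in for a recorded bark, not real data:

```python
import numpy as np

# Hypothetical one-second clip standing in for a recorded bark
sr = 22050
t = np.arange(sr) / sr
clip = np.sin(2 * np.pi * 600 * t) * np.exp(-3 * t)  # decaying 600 Hz tone

# Short-time Fourier transform: time on the horizontal axis, frequency
# on the vertical axis, energy as the value at each (time, freq) point
nperseg, hop = 512, 256
n_frames = 1 + (len(clip) - nperseg) // hop
idx = np.arange(nperseg)[None, :] + hop * np.arange(n_frames)[:, None]
frames = clip[idx] * np.hanning(nperseg)
spec = np.abs(np.fft.rfft(frames, axis=1)).T ** 2      # (freq, time)
spec_db = 10 * np.log10(spec + 1e-12)                  # log-power image
freqs = np.fft.rfftfreq(nperseg, d=1 / sr)
```

The resulting log-power array can be saved as an image and fed to the Faster R-CNN recogniser exactly like an ordinary photograph, which is what lets the system reuse image-recognition machinery for sound.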
Posture analysis algorithm
The system posture analysis algorithm is shown in Fig. 8. It performs image framing on the recorded video and randomly selects b framed images as posture analysis samples, where b is less than or equal to the number of frames. When Mask R-CNN judges that a selected framed image contains no pet, the pose is judged to be empty; otherwise, a pet contour mask map of the framed image is generated. The position of the pet in the image is found using the contour mask. We use (x_min, y_min) to represent the coordinates of the upper left corner of the object box and (x_max, y_max) to represent the coordinates of the lower right corner. According to formula (1), we calculate the position of the row with the largest pet-rich area (white values) in the object frame and set it to max_x. According to formula (2), we calculate the position of the column with the largest pet area (white values) in the object box and set it to max_y. We use IMG to represent the contour mask image array, in which the white value is 255 and the black value is 0.
According to the above values x_min, y_min, x_max, y_max, max_x, max_y ∈ ℤ⁺, the head direction of the pet is judged from the distance between the object's head and the left and right borders of the object box. If the condition of formula (3) is met, the head of the pet in the framed image faces to the left; otherwise, it faces to the right. The posture is judged from the distance between the head of the object and the upper and lower boundaries of the object box. If a framed image with the pet's head facing to the left meets the condition of formula (4), the posture is judged to be standing. If the standing condition is not met, the ratio of the pet area to the background area of the framed image is evaluated: if the condition of formula (5) is met, the posture is judged to be prone; otherwise, it is sitting. A framed image with the pet's head facing to the right still uses formula (4) to determine the standing condition; if the standing condition is not met, formula (6) is used to determine the prone condition, and if neither formula (4) nor formula (6) is met, the posture is sitting. In this research paper, the variable a represents the threshold for judging whether the animal is standing, whereas the variable j represents the threshold for judging whether the posture is prone. a and j ∈ ℝ⁺ are adjusted empirically; we set a to 1.2 and j to 0.38.
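Because formulas (1)-(6) themselves are not reproduced in the text, the following sketch implements plausible reconstructions of the described rules (densest mask column/row, head direction from border distances, and the thresholds a = 1.2 and j = 0.38); the exact inequalities are assumptions for illustration, not the authors' code:

```python
import numpy as np

A, J = 1.2, 0.38  # the thresholds a and j chosen empirically in the text

def analyse_posture(mask, box):
    """mask: Mask R-CNN contour mask (255 = pet, 0 = background).
    box: (x_min, y_min, x_max, y_max). Returns (posture, head side)."""
    x_min, y_min, x_max, y_max = box
    crop = mask[y_min:y_max, x_min:x_max]
    w, h = x_max - x_min, y_max - y_min
    # Formulas (1)/(2): positions of the densest (whitest) column and row
    max_x = int(np.argmax(crop.sum(axis=0)))
    max_y = int(np.argmax(crop.sum(axis=1)))  # computed but unused here
    # Formula (3): head direction from the distance of the densest
    # column to the left/right box borders (assumed convention)
    head = "left" if max_x < w - max_x else "right"
    # Formula (4): standing if the box is tall relative to its width
    # (assumed form of the boundary-distance condition)
    if h > A * w:
        return "standing", head
    # Formulas (5)/(6): prone if the pet fills little of its box,
    # otherwise sitting (assumed form of the area-ratio condition)
    fill = np.count_nonzero(crop) / max(w * h, 1)
    return ("prone", head) if fill < J else ("sitting", head)
```

A tall, narrow mask is classified as standing, a box that the pet fills densely as sitting, and a thin horizontal strip inside a large box as prone, matching the three posture categories in Fig. 6.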
Sentiment analysis algorithm
The system sentiment analysis algorithm is shown in Fig. 9. It extracts sound information from video files for sentiment analysis. Using the spectrogram as a sentiment analysis feature, the horizontal coordinate of the spectrogram is time, the vertical coordinate is frequency, and the coordinate point value is the speech data energy, as shown in Fig. 7. The system defines sentiment analysis categories as angry, sad, and normal, and is based on the Faster R-CNN network architecture to train and recognise the spectrogram model. Faster R-CNN arranges the identified possibility results from high to low reliability to form a one-dimensional array. From the identified one-dimensional array results, the top five emotion results are obtained as the voting results of the emotion analysis, and finally the emotion with the highest number of votes is used as the final emotion analysis result of the voice.
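The TOP-k voting described above can be sketched as follows; the (label, confidence) pairs are hypothetical recogniser outputs, not real detections:

```python
from collections import Counter

def vote_emotion(detections, top_k=5):
    """Majority vote over the top_k most confident detections; ties are
    resolved in favour of the higher-confidence label."""
    ranked = sorted(detections, key=lambda d: d[1], reverse=True)[:top_k]
    counts = Counter(label for label, _ in ranked)
    best = max(counts.values())
    # first (highest-confidence) label among those with the top count
    for label, _ in ranked:
        if counts[label] == best:
            return label

# Hypothetical (label, confidence) pairs from the recogniser
dets = [("angry", 0.91), ("sad", 0.88), ("angry", 0.80),
        ("normal", 0.75), ("angry", 0.60), ("sad", 0.55)]
result = vote_emotion(dets)  # "angry" wins 3 of the top-5 votes
```

Voting over the top five detections smooths out single spurious high-confidence labels, which is consistent with the experimental finding below that TOP-5 voting is both the most accurate and the most stable.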
Specific state analysis algorithm
The system specific state analysis algorithm is shown in Fig. 10, which makes association judgments based on the results of posture and emotion analysis. As shown in Fig. 11, the system determines that the alert state is defined as a pet standing and making angry sounds. If the above specific status occurs, an email is sent to the owner's smart handheld device application as a reminder notification.
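The association judgment can be sketched as a simple rule; the function name and message text are illustrative, and the actual email delivery via PyEmail is omitted:

```python
def specific_state(posture, emotion):
    """Association rule described above: a pet that is standing and
    making angry sounds is in the alert state; otherwise no action."""
    if posture == "standing" and emotion == "angry":
        return "alert"
    return "none"

state = specific_state("standing", "angry")
message = ("Alert: your pet is standing and barking angrily."
           if state == "alert" else None)
# In the full system the message is emailed to the owner's smart
# handheld device; actual sending is omitted from this sketch.
```

Keeping the rule as an explicit posture-emotion pair makes it easy to add further specific states (for example, other posture and sound combinations) without retraining either recogniser.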
Experimental platform and environment
The experimental platform information is shown in Table 1. A Logitech Webcam C925 is used as the network camera, and the core computing platform is an embedded system. The system is written in the Python programming language with the Tensorflow deep learning development environment, the Pycocotools COCO library, the OpenCV-Python image processing library, the PyAudio speech processing library, the MoviePy video editing library, and the PyEmail email library.
Mask R-CNN mask training
The system implemented Mask R-CNN network architecture recognition model training with 475 pet images and trained for 60,000 steps to generate model weight files for contour mask recognition. The success rate of generating the contour mask map on the training sample image set is 100%, and the average cosine similarity accuracy is 96.78%. When we add 10%, 30%, 50%, and 70% salt-and-pepper noise to the training sample image set, the respective success rates of generating the contour mask are 72.94%, 51.29%, 37.87%, and 5.84%, and the average cosine similarity accuracies are 92.90%, 88.89%, 81.87%, and 62.23%, respectively. The values are shown in Table 2. Figure 12 shows the similarity percentage data for these contour masks.
Posture analysis
The system performs pose analysis based on the contour mask map generated by Mask R-CNN. The results of the algorithm are shown in Fig. 13 for the lying, sitting, and standing postures. The red frame line is the target object position, the green line is the vertical position where the outline mask image contains the most target object information, and the cyan line is the horizontal position where the outline mask image contains the most target object information.
Sentiment analysis
The system uses 30 voice files to perform emotion analysis accuracy experiments, covering three emotional states (angry, sad, and normal) with ten samples for each state. When the one-dimensional array identified by Faster R-CNN uses the TOP-1 result as the basis for emotion voting, the emotion with the highest vote is the final emotion result; the analysis accuracy for the angry, sad, and normal states is 80%, 60%, and 90%, respectively, and the average accuracy is 76.6%, as shown in the histogram on the left of Fig. 14. When the TOP-3 results are used as the basis for emotion voting, the analysis accuracy for the angry, sad, and normal states is 80%, 60%, and 70%, respectively, and the average accuracy is 70%, as shown in the middle histogram of Fig. 14. When the TOP-5 results are used as the basis for emotion voting, the analysis accuracy for the angry, sad, and normal states is 80%, 90%, and 90%, respectively, and the average accuracy is 86.6%, as shown in the histogram on the right side of Fig. 14. With the TOP-5 results as the basis for emotion voting, the average emotion accuracy is the best and the accuracy of each emotion is the most stable. Traditional voice recognition using MFCC plus GMM-HMM was tested on the same 30 voice files, again with ten samples for each of the three emotional states; its analysis accuracy for the angry, sad, and normal states is 10%, 80%, and 10%, respectively, with an average accuracy of 33.3%. The values are shown in Table 3.
State recognition
When the pet-specific state-analysis algorithm determines that the pet is in an alert state, the system immediately sends an email to notify the owner, as shown in Fig. 15. Over seven audiovisual test files, the system judged the alert state with an accuracy of 85.71%. If the sentiment analysis instead uses MFCC with GMM-HMM voice recognition, the accuracy of judging the alert state is only 14.29%.
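A minimal sketch of the notification step, assuming hypothetical SMTP host, credentials, and addresses (none of these values come from the paper; only the "alert triggers an email" behavior is taken from the text):

```python
import smtplib
from email.message import EmailMessage

def notify_owner(state, smtp_host="smtp.example.com", smtp_port=587,
                 sender="camera@example.com", recipient="owner@example.com",
                 password="app-password"):
    """Send a mail to the owner only when the classified state is 'alert'."""
    if state != "alert":
        return False                      # non-alert states are ignored
    msg = EmailMessage()
    msg["Subject"] = "Pet monitor: alert state detected"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content("The system classified your pet's state as 'alert'.")
    with smtplib.SMTP(smtp_host, smtp_port) as server:
        server.starttls()                 # encrypt before authenticating
        server.login(sender, password)
        server.send_message(msg)
    return True
```

In a deployment the SMTP details would come from configuration; the early return keeps the camera loop cheap for the common non-alert case.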
Comparison of results for execution time and accuracy
For comparison, we use the results from a prior smart pet surveillance scheme called SPSSI, as shown in Fig. 16. The experiment used 14 test images representing the alert category. The average execution time of the proposed method was 1.32 min, better than that of SPSSI (Fig. 16). The accuracy of the proposed method was 85.71%, while the accuracy of SPSSI was 14.29%, as shown in Table 4. This experiment shows that when clear information on the positions of the mouth and tail is not available from a video, the SPSSI method cannot accurately analyse the pet's emotions, whereas the method presented in this paper, which combines the pet's posture and sound features, still can.
Conclusions
This paper proposes a method of pet sentiment analysis based on an Artificial Intelligence of Things (AIoT) system: Mask R-CNN deep learning is used for pet object detection and the generation of contour masks, and Faster R-CNN deep learning is used for the recognition and classification of the animal's emotions. Our sentiment-analysis method combines object posture with emotional (sound) characteristics and uses these as the basis for identifying and analysing the animal's emotions. The method recognizes pet sentiment in a non-contact manner with an accuracy of 85.71% and notifies the owner accordingly. Compared with related work on smart pet surveillance systems, our implementation improves the accuracy by roughly 70 percentage points (85.71% versus 14.29%). Moreover, the experiments presented here show that when the object of emotion analysis is a pet, combining posture and sound features is more accurate than combining body-part features (for example, mouth and tail characteristics) with sound features. The main limitation of this approach is that when the pet is too close to the camera, most captured images contain only part of the pet's body, making it impossible to analyse the pet's posture; in that case the method relies on sound features alone, which reduces the accuracy of the results.
Authors' contributions M-FT was involved in supervision. M-FT and J-YH were involved in writing the original draft. All authors have read and agreed to the published version of the manuscript.
Funding This research was funded by National United University, Taiwan.
Declarations
Conflict of interest The authors declare that they have no conflict of interest. Ethical standards This article does not contain any studies with human participants or animals performed by any of the authors.
Revealing giant planet interiors beneath the cloudy veil
Observations from the Juno and Cassini missions provide essential constraints on the internal structures and compositions of Jupiter and Saturn, resulting in profound revisions of our understanding of the interiors and atmospheres of the Gas Giant planets. The next step in understanding planetary origins in our Solar System requires a mission to their Ice Giant siblings, Uranus and Neptune.
a deep stable region where g-modes (with gravity as the restoring force) are present 10 . In that case as well, the hypothesis of a fully convective interior gave way to a more complex structure.
Thus, the interiors of these planets are not as simple as previously thought. They are only partially mixed. For Jupiter, the reason certainly lies in the initial conditions: if the planet formed with a large diluted core, only partial mixing would have occurred, with large composition gradients remaining present throughout the planet's lifetime 11 . However, the causes of such a large diluted core are not clear, and the possibility that the planet witnessed a giant impact early in its history is, at present, the most likely explanation 12 . In Saturn, the situation is different: the presence of a large stable region may be the result of an extensive helium phase-separation leading to the formation of an almost pure-helium core 13 . A detailed characterization of the deep interiors will require seismology. But progress will also come from a better coupling of interior, dynamo and atmospheric models constrained by the Juno and Cassini measurements.
Ever-changing atmospheres
Giant planet atmospheres provide a lens through which we can catch a glimpse of the hidden depths. Peering through clearings in the clouds, visible and infrared observations reveal a complex story involving composition changes driven by vertical and horizontal motions 14 . Conventional theoretical models predict the formation of clouds above a well-defined condensation level, with uniform mixing below that point. Microwave observations, which penetrate through the clouds and can probe very deep compared to visible or infrared observations, provide a strikingly different picture. In fact, ground-based observations from the Very Large Array 30 years ago and since then have shown that ammonia is depleted across most of Jupiter except near its equatorial zone 15 . Juno's MicroWave Radiometer (MWR) further demonstrated that Jupiter's ammonia has a variable abundance as a function of depth and latitude down to at least 200 km below the cloud tops, far beneath the expected cloud base 1,16 . Juno's key contributions have been revealing the great depth of the ammonia depletion and the fact that it affects most of the planet.
With hindsight, the strong spatial variability in the sub-cloud layers is not altogether unexpected-the spatial distribution of disequilibrium species (like phosphine, arsine, germane, and para-hydrogen) and lightning (indicating the vigor of moist convection) all hint that the strength of vertical mixing changes significantly with latitude, primarily on the length scales of the belts and zones 14 . But whereas lightning may be a feature only of the weather layer and the water clouds, disequilibrium species are being dredged from their quench levels, at great depths where temperatures exceed 1000 K. Thus, results from Juno suggest that the cloud-top bands are merely the tip of the iceberg, with atmospheric circulation (horizontal winds and vertical motions) penetrating to great depths.
The unexpected behavior of the volatiles has made constraining Jupiter's deep water abundance even more challenging. Because the signature of water is much smaller than that of ammonia, uncertainties caused by the spatial variability in ammonia severely affect our ability to constrain water. By focusing on the MWR data at the equator, where ammonia is well-mixed, the deep water abundance could be constrained to between 1.0 and 5.1 times the solar (O/H) value 17 . The uncertainty remains large, however, and other regions of the planet have not been explored. We should expect that the distribution of condensing species will be highly variable, with depth and latitude, on the other giant planets as well.
Jupiter and Saturn have one more trick to shield their secrets from view: they are variable with time. Jupiter's bands can expand, contract, fade away and re-appear with spectacular storms, over well-defined multi-year time periods that we are yet to fully understand 18 . Saturn exhibits enormous storm outbursts that contribute a significant fraction of the planet's energy budget 19 . Do these episodes reflect changes in the deep interior, or are they a consequence of shallower weather-layer processes? As Juno's mission continues, and as Europe prepares the Jupiter Icy Moons Explorer (JUICE) for arrival in 2029, we may finally answer these questions and understand the complex interplay between interior and atmosphere.
Uranus and Neptune hold the keys
The last few decades have thus provided tremendous leaps in our understanding of the Gas Giants, whilst the Ice Giants Uranus and Neptune remain poorly explored and mysterious, out in the distant solar system. Ice Giant volatiles show strong equator-to-pole gradients, being massively depleted over the poles and enriched at the equator 20 . Clouds are organized into banded patterns, but these are not apparently reflected in wind and temperature contrasts 14 . Storms erupt and drift in latitude, and episodic outbursts may teach us about convection in environments where strong density gradients serve to stabilize atmospheric layers, maybe even separating them from the deeper interiors.
Uranus and Neptune's abundant methane clouds and storms ( Fig. 1) have properties similar to those of water clouds in Jupiter and Saturn in terms of abundance and heat content. However, the methane clouds sit at much lower optical depths and are thus much easier to access and study. A mission to Uranus or Neptune including an orbiter with deep remote-sensing capability and a probe would enable mapping of the deep atmospheric temperature and composition against a fixed, reliable reference profile. It would thus fully characterize the deep atmosphere of an Ice Giant and constrain its interior structure, which is key to understanding the mechanisms that govern the physics of clouds and storms in planets with hydrogen atmospheres. Indeed, the expanding census of planets beyond our Solar System suggests that Ice-Giant-sized worlds are a common endpoint of the planet formation process. Future exploration of Ice Giant atmospheres and interiors, and of how they differ from those of Jupiter and Saturn, is therefore the vital next step in our exploration of the Solar System, filling in the missing link between the Gas Giants and terrestrial worlds.
We have begun lifting the veil on the interiors and atmospheres of Jupiter and Saturn. Doing so on Uranus and Neptune is within reach with an ambitious robotic mission to the Ice Giants. It is needed to understand the origin of the Solar System and to analyze with confidence data obtained for the numerous planets with hydrogen atmospheres in our Galaxy.
A rapid and noninvasive method to detect dried saliva stains from human skin using fluorescent spectroscopy
Objective: Saliva is one of the vital fluids secreted by human beings. Significant amounts of saliva are deposited on the skin during biting, sucking, or licking, and can be an important source of forensic evidence. The salivary enzyme α-amylase gives a characteristic emission spectrum at 345–355 nm when excited at 282 nm, which can be identified by fluorescence spectroscopy and can aid forensic identification. This study describes a rapid method to detect dried saliva on human skin by fluorescence spectroscopy. Materials and Methods: Ten volunteers deposited their own saliva on the skin of their ventral forearm by licking, with water on the contralateral arm as a control. The study was carried out at the Central Leather Research Institute, Chennai. Each sample was excited at 282 nm and the emission spectrum was recorded. Results: The emission spectra of the 10 swab samples taken from dried saliva showed a primary peak at 345 to 355 nm, whereas the emission spectrum of the water control peaked at 362 nm. Conclusion: The presence of an emission peak at 345–355 nm with excitation at 282 nm proves to be a strong indicator of saliva deposited on human skin.
INTRODUCTION
In forensic cases of sexual assault and child abuse, bite mark analysis is very difficult because the human dentition does not always leave identifying features imprinted on the skin surface. [1] Saliva is one of the vital fluids secreted by human beings and is deposited on the skin through biting, sucking, licking, kissing, and possibly other behaviors. [2,3] Detection of the saliva stains encountered in forensic casework is a primary objective for the forensic serologist, as saliva is an important source of DNA. [4] Detection of saliva on human skin can therefore be an important source for identifying an individual. Unfortunately, dried saliva stains are invisible to the human eye, which makes them difficult to recognize and collect. The DNA present in saliva on skin is also more difficult to collect and extract than similar stains on clothing, paper, or other inanimate objects, since the substrate on which the saliva is deposited (skin) cannot be submitted directly to extraction procedures. An improved collection method is therefore required: first to locate the invisible saliva stains on human skin, and then to proceed with DNA extraction to identify the suspect and exclude the innocent. [5] Various methods for detecting dried saliva stains have been tried, including chemicals, lasers, and fluorescence, but each test has its own limitations. [6] Fluorescence spectroscopy is noninvasive, has high sensitivity and selectivity, allows measurement under physiological conditions, and is cost effective (approximately 1400 per sample). This makes fluorescence spectroscopy a real-time diagnostic technique in the field of forensic science. [7]
MATERIALS AND METHODS
This study was divided into three phases.

a) Determination of the optimum excitation wavelength of undiluted saliva: Undiluted saliva from two volunteers was excited at wavelengths between 200 and 320 nm. The peak excitation wavelength was then used to obtain the emission spectrum of the dried saliva samples collected.

b) Fluorescence spectroscopy of saliva and control samples from skin: Ten volunteers deposited their own saliva on a marked area of the forearm by licking, at normal room temperature in the morning. Before depositing saliva, the forearm was cleaned with soap and dried to prevent any source of contamination. As a control, water was deposited on the forearm of the opposite side. Both saliva and water were allowed to air dry for 30-45 minutes. A fiber-free cotton swab dipped in pH 7.4 phosphate buffer with 0.1 M KCl, with excess solution removed, was rubbed over the marked area. A second swab was taken from the control site on the opposite arm. Each swab was mixed in a separate cuvette containing 2 ml of KCl solution for 10 seconds. Finally, the contents of each cuvette were transferred to a quartz cuvette and the fluorescence emission spectrum was recorded from 300 to 540 nm using a spectrofluorimeter.

c) Fluorescence spectroscopy of tryptophan: The emission spectrum of tryptophan was recorded by dissolving 0.5 mg/ml of tryptophan in 5 mM KCl. This solution was excited at a wavelength of 282 nm, and the resulting emission spectrum was compared with those obtained from the 10 volunteers' saliva samples.
a) Absorption spectra of undiluted saliva samples
The maximum absorption spectra of the undiluted liquid saliva samples were characterized by an excitation peak at 282 nm [Figure 1], which was taken as the excitation wavelength for the emission scans of the swab contents.

b) Emission spectra and fluorescence intensity of saliva and control samples: The emission spectra of the 10 swab samples taken from dried saliva showed a primary peak at 345 to 355 nm [Figure 2], whereas the emission spectrum of the water control peaked at 362 nm [Figure 3].

c) Emission spectra of tryptophan: The peak emission of tryptophan was recorded at 350 nm [Figure 4], matching well with the emission spectra obtained from the 10 saliva swabs taken from human skin.
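The decision rule implied by these results can be sketched as: locate the wavelength of peak emission intensity and flag the sample as saliva if that peak falls within the 345–355 nm window (the water control peaked at 362 nm). The synthetic Gaussian spectra below stand in for real spectrofluorimeter data; only the peak window and the 300–540 nm recording range come from the results above.

```python
import numpy as np

def peak_wavelength(wavelengths, intensities):
    """Wavelength at which the recorded emission intensity is maximal."""
    return wavelengths[np.argmax(intensities)]

def looks_like_saliva(wavelengths, intensities, window=(345.0, 355.0)):
    """True if the emission peak falls in the tryptophan/amylase window."""
    peak = peak_wavelength(wavelengths, intensities)
    return window[0] <= peak <= window[1]

# synthetic spectra over the recorded 300-540 nm range (0.5 nm steps)
wl = np.linspace(300, 540, 481)
saliva = np.exp(-((wl - 350.0) ** 2) / (2 * 8.0 ** 2))  # peak near 350 nm
water = np.exp(-((wl - 362.0) ** 2) / (2 * 8.0 ** 2))   # control peak at 362 nm
print(looks_like_saliva(wl, saliva), looks_like_saliva(wl, water))  # True False
```

Real spectra would be noisier, so smoothing or a fitted peak would be preferable to a raw argmax in practice.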
DISCUSSION
There are many procedures for detecting dried saliva, such as the use of various light sources and chemicals, but due to the limitations of each test, they cannot match the efficiency and speed of fluorescence spectroscopy. [8] Advances in biophysics built on fluorescence spectroscopy have brought a revolution to forensic science. Fluorescence spectroscopy is widely used to analyze the structure, dynamics, and functional interactions of proteins. [7] It is based on the principle that when a fluorescent material is excited at a particular wavelength, it emits radiation of a longer wavelength which can be recorded. [9] The aromatic amino acid tryptophan, one of the important amino acids in salivary α-amylase, an enzyme present in saliva, gives a characteristic emission spectrum at 345-355 nm when excited at 282 nm. [10] The bands obtained from samples of dried saliva analyzed by fluorescence spectroscopy conformed well to those obtained from pure tryptophan, confirming that the swab samples collected were of saliva. The peak fluorescence intensity of saliva varied among the 10 volunteers, which may reflect differences in the protein content of each individual's saliva [Figure 2].
The use of various light sources, such as UV light and lasers, has been suggested as a simple screening technique for identifying stains of body fluids like dried saliva, but these detected stains in only 13% and 21% of cases, respectively. [6] Similarly, the quartz arch tube and the argon ion laser have been tried for detecting dried saliva stains and proved useful in only 48% and 30% of cases, respectively, [11] compared with 100% in the present study. Short-UV luminescence using an Nd:YAG laser emitting at 266 nm has also been tried in a preliminary study for detecting saliva stains invisible to the naked eye, but it has disadvantages such as the risk of burns to the hand, conjunctivitis, and a lack of portability. [8] Recent studies by the same research group on the use of a mercury xenon lamp and CCD camera for detecting the fluorescence of different body fluids, including saliva, did not show any clear data. [12] Various chemicals, including enzymes and salts, have also been tried for detecting dried saliva stains. The most commonly used markers are alkaline phosphatase, starch, and amylase. [13][14][15] Unfortunately, each test has limitations: alkaline phosphatase is not very specific, as it gives false-positive results. [13] The starch-iodine test for amylase has been used for many years, but its major limitation is that an excess of starch gives a negative reaction, leading to false-negative results. [14] The main disadvantage of the Phadebas amylase test [15] is that only amylase above a limit of 0.02 units can be regarded as a strong indicator of the presence of saliva, and no clear threshold has been defined for detecting amylase. [16,17] Salts such as nitrate and thiocyanate have been tried, but the nitrate test is applicable only to samples up to 2 days old, whereas thiocyanate is not always present in saliva. [13] Fluorescence spectroscopy has good sensitivity in detecting dried saliva stains on skin.
[3] It can be a useful tool for forensic examiners who face problems in cases of bite mark analysis, because the human dentition does not leave identifying features imprinted on the skin surface. [1] Other advantages are that the same sample can be used for DNA analysis after the fluorescence measurement [10] and that the whole procedure takes less than 10 minutes. From a practical point of view, this technique can detect saliva in samples obtained from a suspect area of skin; if the exact site of saliva deposition is unknown, laser- and fiber-based instruments can be used as an adjunct to fluorescence spectroscopy to quickly scan a large area of the body. [7,8,12] The results of our study suggest that tryptophan can act as a prevalent probe for dried saliva stains on human skin in fluorescence analysis and can be used for the detection of saliva in forensic cases. A larger sample size will help to better define the usefulness of fluorescence spectroscopy as a diagnostic tool.
CONCLUSION
Fluorescence spectroscopy is a rapid, sensitive and noninvasive technique for the detection of dried saliva stains on skin. This method, which has mainly been used for diagnostic applications, could significantly contribute to forensic science.
Lkb1 deletion in periosteal mesenchymal progenitors induces osteogenic tumors through mTORC1 activation
Bone osteogenic sarcoma has a poor prognosis, as the exact cell of origin and the signaling pathways underlying tumor formation remain undefined. Here, we report an osteogenic tumor mouse model based on the conditional knockout of liver kinase b1 (Lkb1, also known as Stk11) in Cathepsin K–Cre–expressing (Ctsk-Cre–expressing) cells. Lineage-tracing studies demonstrated that Ctsk-Cre could label a population of periosteal cells. These cells functioned as mesenchymal progenitors with regard to markers and functional properties. LKB1 deficiency increased proliferation and osteoblast differentiation of Ctsk+ periosteal cells, while downregulation of mTORC1 activity, using a Raptor genetic mouse model or mTORC1 inhibitor treatment, ameliorated tumor progression in Ctsk-Cre Lkb1fl/fl mice. Xenograft mouse models using human osteosarcoma cell lines also demonstrated that LKB1 deficiency promoted tumor formation, while mTOR inhibition suppressed xenograft tumor growth. In summary, we identified periosteum-derived Ctsk-Cre–expressing cells as a cell of origin for osteogenic tumor and suggested the LKB1/mTORC1 pathway as a promising target for the treatment of osteogenic tumor.
Introduction
Osteogenic tumor is the most common primary tumor of bone tissue, including the benign bone-forming neoplasms osteoma and osteoblastoma, and a malignant neoplasm called osteogenic sarcoma, also referred to as osteosarcoma (1). The hallmark diagnostic feature of osteogenic tumor is the detection of mineralized bone or osteoid matrix produced by the neoplastic cells, with a very broad spectrum of histological appearances (2). Osteogenic sarcoma arises primarily in children and adolescents, with a second incidence peak in aged people. It confers a poor prognosis, as the survival benefit of traditional chemotherapy treatment remains unsatisfactory. More targeted and personalized therapies for osteogenic sarcoma treatment are urgently required (1). A growing body of evidence indicates that mouse models can recapitulate the fundamental aspects of human osteogenic sarcoma and offer the ability to yield therapeutic targets that will eventually allow customized cancer treatment (3). Furthermore, modeling human cancers using transgenic or knockout mice has been proven to further our understanding of the exact cell of origin and the signaling pathways underlying tumor formation (4).
Mutations in p53 and/or Rb genes, as well as other components involved in their pathways, have been identified in human osteogenic sarcoma patients, and mouse models for studying the cell of origin for osteogenic sarcoma have been developed via conditional mesenchymal/osteogenic lineage-restricted knockout of p53 and/or Rb genes (5)(6)(7)(8)(9). The disruption of p53/Rb in mesenchymal progenitors (Prx1-cre), osteoblast precursors (Osx-Cre), and osteoblast committed cells (Col1a1-Cre and OCN-Cre) leading to osteogenic sarcoma confirmed that cells with mesenchymal origin and osteogenic lineage were responsible for osteogenic tumor formation (4)(10)(11)(12). Moreover, NOTCH activation in committed osteoblasts (Col1a1-Cre) was sufficient to induce osteogenic sarcoma, also suggesting committed osteoblasts as the potential sources of osteogenic tumor (13). However, the exact cell of origin with distinct genetic mutations that is responsible for the individual subtypes remains to be described (14,15).
Liver kinase b1 (LKB1, also known as Stk11) is a master serine/threonine kinase that links energy homeostasis and cell growth through the mTORC1 pathway (16). Loss of Lkb1 in a variety of organs has been reported to initiate both hyperplasia and tumorigenesis (17). Cancers with Lkb1 inactivation tend to exhibit aggressive clinical characteristics, and their therapeutic sensitivity differs from those without Lkb1 inactivation (18)(19)(20)(21). Previous studies indicated that Lkb1 may also be involved in bone cancer. Lkb1 heterozygous germline mutant (Lkb1 +/-) mice develop gastrointestinal polyps and multifocal osteogenic tumors (22,23). A recent study showed that 41% of osteosarcoma patients lost LKB1 protein expression and that most of them showed mTORC1 activation (24).
This phenotype aggravated with age (Figure 1, C and D, and Supplemental Figure 2B). μCT analysis showed disorganized bone architecture and the presence of ossified spicules outside the periosteum in both axial and appendicular skeletons of Ctsk-CKO mice (Figure 1, C and D). H&E staining of tibiae from Ctsk-CKO mice showed progressive histopathological features of osteogenic tumor: expansive osteoid lesions with a mushroom-shaped appearance located in the cortical bone, with the beginning of invasion of the medullary cavity from the age of 20 weeks (Figure 1E). The tumor gradually formed a large mass, transgressing the cortex and invading adjacent muscle and fat tissues at the age of 40 weeks (Figure 1, E and F, and Supplemental Figure 2C), mimicking malignant human osteogenic sarcoma. Nuclear atypia of the cells composing the osteoid matrix gradually increased from mild to severe with age (Figure 1E). The tumor presented a high proliferation rate, as measured via the elevated cell proliferation marker Ki67 (Figure 1G).
As the lack of Lkb1 in Ctsk+ cells led to a tumor-like mass in the cortical bone, the expression levels of cell cycle genes, including Ccnd1, Cdkn1a, Cdkn2a, and Cdkn2b, were determined to characterize the tumor; they were significantly increased in Ctsk-CKO tibiae at the age of 20 weeks (Figure 1H). We furthermore observed increased expression of the osteogenic sarcoma oncogene Mdm2 and the Notch target gene Hey1 in Ctsk-CKO mice, both of which are frequently upregulated in osteosarcoma patients or mouse models (13,30). The expression levels of the tumor suppressors Rb and Bub3 were decreased in Ctsk-CKO mice as expected, while decreased expression of the tumor suppressor genes p53, Wif, and Fgfr2 was not detected in Ctsk-CKO mice (8,11,31,32) (Figure 1H). We also examined the expression of these genes in control and mutant mice before the tumor mass appeared, at the age of 5 weeks, and found that the expression levels of cell cycle-related genes showed more moderate changes (Supplemental Figure 2D). Interestingly, we found an obviously elevated expression of Fgfr2 at the age of 5 weeks (Supplemental Figure 2D), a gene highly involved in cell fate, cell proliferation, and tumor induction, prompting us to trace the phenotype and gene expression of Ctsk-CKO mice before tumor formation.
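The paper does not state its quantification pipeline, but relative mRNA levels of this kind are conventionally derived from qRT-PCR Ct values with the standard 2^-ΔΔCt method; a hedged sketch under that assumption, normalizing a target gene to a housekeeping reference gene:

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Standard 2^-ddCt fold change of a target gene in a sample versus a
    control condition, normalized to a reference (housekeeping) gene."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # dCt in mutant
    d_ct_control = ct_target_control - ct_ref_control   # dCt in control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# target amplifies 2 cycles earlier in the mutant -> 4-fold higher expression
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # 4.0
```

The illustrative Ct values are hypothetical; the method assumes near-100% amplification efficiency for both genes.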
Lkb1 deletion in Ctsk-Cre-expressing cells results in enhanced bone formation in mice. To examine the bone architecture of Ctsk-CKO mice before tumor formation, we performed a quantitative μCT analysis and found an increase in cortical bone thickness and heterotopic bone formation within the cortex in Ctsk-CKO mice (Figure 2, A and B). However, the percentage of bone volume per tissue volume (BV/TV) within the cortical bone of Ctsk-CKO mice was decreased (Figure 2, A and B). von Kossa staining confirmed the mineralization of the heterotopic bone within the cortex (Figure 2C). The thickened diaphyseal cortex then prompted us to test the bone formation rate (BFR) in Ctsk-CKO mice. To determine the BFR, dynamic histomorphometry analysis was performed by double labeling with calcein and alizarin red, which mark newly formed bone. Compared with Ctsk-Ctrl mice, Ctsk-CKO mice displayed more heterotopic newly synthesized osteoid in the cortical bones (Figure 2D). The mineral apposition rate (MAR) and BFR at the periosteal surface of the tibiae of 5-week-old Ctsk-CKO mice were significantly increased compared with those of Ctsk-Ctrl mice (Figure 2, D and E). Moreover, higher mRNA levels of marker genes representing stages of osteoblast differentiation were detected (9). Although the loss of Lkb1 has been suggested as correlating with osteogenic tumor, the involved cell type and the underlying pathway remain unclear; however, these details are central for a complete understanding of osteogenic tumor formation.
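The dynamic histomorphometry readouts above follow standard definitions: the mineral apposition rate (MAR) is the mean distance between the two fluorochrome labels divided by the interval between their injections, and the surface-referenced bone formation rate (BFR/BS) is MAR scaled by the fraction of mineralizing surface. A sketch under those standard definitions, with illustrative (not measured) values:

```python
def mineral_apposition_rate(interlabel_distance_um, interval_days):
    """MAR in um/day: mean calcein-to-alizarin label separation / days
    between the two label injections."""
    return interlabel_distance_um / interval_days

def bone_formation_rate(mar_um_per_day, mineralizing_surface, bone_surface):
    """BFR/BS in um^3/um^2/day: MAR scaled by the MS/BS fraction."""
    return mar_um_per_day * (mineralizing_surface / bone_surface)

mar = mineral_apposition_rate(interlabel_distance_um=14.0, interval_days=7)
print(mar)                                 # 2.0 um/day
print(bone_formation_rate(mar, 0.5, 1.0))  # 1.0
```

In practice the interlabel distance is averaged over many measurement points per section, and MS/BS counts double-labeled plus half of single-labeled surface.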
Cathepsin K (CTSK) is a cysteine protease secreted by osteoclasts and is essential for the degradation of matrix collagen during bone resorption (25). The Ctsk promoter has been suggested as being active in osteoclasts only (26), and Ctsk-Cre mice have been widely used to study osteoclast function (27). A recent study demonstrated that Ctsk-Cre-expressing cells can be chondroprogenitor cells, as Ptpn11 deletion in Ctsk-Cre-expressing cells resulted in metachondromatosis by activating Hedgehog signaling (28). Lack of Lkb1 within chondrocytes (Col2a1-Cre) of the endochondral skeleton caused a dramatic disruption of the skeletal growth plate and the formation of cartilage tumors (29). These findings suggested that Lkb1 deletion in Ctsk-Cre-expressing cells would cause cartilage tumors. Interestingly, in this study, we found that deletion of Lkb1 in Ctsk-Cre-expressing cells caused an osteogenic tumor-like phenotype, but not cartilage tumors. The features included an overall disruption of cortical bones as well as increased osteoid formation and bone turnover. Lineage tracing indicated that Ctsk-Cre could label a population of periosteum-derived cells, which could function as mesenchymal progenitors in terms of markers and functional properties.
In this study, we identified a cell of origin for osteogenic tumor and suggested Lkb1 as a tumor suppressor in the primary bone tumor, thus advancing our knowledge of both the cell of origin and the molecular genetics of osteogenic tumor. Furthermore, our data supported that Ctsk-Cre-expressing cells could serve as progenitors of both cartilage tumor and osteogenic tumor under the regulatory effects of different signaling. Moreover, these results indicated the therapeutic potential of mTORC1 inhibitors for the treatment of osteogenic sarcoma.
Results
Lkb1 deficiency in Ctsk-Cre-expressing cells causes osteogenic tumorlike phenotype. To investigate the role of Lkb1 in Ctsk-Cre-expressing cells, we generated Ctsk-Cre; Lkb1 fl/fl mice (hereafter named Ctsk-CKO). Lack of Lkb1 within chondrocytes (Col2a1-Cre) of the endochondral skeleton caused cartilage tumors (29), and Ctsk-Cre-expressing cells were identified as the source of metachondroma (28); therefore, Lkb1 loss in Ctsk + cells was supposed to lead to cartilage tumors. Strikingly, Ctsk-CKO mice did not display cartilage tumors, as indicated by H&E staining and safranin O (SO) staining in both the femurs and tibiae and the sternums (Supplemental Figure 1, A and B; supplemental material available online with this article; https://doi.org/10.1172/JCI124590DS1), but these mice exhibited a specific skeleton phenotype (Supplemental Figure 2A). However, neither Lkb1 fl/fl nor Ctsk-Cre; Lkb1 fl/+ mice showed a discernible phenotype (Supplemental Figure 2A). Therefore, Ctsk-Cre; Lkb1 fl/+ mice (hereafter named Ctsk-Ctrl) were used as controls in the following study.
Ctsk-CKO mice displayed overgrowth before the age of 13 weeks and began to lose weight from the age of 13 weeks ( Figure 1A), and 85% died before the age of 30 weeks ( Figure 1B). Radiographic examination showed that 100% of Ctsk-CKO mice displayed progressively thicker bones at sites of the femur, tibia, vertebrae, sternum, cranium, and mandible from the age of 20 weeks staining of the osteoblastic markers osterix (OSX) ( Figure 2G) and osteopontin (OPN) ( Figure 2H). Lkb1 deficiency in osteoclast precursors does not induce osteogenic tumor-like phenotype. Ctsk-Cre mice have been widely used to study osteoclast function due to the abundant and selective expression of Ctsk in osteoclasts (27). We next assessed whether this increased bone mass in Ctsk-CKO mice was the result of impaired osteoclast activity. We used tartrate-resistant (9), were detected in tumors of 20-week-old Ctsk-CKO mice compared with the cortical bone of tibiae from Ctsk-Ctrl mice (Figure 2F). The osteoblast markers were also examined at the age of 2 weeks. The results showed that preosteoblast markers Alp and Col1a1 were specifically increased, indicating activation of bone formation in Ctsk-CKO mice (Supplemental Figure 2E). In accordance with observed increased osteoblast activity, the tumor was largely composed of osteoblastic cells, as indicated by immuno- Ctsk-Cre-expressing periosteal mesenchymal cells are potential sources for osteogenic tumors. In response to the results presented so far, we hypothesized that a Ctsk-Cre-positive but LysM-Cre-negative mesenchymal cell population might cause osteogenic tumor in Ctsk-CKO mice. To identify this cell population, we performed lineage-tracing studies using Rosa 26-mT/ mG mice, which constitutively express membrane-targeted Tomato fluorescent protein and membrane-targeted GFP upon Cremediated recombination (34) ( Figure 4A). 
As expected, a Ctsk-Cre-positive but LysM-Cre-negative population was found in the periosteum of cortical bone ( Figure 4B), although both Ctsk-Cre and LysM-Cre were expressed in osteoclasts on the surface of trabecular bone at the age of 4 weeks (Supplemental Figure 3A). More importantly, Ctsk-Cre-positive cells expanded and filled within the cortical bone of tibiae from Ctsk-CKO mice with age growth (Supplemental Figure 3B). The osteoid tumor area in Ctsk-CKO tibiae was mainly formed by GFP-positive cells by the age of 20 week ( Figure 4C and Supplemental Figure 3B), indicating that the osteogenic tumor in Ctsk-CKO mice was caused by intrinsic acid phosphatase (TRAP) staining and found that osteoclast numbers in the periosteum, endosteum, and trabecular bone were increased in Ctsk-CKO mice when compared with the control mice ( Figure 3A). We also cultured bone marrow (BM) cells from Ctsk-Ctrl and Ctsk-CKO mice; then the cells were differentiated into osteoclasts in the presence of monocyte/macrophage CSF (M-CSF) and RANKL. Quantification of TRAP activity demonstrated an increased osteoclast formation ability in Ctsk-CKO BM cells ( Figure 3, B and C). These data indicate that the increased bone mass of Ctsk-CKO mice was not due to impaired resorption ability. To further rule out that the increased bone mass of Ctsk-CKO mice originated from the extrinsic role of Lkb1-deficient osteoclasts, we generated LysM-Cre; Lkb1 fl/fl mice in which the LysM promoter was active in monocytes, macrophages, and osteoclast precursors (33). LysM-CKO mice did not show a discernible osteogenic tumor-like phenotype at the age of 20 weeks (Figure 3, D-F) and 40 weeks (data not shown), indicating that the osteogenic tumor-like phenotype in Ctsk-CKO mice was not the result of altered osteoclast function. We then examined LKB1 expression in the periosteum of Ctsk-positive cells via immunofluorescence. 
The results confirmed deletion of LKB1 expression in Ctsk + periosteal cells from Ctsk-CKO; Rosa-Ai9 mice, but not in Ctsk-Ai9 cells from Ctsk-Ctrl; Rosa-Ai9 mice ( Figure 4, G and H). Interestingly, we found that not all the Ctsk + cells expressed LKB1 (Figure 4, G and H). FACS analysis further confirmed the percentage of Ctsk + LKB1 + cells in Ctsk + periosteal cells. Interestingly, we found a higher percentage of Lin -CD90.2 -CD105 -CD200 + cells in the Ctsk + Lkb1 + population when compared with Ctsk + Lkb1cells (Supplemental Figure 4D), indicating the expression of LKB1 within Ctsk + stem cells and tumorigenesis might occur specifically within this stem cell population.
Next, we asked whether periosteal Ctsk-Ai9 cells have osteogenic tumor initiation potential in Ctsk-CKO mice. We found increased expression levels of osteoblast markers, including OSX, COL1A1, and OPN, in periosteal mesenchymal Ctsk-Ai9 cells of Lkb1 deletion in Ctsk-Cre-expressing cells. To further determine the role of Ctsk + cells in tumor formation, another mouse reporter strain (Rosa26-Ai9) was used, which conditionally expresses fluorescent protein tdTomato in response to Cre recombinase activation (35) ( Figure 4D). The Rosa26-Ai9 reporter mice showed bright single fluorescence that greatly facilitated in vivo imaging. Expansion of Ctsk-Cre-expressing cells was also observed in the cortical bone of Ctsk-CKO; Rosa-Ai9 tibiae at the age of 4 weeks (Figure 4E). We further examined cell identity of Ctsk + cells through FACS analyses. Three populations, including CD200 + CD105 -, previously described as skeletal stem cell (SSC), CD200 -CD105pre-bone-cartilage-stromal progenitor (pre-BCSP), and CD105 + BCSP (36) were identified from CD31 -, CD45 -, TER119 -(Lin -) CD90.2 -6C3 -Ctsk-Ai9 + cells ( Figure 4F and Supplemental Figure 4B), which is consistent with the recent study showing that Ctsk-Cre labels periosteal mesenchymal cells (37). Furthermore, we also analyzed the expression of common stem cell markers in Ctsk + cells and found expression of Sca1, CD24, CD44, CD49f, and CD146 in Ctsk + cells (Supplemental Figure 4C). We next investigated the differentiation potential of periosteal Ctsk + cells and found that they are capable of differentiating into osteoblast, We next examined the effects of Lkb1 deficiency on osteoblast differentiation. We cultured periosteal cells from Lkb1 fl/fl mice and infected these cells with adenovirus expressing EGFP (Adv-EGFP) and Cre (Adv-Cre). Adv-Cre-infected cells showed increased ALP staining ( Figure 5G) and higher supernatant ALP activity ( Figure 5H) compared with Adv-EGFP-infected cells. 
Consistently, expression of osteoblast marker genes, including Osx, Col1a1 and Opn, increased in Adv-Cre-infected cells ( Figure 5I). In summary, these results support the idea that LKB1 deletion could increase the osteoblast differentiation ability of periosteal mesenchymal stem cells.
To further investigate whether Lkb1-deficient cells are sufficient to drive osteogenic tumor formation in normal mice, we transplanted the periosteum-derived cells from Ctsk-Ctrl; Rosa-Ai9 and Ctsk-CKO; Rosa-Ai9 mice to nude mice subcutaneously. . In summary, our data suggested that transplantation of the Lkb1-deficient Ctsk + periosteal cells was sufficient to drive osteogenic tumor formation in normal mice.
Inhibition of mTORC1 signaling ameliorates tumor progression in Ctsk-CKO mice. To investigate the mechanism with which Lkb1 deletion induces the osteogenic tumor from Ctsk-Cre-positive periosteal mesenchymal stem cells, we focused on the mTORC1 pathway, which is a critical target downstream of LKB1-dependent AMP kinases (AMPKs). Phosphorylation of mTORC1 catalytic substrate ribosomal protein S6 (S6) and eukaryotic translation initiation factor 4E-binding protein 1 (4E-BP1) were increased in Ctsk-CKO mice ( Figure 6A), indicating hyperactivation of mTORC1 signaling in Ctsk-CKO mice. We postulated that if LKB1 deletion indeed induces the osteogenic tumor-like phenotype via activation of the mTORC1 pathway, the deletion of Raptor (the core binding factor of mTORC1) in vivo should lead to amelioration of the osteogenic tumor-like phenotype in Lkb1 fl/fl ; Ctsk-CKO mice. We constructed Ctsk-DKO mice (Ctsk Cre; Lkb1 fl/fl ; Raptor fl/fl ) and found that delayed tumor progression in Ctsk-DKO mice was indicated by an extended median life span of 42.3 weeks compared with the life span of 23.9 weeks for Lkb1 fl/fl ; Ctsk-CKO mice ( Figure 6B). The disorganized architecture in the tibiae was partially rescued in Ctsk-DKO mice, as indicated by the results of x-ray and μCT analyses ( Figure 6C and Supplemental Figure 6A). cells isolated from the cortical bone of Ctsk-Ctrl; Rosa-Ai9 mice showed expression of osteoblastic markers OSX, COL1A1, and OPN ( Figure 4K), suggesting an in vitro osteoblast differentiation ability of Ctsk-Ai9 cells. Combined with the previous findings that Ctsk-positive cells represent a subset of perichondrial cells within the groove of Ranvier (28), we hypothesize that periosteumderived Ctsk + cells could act as periosteal mesenchymal stem cells, which can develop into osteoblasts.
Prx1 + cells have been reported as mesenchymal progenitors residing in both the periosteum of cortical bone and the BM (38).
To determine whether Lkb1-deficient mesenchymal progenitors could be the origin of osteogenic tumors, we next generated Prx1-Cre; Lkb1 fl/fl (Prx1-CKO) mice. At the age of 20 weeks, Prx1-CKO mice showed abnormal nodules in the long bones, hip bones, and calvarial bones, but not in the vertebra bones (x-ray images; see Supplemental Figure 5A). H&E staining indicated that the osteoid tumor transgressed the cortex and BM cavity (Supplemental Figure 5B) and calcein-alizarin red double-labeled fluorescence showing a large mass of irregular and diffuse fluorochrome labeling, confirming a dramatic increase in new bone formation within the cortical bone of Prx1-CKO tibiae (Supplemental Figure 5C). This was consistent with the observation in Ctsk-CKO mice. Notably, SO staining indicated a profound disorganization of the growth plate in both femur and tibia of Prx1-CKO mice, which displayed an enchondroma-like phenotype (Supplemental Figure 5D). This phenotype had not been observed in Ctsk-CKO mice (Supplemental Figure 1, A and B). We found Prx1 + cells were expanded from the cortex to the marrow cavity in the periosteum and also expanded to form a mass of cartilage in the growth plate of Prx1-CKO mice (Supplemental Figure 5E). We then compared the distributions of Ctsk + and Prx1 + cells using 4-week-old Ctsk-Cre; Rosa26-mT/mG and Prx1-Cre; Rosa26-mT/mG mice and found that, in the periosteum, both Cstk and Prx1 can label periosteum cells, but in the growth plate and articular cartilage, only Prx1 + cells but not Ctsk + cells were seen (Supplemental Figure 5F). This indicated Ctsk and Prx1 might represent 2 subsets of mesenchymal stem cell with different anatomic distributions and functions.
LKB1 inhibits self-renewal and osteoblast differentiation ability of Ctsk + periosteal mesenchymal stem cells. To investigate the effects of Lkb1 deficiency on Ctsk + periosteal mesenchymal stem cells, we first assessed the effects of Lkb1 deficiency on self-renewal ability of periosteum Ctsk + cells. Proliferating cell nuclear antigen (PCNA) staining showed a rapid proliferation of Ctsk + cells in the periosteum of Ctsk-CKO; Rosa-Ai9 tibiae in vivo ( Figure 5, A and B). Consistently, Ctsk-Ai9 cells were isolated via flow cytometry from the cortical bone of both Ctsk-Ctrl; Rosa-Ai9 and Ctsk-CKO; Rosa-Ai9 tibiae and the same number of sorted Ai9-positive cells were seeded. After 7-day culture, the cells from Ctsk-CKO; Rosa-Ai9 mice showed enhanced proliferative ability compared with cells from Ctsk-Ctrl; Rosa-Ai9 mice ( Figure 5, C and D).
Based on the report that CD44 + cancer stem cells (CSCs) are responsible for self-renewal and tumor growth in heterogeneous cancer tissue (39) and that CD44 has also been identified as a self-renewal marker in osteosarcoma (40) has been proven that hypoactivation of Erk signaling in Ctsk + cells could induce cartilage tumor (28). However, Lkb1 deficiency in Ctsk + cells could not inhibit Erk/Ihh signaling and induce cartilage tumor. Our study expanded the definition of Ctsk + as not only a progenitor of metachondroma, but also a source of osteosarcoma. Consistent with the recent publication showing that Ctsk + cells can function as periosteal stem cells to mediate intramembranous bone formation (37), our data also identified Ctsk + periosteal cells as containing a stem cell population (CD31 -CD45 -Ter119 -CD90.2 -6C3 -CD105 -CD200 + ) (36). Deletion of Lkb1 in Ctsk + cells leading to osteosarcoma formation demonstrated that Ctsk + cells cannot only serve as a physiologic precursor of periosteal osteoblasts, but also as a pathological precursor in osteogenic tumor. Regulation of the cell fate of this periosteal stem cell by deleting transcription factor OSX or tumor-suppressor LKB1 can lead to abnormal cortical architecture or even tumor formation, which shows the importance of understanding the cellular basis of skeletal pathology.
To clarify the role of Lkb1 function in human osteosarcomagenesis, we first searched for genetic mutations in Lkb1 in human osteosarcoma patients. However, genetic mutations in Lkb1 have not yet been commonly found in human osteosarcoma, with the exception of one case report (c.937C>A) (42), which might be restricted by the low incidence rate of the disease and the low availability of samples for sequencing. In addition, allelic loss, LKB1 promoter hypermethylation, or reduced LKB1 expression is observed in a wide variety of sporadic cancers (43,44). The study demonstrated that 41% of osteosarcoma patient loss of LKB1 protein expression was due to posttranslational regulation by SIRT1 deacetylase, but not due to genetic anomalies or loss of LKB1 mRNA levels (24), suggesting that decreases in LKB1 expression could also be important for osteosarcoma formation. Our study has proven that loss of Lkb1 in Ctsk + cells can lead to osteosarcoma formation in mice and that LKB1 deficiency in a human osteosarcoma cell line could accelerate tumor formation, suggesting that further examination of mutations in Lkb1 and/or its upstream genes in human osteosarcoma is warranted.
As a known tumor suppressor, LKB1 is inactivated in a wide range of sporadic cancers, most of which show inactivation of AMPKs and a resulting hyperactivation of mTORC1 signaling (18,19,45). Another study used Sleeping Beauty transposon-based somatic forward genetic screening and reported that PI3K/AKT/ mTOR signaling was involved in enhancing osteosarcomagenesis (46). Our results also suggest treatment of osteosarcoma by targeting LKB1/AMPK/mTORC1 signaling. The selective mTORC1 inhibitors sirolimus (rapamycin) (47,48), ridaforolimus (49), and everolimus (50) have been studied in clinical trials for their osteosarcoma treatment potential, showing either complete or partial response in a portion of patients (1). Our mouse model can help with interpreting the mechanism of mTORC1 inhibitors for the treatment of osteosarcoma and adopting a targeted therapy through assessing the genetic mutation, epigenetic modification, and expression levels of the genes that are involved in the LKB1/mTORC1 signaling. The recent accelerated development of techniques for the rapid assessment of both the genetic and epigenetic statuses of tumor biopsies has birthed the concept of personalized medicine. Osteosarcoma presents a challenge for personalized medicine due to the absence of pathognomonic mutations combined with the rarity and could be pathogenic in tumor formation, which prompted us to test to determine whether an mTORC1 signaling inhibitor could slow tumor progression. Rapamycin, an mTORC1 inhibitor, was administered to both Ctsk-Ctrl and Ctsk-CKO mice via intraperitoneal injection twice per week starting at 2 weeks of age, when the cortical bone began to expand in Ctsk-CKO mice (data not shown). X-ray images and H&E staining indicated that rapamycin treatment significantly delayed tumor growth and improved the mobility of Ctsk-CKO mice at the age of 20 weeks ( Figure 6G and Supplemental Figure 6C). 
To further examine the effects of rapamycin on advanced tumor, this drug was intraperitoneally injected daily from the age of 16 weeks. X-ray images and H&E staining indicated that daily rapamycin treatment for 4 weeks significantly relieved symptoms of osteogenic tumors in Ctsk-CKO mice (Figure 6H and Supplemental Figure 6D).
Inhibition of mTORC1 signaling prevents tumorigenesis in a xenograft model. To further investigate the clinical relevance of LKB1 loss with osteogenic sarcoma and the therapeutic effects of rapamycin treatment in human osteosarcoma, we used LKB1 shRNA (shLKB1) lentivirus to knock down LKB1 expression in the human osteosarcoma cell line HOS-MNNG cell, which has been reported to express high levels of LKB1 (24,41). The knockdown efficiency of LKB1 and hyperactivation of the mTORC1 pathway in shLKB1 lentivirus-infected HOS-MNNG cells were confirmed via Western blot analysis ( Figure 7A; see complete unedited blots in the supplemental material). We then injected HOS-MNNG cells that expressed either shEGFP or shLKB1 into nude mice. After tumor establishment, the mice were treated daily with either 4 mg/kg rapamycin or vehicle. As shown in Figure 7, B-D, tumor size was significantly enlarged in the shLKB1 group compared with the shEGFP control group. Histologically, shLKB1 tumors displayed an apparent nuclear variability, increased Ki67 expression, decreased OSX expression, and elevated S6 phosphorylation ( Figure 7E). Rapamycin treatment was able to decrease the growth rate and tumor volume in both the shLKB1 group and the shEGFP control group (Figure 7, B-D). Rapamycin treatment decreased cell density, nuclear variability, Ki67 expression, OSX expression, and S6 phosphorylation. Immunohistochemical analysis confirmed the lowered expression of LKB1 in the shLKB1 group compared with that in the shEGFP control ( Figure 7E). These data demonstrate that LKB1 knockdown promotes tumor formation of human osteosarcoma cells, while rapamycin inhibits tumor growth in established tumors using human osteosarcoma cell xenografts.
Discussion
Our results suggest that Lkb1 deletion in Ctsk + periosteal cells caused osteogenic tumor-like phenotype by increasing mTORC1 activity. Previous reports demonstrated that Ptpn11 deficiency in Ctsk + cells induced metachondroma with decreased ERK activity and more production of the growth stimulators Ihh and Pthrp (28) (Figure 7F). To further understand why Lkb1-deficient mice developed osteogenic tumor, but not cartilage tumor, we examined expression of pErk in Lkb1-deficient Ctsk + cells by immunostaining and Western blot and found that Lkb1 deletion had no obvious effect on inhibition of Erk signaling (Supplemental Figure 7, A, B, and C; see complete unedited blots in the supplemental material) and production of Ihh and Pthrp (Supplemental Figure 7D). It and cortical thickness (Ct.Th) were calculated for the cortical bone of diaphysis of the distal femur. Histology analysis. For dynamic histomorphometry, 4-week-old mice received intraperitoneal injections of 20 mg/kg body weight calcein (MilliporeSigma, C0875) and 25 mg/kg body weight alizarin red (MilliporeSigma, A5533) with an interval of 4 days. The mice were euthanized with CO 2 at 3 days after alizarin red injection. Bone MAR and BFR were measured as previously described (58).
For undecalcified bone section, tibiae were fixed in 4% paraformaldehyde (PFA) for 48 hours at 4°C and dehydrated in a gradient of ethanol and acetone, followed by resin embedding. The tibiae were then cut into sections of 4 μm thickness using a Leica RM2265 microtome. Calcium deposits in the bone tissue were visualized by von Kossa staining using 4% silver nitrate followed by 5% sodium thiosulfate and Van Gieson's counterstaining.
For paraffin sectioning, femurs and tibiae were fixed in 4% PFA for 48 hours at 4°C and decalcified in 15% EDTA. Specimens were dehydrated using a gradient ethanol and xylene and embedded with paraffin, and sections of 8 μm thickness were cut using a Leica RM2235 microtome. Sections were dewaxed and rehydrated and then stained with H&E and SO. TRAP (MilliporeSigma, S387A-1KIT) staining was performed according to the manufacturer's instructions.
For frozen sectioning, freshly dissected tibiae were fixed in 4% PFA for 48 hours at 4°C, decalcified in 15% EDTA, and dehydrated in 30% sucrose for 48 hours. The tissue samples were then embedded with OCT (Tissue-Tek, 4583), and sections of 8 μm thickness were cut using a Leica CM3050S cryostat.
It would be helpful to treat osteosarcoma using a combined therapeutic approach targeting different involved pathways to obtain synergistic effects. Angiogenesis is essential for cancer development and growth, and VEGF is a key mediator of angiogenesis in tumor (53). Previous evidence indicates that LKB1 alterations contribute to cancer progression by modulating VEGF production (54,55). We examined the expression of VEGF in the tumor region of Ctsk-CKO mice and found increased VEGF production in Ctsk-CKO mice (Supplemental Figure 7, E and F; see complete unedited blots in the supplemental material). It might be rational to study combination therapy using rapamycin and anti-VEGF therapy to treat tumorigenesis in osteosarcoma patients with LKB1 loss.
The fact that deletion of Lkb1 in LysM + cells does not lead to tumor mass formation within the cortical bone confirms that osteoclasts are not involved in the pathogenesis of Lkb1-related osteosarcomas. Actually, both Ctsk-CKO and LysM-CKO mice showed increased osteoclast formation (Figure 3, B and C, and Supplemental Figure 7, G and H), and the tumor mass in Ctsk-CKO mice contained a large number of osteoclasts and osteoclast precursors ( Figure 3A). These data indicate that Lkb1 might play a role in osteoclastogenesis. Our previous study demonstrated that mTORC1 signaling plays a determinative role in osteoclast differentiation, as Raptor deficiency in osteoclasts resulted in increased bone mass with decreased bone resorption (56). It is possible that LKB1 deficiency in osteoclasts could also lead to mTORC1 activation. Further study is required to demonstrate the role of Lkb1 in osteoclastogenesis.
Radiographic assessment. For x-ray image analysis, mice were euthanized with CO 2 , followed by removal of skin and internal organs. The skeletons were then fixed in 70% ethanol and analyzed via wholebody x-ray using an Eagle III Microspot X-ray Fluorescence (Exda Inc. USA) instrument and a Faxitron SR radiograph digital imaging system.
For μCT analysis, tibiae and vertebraes isolated from age-and sexmatched mice were fixed in 70% ethanol and scanned using a SkyScan 1176 (Bruker Biospin) at a 20 μm resolution for qualitative analysis or at a 10 μm resolution for quantitative analysis. 3D images were reconstructed using a fixed threshold. The percentages of trabecular BV/TV In vitro differentiation assays. For osteoblast differentiation, primary Lkb1 fl/fl cortical bone cells that had been infected with either GFP-adenovirus or Cre-adenovirus or Ctsk-Ai9 cortical bone cells that were sorted by flow cytometry were plated in a 96-well plate at 2 × 10 4 cells per well. Cells were cultured in osteoblast induction medium (α-MEM containing 10% FBS, 5 mM β-glycerophosphate [MilliporeSigma, G9422], 50 μg/ml l-ascorbic acid [MilliporeSigma, A5960], and 1% penicillin/streptomycin [Gibco, Thermo Fisher Scientific, 15140-122]). The medium was changed every 3 days. After 7 days of induction, cells were fixed with 10% neutral buffered formalin (MilliporeSigma, HT501320) and stained with the BCIP/NBT ALP Staining Kit (Beyotime, C3206) following the manufacturer's instructions. For quantitative analysis of cell growth and ALP activity, cells cultured for 4, 7, and 10 days were incubated with alamarBlue (Thermo Fisher Scientific, 88951) for 2 hours at 37°C and read using a multimode plate reader (Envision, PerkinElmer) with excitation at 540 nm and emission detection at 590 nm. After aspirating alamarBlue reaction mixture, cells were incubated with ALP substrate solution containing 6.5 mM Na 2 CO 3 , 18.5 mM NaHCO 3 , 2 mM MgCl 2 , and 1 mg/ ml phosphatase substrate (MilliporeSigma, S0942) for 20 minutes and read with Envision using the absorption detection mode at 405 nm.
For chondrocyte differentiation, a micromass culture method was used to determine the chondrocyte differentiation ability. Sorted Ctsk-Ai9 cortical bone cells (5 × 10 6 cells/ml) were seeded as a 10 μl micromass drop in a culture well in a 24-well plate and incubated at 5% CO 2 and 37°C for 2 hours to allow cell attachment. [MilliporeSigma, 91077c], and 1% penicillin/ streptomycin) for 1 day, followed by adipocyte maintenance medium (α-MEM with 10% FBS, 10 mg/ml insulin, and 1% penicillin/streptomycin) for 3 days. Cells were then fixed and stained with 2 mg/ml of either oil red O (MilliporeSigma, O1391) or Bodipy FL(Invitrogen).
Total RNA preparation and quantitative RT-PCR analysis. Total RNA was extracted from cortical bone ground in liquid nitrogen using TRIzol (MilliporeSigma, T9424), following the manufacturer's instructions. An aliquot of 500 ng total RNA was reverse-transcribed to cDNA using TaKaRa PrimeScript Reverse Transcriptase (TaKaRa, RR037A). Quantitative PCR (qPCR) was performed using a SYBR green mixture (TaKaRa) and a Bio-Rad CFX96 Real-Time PCR Detection System (Bio-Rad Laboratories). Primers used for specific transcripts are listed in Supplemental Table 1.
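The paper reports relative mRNA levels of osteoblast marker genes but does not state its exact quantification scheme. As an illustration only, the following sketch shows the common comparative-Ct (2^-ΔΔCt) calculation for SYBR green qPCR; the gene roles and Ct values are hypothetical, and normalization to a housekeeping gene is an assumption:

```python
# Minimal sketch of relative qPCR quantification via the comparative-Ct
# (2^-ddCt) method. Ct values below are illustrative, not data from this
# study; normalization to a reference (housekeeping) gene is assumed.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of a target transcript in a sample vs. a control sample,
    each normalized to a reference gene measured in the same sample."""
    d_ct_sample = ct_target - ct_reference            # normalize the sample
    d_ct_control = ct_target_ctrl - ct_reference_ctrl  # normalize the control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Example: the target crosses threshold 2 cycles earlier relative to the
# reference than in the control sample, i.e. roughly 4-fold upregulation.
fold = relative_expression(22.0, 18.0, 24.0, 18.0)
print(round(fold, 2))  # 4.0
```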
Flow cytometry. To sort Ctsk-Ai9 cells, primary isolated cortical bone cells were cultured for 7 days and then digested to a single-cell suspension. Cell sorting was performed with an Aria II cell sorter (BD Biosciences). Cell aggregates and debris were excluded via forward scatter (FSC) and side scatter (SSC) profiles. Sorted cells were plated at a density of 30,000 cells per cm2 in growth medium.
For the analysis of periosteum-derived Ctsk-Ai9 cells from the cortical bone of Ctsk-Ctrl and Ctsk-CKO mice, equal numbers of cells were added into each individual tube for the different antibodies before immunostaining, following the method previously described (36, 37, 59). RBCs were first removed with RBC lysis buffer (Beyotime, C3702). The cells were then stained with eFluor 450 anti-CD31 (eBioscience, 48-0311-80), PerCP/Cy5.5 …

For osteoclast differentiation, cells were cultured in induction medium (…, 150 ng/ml RANKL, and 1% penicillin/streptomycin) for 4 days. The culture supernatant was harvested for detection of TRAP activity following the method previously described (56). TRAP staining of the cells was performed using a TRAP staining kit (MilliporeSigma, S387A-1KIT) following the manufacturer's instructions.
Rapamycin treatment. Rapamycin (Selleck, S1039) treatment was conducted in male and female littermates of Ctsk-Ctrl and Ctsk-CKO mice. Rapamycin was dissolved in 2% DMSO and 5% TWEEN-80 in water following the manufacturer's instructions. Rapamycin and its solvent control were given intraperitoneally at a dose of either 4 mg/ kg twice per week beginning at 2 weeks or at 4 mg/kg daily beginning at 16 weeks to an age of 20 weeks. X-ray images of the whole skeleton and H&E staining of the tibiae were obtained to estimate the overall effect on tumor growth.
Xenograft tumors and drug therapy in nude mice. Four-week-old female nude mice were subcutaneously injected (close to the groin) with 1 × 10^6 HOS-MNNG cells or 1 × 10^7 total periosteum-derived cells isolated from Ctsk-Ctrl; Rosa-Ai9 and Ctsk-CKO; Rosa-Ai9 mice in 0.1 ml PBS. For drug therapy, as soon as the tumors reached 100 mm3, nude mice were randomly assigned to either vehicle or rapamycin (4 mg/kg) treatment groups and injected intraperitoneally daily. Tumor volume was measured using digital calipers every 2 days and calculated by the formula: volume = length × (width)2/2. Data points were expressed as average tumor volume.
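The caliper formula stated above (volume = length × width²/2, with length the longer axis in mm) and the per-group averaging can be sketched as follows; the measurements shown are illustrative, not data from the study:

```python
# Caliper-based tumor volume using the Methods formula:
# volume (mm^3) = length x width^2 / 2, with length >= width.
# Example measurements are illustrative, not data from this study.

def tumor_volume(length_mm, width_mm):
    if width_mm > length_mm:  # by convention, length is the longer axis
        length_mm, width_mm = width_mm, length_mm
    return length_mm * width_mm ** 2 / 2

# Average volume across a hypothetical treatment group at one time point
group = [(8.0, 5.0), (9.0, 6.0), (7.5, 5.5)]  # (length, width) pairs in mm
mean_volume = sum(tumor_volume(l, w) for l, w in group) / len(group)
print(round(mean_volume, 1))  # 125.1
```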
Statistics. Data were generated from independently obtained data sets and were expressed as mean ± SEM. Statistical significance was determined using 2-tailed t tests or 2-way ANOVA. P values below 0.05 were considered to indicate statistically significant differences. GraphPad Prism 6 was used for all statistical analyses.
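The pairwise comparisons above rely on 2-tailed t tests. As a minimal, standard-library-only sketch (the equal-variance Student's form; group values are illustrative, not data from the study), the t statistic can be computed as:

```python
# Sketch of the equal-variance (Student's) two-sample t statistic used for
# 2-tailed pairwise comparisons. The numbers below are illustrative only.
from math import sqrt
from statistics import mean, variance  # variance() is the sample variance

def t_statistic(a, b):
    na, nb = len(a), len(b)
    # pooled sample variance across the two groups
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

ctrl = [100.0, 110.0, 95.0, 105.0]   # hypothetical control measurements
cko = [140.0, 150.0, 135.0, 145.0]   # hypothetical Ctsk-CKO measurements
t = t_statistic(cko, ctrl)
print(round(t, 2))  # 8.76
```

The two-sided p value would then be looked up from the t distribution with na + nb − 2 degrees of freedom (e.g. via `scipy.stats.ttest_ind`, which also offers Welch's unequal-variance form).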
Study approval. Animal experiments were conducted in full accordance with protocols approved by the Institutional Animal Care and Research Advisory Committee of the Shanghai Institute of Biochemistry and Cell Biology.